<%BANNER%>

Applications of Parallel Global Optimization to Mechanics Problems

xml version 1.0 encoding UTF-8
REPORT xmlns http:www.fcla.edudlsmddaitss xmlns:xsi http:www.w3.org2001XMLSchema-instance xsi:schemaLocation http:www.fcla.edudlsmddaitssdaitssReport.xsd
INGEST IEID E20110330_AAAAMH INGEST_TIME 2011-03-30T19:49:54Z PACKAGE UFE0012932_00001
AGREEMENT_INFO ACCOUNT UF PROJECT UFDC
FILES
FILE SIZE 101125 DFID F20110330_AABXLI ORIGIN DEPOSITOR PATH schutte_j_Page_121.jp2 GLOBAL false PRESERVATION BIT MESSAGE_DIGEST ALGORITHM MD5
6a249a5e89ccef495e32182b7754eae8
SHA-1
e4b9e751fff422ab6fa8d8fdf4cd3d81b29fb398
87847 F20110330_AABXKU schutte_j_Page_106.jp2
6f36618c76e18267660db9634119047d
cda0b00c8dd1175542273354710a8bdcd26b19c7
1053954 F20110330_AABWIG schutte_j_Page_099.tif
b26faa240e7935db8177e371cfbff684
63e32072ee437683ab6252a6454224e0274af026
F20110330_AABWHS schutte_j_Page_085.tif
6c5990e7d39cb2ea0395115514787bf6
c3f40ed763886ae6ad732c50e2e28b93c29350ee
64852 F20110330_AABXLJ schutte_j_Page_122.jp2
0680fef980305ee82db24479b2dad594
9c7024a5acd5cc13a796f9aa94e7fca84fc4404b
83333 F20110330_AABXKV schutte_j_Page_107.jp2
26661445ccb8049e98760d9981d9ea2b
df5f4201717c533dd2e0153de209128d530fd47b
F20110330_AABWIH schutte_j_Page_100.tif
e0e2980270238e483d164a69112d67f2
b7b2908e3dacdeadbde384442fd5a47454ad28f4
F20110330_AABWHT schutte_j_Page_086.tif
bad9034f1085f6082f26e37aa98940c2
27ad472901ab4587e6a230dc0525833446e1b9e5
109996 F20110330_AABXLK schutte_j_Page_123.jp2
49e7fe8ad81034cb1e94f22703d2c72f
63d4f98e897265716aede457027527cbc6b0b1ee
103956 F20110330_AABXKW schutte_j_Page_109.jp2
56d101231d12007b5fa66bf0ec5ae26b
1fa24364df8e31fd825b1b66766e2a5fe8aed47f
F20110330_AABWII schutte_j_Page_101.tif
f0c7e194e64781f194465496e73d8e8b
23fe754ac0313738d2df51b6e7b9bb26c5ae7e2a
F20110330_AABWHU schutte_j_Page_087.tif
9b822a99a4ee3b767d380461375adf6c
e5a614152bee7f0ea3870c247ea86e78ee97c75b
67568 F20110330_AABXLL schutte_j_Page_124.jp2
df34b63f7e9fb43d72a566be96a7e6aa
c9cac537de51cb042fdaccb6a59b095efbc6dbee
87279 F20110330_AABXKX schutte_j_Page_110.jp2
adb0e9376b1c88212b4bbabbf4859eed
df3ad8293d0737b31000f2a25ad416bfe24d1fa3
F20110330_AABWIJ schutte_j_Page_102.tif
f24bcabfe00fd70c3c71e79ae2663f73
fccf700ae912de76eb14e6b6b76272ab92e5b9c1
F20110330_AABWHV schutte_j_Page_088.tif
b399c5b06f8e1e48e89758521710ba35
8cf28532ce1c9edf8d18691e795e2e1723c02dbd
33867 F20110330_AABXMA schutte_j_Page_139.jp2
d7aaf94b0bfd3c5dbfe5be5dd4fa3067
5c00402b47414049c316d9ecf11037b1086cf6af
449416 F20110330_AABXLM schutte_j_Page_125.jp2
3ff82e7d42abc1203f11386f2d954fad
cb9a9e67928dd4a146c364cc203ab42e65ad57e9
69326 F20110330_AABXKY schutte_j_Page_111.jp2
cfbf86a287b3897a0c9114f6bdcd3ad1
60306ca76bf9c5d33473236dda7da04b4ca3f466
F20110330_AABWIK schutte_j_Page_103.tif
7fe91e6bdbf1d018ff8a275bcc7f1bbc
4fd0dcf8bbf65a90e81a30ac1c250fff94f34277
25271604 F20110330_AABWHW schutte_j_Page_089.tif
b5fb670bc1e22752b0fef262a0450493
a4436adbb95b88440cea9b7c8cd451803c5ea53f
1051975 F20110330_AABXMB schutte_j_Page_140.jp2
248426f72bdaedad6bd774bd1796952d
8de3211a2286932169ca695073a86af850f4a8e6
79519 F20110330_AABXLN schutte_j_Page_126.jp2
a3125e8a8e3709fba0c873e8582459cf
771249b997923861ba4656cb673ffeb700f197d4
F20110330_AABWIL schutte_j_Page_104.tif
15a41336633f28d413712cd68c1d91a9
d99b5e2f180a0176fa879b19cf04fa6ebf5f7144
144305 F20110330_AABXMC schutte_j_Page_141.jp2
28f4b8db7fa843812c54f0e05354e480
757e78b3df18af9f0a39c3a566bbb2e37be94eda
62893 F20110330_AABXLO schutte_j_Page_127.jp2
23374eb527bd9f39429c1ff6bc0dcb03
8e3d78db0481041093edf341a91b6217c9b753e1
86085 F20110330_AABXKZ schutte_j_Page_112.jp2
9f7bcc6e8b87ce2b11c19391936be8fd
d8f673d5cb454ca7daa2f260be37b909fbb98225
F20110330_AABWIM schutte_j_Page_105.tif
0a9fa1970bf4a39a10cb234fb05025ed
5b80dac199310b553611649063f23ce7dfb99897
F20110330_AABWHX schutte_j_Page_090.tif
27017e048cb0e3c7ced7db60fb7621e9
c1349a9cfd0eec78516dddf822c03773992a9641
F20110330_AABWJA schutte_j_Page_119.tif
e94eb90ea224e8898243a7faba8894fd
d83f029cd3c42db5fcb2d23df7d6e3e386c10751
147844 F20110330_AABXMD schutte_j_Page_142.jp2
66271614d22c316ea7c8a3125390f865
2a6a26e3a923ccd34bba7b35660618399d61d222
52797 F20110330_AABXLP schutte_j_Page_128.jp2
f5692cc7e617f5482e118cecd5ec8a2d
c77658c96953287044ac47d2af6a528e05a6516e
F20110330_AABWIN schutte_j_Page_106.tif
9a445856977df0f0a9200eda819810e8
3c35b232f4d3f3d327a071e3f8e8cea7f3e33cd8
F20110330_AABWHY schutte_j_Page_091.tif
c65f8a0595471fbde656ee6d3b8d6e1c
9274733c4d1cb070db99af1da6f9cea7cdf20ec8
F20110330_AABWJB schutte_j_Page_120.tif
a3d0889811c3fda5fd865bf010ec26a3
a577f381f12b97d75a5af6d94915b6222da9bfae
142888 F20110330_AABXME schutte_j_Page_143.jp2
fa25ed30b9525846120c06fb992e1193
3b19fb22d425baf07d42b1d5772f1d934012b826
102407 F20110330_AABXLQ schutte_j_Page_129.jp2
aadfbc98763fce2043cc08688e2835a2
299cc081f04b87f60545aea41dfb70bbf92db6d4
F20110330_AABWIO schutte_j_Page_107.tif
1b3a15eaa67298300fd38c56a7eb2452
7d493bb885cf503c5a3eb653739c0f83da4aa015
F20110330_AABWHZ schutte_j_Page_092.tif
8d656bc2907cd98f6a0c3eb4922f9d9e
5293903318e7a6004f850a8c5076976d3a2fd8cd
F20110330_AABWJC schutte_j_Page_121.tif
8aeb26891d2da5b770290a1194b20cf9
0d52c4018621a457b7cb261b21c0c5b421376f90
119074 F20110330_AABXMF schutte_j_Page_144.jp2
cf01bc6a9dc37a3f7777fe881ac0ffcc
2eaaca2c5fcbeb6cf33a43d3e031c13830f81b22
90752 F20110330_AABXLR schutte_j_Page_130.jp2
67f5803cfdc0b6e09d03fbde90e25098
03b30f56d9a47c1e65c86e5b87b97984dc957128
F20110330_AABWIP schutte_j_Page_108.tif
86063ea865c12cd5ede6abbba4b87f0d
4c656a668e092da00dce29c99b2adbb7b70082d2
F20110330_AABWJD schutte_j_Page_122.tif
7449c41d6460efdeb0c684c214c52df3
d4e47a0549bb1bdf6f59949008eb4870977bc53d
128501 F20110330_AABXMG schutte_j_Page_145.jp2
e711af046565f1d1651d8f8ae5c8bd55
c5bf92223f3a617b9d53d0c7304b2f74265b38d0
109097 F20110330_AABXLS schutte_j_Page_131.jp2
bb1e93991f206e2de2a5034ce72b7502
9efc72698a59ac9cb499bea6efc1f96f6847a702
F20110330_AABWIQ schutte_j_Page_109.tif
570b513c2d3b35133d17039ec4c70407
4aba2f8f6ea8e8a0fbd8bd099178ab49c58a8892
F20110330_AABWJE schutte_j_Page_123.tif
c05890981fd3021a0030c4fbd70e504f
d75ab04d73f240f41f38c8e86dc66d74c3913aa7
132917 F20110330_AABXMH schutte_j_Page_146.jp2
de944b7fec313b2182e537deb241c6c0
ddcb989453053e7663af9e8020814efc10833ecc
46629 F20110330_AABXLT schutte_j_Page_132.jp2
38c434f3aaab6627addc621a9ed62506
f115d6ee2296367ee8f8f59cbdc185469a43535b
F20110330_AABWIR schutte_j_Page_110.tif
9e8b4d6828581fa757f7dcf87bfbd72a
80022e624f96f67133a26003a7d04267ab2de529
F20110330_AABWJF schutte_j_Page_124.tif
b8f10cc640afb0b563a800b6f028aa21
7bb28d24a2e0bd1c4077b78a7aa61f1081673134
117970 F20110330_AABXMI schutte_j_Page_147.jp2
d80e766afa281ed47d7cdc71823ab757
37f0b8aa7f2970bd105c5ad359355f376f5f98de
35255 F20110330_AABXLU schutte_j_Page_133.jp2
fc86805c19a90e11edb725834b2b6889
b4751fdb4fc37f70cf99b2a562d9c9a74e975529
F20110330_AABWIS schutte_j_Page_111.tif
58fc53b58b673b337fa83dcdabb0beb2
6934dbfff5195b5de7488dcfbab208ed1d6dbf9e
8423998 F20110330_AABWJG schutte_j_Page_125.tif
d7554705c04ee8442be8bf3d32ee75e9
3bb15f1ea9ada580b2d85cc2f0b0ed9c8300a2dc
140520 F20110330_AABXMJ schutte_j_Page_148.jp2
08ab730800196ed56fd0a9eff53d4b53
a68aa948334605d01e17b4e7faefac7c3455a025
40646 F20110330_AABXLV schutte_j_Page_134.jp2
9d3f16596f7aa93e9b51c35bacedd740
a6a7daa2e825c9114ab8f2d5aeecdc22c0dd11a7
F20110330_AABWIT schutte_j_Page_112.tif
35677c6fe8066bb36f1f58c84c0bcb90
eb3227a1d1c999e78f49d911d3cd8125d735698c
F20110330_AABWJH schutte_j_Page_126.tif
16368ffaed76976516a99d931c4a22b5
dd0749d803bcf02a1c5233e38e8d90ad4a140f73
122433 F20110330_AABXMK schutte_j_Page_149.jp2
bcc6f94b51967e0f19a5941be6e3a96b
5c76966425f9b1a4eda240201b13ad0b8f178e59
18871 F20110330_AABXLW schutte_j_Page_135.jp2
2037014fa5eeae7d7b285ab42cf705ad
2fc820d1fa015a8d0b23f833ef09fbf59aa2f2a1
F20110330_AABWIU schutte_j_Page_113.tif
f17f38ff01c4b72687e3b558388878cf
b0b87e19f156a374e55a69e1a0ebb820a060423d
F20110330_AABWJI schutte_j_Page_127.tif
5fff929cceeb06e064bf3abd21dcd2aa
a08e87d99aa498594ef5dd40a99d736b5106ffd0
128157 F20110330_AABXML schutte_j_Page_150.jp2
94598ff777f0e4018ecb7ff65353573d
a981335c58121646dbce81716ed5f5cebccb67d3
36962 F20110330_AABXLX schutte_j_Page_136.jp2
8bc61eb1cd8baa30b1fdb77c433105ae
2a23c9f3b6cea9c74a25bbdb5c8b3606061123f0
F20110330_AABWIV schutte_j_Page_114.tif
69b9ad997193cedf8c67f5c66eb3fce8
260f0eec3545b4a24736653c14fb75628ed4baaf
F20110330_AABWJJ schutte_j_Page_128.tif
49a31a010f6b0ccca0b3b2492727fec5
160b99725a04226d60f1462a6528787afb18084b
5120 F20110330_AABXNA schutte_j_Page_013thm.jpg
687c27417adcee49eb0769e648c00e51
e7a933edffbadcdc381d1ac0668d052a04bd37d9
63835 F20110330_AABXMM schutte_j_Page_151.jp2
51e8f94ad2cbba71059d831d943ee101
e51f05ec4e5c7fe4b0ce48500310fd56150de7ea
56407 F20110330_AABXLY schutte_j_Page_137.jp2
decd569f15162113490a2ba250a3e357
202c386c5c7011a6bbc11c5c5dc9d6209190d34d
F20110330_AABWIW schutte_j_Page_115.tif
d96fd845c24e55f6507132706fdc1343
7a68c11f6b27db82a957c30eab9e6b7156f1d4a4
F20110330_AABWJK schutte_j_Page_129.tif
ffc06dea828eff6dafc754ae45f5e2b5
0226e9bafb140fd7b9e265fa5ef2a65129bd72bc
2170 F20110330_AABXNB schutte_j_Page_014thm.jpg
addd7d53c00336ea20977821ac978521
6b12d291ecca6c47bdbdfec94bd95c5dd07e8a43
31932 F20110330_AABXMN schutte_j_Page_152.jp2
e0dfc728af3e54031abd2109fa8d1740
1810c8497cea3b91e851e44afc1eebd4fe0e4f9f
50277 F20110330_AABXLZ schutte_j_Page_138.jp2
c09acd6c1a6056c3afed6755e36a899e
5ddb82d15294ed97bc34558a42b0174bc65bbb1b
F20110330_AABWIX schutte_j_Page_116.tif
13fdf43a965baaadf1d3d70dffd6560c
2661edd2539225b24f09ae12a7eeb3ee3a8ad819
F20110330_AABWJL schutte_j_Page_130.tif
1fcfbcef6befc59acde691994a4bd610
06e8a6502fdbd9b6c19c12694bd1247b91b85656
4808 F20110330_AABXNC schutte_j_Page_015thm.jpg
b5bd72b9935c2f037ddfff4688b27c67
0438781454d76cd1f7d230c533d0539a218958bf
1679 F20110330_AABXMO schutte_j_Page_001thm.jpg
ca3ddb15ed52269a681c2452daa2a996
28a0f3129d58c7c8efa67ffc068ec1f891bf3232
F20110330_AABWKA schutte_j_Page_146.tif
56520be7da7a4ca4a2811aeca79d3ab4
1ddddc302b415bd83771b63e0510b2f75860639b
F20110330_AABWJM schutte_j_Page_131.tif
95969244ea6e4e4276889fa640458172
9930f8e5578d5a82984805f646ee9aa123b3f7d3
6240 F20110330_AABXND schutte_j_Page_016thm.jpg
d737c745e51f0750f7ed1c97284f3931
ee17335e7f407d84cdde4d298d8408024a1607ff
589 F20110330_AABXMP schutte_j_Page_002thm.jpg
cd0b8ef3562ade8d692637ee49a1762c
1dc4cd62eb811eae915a30d9b22c76827c116882
F20110330_AABWIY schutte_j_Page_117.tif
d26e8a44f00052a687e6d860e9287c94
333779dad7e06aced0a3453a049ad50c4f7021d1
F20110330_AABWKB schutte_j_Page_147.tif
74c45763757ba28de9fb9842afa28ce9
8fc31caf330efa68448ec90bd0bb754a293b1c70
F20110330_AABWJN schutte_j_Page_132.tif
2c67c9efa2e6b3bd8a6f9bdc596e93ac
96582421a45962a75275bfba9e743d2ec15aa32e
6255 F20110330_AABXNE schutte_j_Page_017thm.jpg
66637d3f43bb34eeacc3b9ee25ff38e2
57600756c1f3de879bac23ebb121647bc2ff51d3
527 F20110330_AABXMQ schutte_j_Page_003thm.jpg
d563cae7c89b2de8eb068f9c58566133
dc72f9408975e947e3320da012bd15a432a8a5c3
F20110330_AABWIZ schutte_j_Page_118.tif
1630c6b00e4d0098c53ba8e803f45259
91ef63c6e8f1e7fee739bc4f25c00e17662585cd
F20110330_AABWKC schutte_j_Page_148.tif
c0a347dd64363c156d6337b235bc54af
0b499ed54f7e8fe5599aa71833ea5c209ff563a3
F20110330_AABWJO schutte_j_Page_133.tif
84119d53305742cad011ddfed8a9d89c
053e8b21e257694653bd111a58cecbbf2d3da8db
3906 F20110330_AABXNF schutte_j_Page_018thm.jpg
7f1ba69d882ece8b800997bb8a355846
68bd8bea6605ae92e008f4d58b83ab88e9c0029d
5492 F20110330_AABXMR schutte_j_Page_004thm.jpg
23adb7769786dfba5dc23a7bf5baf13d
ea86e1a7759be68a08a9566d5ad09428ec9f28aa
F20110330_AABWKD schutte_j_Page_149.tif
cd1850eeb2ee372ff4e44aabb3da294f
d8ae8770b52b894daee0aed3c6a022c59331be73
F20110330_AABWJP schutte_j_Page_134.tif
512f143674d4200a16aacea0736fd74d
66c69e81325570c582fb276edc89a8b3b39872f4
5039 F20110330_AABXNG schutte_j_Page_019thm.jpg
e028e9fd0b05a34f60c30e7a60bb57cb
8ebdbf9404b5efbdc25085e930b3ccf2f28f7722
2221 F20110330_AABXMS schutte_j_Page_005thm.jpg
bf4029dce048d704817933a180b9fd11
37f3255d025d1effe6fbba56de67317722980359
F20110330_AABWKE schutte_j_Page_150.tif
d3f5fc68bbfac3f64c040ee0ef3f6ce2
7a0c8655b57b63744322df6450edba9043bdc81c
F20110330_AABWJQ schutte_j_Page_136.tif
bb7f448b4b4956c5ea374572d694ff6c
c1f179ece98c23f9f851a02b3422d6885e34ce8f
5902 F20110330_AABXNH schutte_j_Page_020thm.jpg
2de90f381b4f4945c2704b80a3c28514
cbb35c44d52b36ac21197bb7de6a5611412743f5
4260 F20110330_AABXMT schutte_j_Page_006thm.jpg
0fe50a7748164158514e48d2d6b07cf8
ec0bf5e9fa70bc08300018b72e7c625a8cea3fa7
F20110330_AABWKF schutte_j_Page_151.tif
82757af89b844cde30b9786482d78a73
dd1e076fa7025a0fe66134d4f6d3a38f6554da35
F20110330_AABWJR schutte_j_Page_137.tif
1f1c7e50ba0b95ec87b8f30287abdd56
e72270c7ade69bc219e601c08da9c4968bbf1baa
5739 F20110330_AABXNI schutte_j_Page_021thm.jpg
8e7db2f4b037c0b437a4753a694f547d
ab39604f97574a43c7af40f27ab289eb251d89e6
5315 F20110330_AABXMU schutte_j_Page_007thm.jpg
7b41e2e3af3f1cf1af2fef75b5d777e8
a127c4d3037e3c3582034aaadfbed61914fe729b
F20110330_AABWKG schutte_j_Page_152.tif
bf122d7d8cf5e56345a67b14b1fc8d7a
26369008c6f03d5c6f0f14f4bcf4187d4866aa0d
F20110330_AABWJS schutte_j_Page_138.tif
2554afe3a22964d65968887cb9300ccf
d047b3ba938ac0ee83730cddd6e1d06c887dd17e
6029 F20110330_AABXNJ schutte_j_Page_022thm.jpg
5988a37c3406c81e17d399ff9d76a909
664ed7986eb0c634ead2ddf361209b262949fa70
3897 F20110330_AABXMV schutte_j_Page_008thm.jpg
2f2f422519c4c1306286ac197048549e
94ab5b43aeaa0ff68269b8a704e5bfe215c2f7bd
460 F20110330_AABWKH schutte_j_Page_001.txt
ffe93b0644b9cb71a7ff3dabf6aecb82
fd0029ce7be4844fd9fb5daf34b1925770e54a0b
F20110330_AABWJT schutte_j_Page_139.tif
406c4e3840cf249d9d406b377189c7c0
4a75c00440ce2bbd3944bbc94cd3a180eaf6feda
6052 F20110330_AABXNK schutte_j_Page_023thm.jpg
e95c5bddce11efb6b3da286961332517
b3d6067383ffe93bab7b83b92c9843fd5725dce5
5182 F20110330_AABXMW schutte_j_Page_009thm.jpg
ebb8ba1519b8f8b85870aae133a137a6
98a5598332e16ad10aafafdaa863e1d2673d4e95
125 F20110330_AABWKI schutte_j_Page_002.txt
1fc9c23f331250692028138286856f65
e1a0a8539f12170dde4d0f0b2d9c35eec7c6d7c2
F20110330_AABWJU schutte_j_Page_140.tif
59dff4ab9bab0bafa5335f214e2c3799
eb8372436a931b4f62cfdf791e93ef2fb4d4065b
4413 F20110330_AABXNL schutte_j_Page_024thm.jpg
ffb73465f156d7a952b47afb394d6324
f42eeefda7be96c30435efe7f968716c48ef91c2
5463 F20110330_AABXMX schutte_j_Page_010thm.jpg
cb1171490253671f6b2fe2582e676075
b6b5236de34092ce136f19bdbbc80760871d1c11
117 F20110330_AABWKJ schutte_j_Page_003.txt
fc408d424a134c38bf89a412299a182d
adeb2efd8a5c73af3e1b8f1c98947423be84a090
F20110330_AABWJV schutte_j_Page_141.tif
b27c07ed3533d5a3904d2f1d509850ea
ed318efb56878956da67b6fbe4d4159b190b633a
4526 F20110330_AABXOA schutte_j_Page_040thm.jpg
7c670c35ac5c11eaf5c8d24e4c1850fa
07b984d18b6d089f3668484d32a15cf6ab5d54c6
5713 F20110330_AABXNM schutte_j_Page_025thm.jpg
89f14b0fd76529cb74c0330c189c29f9
c42972b1a0085f1f9253911b43e68124d79944f9
6533 F20110330_AABXMY schutte_j_Page_011thm.jpg
a38f9fedc6052cfd10992abf8f3ed2d8
e2e38fe46315c4bec658ef2b8d51e6b90f29dab7
1743 F20110330_AABWKK schutte_j_Page_004.txt
95fe71dc09c53b694474be431ad1f9cb
2d66e85ed4f9db6937fcb675198a28928416d377
F20110330_AABWJW schutte_j_Page_142.tif
e983d1c7539b505d23679d5e9d51ea14
1b48f9140931598a5d6623e8f1f38aa042ba07d5
5747 F20110330_AABXOB schutte_j_Page_041thm.jpg
550717625f44de059675141b0dc9832c
9d4c14434e77f23baf58b8357cba74cbd7bea463
5407 F20110330_AABXNN schutte_j_Page_026thm.jpg
dc48ec7d8dedd5e326eaf46e0f6c6194
08ccdf6767a82f51bd7cc5e90f41a635c1a70640
4355 F20110330_AABXMZ schutte_j_Page_012thm.jpg
fc4f75f3d5656cfebd532b51f2bbb13f
095daa3cb5bc00db4a059b4f92f43e9362403466
603 F20110330_AABWKL schutte_j_Page_005.txt
2cb842163d9e9f5ee975e89a41f3f875
71765c5143e1c364307254f4b3d2cd26666f7e08
F20110330_AABWJX schutte_j_Page_143.tif
72c58856378e65c6dfd84701cde4b7c6
d1915b9337d3cafdac18e6a3d4a17569a040d889
6006 F20110330_AABXOC schutte_j_Page_042thm.jpg
88bcd0697623ef63df8f1c430ad08911
f2f9c84e288c82bf8947ab6a2ed5777f9ea750b9
5721 F20110330_AABXNO schutte_j_Page_027thm.jpg
7aa7b351f8798f0d4872ae4213b96389
b1737fdcbc797c55d169ecdc37a86ca8d0197cd1
1999 F20110330_AABWLA schutte_j_Page_020.txt
771ae50d448298fb5c16c13d62599023
d913fcc2ffc26d857a810b8956a41713ee7602c0
2960 F20110330_AABWKM schutte_j_Page_006.txt
dc36ed58820ca04f2479ae604b8a6412
46c753e331531bdbc39c8598f9aeb6e412546c25
F20110330_AABWJY schutte_j_Page_144.tif
4793e22d6e8a3ed90ee299cec663328e
b751a9f654c54daf801ff6e821b2c132b290bb91
6385 F20110330_AABXOD schutte_j_Page_043thm.jpg
149ab556d0d4c8459519c23fe52d45eb
4164cfa5dd77396cd984960b0c2696667cc3858a
5740 F20110330_AABXNP schutte_j_Page_028thm.jpg
c3d3f85c30d2fedce935cca301f3106f
e41e9631272366a61c23195d8b0257647122317e
2133 F20110330_AABWLB schutte_j_Page_021.txt
2bbbe1567829eef83414a4e9de065ff0
019776f8cd4a5421e39b9dbfb2ec443f6974d346
4489 F20110330_AABWKN schutte_j_Page_007.txt
e80b1707384d235891efa1e323faaad8
32f6960663bd13042c727ebbd8dd824e13dad6af
6306 F20110330_AABXOE schutte_j_Page_044thm.jpg
0e094011873c556535b94adc35427a9f
ddd4ff0a38b249f4ba0f143207d08989b009e143
6182 F20110330_AABXNQ schutte_j_Page_029thm.jpg
f567556f1be14a994f54899ae439020e
3776fd2458afa9e21bbf5a9669184cdc1d3dde2a
1803 F20110330_AABWLC schutte_j_Page_022.txt
aaaf4de93f1e059ffb8cf22a7cd81e04
d0833d6b695acb27e9f4afd98d772d37de82c9bd
2760 F20110330_AABWKO schutte_j_Page_008.txt
79e93c892c840f67caa11bcef1442a02
988a1bf3aa59ec1ad79b6292a77e1721db71ed52
F20110330_AABWJZ schutte_j_Page_145.tif
8e807209805ccf2110d3a9a766896c40
61598ddf930dc1160ac09ad1eeda1c336dfa62a5
4738 F20110330_AABXOF schutte_j_Page_045thm.jpg
a4aacc11c075ef9ee92e0f16328eb40d
0616f97a07e0b0d6829f80e615513edc2ae62f3e
4576 F20110330_AABXNR schutte_j_Page_030thm.jpg
dd6db43873434f4d61efa79ef6e10f8e
fcefcd756f061c7c4856666433251fbafaf17296
1868 F20110330_AABWLD schutte_j_Page_023.txt
479ac892a67e54723d7419216628f14c
4209b384915d2e6942773bce82a8d62f4f8c8f19
2426 F20110330_AABWKP schutte_j_Page_009.txt
072e86472f47951ed6a6fa21467489df
7b704c20e62b3f6b903bb4c6388e4643e6b791b3
5301 F20110330_AABXOG schutte_j_Page_046thm.jpg
d3ae039ba15a1a9259f44bc1d6d92d4d
e3dc68aeff2effb6379561015fb6747080fb8833
5633 F20110330_AABXNS schutte_j_Page_032thm.jpg
8885757e8ba4300d8195cf54736e825a
23627544cff05c07d415602e03b014c6681f40a3
873 F20110330_AABWLE schutte_j_Page_024.txt
9bd82ea211108ca7bcd50939e90cb212
a57cbb78bf936f366fd77ce8ea4d3c75d962f9d9
2573 F20110330_AABWKQ schutte_j_Page_010.txt
90a454f62b57039046bfb73b2c7f5254
2fe8dfc30f30951aa7a8d1cd5e1762d51c3fd3d2
3587 F20110330_AABXOH schutte_j_Page_047thm.jpg
2ccc926bbd594024fa44b15d87f2587f
7b64c0b41f98c04145bde6ea2a9a4df25a3dcc90
5969 F20110330_AABXNT schutte_j_Page_033thm.jpg
5510fbc1b9f8513f19885460bf8e4a56
7b773f9b6da8330dce08fe0d55d62affeab16d45
1852 F20110330_AABWLF schutte_j_Page_025.txt
1391ba34c8e924d4dfb6815b88ee396f
a03709dbcbb6c14f2e34898ce0b3969a664cd30f
3027 F20110330_AABWKR schutte_j_Page_011.txt
193632ec7e66c0a0b60a4627d4ca31ca
092ad6f8ace5880bde322b8b7ed753af02832b2a
F20110330_AABXOI schutte_j_Page_048thm.jpg
6ce90fdaf773ef6336b68c6b3266f652
120c8cf0864862a1406a85367889c503a76094a8
6391 F20110330_AABXNU schutte_j_Page_034thm.jpg
2faa0b833b44c8ea90356f18ccc2a405
e56de656f32ee3d6b5109a9c46a45de3e5d689b8
1799 F20110330_AABWLG schutte_j_Page_026.txt
039b4f5690a59509cf8d5d9d00ad6312
e82610c26f489f0a9b8e91436182d9c430e5e24f
1678 F20110330_AABWKS schutte_j_Page_012.txt
ecdef50b403cdea63f805e9b098a666c
c31d588c09f4f58e57e2cec58e09489d4309b326
6193 F20110330_AABXOJ schutte_j_Page_050thm.jpg
1e3db92cb084455ed01e7fca93ab4c69
24a57bc08062091fbd77dccf6ba28aa534ada85d
5503 F20110330_AABXNV schutte_j_Page_035thm.jpg
08f093fcfd47e3b43d0e54ea5fc68601
59e74586a7ea9f794bbd278331d7b5830ca2050e
1070 F20110330_AABWLH schutte_j_Page_027.txt
066a41c12dcbf1a081f1ace46795cab5
fd9a3f00069a80e0b7888e4eb4846d13bdf27fc8
1784 F20110330_AABWKT schutte_j_Page_013.txt
59db7650db8eba67c0450b6bc61c7d1a
863a8a29950706ef7880fa7aec81d44c7af49852
6067 F20110330_AABXOK schutte_j_Page_051thm.jpg
d18422dbf01ca8fbceb09257dbd164e1
e69e6be3829624539ca0a50665ec708a084c892f
6401 F20110330_AABXNW schutte_j_Page_036thm.jpg
6968c78f485d80b30f9ae58081752534
9bbc35c19ff73fe756a788408985e0fa9d90e04a
1786 F20110330_AABWLI schutte_j_Page_028.txt
516030157a4f3454a0ab5a9db11aadd1
4d36adb819d9b3d9cd31d3d0e1c2e9f66ec3497f
598 F20110330_AABWKU schutte_j_Page_014.txt
8b1acc10bb1b715567cf0f87a99345e2
2f8f2412d25267213fd622840b3b2decbe7546d9
4794 F20110330_AABXOL schutte_j_Page_052thm.jpg
5ca075da5e00d33b98221adb63017093
00c21cc042df95141b0cba18f417a775163ca378
4469 F20110330_AABXNX schutte_j_Page_037thm.jpg
793bb31a0e1be161da703f3bc7218d52
d453964b70db7f3f73c9166c1fe7d74e036d718f
1946 F20110330_AABWLJ schutte_j_Page_029.txt
4ac2d27dcfbb5614f4050f5aaf819ba6
4ff46ba6f8ccbf3661d658844f54c307a8dcdb6f
1650 F20110330_AABWKV schutte_j_Page_015.txt
d06fdb34a037b5b04616c747f772df68
673cfcf32ed7d0d92a39dbd6f7ae198f7810940b
5826 F20110330_AABXPA schutte_j_Page_067thm.jpg
09ca76765d0e074f9713970eba6a802e
386e58e75901ed2757432616863fb9f04b3635fc
5840 F20110330_AABXOM schutte_j_Page_053thm.jpg
f1966656dc907ef74481447d13da2d9e
b27895d43c1fa592f2b4948acdfe2d53deed4b64
5185 F20110330_AABXNY schutte_j_Page_038thm.jpg
b44d025bcbc419fea693d9833ef3ecab
f91fa574dfd0b092d4cfe81aedb555d71b529163
1975 F20110330_AABWKW schutte_j_Page_016.txt
f2387fb51baabd11a41722df65ae4683
d9c93df21b464168675ed31b134bfef6253b59db
1135 F20110330_AABWLK schutte_j_Page_030.txt
d21b608dd47dfaeae29b69f2e82a8b82
9de4f0af304b32e656ac00b90ddb8e65005ee97b
3711 F20110330_AABXPB schutte_j_Page_068thm.jpg
d9593050b53e5ce20739d8fbc6a737bb
b2d5ccb002c2237e2553a9d21efa8743da526294
5251 F20110330_AABXON schutte_j_Page_054thm.jpg
6449e90d24a2694ebe816c07fe7618cd
4ffa634d974960fce60703c41967554ad8a6a1c6
4680 F20110330_AABXNZ schutte_j_Page_039thm.jpg
60473de86f478c6df079815dac120691
60767ba8c45fb7c4e6e98c709571ef658f47fac5
1921 F20110330_AABWKX schutte_j_Page_017.txt
6889df0a5009d1345c93ea7690880985
03bdfde10e31842ad9e1d733696a1ab61bc33da1
1667 F20110330_AABWLL schutte_j_Page_031.txt
7d5a1242cbbd5b29ad595b4308dac0e0
4b1e71f615ef173d3144d76764be2f374387b2ac
6247 F20110330_AABXPC schutte_j_Page_069thm.jpg
7c0a53999290ac0638e88bc9ac756129
bc59278530e972b0efab5538d2b6a9d86fb17151
6130 F20110330_AABXOO schutte_j_Page_055thm.jpg
7be2e4c5b678c8ebaea3fe036ee528ea
0ea033a88e6ac153e93c4b0a1c7794637c26c37d
1166 F20110330_AABWKY schutte_j_Page_018.txt
50a84a7e733d88cf0f1b6d09be7b56e6
bfa1399cd9fb7927b7249a7833dc06edfe5614ed
1985 F20110330_AABWMA schutte_j_Page_046.txt
81efb994900a5782b9bd34567b597fba
bd4170c5e4e9a2b3c4f19676a3165015954bfefa
1831 F20110330_AABWLM schutte_j_Page_032.txt
a25c25f512bc20fbda632f14d862d4a3
e1ceeb2542d65790dc8ff4225ed11782dbed2986
4803 F20110330_AABXPD schutte_j_Page_070thm.jpg
a8934e203e3adfe1858b6094cd4b7bea
816e3c12c7ec96faa680250da91546e62bcc4c38
6365 F20110330_AABXOP schutte_j_Page_056thm.jpg
56778e1c7a4724da2acd275583488326
7d445fcc33fb478627b5c3a4d9aecd0898b09778
1712 F20110330_AABWKZ schutte_j_Page_019.txt
3467e41cffe98ba9c4f9e1085f37899e
ad6a8d0a2a2b3b43e542acdd3abf005ee16df061
711 F20110330_AABWMB schutte_j_Page_047.txt
12398742252aae1d49986f384395c073
1095f180a80794595546f957c4ba58809acd7c42
1989 F20110330_AABWLN schutte_j_Page_033.txt
1cf612a12aad3bada7382b021557b24e
94d3c597bd8a785ef05c7a0b3d8b71f05504cc63
3231 F20110330_AABXPE schutte_j_Page_071thm.jpg
292314a9ca2d394a39cd6f24124abf42
22eb401e539a9be4527c59816e0e55e490f195a8
5643 F20110330_AABXOQ schutte_j_Page_057thm.jpg
119198d7f479914580f8246e0dcf3048
73ffd2942be21263af490b4d421405c0dd1b849b
1924 F20110330_AABWMC schutte_j_Page_048.txt
22c2574f3d2fb38cfd4fb9c436a26fb1
03d26b89ac60299204a59cd0d0982a8985eb13c9
1982 F20110330_AABWLO schutte_j_Page_034.txt
c1e8b73c1103d739831ef4c8f8219cb5
2c039c43d82e2101c202a788aa97cf4621774abb
6077 F20110330_AABXPF schutte_j_Page_072thm.jpg
2598eb3e6bd076a433564dd18ca95d51
d4b78a4c5c9e790c3ca57f2edeae06cf7613b8ac
6357 F20110330_AABXOR schutte_j_Page_058thm.jpg
1b643b0cb9883f3c92997295b88268cd
4a2dde10b147dde3e18056184e6fb5684e7e065c
2147 F20110330_AABWMD schutte_j_Page_049.txt
6999651185215bf3fd4ad4cf5601d8da
94746c5459ae2dbfc814087e08da5564ab5638c2
1802 F20110330_AABWLP schutte_j_Page_035.txt
230c49e14ad1b9da36fa80950a3c5f73
04430c10cbd5d97d122f440c1cf92afca10bdcdd
6107 F20110330_AABXPG schutte_j_Page_073thm.jpg
7973be44a0292b8b6895b1e898811a73
9131a56aef77868666102e5bc79a1dae9cdbbee3
6184 F20110330_AABXOS schutte_j_Page_059thm.jpg
2c377e30cb7ab001b02f184db56f5d6e
621131f99af6c6a272ade67cd2bbf4a53ccf1545
1988 F20110330_AABWME schutte_j_Page_050.txt
0f44cc0eff2d174520efd46fdc58294f
71a4b69f5bbdf4056fdb89d6836ae477f6bc7900
1905 F20110330_AABWLQ schutte_j_Page_036.txt
267937e154c2ff2a68c60a92e965f713
cc89697c97f3fe52a614d32b313d67dec7c478f0
6162 F20110330_AABXPH schutte_j_Page_074thm.jpg
ff917e9fa63ab1033b0060508690ddc8
7274a8b8df25cf46ca40067afbd0ae8a25aebe48
3804 F20110330_AABXOT schutte_j_Page_060thm.jpg
e7ad2c6eb6eb05b560dcd760a9c08f11
5e186f53674b4f58b75c80dc17d7d493435ffbd7
2311 F20110330_AABWMF schutte_j_Page_051.txt
fc293187a84ff21339657a7b184f021f
533e4d4c59cd067d420d0ab77829d6da77e796af
1343 F20110330_AABWLR schutte_j_Page_037.txt
e88c9c716b6177422109a63edcace87d
e2482e94b4b62bcb3d8841b5d3cab51e5fefd34b
5907 F20110330_AABXPI schutte_j_Page_076thm.jpg
7f8358e8979e3577611d272b64fbe9a9
91ce6c409526be9f86fc62de76cf460b9bad0362
5126 F20110330_AABXOU schutte_j_Page_061thm.jpg
bed9ec117e848219157735c140a573c4
3d3d032412591948d646f1acd594d14a59676e71
1583 F20110330_AABWMG schutte_j_Page_052.txt
fc4d39987c739628e9b5ec19d0823616
0730dc572093d7371fee29f34cde97edf95e2c3f
1889 F20110330_AABWLS schutte_j_Page_038.txt
e4d1a54294b3fc86d4a50e0aa0b62ec1
2d157a2dbf65eacf485e8da23373a6f1fd88781b
6366 F20110330_AABXPJ schutte_j_Page_077thm.jpg
59dbec0d3feb88db4dc512c5a66b7811
b48ad0c716b999aee59082b069463197d27d5e24
F20110330_AABXOV schutte_j_Page_062thm.jpg
11a649d7692b304673215da1e9912505
1c5b0e376de714140eb93eae4be95743f3dec782
1883 F20110330_AABWMH schutte_j_Page_053.txt
bb4b9e8bfc390ea6ca776d04864664c9
1ed6b4e8483b8d7bc69d1cc838811cb1d8472644
1640 F20110330_AABWLT schutte_j_Page_039.txt
4eff5e7d80aa3586f58b732e16818ddc
b1bff83f443e568482575f98c2dfa8cb5f577e6b
4138 F20110330_AABXPK schutte_j_Page_078thm.jpg
9e32ae94145397eacc9293457b2f89ab
d22507870888df2b66ab16d4be91d152c758c593
6349 F20110330_AABXOW schutte_j_Page_063thm.jpg
3b396bac95f2630702348583334d8def
8feed3f5cecf819ba5cee455f3b49821a02ea346
2228 F20110330_AABWMI schutte_j_Page_054.txt
fb3f806937459aa6efcf155380efc79d
378f26b570c7f339d77f1c53e26029b226946f53
1911 F20110330_AABWLU schutte_j_Page_040.txt
b418e3442dc994922a76fc14444b5fde
d32e78728df69388c8edfe8c67d8b16216d716a4
F20110330_AABXPL schutte_j_Page_079thm.jpg
8078f60ca97276593dc4d155c905a56e
e5e07efd1c7db52b2fb6f995407dd11f48fcb1ab
5917 F20110330_AABXOX schutte_j_Page_064thm.jpg
6f55c9c355c847d0ecf48d4c81d06086
7c7d306f6d817a22bacaf21f4ab91b1d6f288bb7
2052 F20110330_AABWMJ schutte_j_Page_055.txt
c94d1dfc36b87dc9a1b4bc8803d3afbb
4f731e1ba0e9bb95002e5d0a7daa6b3935a65808
2053 F20110330_AABWLV schutte_j_Page_041.txt
8b36894d4afec30b1ea1e46d3d0fd374
4a080676b882fee3694a78da1579060ab5a30b86
5732 F20110330_AABXQA schutte_j_Page_094thm.jpg
b43ca279c3affb547a7e29b6a0a808a7
bffc53baf596237177bf2c8611f35ddedfe7bec7
5724 F20110330_AABXPM schutte_j_Page_080thm.jpg
f6e2a5332a45c9bf4cf8503cc129f4d9
65fb188bec91f797e7cce51f5feb800a0f4e1efb
6348 F20110330_AABXOY schutte_j_Page_065thm.jpg
ebdfc8094dbbe67eb3b87d79554cfbe7
cd12c4b20e85d616c75807789e4cd9e987712644
1974 F20110330_AABWMK schutte_j_Page_056.txt
c57e029b2471b394daa4b5b4e5443ec1
c72109dff75e0ad304f52d3df6c99d01a6fe2f40
1888 F20110330_AABWLW schutte_j_Page_042.txt
561b5cb9158c4046873ef8be00b56fcd
d3d180d5fafef780a37ab6d8199b49da1b117ed6
5511 F20110330_AABXQB schutte_j_Page_095thm.jpg
62f97f53757fe661cca015e2c305c4ab
a31614bdc8759c4c82da395ea4a645db270b3a9a
5514 F20110330_AABXPN schutte_j_Page_081thm.jpg
dd3d653f19fb2217b5b47a6cf58f5fef
0aca17d4b369a07e395e085f5f16a18741509da5
4648 F20110330_AABXOZ schutte_j_Page_066thm.jpg
b3c1af809421ef3a280592784e7f30ec
a6c0ff43ec7e6515847d8a7a442189191fb0b1dd
F20110330_AABWML schutte_j_Page_057.txt
a0f526638aa5b0e58b222cf4a5891bee
a1eb98d6c0e2230e8378c5a37bd94ea306e1574a
F20110330_AABWLX schutte_j_Page_043.txt
3a85f256f6ae8882f82de81e8ca3b3a5
8903b3c2656be2576de5bc346f5c61af07e567ac
6233 F20110330_AABXQC schutte_j_Page_096thm.jpg
bc04612325535900c8faec0d7217ccd8
ddd26fc3f1b7ca1f3aca9e13f255ae182339eafe
3526 F20110330_AABXPO schutte_j_Page_082thm.jpg
8127b73cc315b8ac879ace5a8fddce1b
380f88a9f8a0a81cc5d0233503e57a8cf043ab68
2017 F20110330_AABWNA schutte_j_Page_072.txt
3fe6e264dcb270f61259d3f692c3ceba
58ef70a8b341bb199213484394f1f2f9a1c19a9c
1980 F20110330_AABWMM schutte_j_Page_058.txt
56bec61c3e903986eab766df9907e722
279e3e90cb36cb50dd821ef53cdd2b0ae4a5b0cb
2010 F20110330_AABWLY schutte_j_Page_044.txt
82f32edfbb9a9fcd687c4e3886ff81f2
f50bccaa2d7e9b9e34022a4de57c357223a3d4ee
4622 F20110330_AABXQD schutte_j_Page_097thm.jpg
9261139d4ded248463317813feb3fa09
fe0766ac22ef53588d4249aa6cba450ee12f1626
4809 F20110330_AABXPP schutte_j_Page_083thm.jpg
b09fb66044031e59805df61661fc2d6d
5a5be3a85a63e31242c174700e81ca97e3a276fe
1948 F20110330_AABWNB schutte_j_Page_073.txt
0adc08b66c4e2adddff6ed6f04169a02
4140a84ce1609fbe91775c6c5b01bced19fefd0e
1954 F20110330_AABWMN schutte_j_Page_059.txt
a502af4cf95a9f1d31f08452b282bd66
496d50cea09ac49892490e7c9b588c5ebcfabe60
1671 F20110330_AABWLZ schutte_j_Page_045.txt
27d7fa399807d70a54f6e09020c2a989
08b63ed608ac2a888e2d48d5049881980b335ef0
5023 F20110330_AABXQE schutte_j_Page_098thm.jpg
2836934706b8eb2223a13ac5b7d71468
fbca828e34a2fb51faf88fe06f38f54c614ee985
6333 F20110330_AABXPQ schutte_j_Page_084thm.jpg
882e80b81087cb6346958d77d72a9d1e
36c6a041847bd54ed6e4a5483697c92d30839a5d
2049 F20110330_AABWNC schutte_j_Page_074.txt
a1a921f8eb51646fce58a00af2159c74
a08eaf0d207e8404d457e5e0b6fa30db6f4e8ad6
1156 F20110330_AABWMO schutte_j_Page_060.txt
ce658804272bf66b3856c2a4944615a7
13080034ddc3bbd58e4547f77d973ab3eb750977
5158 F20110330_AABXQF schutte_j_Page_099thm.jpg
28e12c00a54b9590ebb55cd95381653f
5323b8c77f102184effcf06cdab4d44f51330c1f
6364 F20110330_AABXPR schutte_j_Page_085thm.jpg
93708e583a2e219e2c7478bfe30e18e7
9f2225c26dd71f8bca9cc53682844ef4ab529be8
775 F20110330_AABWND schutte_j_Page_075.txt
5eff97b14f9c563cebf6406fc6607b02
9b575976f7699e9b96b379e1383f8b0eff3de157
1686 F20110330_AABWMP schutte_j_Page_061.txt
751bfef8769a6456d8318c0496d3b10a
64c68ab4fe05d0be63a93a989e34d51fd52a4126
F20110330_AABXQG schutte_j_Page_100thm.jpg
4d286428778f27cc83720a21d0154c0f
3eef6f5d757920f6cae32a470796930335898492
6292 F20110330_AABXPS schutte_j_Page_086thm.jpg
c86c8f99d252c2ae9033b48cb2b959fe
0dca0c889978110b7b2e7a96f313352f1043c645
1990 F20110330_AABWNE schutte_j_Page_076.txt
14aa398709c75db96b221d9303092397
767bad8c79c0610f4b9372642c40b88c34a25b01
2028 F20110330_AABWMQ schutte_j_Page_062.txt
f8c42853e37885afe828f4c67bafce38
bc5d2a5f55bea09d323c582ebc4a752f14454f50
5226 F20110330_AABXQH schutte_j_Page_101thm.jpg
319c402cdf482ab3a657a2f80a935e6b
30f4239ce004c4dfc3926a3705df71606b5291c6
F20110330_AABXPT schutte_j_Page_087thm.jpg
fcd71ac8cb695520b6597419b164aa4d
1c9b65737849c923c1575c5093592cc2fe032093
1053 F20110330_AABWNF schutte_j_Page_078.txt
d1892c8b4ca7473f01ca7e56ddffb379
224766d8813b4fa38a38eecfeeb59a6fb4fef013
1991 F20110330_AABWMR schutte_j_Page_063.txt
762af45fb99f7ac0e860a848c1c152d6
e08a4e00ca8468a9c0c53de095755fe90b032c06
5229 F20110330_AABXQI schutte_j_Page_102thm.jpg
e3976e5c13795e7058c301a37d3c47e5
c631217ee07e6417f79023ed8d9194ec8b51d842
5181 F20110330_AABXPU schutte_j_Page_088thm.jpg
dfed0e1a6b406e815e5c19419664b8fc
9613a978f0f7dffa34a1621a49b9af7d59ada3de
2105 F20110330_AABWNG schutte_j_Page_079.txt
a3f840cf0df91d8a20b4942690ed2028
9d48311c81765d7213a3934ac1ef79bb7984f7b7
1882 F20110330_AABWMS schutte_j_Page_064.txt
652e2d546d5cb7b80557e4c3397f4dcf
7df3a0a1fc8c59e475b80bed15ed5487d8ffbf11
6308 F20110330_AABXQJ schutte_j_Page_103thm.jpg
3048af41f0ff64c2a2ef56c3bde9fbfb
948d45ffd2b8bf374a81a096a020cd9dd1c2ea44
5414 F20110330_AABXPV schutte_j_Page_089thm.jpg
503d74cb3102059ab423f85238fa650a
128886980bb85e24b4582b18dcffeef78700cde0
2443 F20110330_AABWNH schutte_j_Page_080.txt
5be41640e0ee2e616cb4d1b50fe2b637
128dd851cf8c2199a1f60a3287b2341096343d70
2141 F20110330_AABWMT schutte_j_Page_065.txt
1c86ea068860e46e968f60c6d98dd107
79e61ef642078644d7c062adb234ab8f74f65eba
5873 F20110330_AABXQK schutte_j_Page_104thm.jpg
897c2fd59eab4415c980e84cd9e2d9ed
00f560f3b4fb1b49e99d0eb6acb925540f9e3470
5626 F20110330_AABXPW schutte_j_Page_090thm.jpg
79835c54c3548236ef7c5b81bf2c8b26
f16ec403e303c072e50318ce085dfb9a2f9d5d0a
1952 F20110330_AABWNI schutte_j_Page_081.txt
04714aac52ec0ad4c22eb1f6d3508a6b
019140bcfe02fe3f497f468bc91376d2b5058c83
1466 F20110330_AABWMU schutte_j_Page_066.txt
6c5589baa89f1d8808b2d23a622cb0be
61ec7cc8f42e90d83fd0957fcb9e433d134e5102
3829 F20110330_AABXQL schutte_j_Page_105thm.jpg
d572d60a99e75eb07837f81cff1ebcd4
71f6e8e2c72896c2a50ac91715e552074fc92bdb
5317 F20110330_AABXPX schutte_j_Page_091thm.jpg
d627aeeae4e91e9c57a3a9edb2ca3a56
8ab33719817ca0dbce86ce8774767e5bf4d7a429
596 F20110330_AABWNJ schutte_j_Page_082.txt
e5aaabb255854667d52413ad65459224
943d1244316fe0fc8184d39e5fad3d2f77a47c28
1834 F20110330_AABWMV schutte_j_Page_067.txt
e8f46c417bb7f2f8a8db1e92b9b3fde7
3c6bbf8cb853b13c3d799c83c4a029185e4c1eaf
4540 F20110330_AABXRA schutte_j_Page_120thm.jpg
ac188aefecda05b6efd55e3e0bf4a196
b2949bd78a58c18d5622989568090419e000b696
F20110330_AABXQM schutte_j_Page_106thm.jpg
7414f24735c6de843a54bd89706e26f8
6ab6cd1a49f9bd1d4288347b049e669e80c6247b
5477 F20110330_AABXPY schutte_j_Page_092thm.jpg
5bf85675d82ec37d3af0f978e2caf46b
3aa55f0f7f2f550a07b9788b43e02a779c7f8b32
1455 F20110330_AABWNK schutte_j_Page_083.txt
b7feaa147f23f824a32cda885c01cece
7c2f41d5547e14e858f12ab075e201cb278bd9ba
1949 F20110330_AABWMW schutte_j_Page_068.txt
9ee8c2197d4af06c5e90cea094708a19
8c693e253c7998104a9e2034862f66306bf40783
6015 F20110330_AABXRB schutte_j_Page_121thm.jpg
d900ee7c4104388af96f15bc6fc04bbe
8f74f4db7f739e9ecd13f2a2e89aa42215bdbeb9
4669 F20110330_AABXQN schutte_j_Page_107thm.jpg
033bc60553a7bfc968b7ea0a11f88e35
c040deacf7ff5bccf67e9c09768fbc17de1e739f
5875 F20110330_AABXPZ schutte_j_Page_093thm.jpg
3f4f3c4fafd44fe28c7393b528d216d9
7eb523803e2e1455af43f78545e31753adfc82ea
2038 F20110330_AABWNL schutte_j_Page_084.txt
3ed2dbfa77e5344e8125c663c5912d8c
6bffe97b95acec61944f47073a3647809e81780b
2002 F20110330_AABWMX schutte_j_Page_069.txt
7b1baba6a216465052580961de42e2a9
710c9d5753e3eb04ec538c8db3b2873ec81cdddd
4609 F20110330_AABXRC schutte_j_Page_122thm.jpg
a0eff0703077c79189e11299b3c89460
9a8105000a2a7ff1ce7648cf995461be1c41cfcf
5389 F20110330_AABXQO schutte_j_Page_108thm.jpg
dd979364f75757e978199f9f4dfb9c68
d6be655e8adf9a1cf6e7e78f7aa73ab590745986
2008 F20110330_AABWNM schutte_j_Page_085.txt
5fa592facbaa1f85372224206fc09754
8e04e1634ab894b83aeef8cd8e7881b2476a681a
1441 F20110330_AABWMY schutte_j_Page_070.txt
67b074747f530ede254e98684dd4a907
d9183f8ed11a1c1852df1cd8cf951a52938286fa
2172 F20110330_AABWOA schutte_j_Page_099.txt
02e3d4932c3ecaa1861ec21dfd3fe0b5
6f880cd597308c2f3219aefcc61b737cde82a16b
6072 F20110330_AABXRD schutte_j_Page_123thm.jpg
32d995513c4957255c5c499a323423dd
d76efdba8fd7148eedbb58578b1d33c92abab970
5882 F20110330_AABXQP schutte_j_Page_109thm.jpg
af0f1bd894113f6bd0d1b151028c568c
90e568fb884195eeeeef5ae0b61815d196b1fe35
1965 F20110330_AABWNN schutte_j_Page_086.txt
64b942d034a2f27c17a41a4143974025
fab4f62ae5ec154201d0688a471017cc5021c873
F20110330_AABWMZ schutte_j_Page_071.txt
6f1cb3cf107ce0e0366852b68e886d22
972ab97c02abfb11f07a47f4c50311d1fabe3389
2248 F20110330_AABWOB schutte_j_Page_100.txt
44cf062dc929a3587ab52335571c1a38
05d9b6a652b93397bbb4c9d479ab3c4f4fb36120
4456 F20110330_AABXRE schutte_j_Page_124thm.jpg
b998bd39987b33f16da181d406a4400b
0c16808512fa39e90313c3602e8728343ac221da
5422 F20110330_AABXQQ schutte_j_Page_110thm.jpg
6938fc0082f3d83f2c6b466f97a00411
c3d2c61a274b6159945d8ed04ee5380f43e2f035
1771 F20110330_AABWNO schutte_j_Page_087.txt
90ac8b0384b0a430617ff336c263ee24
86bbfa218d53ba1b1c38d5ad9e91925bfd9a222a
2270 F20110330_AABWOC schutte_j_Page_101.txt
20dbdefd80bb9968f2152b070f31b551
243039731df91b3f0379a21a0db9d2e7e671a16c
3476 F20110330_AABXRF schutte_j_Page_125thm.jpg
81ef913842771282f73939e73df3c890
5e1b19bb32362c354b5b336a1f6fe614116b5037
4606 F20110330_AABXQR schutte_j_Page_111thm.jpg
17fbba8c31e67e93625135e47bbadfd7
77864d0eaff262ac5bd9704b7f284fcb3f25aed4
1695 F20110330_AABWNP schutte_j_Page_088.txt
bd717bd24546b5fa3dcba09c43532ac2
bfe5ed7b0b66f8ec056cdc99f3ed2c40fb8604b5
1652 F20110330_AABWOD schutte_j_Page_102.txt
a4740960dfe73074b2fbe4f8eace7e73
93109ad0dab8262c40a0af946e723e6087c046e3
4911 F20110330_AABXRG schutte_j_Page_126thm.jpg
8a4c9abde4d9acec928c7777b0322310
ce43de195878f0a0b915dd8f942480f3e6af2f22
5215 F20110330_AABXQS schutte_j_Page_112thm.jpg
63b90acc0f8de6b9f1bdba11e91d2f49
1379b8ef99cd27c7e0b9dc7fbe7b046821d340c1
1483 F20110330_AABWNQ schutte_j_Page_089.txt
40503558c8c05ab255e5301c3eeeb358
9deab4d3480742da68475a441f0bbdced9fead5d
2154 F20110330_AABWOE schutte_j_Page_103.txt
d455a0ccef350bf6598e12c65402e7e7
c0c134f16d2e9f5f40f75081469aa969ac5ca2fa
4465 F20110330_AABXRH schutte_j_Page_127thm.jpg
18c18e096d19d863dbe906484e549f44
53f696ce16a7d0173244935a6a23517114d0fd1a
F20110330_AABXQT schutte_j_Page_113thm.jpg
623be4cdea9f5f19a83cb237d2315f02
76cc9eea050ed623c426543fa297b1dc9b3e29f4
1824 F20110330_AABWNR schutte_j_Page_090.txt
f5ec7cb590d0f1f7a660ff68b97f197d
0c1eff870e9b795eb418fd5f1acb833d5263ea08
2488 F20110330_AABWOF schutte_j_Page_104.txt
2d036dfe289adc548a0897aca62dae6e
0dd560e8061499ef3780032e7fa4f7cba1002763
4410 F20110330_AABXRI schutte_j_Page_128thm.jpg
fa403e5aa1b0e727802082991e8d80c4
a4a22093f266bef2415bbccb89136406b2ef9690
4126 F20110330_AABXQU schutte_j_Page_114thm.jpg
9170cdd00813c7089beac16a59b2eb42
8e8d4cd676d929d0bf9e5791dab02d8970241bc8
1923 F20110330_AABWNS schutte_j_Page_091.txt
197b6bae5695f41d68179ce9f612acd6
7a84dd00ce40f1d0b9b9bf173145973974d11d38
889 F20110330_AABWOG schutte_j_Page_105.txt
9353417b3820939a2f23c97acdb45ac8
57ae34a00c061c0c30fd52b6cb1ad3b702c92bdb
5848 F20110330_AABXRJ schutte_j_Page_129thm.jpg
3bd0c8b7fcbbc6f9f3d879128d7766da
38af314fd81b2905a2a44b51ff1c5905b6de889e
3201 F20110330_AABXQV schutte_j_Page_115thm.jpg
730ad8eaea7c6f56e98f138039f30af6
64a2ff6a3a09ab85781a35ecabda2adac00ac6eb
1873 F20110330_AABWNT schutte_j_Page_092.txt
e3a3b996debda311839c249c36c242f3
86272edf3fd47dfcd6ba0d4e75d70528abd6ded4
1766 F20110330_AABWOH schutte_j_Page_106.txt
74fcc3e6b2ba6a43bb601b9ca50b93b2
133b34333d6d4faf994335506fae941b91fb9e2b
5373 F20110330_AABXRK schutte_j_Page_130thm.jpg
96e063407ea30ab360e2071aaf9eca04
af9383203d24479842d09b513aea3f20929327b8
5170 F20110330_AABXQW schutte_j_Page_116thm.jpg
46a734a92029e6fd33289ca910f9a229
f0afae1f238506149bbfd571e39f031172cccc83
2014 F20110330_AABWNU schutte_j_Page_093.txt
23e4d15bf252d27a734829e70529aca6
064cc48fc35fcbb32f8640138958828560da985a
1495 F20110330_AABWOI schutte_j_Page_107.txt
283e6eec0fcddb3724fffefed0262f37
b5b42c4f54cc936fb05d96db4ebcba5e7e1c4102
6951 F20110330_AABXSA schutte_j_Page_146thm.jpg
727d3bd139eab7304bc5948558e4db94
b505aa07299185c204ae3b0571c13a2a22c5bb93
6191 F20110330_AABXRL schutte_j_Page_131thm.jpg
df1e197a10dd5b0bd2c900a4f2f10197
8d04e35c18725634eabe96f8985236677ead7fd2
5208 F20110330_AABXQX schutte_j_Page_117thm.jpg
fdb236a50f5c7c889e8eb573a37f808b
a68c1279b04b2753d5c63d52837efc1e5cb6250f
F20110330_AABWNV schutte_j_Page_094.txt
a9e243bb62100b563253fab95d9dba70
b90f3f3500d7527a19828b4d1239f75db2a71d43
1787 F20110330_AABWOJ schutte_j_Page_108.txt
078f0d35d7ff38ba28779fb91c75b8b4
b8181d4ba208e3b2dd7569d2e4f2296c251a1bdb
2907 F20110330_AABXRM schutte_j_Page_132thm.jpg
51146fbebf61a15e3ec4d6b417cec50a
995ae53c98871c6b7c1d47f85c6a3dceff0c3be9
F20110330_AABXQY schutte_j_Page_118thm.jpg
52409563e2607fb8ae4ae66203cf9585
49a5df9a4fb4416e6078a44069f21078ee0651f2
1801 F20110330_AABWNW schutte_j_Page_095.txt
6135db4b915f71c3efc78b4d4c927e56
a2af1b72809700005019e2dbc7592ef3fe25f567
1874 F20110330_AABWOK schutte_j_Page_109.txt
e0389c32d5b2d253cdce7a3d8613992a
2d08c52b11b3cf80445a839c37086c1cf84fdc5a
6390 F20110330_AABXSB schutte_j_Page_147thm.jpg
5984f4c6017441f002ddf9b76ed5d658
c1b04e1d72d7bbc744fedd40349a33aab428f8e4
2515 F20110330_AABXRN schutte_j_Page_133thm.jpg
ff0b3faf334e80ca8176fc99e12b36fa
85dd3d0b59a914a3c646208cb15ea8c423653e4a
F20110330_AABXQZ schutte_j_Page_119thm.jpg
452cfabee395d4bcdceebfd7ed1b121b
e520dffc12329e3fd158a0181a6adba32ee7a6c3
2859 F20110330_AABWNX schutte_j_Page_096.txt
225d5c375b31e4647ddef2e9b4dff8dc
3f3b067dcfefd9aaefc7f20b691dbc691d468f5d
1932 F20110330_AABWOL schutte_j_Page_110.txt
fbfe1494aae9ae0d87741ecf7d7cd7a5
d538ca29d7d1f5a9843a6e8789eeaae539af0227
6724 F20110330_AABXSC schutte_j_Page_148thm.jpg
d8153a3977e107bbf5d6145a635ef24c
789a08b5ad0de8c386487f725ceff5ee37ed9910
2837 F20110330_AABXRO schutte_j_Page_134thm.jpg
81d2723376bc2a9edde58e1eeb996914
9aea8958e77fd5aabdde70b3561e0561cfec5882
1579 F20110330_AABWNY schutte_j_Page_097.txt
a94bbff4708e2c4d8545d936840c6849
0c5faa8203cac90f72f45d048f40f01e3393ef4b
2047 F20110330_AABWPA schutte_j_Page_126.txt
0bba63cb7029a8976e284318c13d0da3
5657457b7b06906198e1011e321a8597cbdf3e7c
1885 F20110330_AABWOM schutte_j_Page_111.txt
f0333c0cbb80bd7eafd1928222d52c7a
f644e2b3db72e34196af27f565d42d040534b6d0
6528 F20110330_AABXSD schutte_j_Page_149thm.jpg
f6ee254095d3f40776127f15ab814882
c7f7389168ceb40dc32cf008bf59369f9304f3aa
1520 F20110330_AABXRP schutte_j_Page_135thm.jpg
d831646a0a086fc92eb8c7fe2d400dcc
88c30265771fc2ca52b0b2c8bfb3afb7742a6662
1645 F20110330_AABWNZ schutte_j_Page_098.txt
b0d0b68855ab00bfedde5292678afdf3
c92ba575a307b48ae697b3f543bcecaa82889a76
1068 F20110330_AABWPB schutte_j_Page_127.txt
b05d90158beabad7a83b7e87029974a2
321590626a4587c4061949cd8ddc19f99880bcbf
2135 F20110330_AABWON schutte_j_Page_112.txt
e528342660404564844e0b869c4f7fa4
31d538ad34f13e6743645f1e72dc699a5a06a171
6643 F20110330_AABXSE schutte_j_Page_150thm.jpg
5f02b9bc49295fd73ac3d72857d0d4a2
dac8fd7a15cc883a45edf80b4ed7b62f57d11dd0
2578 F20110330_AABXRQ schutte_j_Page_136thm.jpg
e843f162c78809e38903eaa4779fb4a6
43375323ffb44d5d384af7106e5602239780e3ff
570 F20110330_AABWPC schutte_j_Page_128.txt
f2cc5572dfe08e6d89f12f6c22991cf6
ed801f92911b2807d327bff63ef7f3fa6480cd16
259 F20110330_AABWOO schutte_j_Page_113.txt
c7fd58ee414589a8b2768dc6595afd28
116e3a74c7aa9eda9c20b4a135eff94afd47f515
3430 F20110330_AABXSF schutte_j_Page_151thm.jpg
3a696cf75d1b6d541a72336790d997fd
788bc06522ad2962aa30ba309fdb8fc9ca3ef55c
4002 F20110330_AABXRR schutte_j_Page_137thm.jpg
9e349c990353218f33c043081a4c4e25
76d415076e23dff47e692774a89b27a4bde7868c
1908 F20110330_AABWPD schutte_j_Page_129.txt
42fb5e6a6c4f32789df1fedba56725eb
375b2ae172e7b54046ca98dc2e934b1485c72889
1828 F20110330_AABWOP schutte_j_Page_114.txt
ef152ec07105fbbe95c3714c7bb8d0bf
1f3060d901379435670e852be87985ed328294d3
2076 F20110330_AABXSG schutte_j_Page_152thm.jpg
b2ddf89e43993727dd9aec9fb5e39e13
c15930ea70779c7c0aa0bc622e4788ebecfe5cbf
3864 F20110330_AABXRS schutte_j_Page_138thm.jpg
c202650bd2d13311252c5bca9ca1c987
2a8ceb661802b96f998ead194cad72e15ddd1a4b
2031 F20110330_AABWPE schutte_j_Page_131.txt
3de0fd3dfd14d811feed8c5d2adad438
aac2f0d286cbeb9dd97a67b91caaef5bae5df93b
1718 F20110330_AABWOQ schutte_j_Page_115.txt
cf7c4c311809fb0114a5022323b3d52c
6507129ef78717a2392b818e994d9df35e43fb4a
1442585 F20110330_AABXSH schutte_j.pdf
899a37b8b1713957a6cb5e1501c13636
f681c88caa85f885b7168b8a730001db2fae59f2
2662 F20110330_AABXRT schutte_j_Page_139thm.jpg
dfaa052f3509f2b2ba74182a9f13bfbc
7853495c04bb8f855bdfab10e7018dc6d05877aa
841 F20110330_AABWPF schutte_j_Page_132.txt
a67aa1f0dde553213c11919ba451f26f
bba2912c025a19cb1f0d4dead967d4beda308694
2024 F20110330_AABWOR schutte_j_Page_117.txt
0f9fccca454fd348d66e81f6132ba826
a5cf074b807bc3a6523f32f65b4316420d5405f8
178787 F20110330_AABXSI UFE0012932_00001.mets FULL
3d1bdd53f840e79e0f02c84801302d74
0b33645644c3a07afede4f304ae8890f8fb9f769
6025 F20110330_AABXRU schutte_j_Page_140thm.jpg
1d98985f6e8a22f99a222b143825dce8
9c363706baccd556ddac41403ce5174bb1965689
682 F20110330_AABWPG schutte_j_Page_133.txt
a82c5e635375d7ede5b40146fb576a20
05cc70a50a50adac5420159661d8d7b0dc2d06ca
2387 F20110330_AABWOS schutte_j_Page_118.txt
f87b59ffa2e541c3e711363c772aa0fb
aa8540bd0d92563c898edf16cb26d147884be038
6978 F20110330_AABXRV schutte_j_Page_141thm.jpg
4b3d2a15299b7a9f71b03d0bc6a0829a
0aadf7e07cc9e4193e4d2b5fa71c60d0628699be
953 F20110330_AABWPH schutte_j_Page_134.txt
ed68e3611213480784eb5a22e33ada02
af0907d20960cd6cf9ea3c481e26ca3c4016e632
839 F20110330_AABWOT schutte_j_Page_119.txt
bd77ceda119df146533094fdccbce7f7
d2fcec40b6e6f688a529d93799d7ab675c2e664f
7008 F20110330_AABXRW schutte_j_Page_142thm.jpg
e40114d6eec06bb51794e1b325b5fd25
9d6fe90a8a602ce885dfdd19f34f82977b31bfe5
820 F20110330_AABWPI schutte_j_Page_135.txt
42700d34a03d3455179122effecc0297
f392aec48a2b9d69b070e144535ab404c07260df
1598 F20110330_AABWOU schutte_j_Page_120.txt
b14d49158e3d727c1593562a3f868367
da60a67e0c8761ad05dd6fb0fdcd1dfdbf103e67
6696 F20110330_AABXRX schutte_j_Page_143thm.jpg
112304afa678871bd0b18a0914809695
bb34aa7c8d79b247e2598fdacf03714658ed92a8
651 F20110330_AABWPJ schutte_j_Page_136.txt
134085ddefcc6dfc177eadf14675794f
a2cbb51b53cf425cdb19dd899543fc127a7ca1c7
2225 F20110330_AABWOV schutte_j_Page_121.txt
c62a63c063274cf186005c80a2ca407e
cb0240581895481636489b724f6e88070095689c
6417 F20110330_AABXRY schutte_j_Page_144thm.jpg
0a2f86e1b2dc8b572268cfa812f9d00d
b4ad8eef17607a8f3150204cbf90bf2b8d2b083d
932 F20110330_AABWPK schutte_j_Page_137.txt
693aa710ddab6303f069656a07b32f0c
f0635ec77689bb763f99de1c6ce0654cf11db321
1369 F20110330_AABWOW schutte_j_Page_122.txt
e71c4b92f9b2e47c6b2e3f2025db1fa1
78a45694772c022455af29b78766e873c214dcca
6704 F20110330_AABXRZ schutte_j_Page_145thm.jpg
874cc8a476863bd8d5a8e717e6990868
e76a0fce95491fc0f5964ccf67204173864c2253
1917 F20110330_AABWPL schutte_j_Page_138.txt
7e76f1c9f1832c4fab31c56a1dc1eb9d
be280fc5bbc7b0ad5ef86df03d72be54646c3b62
F20110330_AABWOX schutte_j_Page_123.txt
52b536954ab02d405ff814514a2fbfd4
79a0e56ef5b5fefd52b33ca11b2f9a2eaff94d5e
8087 F20110330_AABWQA schutte_j_Page_001.pro
f1dd4360603300d66cfe3665b4d64a26
251d2b848458d85959f5c57f1f495aa57e9b4e80
710 F20110330_AABWPM schutte_j_Page_139.txt
c9bfaadfed8f96b4a4232a3c150fb8c5
0adc13fcc35ea70c62d2e5d7a7d2658af0497900
1804 F20110330_AABWOY schutte_j_Page_124.txt
d457b0887de7d3481817fc4a8a2ee5d2
65f6d209be552a770dcc1d5866070d5df70a96c3
1332 F20110330_AABWQB schutte_j_Page_002.pro
cebbb86f3882f46f295397b81d1771c4
36d03fe1ed8533a7206916fb381999aafd3f7412
2166 F20110330_AABWPN schutte_j_Page_140.txt
6d561a1d3599fe5f0489fa763a6aa9d0
12ec15bafec941e5a9190f0ab1dbdff1b4723997
837 F20110330_AABWOZ schutte_j_Page_125.txt
86e45041129ef63b567a7ac673fe94a4
27c1012125cad4b1ae1f06e168ab5fff4132fd4f
1692 F20110330_AABWQC schutte_j_Page_003.pro
716cc61308ffad395d4d8333bf870bda
f7b5275a45ee2a9284c667463663796a7b65bd9d
2836 F20110330_AABWPO schutte_j_Page_141.txt
0de078716d51471610f1ffe536b05783
ecb8d9130df37947a6f94d314f16bd1e503adb48
43185 F20110330_AABWQD schutte_j_Page_004.pro
e47a7d74d703b433f38b726360158bae
aac4cdf96fe78a5d15233cdb4b842f4d9e888d8a
2809 F20110330_AABWPP schutte_j_Page_142.txt
40d4a3d29ea6895654682cb6adc42fad
f7b5435e92fe2a58eaf20d9ffce2bc469159cfaa
14704 F20110330_AABWQE schutte_j_Page_005.pro
11e6e90dded19ebb330b58f6d47b3594
fbeed6c56b9f6ce7a7e911fef1cd4333fa0ce39f
2773 F20110330_AABWPQ schutte_j_Page_143.txt
9237d74cf8983662216e86b4439e8cee
6cc60913b8bb584ac945742fd20c1788529f4d8c
72928 F20110330_AABWQF schutte_j_Page_006.pro
6c8023a4af8d7a2d78e63343610335e8
864ec30b80193f9f0e9851ae604e378bfd72b1cc
2374 F20110330_AABWPR schutte_j_Page_144.txt
405cd496be69f1d1d4e00ec063fd00a4
b0a0936996e51f5b5b1cedc64a9d33f6681d48d0
110382 F20110330_AABWQG schutte_j_Page_007.pro
1c2d0e0a5a7dbcd95dec78c7d0450654
9c7fb3a5da3eeb38b24149ee2c534f57ba5fe700
2568 F20110330_AABWPS schutte_j_Page_145.txt
d3a80ad35f59611681dafc05118b0a65
68cb8094661bc7348e1180b91628d8b8ec9d4cfa
2601 F20110330_AABWPT schutte_j_Page_146.txt
ef1282548159f749fa3d2687ee5a8c77
87d235fed1fbca1bded6475b1bc50fa64d12caea
68018 F20110330_AABWQH schutte_j_Page_008.pro
6ad4cf8a21ef3d1e13c25f537c6f08ef
cbc8ea49627c0b290fa1d3b085639217a1c00f52
2324 F20110330_AABWPU schutte_j_Page_147.txt
322b2fb70085bfd7f6eeb2cba085e48c
f012baa064887aec543fcddf71a3c23a166bea02
60063 F20110330_AABWQI schutte_j_Page_009.pro
83b083b24510750f344f427c28fa473d
7f0198b7edf4a00f4aee8c01482ce9d5c1091748
2749 F20110330_AABWPV schutte_j_Page_148.txt
8b2174dc35354a505493c65c8b71ecba
f3dc931dc6b0bdcd4100959a14e15f4e2a47e088
63544 F20110330_AABWQJ schutte_j_Page_010.pro
d40ceeff7f2a2a8429c35a2fe2e0154a
eb58655f6f44495dd14b758be26d975b971ec8de
2360 F20110330_AABWPW schutte_j_Page_149.txt
dad3f231c2422b1a61a499ab1ed83b50
5ae31c8241398ada0a35fdfa1f3b79fc8e48ec52
75594 F20110330_AABWQK schutte_j_Page_011.pro
9b799c97f0ef90046e02954545dca62f
75bc9f7ea7cab42fd14c67680b3827ce49ec37dc
2518 F20110330_AABWPX schutte_j_Page_150.txt
21ef88bb88dff4bd1bdf5751cfe92e56
d99e6024ef53a720a705fac48a2e865271ef265b
25863 F20110330_AABWRA schutte_j_Page_027.pro
24630b90e4bbdc9c371c84dcb20168e0
e172f06f8ed5b8140e47d241fc538cb7142eccaa
41494 F20110330_AABWQL schutte_j_Page_012.pro
8d8e3bf8b6a48bdaa7c4dc296dfad1c4
46792c169c4ba6c310cf7ca7d8a26fd862ade389
1212 F20110330_AABWPY schutte_j_Page_151.txt
a0c97a6c059279bc1f8d039b8252c7e5
8b917517b0b49dff407d134f8d046f28ecb274f2
40480 F20110330_AABWQM schutte_j_Page_013.pro
335b71ef2bd26a7a7afe812cb165c603
b665835ee5e4b1efc8e5bddcbf777008d11be715
587 F20110330_AABWPZ schutte_j_Page_152.txt
60df4bc3c320a04494899fa1cbc5d0e2
890e3cdad1218a5b82b403cdf812be3431885847
44777 F20110330_AABWRB schutte_j_Page_028.pro
ba99e346e85be920839ceaf1e9787cd3
14a59d36993b1aca70e37008e510cea3149e69ce
15016 F20110330_AABWQN schutte_j_Page_014.pro
c9bb363f713fb8cd99750b5ed3869d1a
fdb80868339e86a2a8308a620ad652efe6fcb89b
49234 F20110330_AABWRC schutte_j_Page_029.pro
7c99f63780d02c40577bd9e01a50062d
fcbb1aa8f319fed0912f7ed481e697ad339702f4
37248 F20110330_AABWQO schutte_j_Page_015.pro
b8c2e20ee93b32049382a0ba054a0423
51f6ba6468352e60c1ee1701056f8c608bdbf626
25502 F20110330_AABWRD schutte_j_Page_030.pro
0b896ec40f73e6cf4f245bc76ccface8
58800049c7a3ae207ea0d7c38d89c60c5e4d2b53
50315 F20110330_AABWQP schutte_j_Page_016.pro
aa95fc18737c80b8e438de3863cda887
d6dac93442065fc69a388a0e9db38b43e42d2c91
41817 F20110330_AABWRE schutte_j_Page_031.pro
83f4ab38f97b29fe71fb4b8333d1606e
3a2534079f1ad3c82f5945e704d62eae27812eb4
48702 F20110330_AABWQQ schutte_j_Page_017.pro
52c62c95b1218d299791514792216657
7ef29a893b03561dcc79c305b8007c494436ad69
44177 F20110330_AABWRF schutte_j_Page_032.pro
01ab3e6194b9901a9f215c3d2c03e56f
3a5481d30c8907092db3548c484571c20dfd915e
28226 F20110330_AABWQR schutte_j_Page_018.pro
0adec2e877974198c762b11bcf1baa40
6f954f9d57e36d2c693bc1a0e8df615bcd211ce9
49701 F20110330_AABWRG schutte_j_Page_033.pro
e0f5859d730e9169812157e45f092f70
ad5a0af4c39403b0c0410c178020d83313000927
40678 F20110330_AABWQS schutte_j_Page_019.pro
037daca104d1b11871aef3ecc54a03e1
264bb43ef0a0bb96765ad401e666826ce34c4f09
50340 F20110330_AABWRH schutte_j_Page_034.pro
3b03ccc878d4d610b21ea452a3ddb52e
dd7cc8d235c55e8fdedeaaf52d92866105725efe
49854 F20110330_AABWQT schutte_j_Page_020.pro
31bb4776272783bd68713af873e25372
ecfda239ccfbc9dd777ddd413255c6c3fece230d
42067 F20110330_AABWRI schutte_j_Page_035.pro
d7d04337f28bd01d23fea32999bb18d9
74534600cea37d2a0b630246b4d10890978d30ca
52525 F20110330_AABWQU schutte_j_Page_021.pro
d2555725b3372c8c5ed5c6d66170654b
673029c569a2ab44ffae627b7f18f9059ef0faa0
48569 F20110330_AABWRJ schutte_j_Page_036.pro
57bebefc34921b36b0c7c9ecc8c5fd8c
1af64496a90a007f171ef858389653e811569eb1
44869 F20110330_AABWQV schutte_j_Page_022.pro
eecc320b47272df17b6132386997bcb6
33cfe303772377fa26d6f9ceb8bcba9e9522cc7f
29262 F20110330_AABWRK schutte_j_Page_037.pro
655b6f9dba36aa4b266e5132a30ec3b4
d2d60f62377af815c47d46733d19f6166b592a63
47225 F20110330_AABWQW schutte_j_Page_023.pro
87d77a8e0679d337dcbc3f5fcb1761d1
c7d63151b7fe2a7a1cb8fab8d3117982f56ab2c5
40709 F20110330_AABWSA schutte_j_Page_054.pro
f4feecee849539580887dd6da4c8c7da
dfa4247c5497759f58f1e3fa559ebaf2208f9113
40146 F20110330_AABWRL schutte_j_Page_038.pro
875de8257104d4b52cdd8a88f318d5d2
3683c8e5d689aadc71743dff539c877b31cb4778
19467 F20110330_AABWQX schutte_j_Page_024.pro
149a548a1741754d9e82ad6de8e2a1d8
96b6fa152e47fe18014268df76f95bed64045466
51180 F20110330_AABWSB schutte_j_Page_055.pro
42799c3dfbf07dd532bb9f7e2b24c5d6
140c9237e717866c0c2406f688cdd41a2aa05f51
33983 F20110330_AABWRM schutte_j_Page_039.pro
7f3e821e0630841e348aa3b3d3d71722
e951bdec8203bdf58fb4089cf3e1dd36234aa660
46631 F20110330_AABWQY schutte_j_Page_025.pro
67114eec9c05532c52c1db4c4385de3a
d84a752750f7dbae67d212234f54ce8aabba2caf
32235 F20110330_AABWRN schutte_j_Page_040.pro
91addddc2336335af42f53df51deb21b
8d3cc68f63c834e6b346a98032931e95487b1734
45198 F20110330_AABWQZ schutte_j_Page_026.pro
dcf5c8ea2bc9330dce4ee3d97d3f0c81
254fdd762d8caecf144d60fc7e8ad849ea933130
50237 F20110330_AABWSC schutte_j_Page_056.pro
b90dce8af2425d859479e21068998e78
8ccdaba23ecd1bcf255703b550da94687419b908
46455 F20110330_AABWRO schutte_j_Page_041.pro
c61d3caebc1d3f1f551407a710341734
f705d62869b5c594b405511e427d29fe881e9ece
43151 F20110330_AABWSD schutte_j_Page_057.pro
0745b1e40c5af052fb5598059ccc693a
7d5680d3f90d98715dbf745c3bdd80661033205a
46808 F20110330_AABWRP schutte_j_Page_042.pro
33f1b2ac27922ba0b452e7cbbd27f07c
d6ab7047593c7ef7280a22b0c0d51f3f978ef0d2
50278 F20110330_AABWSE schutte_j_Page_058.pro
c0ea83364478340fbdfac0ff1582a910
5001a5ee64d0cd4f586d3a40eb6b3dddfc6a5f74
51217 F20110330_AABWRQ schutte_j_Page_044.pro
2ca9bce376bb401a094cdf1dab2521fe
19984c5d95ef64c98b8951ce1cefd139c3236f7b
27773 F20110330_AABWSF schutte_j_Page_060.pro
07d41da0d7f2c04ad4ed59f91f7d0a19
3c9cf545148b5681adaead9a40cbd7d79bf82c5d
32972 F20110330_AABWRR schutte_j_Page_045.pro
308d03f4b9230799d4371c9f9c106251
255ca42b4775a017f1ab7fdd3360d2ee6805457f
39620 F20110330_AABWSG schutte_j_Page_061.pro
bd33e02f0b33287f219b64d4266960f9
dff783c3b2ef9dd3d7f7ce16d8b96cf47f4d2104
39365 F20110330_AABWRS schutte_j_Page_046.pro
d440b0960feba175ff18aab1e1f4e1fa
6aed519013ba72a713cd3613f85ee4eda83c16ca
50714 F20110330_AABWSH schutte_j_Page_062.pro
edf08ab5c730fcd7e16f823b301fbf8a
f98785ee4ec7e710e1694b47567d71240d4103d0
13361 F20110330_AABWRT schutte_j_Page_047.pro
9d57f5d9f43e1f097eba5c9705aa0092
85f6556f42926d10c1d59bfdd6879d70c768a1d7
50766 F20110330_AABWSI schutte_j_Page_063.pro
34fac255c94780227b595955a249d861
f49da6e1dfa295034f307fe6b63b66a189753e12
45799 F20110330_AABWRU schutte_j_Page_048.pro
c487cb79efff3adc8e56b78940c0272e
8d89d5f27cb402ff7a3d35e490d01c24b28d6b51
47124 F20110330_AABWSJ schutte_j_Page_064.pro
182fcdb0372f8dadd765951936801f4e
4239edc8994226f03f9e6c4e2c69414e5a8fc434
52132 F20110330_AABWRV schutte_j_Page_049.pro
31c80e9ce93cc777a601baab52b788e4
13d23822a8726db515436b271958d407ce1b426a
52392 F20110330_AABWSK schutte_j_Page_065.pro
7693252900f151d2a620404c35874786
066b7aca4f3bc2390142efe2ea22636c7b5455b2
48211 F20110330_AABWRW schutte_j_Page_050.pro
657a14dd43f5a88e07bbae1a9f5e7e51
44c3d58e1ad6c66aa0e17783eb485328d76c1654
33260 F20110330_AABWSL schutte_j_Page_066.pro
e8835bfe58652d20f0a14bda6b855b58
5976dc6d52315f7c5ec1e31856deea2b88f46cda
50907 F20110330_AABWRX schutte_j_Page_051.pro
74fb9cfdafd9e7af4229f3f96c193d1f
5d12e833c923df6558d64b21ad9a4916ac0f4183
10958 F20110330_AABWTA schutte_j_Page_082.pro
c52bdb8701a2af50e109fbbcfdf31abf
ffa58ca8fcdaac385aa14027ae7c3dd0c67e362b
45531 F20110330_AABWSM schutte_j_Page_067.pro
028601b2d1a22f3843744f29f7d8f626
a39e2aa44eb682104a8aecbd929023166a738e78
32926 F20110330_AABWRY schutte_j_Page_052.pro
5080f8b780ce75974e7044c9c4c749fc
7c43b545f43f4f5d794f583453e77a9035171fb6
29363 F20110330_AABWTB schutte_j_Page_083.pro
9963b6ecbd9a191806fdb9598bfa25cc
6ab90048ece1db4c98c30f94815cb4625ed770a6
25595 F20110330_AABWSN schutte_j_Page_068.pro
b502d1bba14302920c0b79df66b7ad97
34808e00b863e643234f0974fab28fcd1ee761e1
39275 F20110330_AABWRZ schutte_j_Page_053.pro
66ce6307161666674dc1c1bb36e7ee4a
b8767b332a30fb4837a6c37bbb4ae83b2b86f196
51838 F20110330_AABWTC schutte_j_Page_084.pro
e1df8a3f95123d99afc02030f58de1ad
4b99824aa8067744ce3ed70ba12b74e97a75df21
50940 F20110330_AABWSO schutte_j_Page_069.pro
511e5e0e9dc6a531c8e76570bdbd4045
b32274eed705a4d1ef37d3b21c462334b4f01f8a
34726 F20110330_AABWSP schutte_j_Page_070.pro
5b981d695725a0388589252c37a33fc1
e420f14d74a90c271f3244d95d327a61bc5c961e
51161 F20110330_AABWTD schutte_j_Page_085.pro
3a461b22be70e9d74f5d2f7e18288174
6fb9ccd69bc8b6236cb56ffac665720600b4dd9f
20541 F20110330_AABWSQ schutte_j_Page_071.pro
28fe47f129c461b93b7cbacce6f74b43
f3dda41e2db37b0125a77fe15cfff691e3327939
50068 F20110330_AABWTE schutte_j_Page_086.pro
019f36b8cc75cbf31291581ea1b19145
7f5c63c379ae1b65ea8b4861cd5838b179a43d1b
50953 F20110330_AABWSR schutte_j_Page_072.pro
261894c970e68dcf6c746ffc4bb5b664
4c490641e51a13d3d4b36bdb3363823298c38452
43719 F20110330_AABWTF schutte_j_Page_087.pro
182945d2527deea06f53c7d155580d72
4e86181d2ee11648fd1ee7b192455919a8c60f2d
48909 F20110330_AABWSS schutte_j_Page_073.pro
f35b08a380b43fefc58b4c6acecb975b
0fb8579329148221cf1be7e2c1f4709f045fe279
38807 F20110330_AABWTG schutte_j_Page_088.pro
7364daefb0a31eee616845e34ae18b07
50c2186c1f62337c229e9d4be5e9f239f9601563
52190 F20110330_AABWST schutte_j_Page_074.pro
68ac7224b62b92f5bc5941b9c9b0e370
29b86017f6d140f1ca2b8b956230d226ac1780fb
33238 F20110330_AABWTH schutte_j_Page_089.pro
813897dc23f62523969ee9fe2648e3f5
3fb6c519617d84552c0cb9ff68843a45a6f73208
19102 F20110330_AABWSU schutte_j_Page_075.pro
60fdc18b60e04f4f4a1f9690af923776
6dbd4f701546ffd2337355e994b8da71d42c61ac
45945 F20110330_AABWTI schutte_j_Page_090.pro
49c79c6857864789bb63eda1a371ca20
d39affc1abc031eb6a0d801c53a00035ce66589c
47196 F20110330_AABWSV schutte_j_Page_076.pro
86eab9b12872975d29454fa5ee77526c
ac1864388f1fe77c7f435cdfda34c077238414d3
41888 F20110330_AABWTJ schutte_j_Page_091.pro
4683643400b2e1c21594ca280f13ed4b
3827cd1f2f194c678fcf864a72e23f7bfce94937
62200 F20110330_AABWSW schutte_j_Page_077.pro
dac355c9b614232de5c0132ead3ce87d
c73c3227991f92e3ec7ad38719c6abf52ce6d330
42290 F20110330_AABWTK schutte_j_Page_092.pro
54ee690f9d3856c2d1e68ed7a8d06ed4
9f27e7e8dfe78f79bf44f40b3a43277f599a86bd
23048 F20110330_AABWSX schutte_j_Page_078.pro
71c33c39ed65d2b544ebd88dd5d70607
45ac290e2477f8cb8fb79c3700f9c6becd9f4147
41913 F20110330_AABWUA schutte_j_Page_108.pro
df656563804ed9a9b2ec9ac99ed7b696
20db896703f968a00b3ae7f79cdf4ceae2fc605d
48732 F20110330_AABWTL schutte_j_Page_093.pro
c9117ac59213612392c466b350fd29c9
dd88b6d8b340d0529ca57c06c5a4836f40a782cc
52778 F20110330_AABWSY schutte_j_Page_079.pro
2dd23e9a1b369efc66cd4de121da59c8
d363015a6cb1ca5b25e80833cb828e6553b4df94
47194 F20110330_AABWUB schutte_j_Page_109.pro
ee5f7afb1939b5d95054bad574f0ddf6
bc40c0f68abbd01d363ab32173789b53fa029cfe
45028 F20110330_AABWTM schutte_j_Page_094.pro
12fb8bb6013666a001300fca593276cb
84a255366b82d4236445a56f682791d282e77535
53079 F20110330_AABWSZ schutte_j_Page_080.pro
882391c7314a03adb2c331ff2e42c4f2
27957f66cfe561c4e1519950498f2b4eaa47ba66
39597 F20110330_AABWUC schutte_j_Page_110.pro
9144f280f3d0592228e9c555a258d24b
2878642ae673f9832180552cd105c34eaf842437
37205 F20110330_AABWTN schutte_j_Page_095.pro
094f48efba550aee7e81a2fe9f350b93
bfeab175e084f2f7371a7d35424855cdca168414
32270 F20110330_AABWUD schutte_j_Page_111.pro
bd77a2665037f76d7e98a9c108cb2545
d9832a545e836b9dd492f758843934f5df0166ab
58977 F20110330_AABWTO schutte_j_Page_096.pro
8b98367c4423f423ba795f81032b23aa
d281d0b78cbdc9abdec5d05d419ec3dcac87b9fd
32969 F20110330_AABWTP schutte_j_Page_097.pro
3196ba3ff1b35f0e24a91a1662f6e934
06d2082e1397ae6a873ef97e46c5fb26a9dd7873
41879 F20110330_AABWUE schutte_j_Page_112.pro
456f0692c5dcece558fb6ca04ab8b504
1500264cb1c801d73e55dddf230e6121cf359794
38398 F20110330_AABWTQ schutte_j_Page_098.pro
a0a70dddaaace66cf3cc93cf2b2e2d7e
24bdd19e2cff69f59428137b9fe19f5542dda71c
4130 F20110330_AABWUF schutte_j_Page_113.pro
5ff52ca39093c82c8413482b21352ee6
40beb3734981c59a82219a11fff96775441eb658
42837 F20110330_AABWTR schutte_j_Page_099.pro
2389c0133fa0c23e79c085a23c1983cf
33b760d68eac16affcc5a26c9bdc3b135934d23b
14985 F20110330_AABXAA schutte_j_Page_060.QC.jpg
bb52f8762e318f5d3de5c8702766ea40
751c25504a1dc5d72b3e6fca145ba34170eaa4e9
27158 F20110330_AABWUG schutte_j_Page_114.pro
f49827a325d1f1779650471b58bf7ea5
a2ec1306994e2dcdcdb7a38956a0ffffcbac3fb4
45435 F20110330_AABWTS schutte_j_Page_100.pro
7d6489b8112871d6761feb88f7ff03c1
5daa383dcdd100ade412df71d5f79305dc341a4f
68011 F20110330_AABXAB schutte_j_Page_061.jpg
3dffaf7916bbd1e9b66e2971b2e42631
97ff3f6d96aacccb7dffeba0ee36e0edabd33908
21682 F20110330_AABWUH schutte_j_Page_115.pro
b625fc145aafcd61188d454d216156b5
a4d1668a4fcbf0772ed4ee0f61593a353f675eef
46363 F20110330_AABWTT schutte_j_Page_101.pro
5e1e0102f7a2de435df284b90cf69177
71cf92661aba13bdb31219c665dd399cd4748984
21149 F20110330_AABXAC schutte_j_Page_061.QC.jpg
53cdf665ff717915d69a92e5d8a18362
1881c48f5ddf600c71d7187c55cb253a3912feed
42818 F20110330_AABWUI schutte_j_Page_116.pro
ed2f5a4165a1a27605324c9a38ba0677
cbdcc06112e159ac1556c38629dfb27cd5219d79
34415 F20110330_AABWTU schutte_j_Page_102.pro
cc0c1055d86411fcbe09e2c7026f842b
8ada6ff8739de7768833d31a4aeb439789e54d7b
84540 F20110330_AABXAD schutte_j_Page_062.jpg
af32702c72b5af2e2697f1659923d33f
f06640e08e482b947245e33af87b262016bb1555
39133 F20110330_AABWUJ schutte_j_Page_117.pro
442cb77cc06e889a7b8510147d6b7b93
e9085bb7415b34dc3f24ebb465dff7079eea8e6d
52153 F20110330_AABWTV schutte_j_Page_103.pro
948058c37a296f9728f139f7b430c8e1
ddbeec13b1d7a7828bbdc4712075a7d40a16e6c1
26520 F20110330_AABXAE schutte_j_Page_062.QC.jpg
120955ba8ec40ca996b9f32715b29a3d
ae4b2458f5acb703856906cdcc19d0338dda519f
50623 F20110330_AABWUK schutte_j_Page_118.pro
55755c2e99dd11e6eb86ed48041e899a
73693f3695e2b403472c81d71e4393e11422ed78
46927 F20110330_AABWTW schutte_j_Page_104.pro
5441fd702236379a6a151caa52885da4
950cda8ebe89103a8d125190103de73ebb8afa77
84736 F20110330_AABXAF schutte_j_Page_063.jpg
afcb7b06f19a43b88583568c1d255bd3
3c9e54514192581fca60f55c96138d0d02296dda
20556 F20110330_AABWVA schutte_j_Page_137.pro
bab6c30a87d450ef61f3518df12dccb4
25e3bb4ca4695480ad4f3d9ebadbb7aaa963f0af
31651 F20110330_AABWUL schutte_j_Page_120.pro
c13f045971bfed45442be2a5b074f4d4
de7b0b44d69f502893be64082584a98309100465
18779 F20110330_AABWTX schutte_j_Page_105.pro
d97cc2c155002ccdee7052b488844375
6735cb8e3f4cce15c41cf1f24dc17d08e986c4b9
26291 F20110330_AABXAG schutte_j_Page_063.QC.jpg
bce2d923f2a852fe487a5440d8467641
d4c313e6e3a89817d5e211615a9e25033b087c8d
33884 F20110330_AABWVB schutte_j_Page_138.pro
c90f0d62a63859754c7ef4a156f86beb
5be5fa2cbfe18820e73f46bcdd769028e4ee0ac0
48597 F20110330_AABWUM schutte_j_Page_121.pro
8c4a958acc1320f65d6e8ca1bc039066
b8614058aba80766f3a44f0c25ecc3fa5fe7a91b
37262 F20110330_AABWTY schutte_j_Page_106.pro
420437f15e01a4d1cd500cbc512433ae
f26aa24448cdb675b560590e5f0731ca240f5bab
24357 F20110330_AABXAH schutte_j_Page_064.QC.jpg
8b25a7fb6723a368eba7cc8d0c113002
516ac9a699ca74e7037dc5608b844e4b1d6ea08b
11809 F20110330_AABWVC schutte_j_Page_139.pro
ecfd5ae0c5cbf4d8c02f38fb36e15f03
ae22a14c1ee900be908d232d322eac8ce9330dc4
26529 F20110330_AABWUN schutte_j_Page_122.pro
8f3d54508b488e01a10cf1375394d01f
6b98b9ece36d12493e3aaebcb46cf6be1b7b7a87
37412 F20110330_AABWTZ schutte_j_Page_107.pro
1747d8c637cd75952deaeca950540aeb
b8f7fcd925dbc65f06b9eabaf1f5b191c39ed97f
84828 F20110330_AABXAI schutte_j_Page_065.jpg
ac583e7a0d84ec80645e76f94c9e9463
4366cfededac3c6225226d0fc42dac5563ec6171
52899 F20110330_AABWVD schutte_j_Page_140.pro
481d8fed7a54248bdf24a4109d391238
b816151e497895d35353183c41cce7d41b614a17
51630 F20110330_AABWUO schutte_j_Page_123.pro
d6ce06ce3d3d6fb2df4ff94bb08b7ef4
f9a8ceefb735af0254fdf033af33c6eee8f60430
25816 F20110330_AABXAJ schutte_j_Page_065.QC.jpg
c31116aecd775262088201f17da73448
a74c0baaddfda9ea69e1ba3dae30d8d7b7c27b32
70045 F20110330_AABWVE schutte_j_Page_141.pro
62a11b88ec76731ce4f09adee65d3597
6415ba15c19bf4f2c7736994cc4c71ebadfcea19
32086 F20110330_AABWUP schutte_j_Page_124.pro
d354fd8d7f92e297f96b817405cf1b4a
a77cf0bdf61eca6e6de5051322ff7e189a2826d2
54256 F20110330_AABXAK schutte_j_Page_066.jpg
9c704e6aca26b85bda5081237bec2e87
9ab2c6b0a3dc3a386d74491b98f712e496bf52d6
18915 F20110330_AABWUQ schutte_j_Page_125.pro
e6c5d0b2f4cd93923307f87cd880d0a1
247eab7effb895a7688395069f91e0612e1452e9
17681 F20110330_AABXAL schutte_j_Page_066.QC.jpg
2e9d91fba19818c84efa46273366c8c9
c9fba54ccc56c82cfa0f2120f97784234e087a5d
26039 F20110330_AABWUR schutte_j_Page_127.pro
c88d967ac28cadd6c0f397c99a49d664
a5b5e60687535d75c100d0789d47f3989393530b
86957 F20110330_AABXBA schutte_j_Page_074.jpg
cda9bdfa9e1a0c3c29b12f0aa7f51a64
6b2cf829a0b8ae81a5c16aa6bca5ba03c1016df3
76442 F20110330_AABXAM schutte_j_Page_067.jpg
0188f371c832b7a44d026123e4391f3a
7e05e957cf07c2a569517539cb33feddaf6af493
69574 F20110330_AABWVF schutte_j_Page_142.pro
2e553355d6b0eee11151b996aee0ed41
c8ee57535f47dd20f3fc5f30e99defaa5b08a1c7
11729 F20110330_AABWUS schutte_j_Page_128.pro
0b2b211e5b00f1fc7836c3cd5dbe4a3f
76d262d7b43312828095212eed719026eacb9395
27234 F20110330_AABXBB schutte_j_Page_074.QC.jpg
df43271540a628a7cb3eb16d91d1b146
e4b090c5fe2560615389ebb6b0541aa550509a5c
23717 F20110330_AABXAN schutte_j_Page_067.QC.jpg
37ef4af143a48fe913e9c785610cf72e
6d40e4b6318660a675034d8d87314f26d3767b96
68292 F20110330_AABWVG schutte_j_Page_143.pro
39f51c9447ba569a7b97a1c85582aaef
730e5c71cd2fef70682a2a158e5bda34db3bf6bd
41090 F20110330_AABWUT schutte_j_Page_130.pro
9521a60a28eef428c4ca086e54b40dcd
b1ce2c76e69e0ac1f776f3805bf14c3cd84a13d3
62672 F20110330_AABXBC schutte_j_Page_075.jpg
569bbd0dcd3c870f2873ab30d443cc13
3612f8128a0dd585d88df2ca9404f69465e67c79
43026 F20110330_AABXAO schutte_j_Page_068.jpg
62afe03d504ceecd46130dc541ce8703
c8aa14dd67cef6354a1ba51cb37c29e3661b62f8
58548 F20110330_AABWVH schutte_j_Page_144.pro
1a738bd9e169ce06313341768d20233a
0e9e345747cceb633e8262728a40183b3cf16784
50531 F20110330_AABWUU schutte_j_Page_131.pro
75b2a8b8f6cb0582025aa9aac9a93ce3
389d5a480432a88c9991cf02968c6e0e6f6cb031
17892 F20110330_AABXBD schutte_j_Page_075.QC.jpg
f19a628cc2894fbbf0de8ff3a01c41c5
97ad0edcabc541a653c419c756e724569f2245c5
63556 F20110330_AABWVI schutte_j_Page_145.pro
ec37beac24cefde08853efb813da6fb0
67ebc0337308282979d11ba66c99b85424518744
19939 F20110330_AABWUV schutte_j_Page_132.pro
15ad5c721dee2c68cba043e81242c6cb
432500fbd5d0c262f1e7f3eefa5c9760bd4342e3
75783 F20110330_AABXBE schutte_j_Page_076.jpg
5bfe57883255dafb8854e0b01df0e597
a9adbf56c1ab4c3a47bdb3e59fb1173ba485381e
13597 F20110330_AABXAP schutte_j_Page_068.QC.jpg
f33f90a502399b7d7189c1e641a8c769
f37d7d89e9f0de53be567e4fee0a42e10f98ad37
64527 F20110330_AABWVJ schutte_j_Page_146.pro
7349b755488b5c5d346ad4feede2bac2
19cba5c85994cd841b12778a78d1dbe04db64087
14159 F20110330_AABWUW schutte_j_Page_133.pro
9f8ec78a2c6dd50175a12aebd669aa34
88843cdfb40d07206cefab0c26894cfd36830893
23474 F20110330_AABXBF schutte_j_Page_076.QC.jpg
b4569c5304325d0261312fa8af1144c0
25f7de458ed3bf3cf68f40ed102c60ee20d89750
83262 F20110330_AABXAQ schutte_j_Page_069.jpg
a6358ab285ac4c91331897c30c19cd0f
6bea84396f07f91ae536f94c1173a2c32101a15e
57296 F20110330_AABWVK schutte_j_Page_147.pro
9f6b138ae690bb93bcfa02fa3453e89c
ddcb883ea3c2e82fcc9051d93e669c32c9a45cf7
16967 F20110330_AABWUX schutte_j_Page_134.pro
2a199a3b27cba93878bfc089a81ea83e
6ae894fb5e4d495e82d53b0ac4e24904203475a5
100698 F20110330_AABXBG schutte_j_Page_077.jpg
306ef85be820e3b82066f0b815b2f1f8
aa36d8869b2823f72893072cd0157dfef190b766
78418 F20110330_AABWWA schutte_j_Page_006.jpg
7036a299dfc09e632c9f781df2a1271c
5b802ec82c6e0ecb0049c462003ad0202a73dde8
25987 F20110330_AABXAR schutte_j_Page_069.QC.jpg
84ff3f5161dc1e24a746869c6eebf6d6
fd1847c8bbc6c41f31e06ed3532838f937f9ea3d
67946 F20110330_AABWVL schutte_j_Page_148.pro
fb2ffa0d0e02fbc6afb1127105dabfa5
c5bdb3550fe631b6a3be152af7d6d97ec21b7eb5
10642 F20110330_AABWUY schutte_j_Page_135.pro
dad6969aa3165062108b86bacbfbae24
1e68ae0188ae1c6d9906d3bcd8503eca19dc7720
27568 F20110330_AABXBH schutte_j_Page_077.QC.jpg
bb1e694225b2deff85c45226d975442e
0f4cbecb75184d1e770ba6c573cbf896d5bbfbeb
18082 F20110330_AABWWB schutte_j_Page_006.QC.jpg
e8969f925f00a11918fe39eb5122cc19
7e2544d5d019963af775014f0354270e4ac3250e
58610 F20110330_AABXAS schutte_j_Page_070.jpg
ebc390260b279cf58e97870193d825c5
938438b9185cbbbc8e0e6ab23ef10ba1330ee001
58349 F20110330_AABWVM schutte_j_Page_149.pro
ba0ca517bae31c0f931756cbeec508ab
3f7271e2fd59b7c185294c5baf29996b8074973b
13741 F20110330_AABWUZ schutte_j_Page_136.pro
55e7115133f17b7624a0aa054076b5fc
8e36c606ab45df570a4fd4e3723597d17cf3e25a
16140 F20110330_AABXBI schutte_j_Page_078.QC.jpg
4d146f4c2035f1f9603d248cb12f1023
0888cb6ea03028c5dd3d9900dc22af14f25e8064
108490 F20110330_AABWWC schutte_j_Page_007.jpg
719d847941af29c175e095d5d5a153cc
ff8bbf19beaa67ba81705ead00412df6d25c0cb7
19237 F20110330_AABXAT schutte_j_Page_070.QC.jpg
02bba0b7841d50b578e5ac77f3d979aa
1e4960c02611a9e80c6aede434a51ff6a50c4bc3
62020 F20110330_AABWVN schutte_j_Page_150.pro
e24c1ffc739409ed823ab6543cbd0dda
ed2e276e1ff1040c9ba9e4b4dcc80d165900c6ed
86539 F20110330_AABXBJ schutte_j_Page_079.jpg
1d9ec23a3ce54c9f0fe84186c2937b0c
3fdcab4352a2329025648dbeb944f8f06cb706d5
23049 F20110330_AABWWD schutte_j_Page_007.QC.jpg
15c2b52a4621d8019a61cc645d5fe395
a29e239683d7e15d6ba1654aeff49bd4506de9e1
38137 F20110330_AABXAU schutte_j_Page_071.jpg
9c8ee0f7377ddc24df2e4d5814361ef6
637cf474830fa331f8bbf3a2a67e14482472cd81
29493 F20110330_AABWVO schutte_j_Page_151.pro
c5b3c470bfe2332a69bec9a37a983b0c
a3fe6d989bc763aa2369de2a4de77bca3a67f046
27053 F20110330_AABXBK schutte_j_Page_079.QC.jpg
2b72aa8b27dcfdcc8ad13e1d210cb381
77b61230f638e27c91e55f8e310e47194cc3e0e7
75532 F20110330_AABWWE schutte_j_Page_008.jpg
e4ed2524466e0b7c95e7d577cbfb3301
3c80246a41cc5aee1fa831260b548fccbb7aa647
11852 F20110330_AABXAV schutte_j_Page_071.QC.jpg
20c83d13695854e2a6f1f327b77287a1
31b36d9ec6a48aa0918fed8b62be38d9b06ea2d8
13092 F20110330_AABWVP schutte_j_Page_152.pro
0c3a46c48e4499f6004b36df7b57171b
feaa3b34249738bf25ce994a6f29dc2fce1e4abf
85543 F20110330_AABXBL schutte_j_Page_080.jpg
1ab5e18310e027b6256e48017cf94029
1e8f45d50a6281c645e3a4c51fce94b26dbcfcc1
17068 F20110330_AABWWF schutte_j_Page_008.QC.jpg
a12368b1c93cb74d78bdadc2a25ebaa3
a9c155ca3f7b06d994970f1b768a18b85502fe6c
84651 F20110330_AABXAW schutte_j_Page_072.jpg
55086a23a24f4acff8cc1455ae472098
cf3b73b8dfe5ff00dad80f0a2c34196e0e522cd0
21735 F20110330_AABWVQ schutte_j_Page_001.jpg
7a61ab9901bb0f83074c42ed44d91ff1
348fa3e0f5b0fcb98b68aae59e2c6890e4d2c051
23414 F20110330_AABXCA schutte_j_Page_087.QC.jpg
59e840fabf1f8c728d6db54366c6c096
19973463dd5c1224ea76dd726d4dd0a7ec2def95
24179 F20110330_AABXBM schutte_j_Page_080.QC.jpg
4879e0f27fd333f1d5a689afc14909fe
417f96386462ce758739d2c676a22b50e3786663
26193 F20110330_AABXAX schutte_j_Page_072.QC.jpg
1187ed1f368228861eb71bee3e7eadf6
18052b0fd2ad152d09e5ba12f2b4ecacbb3dbb71
6718 F20110330_AABWVR schutte_j_Page_001.QC.jpg
015fb246b5708a15be038636997f1a3d
5035d4f28fa093d5aad778b0538de69680fef98e
67189 F20110330_AABXCB schutte_j_Page_088.jpg
130e2f0f52282de64324e86fa999fc2d
7b887b45114741256265016655bdc476d915a525
77170 F20110330_AABXBN schutte_j_Page_081.jpg
efdcaca9ccc2a59770de8c95427dcbdc
39e890d253cb5821514d8c67c25581840c4f3904
82551 F20110330_AABWWG schutte_j_Page_009.jpg
f6d11b6e2aaa8905acfd50905841b227
bfd3975fa4f8ea5d52ace6ee7835422b9607be03
82639 F20110330_AABXAY schutte_j_Page_073.jpg
e527d86df5731afbaa9c9ec27ead8e2d
b448800f8f8df2a1388f08d1c6b23e7fc0302601
5054 F20110330_AABWVS schutte_j_Page_002.jpg
c7644b175e24b823b6fa6d342fe26a24
9e543554a2607700ac633d2c69a026a36639b2e6
23658 F20110330_AABXBO schutte_j_Page_081.QC.jpg
735ee5756900ae1a10c2dcd0110c86a7
82853346be037fcaa0992c35d756715fc31b3437
22198 F20110330_AABWWH schutte_j_Page_009.QC.jpg
b7c808ba37d26b6494c5cde132dc92d0
7e5f6cd76e907b9d5bb1c600a3cbe518a7688ede
26101 F20110330_AABXAZ schutte_j_Page_073.QC.jpg
9673cfa9e83338d5b926dcd11219209c
743e8f590049144f93f42330b7843148a1421aae
1717 F20110330_AABWVT schutte_j_Page_002.QC.jpg
a49532560ceb58d1497fe870367feb34
9e62776eb055ba1729a36358a8f49733f6d98d80
21607 F20110330_AABXCC schutte_j_Page_088.QC.jpg
abd3cf98e08b7456c6785a386715c6eb
213056e97006f206e5ad7eb0332c2c66cab53593
33893 F20110330_AABXBP schutte_j_Page_082.jpg
3ed4ab590c44f6592154d7a23c1374a9
b627b7da747b46f54758e43e54d207e51b8d3e67
88144 F20110330_AABWWI schutte_j_Page_010.jpg
4265ae061089986e78360ef404e663d4
a50d30651e0534c910d45d36327f07c15257122d
5153 F20110330_AABWVU schutte_j_Page_003.jpg
5fdd0903047d0d03eae687e17735cbaf
cadb88d47452502151893b89626b6feb927464f3
68819 F20110330_AABXCD schutte_j_Page_089.jpg
9234208674580720f23a73a7b6a26eb7
3279fda7c60c80380b17f67013f021a510909fc4
23598 F20110330_AABWWJ schutte_j_Page_010.QC.jpg
b464695ed5b60b952d6a44a62b2b3c35
fbe9cc970b8b2c044e15c13be27ddebcb0ea9723
1339 F20110330_AABWVV schutte_j_Page_003.QC.jpg
c99c493029ae8fdebf8625f377141f56
ada1398af54e7583a6833ec7100df8238f7bc152
21584 F20110330_AABXCE schutte_j_Page_089.QC.jpg
0f60a078789f37f7b67704a7a16436bc
1d1e7f7262b1293748d9b08b33f9ee56ac453ae6
11463 F20110330_AABXBQ schutte_j_Page_082.QC.jpg
3d929f4af7574e06c639ac61e085fd66
b087cb979e4abfb7ebb4204e330b9395441fa7bc
113402 F20110330_AABWWK schutte_j_Page_011.jpg
f6c510b590b467d6ee953a1aca06fc19
b30521f9d364bae10abfa87813d409cd9cfe6141
73603 F20110330_AABWVW schutte_j_Page_004.jpg
777df6dd3dfe0d3c55cb1e2d516e484b
74087ce9fc8ac6555385d19ab813788a33283f21
77121 F20110330_AABXCF schutte_j_Page_090.jpg
0ff28a1416277941be0c95c846b154b6
24188922a5c105b63998d00b4680446bbc47a3f3
21324 F20110330_AABWXA schutte_j_Page_019.QC.jpg
69fa754c9a37546aba68cce0269528f7
4d069fa75ed63d40ef99825919cccadd7ebc1909
55037 F20110330_AABXBR schutte_j_Page_083.jpg
263e3702fe3173437bc07f5ea613967b
3a84f853e39015ca2efb8a4b35164f33808d79f0
28966 F20110330_AABWWL schutte_j_Page_011.QC.jpg
f17e6762be9f9f4ab6bb2a374d9d8fb1
f9906ef36a2cbdb92e80ba70925591fa5db5a17f
23039 F20110330_AABWVX schutte_j_Page_004.QC.jpg
80dc9f0369fcab86732621425a01b744
3dd6d253d9e7f65ae0927dc76c7b8ced8d833dda
24124 F20110330_AABXCG schutte_j_Page_090.QC.jpg
8842e4f34476a0eb2d0bc6d759cbd518
6ea2b1b458a860c5fb69af5c8493877adfb239c7
82732 F20110330_AABWXB schutte_j_Page_020.jpg
b988bf32bf085cdbebf760389812e019
4712c94feb263b4c3f5be026583068a5eabd5fe2
18416 F20110330_AABXBS schutte_j_Page_083.QC.jpg
e3fcb46bff50418704332eb030b51ea7
48b6a25fbd665da7494feb297c4cd99b556ed334
67567 F20110330_AABWWM schutte_j_Page_012.jpg
ddcf991b2d612c0f4ae25c2f4a707c0c
f4ab852725bd0aff91d1e36113e2ca869a14b0e6
27923 F20110330_AABWVY schutte_j_Page_005.jpg
6b605082d15f4a96affd86eafa376b48
1c11c0e977b588f99eff7a4271b49b73ef92ffdb
68612 F20110330_AABXCH schutte_j_Page_091.jpg
74451c77324256394e59d002bbb7f5da
65ff70dda7134f0103eb6e8bf8b7ddd288aa66fc
25464 F20110330_AABWXC schutte_j_Page_020.QC.jpg
ea863cf230d324d36432ca610d7bbf28
6b6a6c0f06c9a334bd3c8ff5c8f8fbdbbe724f52
85437 F20110330_AABXBT schutte_j_Page_084.jpg
b115a2ce6cbe94f49635441f9ec5dbed
cfc83bf9fe1c77d72a9a7b362bc4a07d9d5c5c9d
18381 F20110330_AABWWN schutte_j_Page_012.QC.jpg
fe7598a86da22d9eb9f2d559c0e3cc63
a9efacef43ee8f51b007093aee655f0eee617158
9273 F20110330_AABWVZ schutte_j_Page_005.QC.jpg
0beb90e7acb4bc961e66da8210acc623
1fd1ae17d2d5573268ee90ee1464f231a95715ff
21404 F20110330_AABXCI schutte_j_Page_091.QC.jpg
25488d2b87f74c96eaabb74f892425b4
506a5d6fd333886bb37b22b8cec7ebf0829e9070
86617 F20110330_AABWXD schutte_j_Page_021.jpg
36698d6aa7c1e048ee9d5240f79e01e7
e96324674f01a7cd68881d50dce0093da7867527
26258 F20110330_AABXBU schutte_j_Page_084.QC.jpg
bf9ccd24a8cbb3f931256e0e08e0e9b2
bed78771383e273d137e861a1bf7fd7a9b5a7ead
72011 F20110330_AABWWO schutte_j_Page_013.jpg
ac37a71fadc1f0ccc69f977f31500c8a
f04cc8570235569c5c9af5e49e86e1e01cebe1a6
70748 F20110330_AABXCJ schutte_j_Page_092.jpg
50d90acfd488b4b6f32ef665196c05e2
af66adad413d4748a4d0a8e9b7f008b21c03b5a4
76259 F20110330_AABWXE schutte_j_Page_022.jpg
75677e84fb9895d47c559601e8b9a209
b8a0721a2c3e6494c413865803b1e93ce5a05c38
86044 F20110330_AABXBV schutte_j_Page_085.jpg
b6ea8ba38ac777d752f29bab10765db6
9ebf5373c1f9d631c9df6fbf1cd126c5d75abdb9
21185 F20110330_AABWWP schutte_j_Page_013.QC.jpg
cd745a0711d9b4750b4d59cbf5b07891
625bc5a5ed00e03048b0cd557c4b4ef82170810a
22637 F20110330_AABXCK schutte_j_Page_092.QC.jpg
ca48e19186b51981646db8684855a318
16d6b16b5d8c929c18518e5f6440d704c61903dc
23335 F20110330_AABWXF schutte_j_Page_022.QC.jpg
4158a24943977c3c2a8b71c93756db96
9f687ceab3b4b014ce75c27e8f4e28e1cbabce41
26479 F20110330_AABXBW schutte_j_Page_085.QC.jpg
9a5abe1d086b94fdab337925ae2e8d28
6095622e44adedbdefea0f512f4e76345af5eb63
28072 F20110330_AABWWQ schutte_j_Page_014.jpg
a0b9a5792de2d02cf2ab2a621b382e39
d024fc4f1e0d5cfeb6b73693b57438c34a7f2032
80063 F20110330_AABXCL schutte_j_Page_093.jpg
46d4ed80373fcaaad4187d5934e0e2ff
950cdf91c16890b30eed79ffb8074d710651cd4b
78903 F20110330_AABWXG schutte_j_Page_023.jpg
f3df581e7277ce49a9741c299b3eae65
0ca7628a7a47639f9b33332e7505368e53aeed0e
82238 F20110330_AABXBX schutte_j_Page_086.jpg
3764fdff97237fa4be0b4bd2f781abd2
4bfd410f48866ad7da8519ba4f4e5084a57fd121
9149 F20110330_AABWWR schutte_j_Page_014.QC.jpg
4f68c9c5d933af3809a390fb435214d2
6101da9662695e390b0a2568a1cf685a830f0a5c
22069 F20110330_AABXDA schutte_j_Page_101.QC.jpg
9c3d2880a68db4076c22603f75795f02
a07758d7208d3d52f62fa3d8ab2c93c1cc6ddd8a
24993 F20110330_AABXCM schutte_j_Page_093.QC.jpg
a685a531d4f88512151f1e40cf21321f
19ae77186171824a4ba02300ceb35a406bdf8d10
25855 F20110330_AABXBY schutte_j_Page_086.QC.jpg
faf4f200ba843001d712090eef5cffb0
f49cf4080ca1cf73c6c07cfebefaa3784cfbbe0f
63228 F20110330_AABWWS schutte_j_Page_015.jpg
26a0fc4d603bc4d9897207b87c564bf2
390a2ed0fac524ac2225c64653346425ad81d31f
64793 F20110330_AABXDB schutte_j_Page_102.jpg
6772f85133fd58f33b6d644b8137372a
4023e8e649a036a672b6f060698f331f84a5c265
73613 F20110330_AABXCN schutte_j_Page_094.jpg
c3ba6a8069fe66a3337c0b394ac2523d
b0e59fd62b0fbbc7c1db8560af03a032f8296f89
24332 F20110330_AABWXH schutte_j_Page_023.QC.jpg
cf454661b0e5f999689fa35d66612a70
8dce5c6384264a79ee77bf0529e6f881b90a6abf
73979 F20110330_AABXBZ schutte_j_Page_087.jpg
eaa5d7e6867e3197e18159b17634e133
92e75d7895762844e9cd08451c4db4fca1342637
19100 F20110330_AABWWT schutte_j_Page_015.QC.jpg
3b9a0f72cee0ed5318835da56d1de2fc
0a1da9f462637c20b2a82feb464777ad59fd4bda
20922 F20110330_AABXDC schutte_j_Page_102.QC.jpg
32dd6e80f93111c050338c654bd0a5c7
d7efe69bbb3f13eef921e80d0046d3a5246ced8a
23627 F20110330_AABXCO schutte_j_Page_094.QC.jpg
37c92c5c027f34431e399cda2adf0604
6b867f329e8550abed73191ca4dd47e6925749cb
51179 F20110330_AABWXI schutte_j_Page_024.jpg
3f18366145156de68651829a60091815
f31bff931fa70c795718b5cd3a7dc32de456ed02
82708 F20110330_AABWWU schutte_j_Page_016.jpg
3d074c709372d36f3552ac7c24aed22b
e7ca933998a423eeb90b3c8ee2111e1868216a7a
85695 F20110330_AABXDD schutte_j_Page_103.jpg
e03c8bd0fe88a35af1c7f36ef4739d3e
a043765e9c5d546e386fb76869ff4e0fc29a0374
73469 F20110330_AABXCP schutte_j_Page_095.jpg
d6022b2b20aa32368193304eb493da08
4473932958dfc5a016ecf9fc7b10f968b1dcd95c
16611 F20110330_AABWXJ schutte_j_Page_024.QC.jpg
a9a3186f31b6c0c00d2a8a3d94b555c6
d2004c5f85f767644aa85aab3cd263e91d899fb1
25818 F20110330_AABWWV schutte_j_Page_016.QC.jpg
1ed3cb31e2cd9e894bba0c814a43f675
1b7c41cba449015bf503ce114d14a30ca8554a0d
26413 F20110330_AABXDE schutte_j_Page_103.QC.jpg
867224dd938305e4ee690773bcbe5e58
bfcf79025282c5c379e42a1386c6f009e4f70b42
76934 F20110330_AABXCQ schutte_j_Page_096.jpg
4c81965d2e17f91bc5390afe17beae85
9a51b862af8d999fb9d8e4141dfcedd7aad45e3c
79626 F20110330_AABWXK schutte_j_Page_025.jpg
ec439e5b932e822603833ec2e5863fb2
ba789de026c18b32422f375541d4af233315dfad
82757 F20110330_AABWWW schutte_j_Page_017.jpg
c9fb6bab547a8e29a4dd0a34aa2d5f25
d2fe7eeb06e9335a3f3e8688e6abe59676ce6be7
74232 F20110330_AABXDF schutte_j_Page_104.jpg
92549034037bb8b19d2d1ed972df1dd7
319b6670a2f14ed0d3983bfbbf21dac7c2596c4d
24695 F20110330_AABWXL schutte_j_Page_025.QC.jpg
69ae0f4f6c21d428aeff1303c22366c1
79cf8b6db3ce30850865924df9b9d605ffb18537
25757 F20110330_AABWWX schutte_j_Page_017.QC.jpg
59e3262bca304e094b5a7a357fa89226
749fc95f8989771db47a94f4769a094ac9f219a7
42672 F20110330_AABXDG schutte_j_Page_105.jpg
059ae9743195709e0f363f4d157c05c9
022d19190b349444f2a7ab69d915769394a15be6
25696 F20110330_AABWYA schutte_j_Page_033.QC.jpg
71372caed1bc199aa8342a01072ac0fe
37d8497beda98cd23b0fa52698ca3b8443e1300c
24323 F20110330_AABXCR schutte_j_Page_096.QC.jpg
e5abf80415ac7da722232506a07d8379
9f1a84762af43140e162bcec557832937a737718
75755 F20110330_AABWXM schutte_j_Page_026.jpg
b0bbe30db75d163b82f8e1e797f66b77
16fab45b3a359d86ae34bf6685f37d85a8c732d1
50086 F20110330_AABWWY schutte_j_Page_018.jpg
7e9aa6a458bbc7092ff10a1edd97644f
fa8217ab821d115681f08c7762bc2504847192f2
12973 F20110330_AABXDH schutte_j_Page_105.QC.jpg
86663ab810d389f5a1006ce1982ea7c9
fb5140f95300ce83b932764eeb083523d78cbe1e
83978 F20110330_AABWYB schutte_j_Page_034.jpg
0ad6749d38a410414e0d5783477552b8
981ac04abbde654c0f4fbba6a8dbfc3b38d67e2c
18832 F20110330_AABXCS schutte_j_Page_097.QC.jpg
c7b13b224776ff3d11d97773daf6b58b
63ba1a649200ff27a19947113634e3bf791595ac
22106 F20110330_AABWXN schutte_j_Page_026.QC.jpg
1d35b74cf34bf56ac28f6b1af559faca
348c69c306abcb5d0e46b92f875afde5ee2adbc3
69825 F20110330_AABWWZ schutte_j_Page_019.jpg
3e26e772539c5ead9dd78a8fd0d6aa6d
d5ff00886428aecfed9a176718326e333c0ec5c0
68089 F20110330_AABXDI schutte_j_Page_106.jpg
2fe77288230d57fa1a20576b60621120
b039c4ed35510b6917d9ed86d1fbeebf0317aa9f
26136 F20110330_AABWYC schutte_j_Page_034.QC.jpg
10a2d3dcc3beb0c5988c122684201e90
d4e587d52b6c6feb10fb48547a72ba875b09e468
64960 F20110330_AABXCT schutte_j_Page_098.jpg
c8282543824c3b52cacb9c698b4a4e04
e99aacaf6e68c9275fffb397ebf86d1ee911b5b4
71247 F20110330_AABWXO schutte_j_Page_027.jpg
03aca7c36681412836ff3ba8ae21454d
595ef260c49cbef130c76e21d0838108607f0b61
21749 F20110330_AABXDJ schutte_j_Page_106.QC.jpg
fee95eb2255cac19d59f481748c43acf
4d57687e229f0989a6ca635ef7c9496fadb9aa7e
70740 F20110330_AABWYD schutte_j_Page_035.jpg
90f40093d7ed41311c0ae8cb23e84f30
8178a4cc64243c600c96b9a1776eb66880999982
21621 F20110330_AABXCU schutte_j_Page_098.QC.jpg
8d0600393289c2e9d784e3336e4b8010
ced541708a2ac91c915f6ba0935a013dbf224980
22128 F20110330_AABWXP schutte_j_Page_027.QC.jpg
386e519d8093d7f243f623e09a6f0cd6
aa0f38e374d888eb0294fc3341c60a984157bf98
19153 F20110330_AABXDK schutte_j_Page_107.QC.jpg
19f55622c4335405aec4e8e4081d7bcd
a40c2135b7aa872826999f4d2c5e324da390e1d2
22978 F20110330_AABWYE schutte_j_Page_035.QC.jpg
c0c86426f0fb01b71245fac302d23a5a
dcf29e14712686177c5d5b161c96fcf00d1381ad
72218 F20110330_AABXCV schutte_j_Page_099.jpg
454b5d4c5ec74922ae3e42deb39566e6
2ac7df3df8beddcc4daaa5751da024f00edfde32
23728 F20110330_AABWXQ schutte_j_Page_028.QC.jpg
96a6623671f2211968f686d0f8927824
927584725fb0552d86aab43923fa6eacf664993b
72326 F20110330_AABXDL schutte_j_Page_108.jpg
398a7c0fc41b4bed93ebee76ae07bcee
b30a58d2bd061c094488c5a4317620f340119b99
74436 F20110330_AABWYF schutte_j_Page_036.jpg
06bc96aa131cb63322822a741587242f
75860a493eb2d33b751b95f33dcfc0642946db6f
22093 F20110330_AABXCW schutte_j_Page_099.QC.jpg
6546c9b06fccb8564fc6735b94512a73
1253bbbe2242e9b4848d369df8cc98a5b3c5ed7a
82911 F20110330_AABWXR schutte_j_Page_029.jpg
e54547bec83bcd87a520ee78b0787541
6290a960a5ec695df804fea527eae13799c64d52
68234 F20110330_AABXEA schutte_j_Page_116.jpg
8b64df677f4556e7a2b723359764fce3
5a6ef1588bb53fbcbbb68a470e23514ab8cfb9dc
21852 F20110330_AABXDM schutte_j_Page_108.QC.jpg
472f0e1338b374f2b966fee7bdf017df
38045302036d9aedec54dea956f0e6befd4ce408
26534 F20110330_AABWYG schutte_j_Page_036.QC.jpg
c20050bd45c9e92eca5f1bce6d83e666
ac32ba3360817b6f420703be901b934bdda64fbb
71482 F20110330_AABXCX schutte_j_Page_100.jpg
c1a54ab245b83b557aee594f0c13d892
5dfa93e29019a97416d11fe6b254d6a5ce03388c
25666 F20110330_AABWXS schutte_j_Page_029.QC.jpg
a7a2b58b23037f11f61ba53189ba2fc3
1bf9e180c555c382ae005428c0339a704dd90b54
21076 F20110330_AABXEB schutte_j_Page_116.QC.jpg
7e6dea75b8dc92f2b95c96ff758b6824
2892eb983a0b0428e4d26a2b4e2783088b2b8a04
78286 F20110330_AABXDN schutte_j_Page_109.jpg
3a8724e9d34d85c4d25ab5efd1ea39e5
5ffd2c504e671f87386947672c922c0622b95f97
49220 F20110330_AABWYH schutte_j_Page_037.jpg
424fba1a8c99257919ef2a6c32f3a1c9
043b53b0bf9c70d9aaa456478f6138b54857bff2
21501 F20110330_AABXCY schutte_j_Page_100.QC.jpg
f595beb871349372c87a779d99902cb0
4b5558ab67331903c7faa1ba1cdc58fb9d93cbb5
52336 F20110330_AABWXT schutte_j_Page_030.jpg
37bf173f46099331c58a6d81c50e84bc
215badc52b8736a8ccb8f6c08c7ad6cbf8a4e1fd
68516 F20110330_AABXEC schutte_j_Page_117.jpg
149b7f9b322e90dfd244f3269ef61677
ca2538831fa2236aa9a4e9e3324e03f3ff05f962
24125 F20110330_AABXDO schutte_j_Page_109.QC.jpg
3f5c784f39008cf85eb94406fd17a988
3f19183bd3f6f7e8e7bcd2bc24a58213e17d4178
69148 F20110330_AABXCZ schutte_j_Page_101.jpg
f90672edcbe7a4b8dbdf70db4b60334d
189da374a13f9300a29d2cb9f9ee279480b5fe22
16299 F20110330_AABWXU schutte_j_Page_030.QC.jpg
2fb7861ff159f5dc03d190eb4d4221b8
556bfa4288d143b078a8fe7bdf6ced7e527806b9
20438 F20110330_AABXED schutte_j_Page_117.QC.jpg
fff1f8dcf5fb7c47cf03eeb715c09c14
0dfc2138e39442bb92ca2f2b9c194412a7fedb73
66676 F20110330_AABXDP schutte_j_Page_110.jpg
6148c61eea3d644d3521795d67f3876c
6d7f2abcf457d353eb9703101087ed32ebb5fa3e
15442 F20110330_AABWYI schutte_j_Page_037.QC.jpg
78887e51115b9cebf7cb7efabd4c288d
df77c1535aba35a2f221e9b87f64688a6930cfd6
71144 F20110330_AABWXV schutte_j_Page_031.jpg
4720bc13fd41feee88110c5086f5617d
fa4848846ab841d4ff089a81249b46f251b0961e
78722 F20110330_AABXEE schutte_j_Page_118.jpg
f6b891813d38d0e24531e1593ddd738a
b178782c423eabb05bbc4fea9778ae19364ee780
20468 F20110330_AABXDQ schutte_j_Page_110.QC.jpg
01482c2d21a7d3161b882d7213d97601
a05cbc6df4025071109eb81d142e69159b5c5e8a
63691 F20110330_AABWYJ schutte_j_Page_038.jpg
33b45f9da43e3972c6261b090c3bd6ee
b1a9ea2bf36f4dca5ccceec2293fc341945989d4
22821 F20110330_AABWXW schutte_j_Page_031.QC.jpg
b874e47a02a85d6413bc7b6a6950df5e
8ab2e2e99e0b98e76a0141309ee59f46626a3f99
23466 F20110330_AABXEF schutte_j_Page_118.QC.jpg
6d509cd36c99d89a8c9d2f05f5564bc5
dd88e3d1c5ad09b61e5210a27627a614f79b5907
52247 F20110330_AABXDR schutte_j_Page_111.jpg
9b9d3d41999f1d0903ba22d60fcc9eaa
03ce7a32f6894ead6cb8cc4babbea9e4ba098bd4
19862 F20110330_AABWYK schutte_j_Page_038.QC.jpg
097698820fc2e9d7facc4a0af0f5d14f
52f136943fb4432f74feab784f491a5f9c2964dc
76620 F20110330_AABWXX schutte_j_Page_032.jpg
ecc38dbd5d45f62e95b3e88a097c675e
c60f13e0347e30e9b5be8cb7c7237bb21520074d
60795 F20110330_AABXEG schutte_j_Page_119.jpg
c9d3084ea04ca7eeeabd8a6d5e21fbd1
cc9dd13720de36d2cd74552fc90b71e1bec35afd
20796 F20110330_AABWZA schutte_j_Page_046.QC.jpg
9b60d7ec640d74887be14babd441152b
34eca519b549d87f19bc6188de4cdf0b2822cbd1
55325 F20110330_AABWYL schutte_j_Page_039.jpg
78e019415fb45adc383877f898cfc8cd
74aa1a5d13ef950e6f3612be016f636ad920b5ad
23353 F20110330_AABWXY schutte_j_Page_032.QC.jpg
8abfe60ddaf10e9cb938f76d573c2661
86128dd035e3edbbbc9f82682eb1912a6fc960f1
18207 F20110330_AABXEH schutte_j_Page_119.QC.jpg
3d73ea2a587cdb75db0b5a7e70891eb0
f0ad822e7d5596f57bba47821d28861aabf5144a
12303 F20110330_AABWZB schutte_j_Page_047.QC.jpg
bd0d88f575a5c30906222255ba0f78b5
4bd03771253a53e6d3d1a47e203d08c6f1c8de93
16258 F20110330_AABXDS schutte_j_Page_111.QC.jpg
4a584abed0d1e4395bbc6b6bf2a9f109
0de53493f72e15383b7dfdfc738e60977b39ed08
16671 F20110330_AABWYM schutte_j_Page_039.QC.jpg
b8309a41db92c192ac4eb96a6cfce1c5
565369ab3a0f9feb402c63fdcd69a3f64046bd91
81594 F20110330_AABWXZ schutte_j_Page_033.jpg
026dfc1c8148a751f063518c0ded3734
824e7a5808b30767320fb276aeab151a836fd0c8
58613 F20110330_AABXEI schutte_j_Page_120.jpg
0ab57731fc816ed1b67910e43309dcf9
8ab46d1276460678efc98bae8410bc149c5817a1
75375 F20110330_AABWZC schutte_j_Page_048.jpg
fba78f3d8a82336461656561526fe9b0
86a7e7a758d58e4563364e6fd06bfb680b49e42d
68828 F20110330_AABXDT schutte_j_Page_112.jpg
bb0c7877e0e92d9959316d50d3326082
c5297efa11c2b690d3d82e410bbf629de7306188
53987 F20110330_AABWYN schutte_j_Page_040.jpg
1c2ac283c02e93a0d95298052e1c5a19
5cbcf8b7dc9f83389fd9fb0823c54bf1aee11e4e
18175 F20110330_AABXEJ schutte_j_Page_120.QC.jpg
e0b54310f923a29d34fb5c4ca93ade78
3132419f3a223dd73395471fc5d80ea15b63ac07
23814 F20110330_AABWZD schutte_j_Page_048.QC.jpg
18dd6e67f668fcf26330c0871e4a484d
3911734359ab8dd25a7913d4a5f285d59c6f6499
21047 F20110330_AABXDU schutte_j_Page_112.QC.jpg
04b7b4faa7922d773301f82ec6960cc1
0e27221cfe74ad375a8b9d143e3562053c3d12b7
17333 F20110330_AABWYO schutte_j_Page_040.QC.jpg
b1ee216fa422ac8c71ba6f685e61e37b
1c89151a1dd8eabf29c3a0515dd8e5f59e9a5563
79728 F20110330_AABXEK schutte_j_Page_121.jpg
e423dc29a01dcaebea0a99d3a8120560
e0f399cdee6c312f329de78609220d27113ed427
84981 F20110330_AABWZE schutte_j_Page_049.jpg
1cc13e5e487e3b1f9c2698301cda69d3
334df0dc5cc5f5ddd3aa6cf62dbdaefbb4d615e3
25371 F20110330_AABXDV schutte_j_Page_113.jpg
461e2e6a53fbfcf8af41ce755e8e93d4
a9d48b7392e80f30cea4db5d5618c69d28ec66e7
75271 F20110330_AABWYP schutte_j_Page_041.jpg
1539e5f74794d414faf7aa2be1cdb0a5
6fcdac704d448e6719f7153153f3ae72f4a3738e
24208 F20110330_AABXEL schutte_j_Page_121.QC.jpg
6148d58fc4c4c7ada0cc97c269ee4a95
097c507258313973f90887d1d9fdf999d0e82e00
25930 F20110330_AABWZF schutte_j_Page_049.QC.jpg
57833b055ad182b610ac0f6b79976083
65b468dda9a92f49aa83a3f6a4f3848ba08e1756
9467 F20110330_AABXDW schutte_j_Page_113.QC.jpg
0cef5a12c31e8d5208435b9dff7d3aad
1e0dfbe87a501e0802c596129a58701ae13f97da
22878 F20110330_AABWYQ schutte_j_Page_041.QC.jpg
d56438295ec1fcff241baa39bf1dd723
61f203a5f1ab16114c895dbf0231bdf38cb0e7d7
56777 F20110330_AABXEM schutte_j_Page_122.jpg
047d622c812a812887a05c983d7ebc48
55c3a205addab1e57be14096bf486b99fb0efef0
79781 F20110330_AABWZG schutte_j_Page_050.jpg
7f76100f898381b816dcbb59e2ecc550
1fa822e953e9ae4da4e8dd22d9d23993294c91e2
49812 F20110330_AABXDX schutte_j_Page_114.jpg
0c6c83ad6c427706d66f7888ceb134b0
34391c683661d422843b39c53fb7211933b788c9
79729 F20110330_AABWYR schutte_j_Page_042.jpg
06d26fd0afba927c160fae48f914bb35
23dbc922e8030513bc6f8b7d64c4d9adfcec6304
78226 F20110330_AABXFA schutte_j_Page_129.jpg
72e0a65d0e783fd3fd0765ec37b562ca
4fecf42dc0a4a5932c15312d5446d5bc388699a1
17469 F20110330_AABXEN schutte_j_Page_122.QC.jpg
9b62d2677b48a4adb40bf1eab7b868d7
0cb43e85e2b1841610129c376c5929b64481e3d2
25414 F20110330_AABWZH schutte_j_Page_050.QC.jpg
66b090f01dc715d489042cad817d8f00
8a74d4982fc4ae0bc9af5f9b09b73e868eebda85
15400 F20110330_AABXDY schutte_j_Page_114.QC.jpg
9daeca23da149b255328a4e244b95d63
7536c31de3d5008357083edb3bba839d564c0424
24745 F20110330_AABWYS schutte_j_Page_042.QC.jpg
9d8e1d67d264ffd1e99b56a933b76873
7fac8ba25362fa81dbef87a63214535c31e21c23
24945 F20110330_AABXFB schutte_j_Page_129.QC.jpg
97c0bddb2885cdd4f65764dec2b0c932
f0ade54dd4d3eb4c99674943c490533591c191f2
85331 F20110330_AABXEO schutte_j_Page_123.jpg
7a7b7fce3ece335f212981d07758b3a0
e386689e1b1060cf8cc7d28fe6a7d61d60f91edf
84467 F20110330_AABWZI schutte_j_Page_051.jpg
7ae22a4693848d5f875e9a6998533a6c
ee9b253c56dcce0dcb67234b9210f017d6e9d212
12276 F20110330_AABXDZ schutte_j_Page_115.QC.jpg
0558cc9b61febae86c270d0cb36fe741
16ef2bc7bc3cda13c36fbd2e425907bd0b74276b
85888 F20110330_AABWYT schutte_j_Page_043.jpg
0bca2a84e6f9b614b3c5319f35ec3f2e
c1c00ce89c5c7432639a33727503707ffbd13db3
71300 F20110330_AABXFC schutte_j_Page_130.jpg
95c74894077895f7b405d8e56a69768c
762bfa4d17ef6ac6cf6db53017e68b965bb97b88
26584 F20110330_AABXEP schutte_j_Page_123.QC.jpg
20d6bad2110eaa6033248322f6bb536a
0f156155e3f26c6369169a27db4133c43c2f32cd
26612 F20110330_AABWYU schutte_j_Page_043.QC.jpg
5a4d29750dc2d4efb74b7c3f968eeead
ccddca2aee7bac7da9747e8d24a25834daa19cfc
21245 F20110330_AABXFD schutte_j_Page_130.QC.jpg
ae73a58f869df3b8f724dfcaf4a4a03a
8eaf445daf001328aa33afbedde298ccc11a51af
58560 F20110330_AABXEQ schutte_j_Page_124.jpg
379be27e775baff9b70b06355dbec7e0
6c56c776c73e176fa390c39ad4d99f78255822a7
25014 F20110330_AABWZJ schutte_j_Page_051.QC.jpg
8af5f2e0bb69c3f2e62140855dd1b208
d6974cdab2e67a90f2570add8c5987d77432818d
86457 F20110330_AABWYV schutte_j_Page_044.jpg
1c6cacd652c2a298a6112ee6712d18b5
8298327f9a8a7888636312e5aeef4aa3df806ccf
83655 F20110330_AABXFE schutte_j_Page_131.jpg
679c1713c62b362f286e89d2dd249912
c9039b7e7636047620efb283c3e2f6b663f96edc
17535 F20110330_AABXER schutte_j_Page_124.QC.jpg
d7f1b625d341669f7877bb0da60e71da
4b2ead7203556c125c94a5c256905f8b38e13210
66026 F20110330_AABWZK schutte_j_Page_052.jpg
8d7f9d0afa6b7e43ea45e1a3badf5bd3
60516d8e1ec23dd0fa85d06fdbe3db5c449f7be3
27030 F20110330_AABWYW schutte_j_Page_044.QC.jpg
a26c771f78a9751051a04b75751e6b71
4ef09ff277cc1b4bbd025d57c0f0144fd4b0dfa9
26568 F20110330_AABXFF schutte_j_Page_131.QC.jpg
9a2e882c3079fb4e35a86a30d1c321eb
60edc8ca1d644b4e255a72406a1f6265aba6db0c
39853 F20110330_AABXES schutte_j_Page_125.jpg
c9f55c8fc503d1a512415dd85842322c
a29b22f146588a19031c2cdedd890796ca5bd323
19068 F20110330_AABWZL schutte_j_Page_052.QC.jpg
9c5be3cdc2bdcefb3cfb9423e76d8694
aa5b5688f01356173defdec9514a47c84358aa81
57230 F20110330_AABWYX schutte_j_Page_045.jpg
5aa84c675fda6916f15789b941457b52
e0b8411073279ef3fc4f927b65258871afa3cca2
34908 F20110330_AABXFG schutte_j_Page_132.jpg
1b37600c52a9c673f7658e4f0a786c7a
4d48ff5b92598b93c61143b5cf75fcccb0256600
72186 F20110330_AABWZM schutte_j_Page_053.jpg
4aa9fca5a914cac8e32195916da25d5d
9b39e4368d45bf4466ba31be31d51705115d0269
18965 F20110330_AABWYY schutte_j_Page_045.QC.jpg
7e70d9daf8426e17deacca7592e15dd7
09b509c1235ba5320fa9488b96318d4b563cb020
11461 F20110330_AABXFH schutte_j_Page_132.QC.jpg
5c70330ee551cf838f28f9cbb8b90fca
9cba48f528fdf346e4b08012802cf9d708110ee9
12860 F20110330_AABXET schutte_j_Page_125.QC.jpg
68001efa6ba562082c27cb26c1a1a693
b0a48ede03d3440b593cc3c8e950c4fe014ccaf4
22499 F20110330_AABWZN schutte_j_Page_053.QC.jpg
db548cbe25cde2e22a49f25d8f914e87
134a5e72644eec048bec8b19ac3794f62acdc170
65287 F20110330_AABWYZ schutte_j_Page_046.jpg
855821d652f66fd97bfe616d19d81768
533c8dfd3258480e389b5c398d2702e6307e7ba5
28007 F20110330_AABXFI schutte_j_Page_133.jpg
d6abd8c1c8ec71e9d941caadb1dfc6de
01b3c138c9bd93b4939ee222e1369465a9b786b5
63241 F20110330_AABXEU schutte_j_Page_126.jpg
92c68b08d4c306ce833f458690c6fa56
15a470a89fab4030b5fc291be3b53b288dea25c1
72095 F20110330_AABWZO schutte_j_Page_054.jpg
f9ad26ef804422617c9fc2a2a9445538
166bc27620fd47387284bb154b80fd546b0a4530
8713 F20110330_AABXFJ schutte_j_Page_133.QC.jpg
8a579c9a6700e339f767531d5e6ca1d9
7fc3f6418d259f37f8fda62d2c98f603c02b72aa
18975 F20110330_AABXEV schutte_j_Page_126.QC.jpg
e0ae822f1ed52a4d85e62507cd0a1867
be98584fa81863bdc4b99648f454512d3891e9ce
21051 F20110330_AABWZP schutte_j_Page_054.QC.jpg
c3b1236bac767cd6da9ff3a18577972b
719b5b39674f1da60c534bdc841c5997ec4f2318
33031 F20110330_AABXFK schutte_j_Page_134.jpg
6b38ea8328b5211b2afcdc2740dbf2c2
37dbbe7241d49afff6294cd25535186979f61070
51965 F20110330_AABXEW schutte_j_Page_127.jpg
b990c1e4cf1e1f48023b42b8216c5c45
767e618837232dc28223e1a947a19bb518e7f72d
86222 F20110330_AABWZQ schutte_j_Page_055.jpg
685657864ac6320e741156e0c0a14f60
d57d96a2e24ec717aabf3d83fbcbc48bfa197ebd
11383 F20110330_AABXFL schutte_j_Page_134.QC.jpg
042b2acde8fdccff8361d09f8f15f04d
dd2d223d11326b27c6ed21c250a2e4979e230207
16999 F20110330_AABXEX schutte_j_Page_127.QC.jpg
99823bb1cdc608affdad9e34b9ec1346
f049bc09ecf2ee034ae38ae67b3b51e68c8f0271
26793 F20110330_AABWZR schutte_j_Page_055.QC.jpg
4cc67464be8d696ca420d93b15939b10
d7ffb46c1d3f2c464239de2616980fa858f38046
118685 F20110330_AABXGA schutte_j_Page_142.jpg
36b510a11e6113aae4f3ecd9665ac9db
98f74507384d413e6014988bdab75d77a9efe72e
16889 F20110330_AABXFM schutte_j_Page_135.jpg
b426d69f72687c43aefceb1a435b16fb
039cb00b55d0dba65fccaa80b75cb1ab10bedd84
42074 F20110330_AABXEY schutte_j_Page_128.jpg
91e1d06da7900b89ac497514f6170055
7fa7c4a5c4e8b8500520e242998ef9962771f7a5
84575 F20110330_AABWZS schutte_j_Page_056.jpg
fa0d3ccebd8c4bc999e0e41486e5c70a
f20ff1e924c6dbf711805e036bcf9a3b01910cc2
31895 F20110330_AABXGB schutte_j_Page_142.QC.jpg
f13e0335f40e3aad17f58393a7f2ed47
99adb52e7960eba5060b2b3556b9604ccf6d69d7
5520 F20110330_AABXFN schutte_j_Page_135.QC.jpg
c1648e319ec6b247354ef4554922e815
e35dad2cbcbc9647fc292b8b763613784f65f794
13501 F20110330_AABXEZ schutte_j_Page_128.QC.jpg
433145cec791ea4974634152ab10e07d
c0cf26439752173dbcd544bbba9c1d5269fb5dfe
26152 F20110330_AABWZT schutte_j_Page_056.QC.jpg
e6c5393ddfb45d867858333b9991dafc
9e24030c79c2bad382598710cd88edf122ac6bb8
112759 F20110330_AABXGC schutte_j_Page_143.jpg
731317ac34d7e478d32149130947ebac
2fc1584b7a7b3a73cb5a5582320badce37886d04
33967 F20110330_AABXFO schutte_j_Page_136.jpg
75a5725dddef0cfd1b4c6e6b619dddb7
e60f0ca49d88ebe36622ec67fb920a9abdf0cd85
75207 F20110330_AABWZU schutte_j_Page_057.jpg
46d2076a348e9aea44def9db5099051a
b406ab12d28725f07ccba705e713451824741ed7
30228 F20110330_AABXGD schutte_j_Page_143.QC.jpg
3830ca0d4b8837eb53c872202255e6f5
46b6a5157edaf12f3fc41fdfcaabc88053dbdd8e
10285 F20110330_AABXFP schutte_j_Page_136.QC.jpg
68323fe008325729ae6aab0bdbe1cc36
525d1f1a73557c1fe97d8aae7bf94f9e6df0c8d5
22459 F20110330_AABWZV schutte_j_Page_057.QC.jpg
458f76d96c37b8633d185285bc9de403
fd5147ce092fa37520c5b9c10a8a9d4cb816723a
102987 F20110330_AABXGE schutte_j_Page_144.jpg
330cbaf985c1b8366e7ebde2d72ca176
91c70158c4f5946f82e8c658d5d9c0bbb559faa4
50462 F20110330_AABXFQ schutte_j_Page_137.jpg
04b6e74d051d1ada637b482b4bb4cf01
137e7aa5c0d95064d2a89835f4b9f65abea97f70
25935 F20110330_AABWZW schutte_j_Page_058.QC.jpg
f6c5dfa64621510e346ec66fd1990971
d4467a71bd2690f243531c6e89a17d651e484b60
28350 F20110330_AABXGF schutte_j_Page_144.QC.jpg
a66f764f31de3b27923def13afe6a656
a7fae888250437407b0bebd32e879e870588b16a
15028 F20110330_AABXFR schutte_j_Page_137.QC.jpg
94149c6829bc344153052dbf2e414d83
603eac234b95198f61a3bb7c012962de5844717e
82824 F20110330_AABWZX schutte_j_Page_059.jpg
1aa9ccd1413549451b315ad2d40bd86e
9f6c54a29601fa2cecca12f04d503171ac67e3bc
101140 F20110330_AABXGG schutte_j_Page_145.jpg
b4c6b5b39bb7152d2d174f32c374cd8d
295c03812c78f436685086f083f729e313511c2f
48691 F20110330_AABXFS schutte_j_Page_138.jpg
59e5898e807c374183946934eed88444
3865866b2e2e8a1e58d5ae0ceb8be96bbc3fd8b3
25515 F20110330_AABWZY schutte_j_Page_059.QC.jpg
e7b2029801d74da2a686b0abd38a4c98
b337897faf5cb7e926505fcf962bb1453b4dd4f2
29038 F20110330_AABXGH schutte_j_Page_145.QC.jpg
d459bdffc3b7bb4a557dad008d138f36
3ae3a3abc2a93aabdb3b5f24483c8287a0d9da90
15080 F20110330_AABXFT schutte_j_Page_138.QC.jpg
de8d4346b6fa3ea185ce64480ed270a8
deebe931d63e015b2e7eaf8131206a693aa6cf81
49468 F20110330_AABWZZ schutte_j_Page_060.jpg
2083ed4488dbefeac8ee89e8738dea5b
80560a1d4620fca56e3bb44d642ed35c001dfb4f
22963 F20110330_AABWDG schutte_j_Page_104.QC.jpg
570b89479843281fc9e686d7e4943b5b
770b970df8f4cc01baccc5bc8b20b13c29011cd6
105107 F20110330_AABXGI schutte_j_Page_146.jpg
3435efc372fde87d46fc0a588db9ed7a
b70cdecd6e79839bf854aea67c7e60ce6b2d253a
77189 F20110330_AABWDH schutte_j_Page_028.jpg
27e4815bc685f30d999b50def9751906
a9507968fd1ff4d776d54b9cf80188c65dd59c16
29463 F20110330_AABXGJ schutte_j_Page_146.QC.jpg
12ad324f80d11bc331160c4aaf0326ee
9d4ae2a987becdcc5546f08c98a919ff0e78c76f
34753 F20110330_AABXFU schutte_j_Page_139.jpg
5c22a2fb7756557c8b9ca04c47b216af
c7df63351c204cf5052dc96e71e5afa62def4462
51441 F20110330_AABWDI schutte_j_Page_078.jpg
dbc8bd9b01b083e90c4b0d804a00c880
c7c809b4d836c2ff90bb5e0473f0a23da8f6beff
99203 F20110330_AABXGK schutte_j_Page_147.jpg
120d76e767dd4ae7666f8ff8d00c08d6
997ac2add7c4e8b975cce0853aca48a5193c6499
9985 F20110330_AABXFV schutte_j_Page_139.QC.jpg
ca71580cf2a78b96998bdaa91508313f
179c35c938fbc0948c31f2961978d39791b1ec39
82266 F20110330_AABWDJ schutte_j_Page_058.jpg
86b0f4d2bc557125f4f08c1d977d872e
58be331be6dc9e1bb0706ad903aab8fd42cf8c64
27521 F20110330_AABXGL schutte_j_Page_147.QC.jpg
196dcc2a2bf5a23e79ca68975b999869
102bac37e8326f5d7b89bbe9dcd257d64c476ad8
89900 F20110330_AABXFW schutte_j_Page_140.jpg
9d4e10cfcc57e4885682a97cedd22a3d
7fbb6a076881e2b6728fd4bdf26937f1d72aaf2a
87748 F20110330_AABWDK schutte_j_Page_061.jp2
0a4f1f6e237507f5d6c5bc4fd27f0f70
b13a3da22090ae50df28ca028c5f3d6c822c77ea
114397 F20110330_AABXGM schutte_j_Page_148.jpg
0986b73438a3b64da1de0ca0835b0d01
21d7fdd57257b3ca33ab380911250ed7442af4d7
25041 F20110330_AABXFX schutte_j_Page_140.QC.jpg
6fa1a4699a7c2d6826e5e6423504bd5d
e8e570a9a5b93878290e1a6ed60b7da6a3089666
35698 F20110330_AABXHA schutte_j_Page_005.jp2
e9e422c40fee7a9b6b8964ee51bb0337
a05bbaf5fc0030f3caeea5949ec4fd7be76b0b0b
22749 F20110330_AABWDL schutte_j_Page_095.QC.jpg
0bf604bd1993e15f397fc0c8f0a2ef5c
a23338b56f55d94e99189f57e2b41d47d29e9c8c
30312 F20110330_AABXGN schutte_j_Page_148.QC.jpg
6f3dfd61991e392a17e7443d8a92ad7f
feb62664d8b90430c3cb8ea30510aff2753e16f8
116522 F20110330_AABXFY schutte_j_Page_141.jpg
4f8240adc4ba1b4ee4c9add0a229168c
5d62286e2aafe289d368a1cdc11cef8067158113
1051965 F20110330_AABXHB schutte_j_Page_006.jp2
39697b2164f851b974e3ef77adbd2431
e912ff5ab75eff1ea62f9065abedbc5d2606c8a9
5308 F20110330_AABWDM schutte_j_Page_031thm.jpg
4f2087b9167b7f072dff134b838dfe92
4aeb2a798f0e4a11d995e8a08ab22dea3d203463
104224 F20110330_AABXGO schutte_j_Page_149.jpg
456bfba0ced2cc64aa6c0e44cb4e19a9
d90f3ecc54d2f508307651fbd1492fc292d1b8be
30833 F20110330_AABXFZ schutte_j_Page_141.QC.jpg
4411827cc89033c7ecdcb3dc34884a0a
6456c70e3e164d66be583047a6d29019aecc22da
38068 F20110330_AABWEA schutte_j_Page_126.pro
f5428d7cc07df980b5f05c8d1df0dc78
19d2005f38fe5f4b34bc4bae857acef02641e54f
1051985 F20110330_AABXHC schutte_j_Page_007.jp2
23ab78b145a54c72d1d974b107df88aa
4e5c789fd40c50edd18b890569a77ebe72bf0694
63273 F20110330_AABWDN schutte_j_Page_107.jpg
b0523358a04eec8b00b982274c68fbb3
21833f2fe8ca9ad58c2c35fc28af3eec816a98b9
28404 F20110330_AABXGP schutte_j_Page_149.QC.jpg
bb74935366ee1e490b3a50d930612253
dbee6aa6c323d51e59870c42a59f2c68fb396aee
50523 F20110330_AABWEB schutte_j_Page_043.pro
b3bb2ce4b9608b897aff27ad611ffc0d
61dc4aa424fac9823211aac308642248017fd064
1051986 F20110330_AABXHD schutte_j_Page_008.jp2
b44e09d44911b1f9041c8c0d16b0f062
a1db2d52343996eaa506d71cf33ee6ee3af4c59b
25393 F20110330_AABWDO schutte_j_Page_021.QC.jpg
2f2f334e1b0530c7ea29f17d261d9087
9892b758e0eddbf7ea32622e7f7a0f4ca42ae343
104311 F20110330_AABXGQ schutte_j_Page_150.jpg
548ac9eb2a2d4da9fdf8a06c130dccf2
378db2a149c69883d5e27ab07924811eb9be122c
4744 F20110330_AABWEC schutte_j_Page_075thm.jpg
a32783da3479581d9a0fbfae5a5d2613
51f0880d7dd905016e835b6825e1c51ba5514436
1051984 F20110330_AABXHE schutte_j_Page_009.jp2
9c83cf59162cfc440f1a00ca0f9d7427
6ed6caf5cfdf33c8fbbe057880007605e4ec39ca
37348 F20110330_AABWDP schutte_j_Page_047.jpg
d39a9afd301d844a65c66aca2c6bf24d
6b1d96918f4acef29a874f73831579cbecedab56
28509 F20110330_AABXGR schutte_j_Page_150.QC.jpg
c1b26c64990197624d832a10c1c05afe
00db1bb14310736e6cca1e1da89f06817aac707d
F20110330_AABWED schutte_j_Page_031.tif
7504b122c8692061c99f6f68faf3a5e0
8b63b44a1fb560390afb36c3855e8852711d6727
1051978 F20110330_AABXHF schutte_j_Page_011.jp2
cef5bdcdd26a07259ed1ed496ca87c9e
7a89d673a3f949bc8f8f9af0931c21e54f7ecf81
47042 F20110330_AABWDQ schutte_j_Page_129.pro
63ddf049a25e41bd4143b1360854a984
94ab709728c220f30b9eabf2b1464bc4eff26c96
52930 F20110330_AABXGS schutte_j_Page_151.jpg
1303b6163bd0bae021cc45eec25d21cf
d93ac8dfc0e65912a9f894aa06f0c1bdee40323c
15647 F20110330_AABWEE schutte_j_Page_018.QC.jpg
df95e5f2b823938af4ba9bb85f1ec479
9f1c8f04f71eb0f03840e03f48f47bd867a98d1a
1051968 F20110330_AABXHG schutte_j_Page_012.jp2
9b7945bad028d6b7bfe241f184f176d1
c352d88b1fe31030a5fa8ec6f1f18293f45e26d3
F20110330_AABWDR schutte_j_Page_010.jp2
3782f68f22dfbd1b656e5f2ee9dd133c
ac2ce34a8b8131473c1d059b86e9f826388c520f
14625 F20110330_AABXGT schutte_j_Page_151.QC.jpg
1e930a420b7749c407450e6642704896
7eb0b7b4ecca785b1278e70324c22ae417e4d70a
38697 F20110330_AABWEF schutte_j_Page_115.jpg
aa2838c039135daf4c6287a6933ef75a
ee9332d2fdfc4430c0af8507acb35eb35f87f25c
90497 F20110330_AABXHH schutte_j_Page_013.jp2
4746ad1e9c481b601d5b50c0dd52c8d8
a743194aecc50dcfc21359e03e1db7026a9f1f5c
61407 F20110330_AABWDS schutte_j_Page_097.jpg
b7e43790d525dc3413a196510808517d
5fab93f8143e5500a2960083873ff23cdd68e3cd
24942 F20110330_AABXGU schutte_j_Page_152.jpg
4287992ef247dba3b716d797ef19d64e
02c5a7101a75c54a07ff3fcad7af16d1cd9e691a
1738 F20110330_AABWEG schutte_j_Page_130.txt
f0a0c19142d0e92a0220bd7df7cd7106
08b53a07ae2ee4fcacccb037446207691d9a35dc
36222 F20110330_AABXHI schutte_j_Page_014.jp2
c3b7c4e4d425162f357deb685c2917f6
5440716c45126737558fddecc44efc965f34e785
6413 F20110330_AABWEH schutte_j_Page_049thm.jpg
0b92df8d3b661a29ff11dc420ce808e5
e51d4505ba994d879fbe44269a61e145364c2f71
82424 F20110330_AABXHJ schutte_j_Page_015.jp2
cea1cf6c3171587e327e72b65e53f3e9
682501d678bd90edacdb5b753926fa9cbea44156
2101 F20110330_AABWDT schutte_j_Page_116.txt
8fae31f17f31b0456bffd4c66837a773
129538ea1115502e94cabb11320b76052e53d843
8284 F20110330_AABXGV schutte_j_Page_152.QC.jpg
4a227273ab3e8dc7c384761dc3baad00
6ff7fa1c49b32817c3e2c8111486ae51c9cde4e3
83653 F20110330_AABWEI schutte_j_Page_098.jp2
5f4d48d17d4c19c727d650ca907c5c3d
cb5f4f3d1aff68fe712d03b6a6d5f2dcae61e371
109162 F20110330_AABXHK schutte_j_Page_016.jp2
1f7c7425672c07afe0eb09ed6e087dd4
af95d34ba73c942b49c4bbe702b7dcfec50af12d
F20110330_AABWDU schutte_j_Page_135.tif
966ec60fb60f5d21071bfd0a8e0a9203
90b1b117343604f0c70dda958cf1f2185fa9bdef
23807 F20110330_AABXGW schutte_j_Page_001.jp2
042e83c2e5dd2657cc675630c88fec14
3c2ef6115bad004a4cbbff630d3659973ab5f66e
93655 F20110330_AABWEJ schutte_j_Page_108.jp2
6cfdc3884fb7b33a8dd05e27e23c3675
89841c22583729ac45bc903e8f578ec2c2710460
106590 F20110330_AABXHL schutte_j_Page_017.jp2
660aba5aad714ad6d01f91bdb405d87c
e3c43f9d037f3494bb9032f17bb689c9708a6a19
96973 F20110330_AABXIA schutte_j_Page_032.jp2
33f7bd852c4d59b272c0fb53a2fc36c0
ba200cf598b8acece34f9fd407ad7b51a37a6007
64975 F20110330_AABXHM schutte_j_Page_018.jp2
af21a158c5c5a7dda04effa6e6984eda
81eb69fd914866e8b89e682ed13f9117433a68ad
2619 F20110330_AABWDV schutte_j_Page_077.txt
15515814f39167f549c02f376539ff82
a8542abb9b5e44f500b65df3b5ce5c1e01926390
6377 F20110330_AABXGX schutte_j_Page_002.jp2
085e351333f2edae5a8bc4a96b56eda5
d91184e25f6819cdbb4555ab3abe30cf528427e7
79424 F20110330_AABWEK schutte_j_Page_064.jpg
6c855e394b29dd2918672a5cc8ef50ac
ac1e9711d982fc9b007e68d34c62dbbdb9078dae
108287 F20110330_AABXIB schutte_j_Page_033.jp2
045ec42ce15c993b4a160bb78c42129f
e2e0664422f78607fd0698bf621df39fc360db02
89863 F20110330_AABXHN schutte_j_Page_019.jp2
68791cc68042fc1c19afce31e3a2e03f
5c6a4d019b0d1b15b3605a5aa509135b9895f3c8
F20110330_AABWDW schutte_j_Page_081.pro
664d135c68f237c4657aeaa6e46d923a
c0e1747b5002dbdd14510bc5b81c19bdd3da2702
6607 F20110330_AABXGY schutte_j_Page_003.jp2
a6c754e37594bcbbc24927b50966bae2
a4800094bba689d543e96e8766f4a01fc915cac6
246052 F20110330_AABWEL UFE0012932_00001.xml
021a4bb12c282c38202c2bb2f9f13733
dec6d8ccda6d97eed87a7658a98d8e2c6405d61b
109955 F20110330_AABXIC schutte_j_Page_034.jp2
52ca7c3a486ff06ee3f7e4c11e3310af
296f07216439b8346170203d730d66626ea1ecfc
107781 F20110330_AABXHO schutte_j_Page_020.jp2
ac4492b33dc4334bc235a2e196e00c74
3eb6658e79792636d5e865f4470adb5004940de9
49693 F20110330_AABWDX schutte_j_Page_059.pro
589277bcda5afb0e5a693694ce6c57cb
851ebb0749bbdaa7b627e1edc46113ed027d65fd
93767 F20110330_AABXGZ schutte_j_Page_004.jp2
10968ea036185c61486a77096c2c9211
9d741142ffb65c48490399df891f3d041ec52e98
F20110330_AABWFA schutte_j_Page_013.tif
28810f347584a12be3b71c6c2473f05e
97ee53ab3803b5fb7cbf9543efbb1879701563b4
92162 F20110330_AABXID schutte_j_Page_035.jp2
7f31885d4689abe80c69d8c6c6d35acf
39173fc2c65bdc646e8fb9993d698cf67aa4cbee
112923 F20110330_AABXHP schutte_j_Page_021.jp2
3bae0dadeeecfa39621d05810f36022a
3a18dcd4ea463871d8bb651be10cf84cc3555ef7
F20110330_AABWDY schutte_j_Page_081.tif
588a3684ca4e59dcd7637c4b5e6de4d2
42de53ff3974537d99423150dc9a37d3120e2d4f
F20110330_AABWFB schutte_j_Page_014.tif
bce30d2b9f2ec4720a411e8d92ff592a
fe30d08a3164e28074b5679e4b40008730efb4ca
102498 F20110330_AABXIE schutte_j_Page_036.jp2
1afd2fe4b1924c876f497ca477b8ec3f
742afda5af9a571c2cd0ccbf73747e0dfec4d81a
98997 F20110330_AABXHQ schutte_j_Page_022.jp2
14227f10b0d259bd88c2ea86d39bae81
5ee9894059fbc5e972ef64e308cf61982c2f5630
18178 F20110330_AABWDZ schutte_j_Page_119.pro
906531d0e9143bc46048b29c53cd90ac
bf2705c942642b2851b9cb1bca908b4f7edd8871
F20110330_AABWFC schutte_j_Page_015.tif
0c51f2c6968a8aec96b930fbbe4311b0
32b451bb7c1f58d13c0d7b00bd03712b024236c8
F20110330_AABWEO schutte_j_Page_001.tif
2a73bbce6271a9b56a9bc7b7a4cc0736
e9f488b52055bcba5a09633d4a007dba5a190a99
61837 F20110330_AABXIF schutte_j_Page_037.jp2
cbd46c5a3c68522cc451dca13e114768
a90aa54f5fd96476fa405ba89b518fe5a04f3349
103066 F20110330_AABXHR schutte_j_Page_023.jp2
85e7c8a37ed96a40a8f4a30b24590ad6
b1adf0e4bd5d86ced4c6ebc1eb9cdfa696c15765
F20110330_AABWFD schutte_j_Page_016.tif
2d9da3a0a26764d0f27d5a90b7f5d4b6
1749baed87ce3cfd1f5d90ec06e1faf9d7871cb4
F20110330_AABWEP schutte_j_Page_002.tif
22eedad95712456be4b3f7496ba23b16
976f900629055c61acfaf2e2fe28c9e50f40286a
844416 F20110330_AABXIG schutte_j_Page_038.jp2
56e1b80f816869a131432416c17505a9
f4d801b7ecc06e37680dbb21f02899432885d644
59070 F20110330_AABXHS schutte_j_Page_024.jp2
f94745730f9f6ff6a04b2b220bbe8365
39623d4a27c7999905a9a894b4d850bbd91db617
F20110330_AABWFE schutte_j_Page_017.tif
61a63528bdcd7b06d1dcc5d0960dc460
eb0cdf7f38329dad750956b102612006c891c2ed
F20110330_AABWEQ schutte_j_Page_003.tif
6e003d56c63906be77f2e7e0249d8f79
b9960ab47d7bfccb5b28b1d44be6d42c582b80b8
72649 F20110330_AABXIH schutte_j_Page_039.jp2
6ab1bfce3458ac8c8da89c812eecce94
80189163882c1b5f70053ad2837172fbf983f70c
102837 F20110330_AABXHT schutte_j_Page_025.jp2
6e7828e29faa6767d022c15d4a1640c6
53c117895983b2720c1ea8051eaad55a8c643278
F20110330_AABWFF schutte_j_Page_018.tif
b016aa98d89e9aedbac3fd39af701948
cfea8af74cdd2fd7ab25f45bb11f3099d601ea7a
F20110330_AABWER schutte_j_Page_004.tif
69d91938f60c4ee69073387d072421da
c69c54e2448582b8da7dd4560bfce0f8d8248047
67063 F20110330_AABXII schutte_j_Page_040.jp2
8cdc3a676a78e98eeccfdc380fd4b7b1
0213c77d51ab346301aa5d845763c397ebd8960b
99658 F20110330_AABXHU schutte_j_Page_026.jp2
d1216cfa5420df55bb36bd6c7df8165d
c2d50786078407ea5530c302c8f28d75a8a73ac5
F20110330_AABWFG schutte_j_Page_019.tif
d920cf887d72401a6701dcae46622c62
5482eeeec904a4116a791baf71b496e402981a2c
F20110330_AABWES schutte_j_Page_005.tif
480c7b7145d21c6735c095e63b157206
96e730815d1bc656b568f0776a4e041ca59fe3c8
97364 F20110330_AABXIJ schutte_j_Page_041.jp2
727ccebc833d815bb9efc6b6f0e843f5
16e9d02121f0ca3be91657e2c59e138592a7692f
82756 F20110330_AABXHV schutte_j_Page_027.jp2
9ed7e28ed0e8fd99ce83682b446ce3c3
467c7bba9961eaa7c79449c3be33d5b3389be790
F20110330_AABWFH schutte_j_Page_020.tif
969e54e37a693942803ee23d429b94cf
8782476d504c82b6d05c5b642167eda7f5619d0f
F20110330_AABWET schutte_j_Page_006.tif
8ee87dfe27430e4e05bfa36d30adde20
482564ec457617d4338d2d0c62482f699c5cd4f5
104721 F20110330_AABXIK schutte_j_Page_042.jp2
1aa66e6f58aa1c5925830b8284f66f46
c16fd5934b7111afe3f799b1b8e8caa98310bc2d
F20110330_AABWFI schutte_j_Page_021.tif
162d0a811e954577e635bae220cf962d
58d23f328a36b48324c2507490de3f47fc3325de
111301 F20110330_AABXIL schutte_j_Page_043.jp2
0668f634e476ce7efe8bc2d31cbe93d4
756b2374967f23024eb13b57c47b2aa5e0b8a3d1
100781 F20110330_AABXHW schutte_j_Page_028.jp2
46c3d6be9927f124535c3d2bc67aa618
040512e0a28a5d6a76c192f155f1eae9013115e8
F20110330_AABWFJ schutte_j_Page_022.tif
ad3739e6d7004776e71b311ec701c4cd
097f8bfb3c0bdea91e893d88c88729db1c404a9a
F20110330_AABWEU schutte_j_Page_007.tif
e1208cfffb37a565b342c97c9b92bffc
3be764d58a3564e3ef94691cb282355a1dbd283b
107625 F20110330_AABXJA schutte_j_Page_058.jp2
aa1d0fbcb795b6672d45dea610b8c9e3
0c84b2aa2d95e9c60ac21447f06d4ffa8201b705
112137 F20110330_AABXIM schutte_j_Page_044.jp2
4dfa65d8e0bb0d23124bf1c00803c50b
41f7294f3a8505bdcd497dbbafd9ebd4d9407800
108325 F20110330_AABXHX schutte_j_Page_029.jp2
06927aca9047680b26e8a947c654b60c
1b9a67692e2206212541a3bf9efa9ebc41419448
F20110330_AABWFK schutte_j_Page_023.tif
f594bff8fdcdae4e681b472e96e667c9
587297891fcde4c7de064c0de6caca8c2d5e172e
F20110330_AABWEV schutte_j_Page_008.tif
4fdc337332ce02efde0846f33539a219
7c6d1cd554d31954eab0cf33da4322ca05294ac5
108228 F20110330_AABXJB schutte_j_Page_059.jp2
c50f0fe9201f584f81e484761bd3e870
28237c8986a68eb8bf5e59778d5ae7d9178a0afc
72859 F20110330_AABXIN schutte_j_Page_045.jp2
78ae0a9a38a66e254ffc26aaabe4a745
85595806be02f812d2459918e6c84271389ef162
63957 F20110330_AABXHY schutte_j_Page_030.jp2
c77aa23984c8e12223f309884e2cce78
7d074eb5e009ed609007caa16208df5c0e139b37
F20110330_AABWFL schutte_j_Page_024.tif
44ed2f722bebb0ead571d712cb193694
1dfe22803073c1db53f8b6b12fe2dade05905a3d
F20110330_AABWEW schutte_j_Page_009.tif
9eb6597796eac8778adad2688bab92da
a34d63cedeeedf08d7f36b126237789dbade4448
63235 F20110330_AABXJC schutte_j_Page_060.jp2
11d16de03462001386dc161054ac65fe
2c3f4b81f0bc6e42a0f1dae1a5a2ba584f04c390
83881 F20110330_AABXIO schutte_j_Page_046.jp2
3d73227025e57c8c5ec489b122be41d3
9e6d62d6b045132fc4ff6b03427c3fd7baa42624
94045 F20110330_AABXHZ schutte_j_Page_031.jp2
26fca1a4f678d8e638c7a7bf80a3c5dd
1b30e96ec41c3ce78e4ac752245e5fa5ac662d37
F20110330_AABWGA schutte_j_Page_040.tif
09b27f64f5a5e5780ab5efc1b484d4f5
1bb03b142c235ec550c7845e3bae1ea29105a63b
F20110330_AABWFM schutte_j_Page_025.tif
2d20ff362f6080132610e02e200c820b
8b0643a422c5650a458fb55868874aa6492e08f0
F20110330_AABWEX schutte_j_Page_010.tif
9a4231c9875660bdcff1d53075dd613a
66c07f5a3335fcfb680159c9d7a9121f8ae1d273
110908 F20110330_AABXJD schutte_j_Page_062.jp2
307850b3df6d0e57cfecd0f32d326547
0bcec372e7aa2b0d040541419d0baa8f00d3450a
452752 F20110330_AABXIP schutte_j_Page_047.jp2
a5bdd7fddbf7e7f2cfb082f7d0961d69
d8eceb19689025c28a0534b6526d939de1368bbd
F20110330_AABWGB schutte_j_Page_041.tif
b98cb64f7a58df0a93de241230dde711
7e7cf4d929bf059a9ac57226102f5a268acb3b79
F20110330_AABWFN schutte_j_Page_026.tif
23cc86a3d8c46e01ef03253f1a13df84
ecc327a15d7e934c7d98bd02ca00eff096c63b82
F20110330_AABWEY schutte_j_Page_011.tif
d35cb3715e5e2d347e197e02778e5f79
3143e53ef5e68f93f51aedc82147dbac07fead45
110717 F20110330_AABXJE schutte_j_Page_063.jp2
7dc392e34c7bd562cc15b43dcd80994e
4f077c747fc616a1973b7f15a17b02f008401011
99229 F20110330_AABXIQ schutte_j_Page_048.jp2
d7865cbc6840f03ea4834632082dabd2
ebb97f22827c286b1a6c86a6c04af9b99f1e6e98
F20110330_AABWGC schutte_j_Page_042.tif
582a57ae667d16e3fcd586c562db658d
466becf42f260bbda15043d6aa5280c261af33e0
F20110330_AABWFO schutte_j_Page_027.tif
910338d1aac3501b1111fd0d26554edf
9d8f2ea265e1c71b2210d16aee32245e98e0ec60
F20110330_AABWEZ schutte_j_Page_012.tif
168cf859e273724d0cafc6422ac6ea6e
e22739dfed9186011c4eff13b719e6bc140ddc2c
103771 F20110330_AABXJF schutte_j_Page_064.jp2
f52de2f512972c1c49e11cfd37c70aad
6b96a3c0239019d7daabe7dd783167014926b936
110659 F20110330_AABXIR schutte_j_Page_049.jp2
663002736336d0e586cc1c451ebe4960
aab41d673c4bb51f7e98106969afbcc4d9ddfcde
F20110330_AABWGD schutte_j_Page_043.tif
58ff7684b9b1e5f0e5cceb6b35f24dee
0eb591f4866b7074153e0af6cfc12800176a3b39
F20110330_AABWFP schutte_j_Page_028.tif
642b319fb98b8d17b8a9ef20425f1ea2
853a198e4041a0b445707709d2f49e5ea0bfae79
109084 F20110330_AABXJG schutte_j_Page_065.jp2
9ab4c788e48de452c728c59bdf3cf42e
b943a5c0398a3747d99f21c3206bf7bb9f68d7c3
104194 F20110330_AABXIS schutte_j_Page_050.jp2
5bf9ac1d0788eb726a0ec3278974cc5d
5c93cc439e215958c4fa4017811e2f2b8cd32b3f
F20110330_AABWGE schutte_j_Page_044.tif
af924c819f7f7b81ae54a7c8a03814d2
4f9ab453f33777189970efd4b9340310f198c635
F20110330_AABWFQ schutte_j_Page_029.tif
a253adea121aafc24ccf4d04e1fa6900
a0333fb17ee86ddbe392d8d6f93f456e0d3bdc7d
67254 F20110330_AABXJH schutte_j_Page_066.jp2
5345ac5fb39748afbfb79cd54a6badfc
63be7d10ca0fcff523252ff68d7dd6b541f45896
105637 F20110330_AABXIT schutte_j_Page_051.jp2
1d4cc6e489f05d055c6e52263d13eeb3
c87a326d127ee4965f426ddd6d1228cf1bd8ee99
F20110330_AABWGF schutte_j_Page_045.tif
9c13c238a4d25ff3598eec71e837762d
d3178c4969dcc6fab5b95cd92fc9b3c9a46df49b
F20110330_AABWFR schutte_j_Page_030.tif
06a86518f4c4a19ee7ac97daaaf54230
edc2913dc897a5c12324bef3efcfbdacd3b05ca0
97807 F20110330_AABXJI schutte_j_Page_067.jp2
c907f3709dbd31201d42c0822a7b4468
83de916fce20fef4247b20e53cbf5b7c39744d72
773604 F20110330_AABXIU schutte_j_Page_052.jp2
4b53f8ac6f1a1f84a9a2abbbf0782e2f
79723f839b0b4f8482c0d7821a3d17e07e6debbc
F20110330_AABWGG schutte_j_Page_046.tif
738348ce1b8030680976e7ff42052397
bef2e943a0fbe495fa0446f203dd40be6f5731a4
F20110330_AABWFS schutte_j_Page_032.tif
38ea7c8dc2c1bfae150fbda2eeb157e8
bfd586f1023d77609458b21f4dbac6925846c8df
526629 F20110330_AABXJJ schutte_j_Page_068.jp2
eff1eebef1ba11e412176ccc8fdae4f8
60d1bac331d77fc794de7a1fbcd7e344669384b8
913513 F20110330_AABXIV schutte_j_Page_053.jp2
5c6176110a13cffe29de2d70bc75e507
719728acf9ce699caa19b29ae72c6c77f5ed37e8
F20110330_AABWGH schutte_j_Page_047.tif
c0ea92e62f4208af9f49a29ee965e5de
ff5ba7863647a7102be9dd183306275ce169dab7
F20110330_AABWFT schutte_j_Page_033.tif
7ffcdb9aac73d542490ed8b1ac84bc5b
d87b82e3a294669fa31f49774e7c9d07c7a02fef
110007 F20110330_AABXJK schutte_j_Page_069.jp2
d2d0e3ac816e6285dfd3f57b54b315c5
e540ff9cb781f2f73acf18d22084718b4357befe
851985 F20110330_AABXIW schutte_j_Page_054.jp2
11625b2ec0c18e65d985f0a21d93eafc
e385189c854299c670052bd65bfbf5364908ee27
F20110330_AABWGI schutte_j_Page_048.tif
c78e89c82e08b85981f54e24c42be848
ba58cf08c29ccf4310c0e43b4efc10194550fe73
F20110330_AABWFU schutte_j_Page_034.tif
9f2088574211d34125f4443ce803063a
e4f55c949cd1932239c5d92211732a4058700748
74964 F20110330_AABXJL schutte_j_Page_070.jp2
141f315e58889292269668bff20abe3d
41a47042c38807994c3ab3a364fda2a342ee39cc
F20110330_AABWGJ schutte_j_Page_049.tif
77223830b967906e5fab64af62c9d3c8
a2f2d864946e184458385871659c21cde8e2580d
110653 F20110330_AABXKA schutte_j_Page_085.jp2
ca29e99faab84ccaad5118fcc7f3a35f
68503abf4008d5b26a875d8ccb988098466b72f6
429352 F20110330_AABXJM schutte_j_Page_071.jp2
554aedeb63640a4ac6cbe3d16c8617d3
f68e3d327cb8b1c9fdd8698da8fea1b9c76c810f
111275 F20110330_AABXIX schutte_j_Page_055.jp2
2c305482247b4d2cd6b26d160687f598
1e5cead323b066f3405c2739773958de8b5452ae
F20110330_AABWGK schutte_j_Page_050.tif
c2ca12bff186b54f363af790b9f61168
047592ef25d860b3f8f0b9f8ea07e386a4825d6f
F20110330_AABWFV schutte_j_Page_035.tif
b8fa8e51d574870bd356dd57329a64bc
ec605cf19c3d6a45e0f3b6fe8c54463c4b8d50d7
108098 F20110330_AABXKB schutte_j_Page_086.jp2
c2132ee0d8c716f0c6dbfa2e5c0d50c2
3b552eeae9e2d9b2ef8008cc5ec2aad8e7c757e7
110472 F20110330_AABXJN schutte_j_Page_072.jp2
5d6e0e5ab5e5f41b1accb4be53d88ba8
70c0b2da8e91bf5d455486deddd36c8eee9a9112
110212 F20110330_AABXIY schutte_j_Page_056.jp2
263b78a6d532dcd4e90eeed00193d663
03c0e8fb690afd71152b146947e3f0cbda5550e1
F20110330_AABWGL schutte_j_Page_051.tif
52ee9ad17235715cdaace770fed5b717
fec52b261b8bb52d209057c3d4332e0a60290f9f
F20110330_AABWFW schutte_j_Page_036.tif
6b36c93179684b043385505bae8cc22e
83e2b40f12ef4d65c7fb806e47c77534f4779c22
96230 F20110330_AABXKC schutte_j_Page_087.jp2
0acd06bafcdb3ddaef7c46998f95654f
5845e194a2dc4ec993005500c3736cbb943f16a5
107237 F20110330_AABXJO schutte_j_Page_073.jp2
945dcb9f3025b2475ca3282f0d48a36a
25157f417b5971b43db6df8e8db92087a31b92f1
947149 F20110330_AABXIZ schutte_j_Page_057.jp2
f7ff3360be7343f398eba37135e4f1c3
57e4650a56ec00741d45848b847057842911a04b
F20110330_AABWGM schutte_j_Page_052.tif
ce3cb8244018bef59aeaf46f1455ae6b
63e11141eb02cb69354caa3baf5e25b79304152a
F20110330_AABWFX schutte_j_Page_037.tif
c85212bc2eb4b45052dc5f5973cbae0d
a6b3b3d74f89c83fc8e631a08049e6f3c92f0353
F20110330_AABWHA schutte_j_Page_066.tif
0bb09889bbcf87cd4377a7515fe9fdbb
950ae55567e2659f3c3e3623f1061006fb3b2ad2
85508 F20110330_AABXKD schutte_j_Page_088.jp2
fe53778e50575b4b9b41c5e724503053
4aa4d7c7e525c4643fbdbf412f3eee8e959e235c
111976 F20110330_AABXJP schutte_j_Page_074.jp2
e1881b8836d70ace85562b334a1798f2
4a89e6f822c2a55b23dc9fcec88807cf6b5a31c9
F20110330_AABWFY schutte_j_Page_038.tif
fb26a67df50887737ecb1699221d47cb
8fbf290660fbe148c6a340c4f93726f1ae6ffeaf
F20110330_AABWHB schutte_j_Page_067.tif
78cec9bd070b8f6edbeaf2ef4cceb9f3
966e65d0a13a71a7a48e78a9c37cc5b98ac1e7a4
F20110330_AABWGN schutte_j_Page_053.tif
c2364691cd2c55e65a9db2802751b526
ce799aa503c0b4234d9ba6e5b73aab145970326a
1040593 F20110330_AABXKE schutte_j_Page_089.jp2
c6ae31db4f3b826f472ef1a9b2ae4149
d7d2e98f765152d1d9ec624d117c4ad9c90b3eb7
1039335 F20110330_AABXJQ schutte_j_Page_075.jp2
36f6f21ecc855ce8b33b2faef9290921
5c5ceab4d77cd6a7169ef1796b86c27ba025805f
F20110330_AABWFZ schutte_j_Page_039.tif
6be6820d65b1e855914d920dd973a865
c1ac9300f9379697ec90958d1ad6c3b81f90fbbe
F20110330_AABWHC schutte_j_Page_068.tif
aa70845b49fc155733177b22cca54077
a35542e1ded0be668883f06c51dcb4f7fcf2a662
F20110330_AABWGO schutte_j_Page_054.tif
ad82503b366bd8c8a1a4ce4e153a208f
dbb512196b06bc9de36c773814759e1660953809
99219 F20110330_AABXKF schutte_j_Page_090.jp2
8f9f952abc6c1ee2112f61b4c1dbe6b4
853c3413597e1306cfc22c6aae307ecd47386ea5
99150 F20110330_AABXJR schutte_j_Page_076.jp2
5d43a100e39ca77960d3d8ffe7849ff6
bfa0992834dfe374a760bb94ad42178711b6ef98
F20110330_AABWHD schutte_j_Page_069.tif
032270c8a1aa01405f97d3ada29098d4
d0c4d8aa826a8e9d25f070d5ed96fb2f441c96f9
F20110330_AABWGP schutte_j_Page_055.tif
8df6b5370b57da70b31734a589aeb615
ef1c2bbb0d3c742708b7273fdd45cb99218872dd
84785 F20110330_AABXKG schutte_j_Page_091.jp2
025245ded16c99fe8302616d8e09198f
57186428bae87cc7c72c883e151c614ff3ba2536
128710 F20110330_AABXJS schutte_j_Page_077.jp2
38e2a5a6d4dfd02d6dd22dee379a69f1
014d136b9479828d6f80a7aaedd9642d25e680c7
F20110330_AABWHE schutte_j_Page_070.tif
a5f0b66fe174b768ca10fd1f3819afaa
542f761f9617e9a86b68819729450f8126dbf2c2
F20110330_AABWGQ schutte_j_Page_056.tif
9e5ab83f0c94b5a6e1fdfa0f594fb212
c816a9917fd264aef8cce75ed44d680e6e993e7c
90598 F20110330_AABXKH schutte_j_Page_092.jp2
fc4b6788095acabb0a4d22ef493e5f61
3116a4ae58dccfd4bf8903b15a6e759eab351cdb
617040 F20110330_AABXJT schutte_j_Page_078.jp2
ada6d3e9015641139580a4002287259d
c9a24bd684f52b697571e34080e7c8c53b4a9c41
F20110330_AABWHF schutte_j_Page_071.tif
42d3a8524a58c27336cc5bcc7a69e6bb
155044c98ca424af37db37a52efc57484768a29c
F20110330_AABWGR schutte_j_Page_057.tif
bcd93e82cc9e7b6c8ab7cb8ef9ca850f
e3d4ad2694464910f09b78361f98ea8d4b49a9df
104697 F20110330_AABXKI schutte_j_Page_093.jp2
1ef34cf70a701eb388294734aa5eba37
b9cf95f40392290a9b22eb5477ce4015b0474bc3
113346 F20110330_AABXJU schutte_j_Page_079.jp2
eafd5fc123a139146e80a9668d71e69f
c0c8679e82b4a2833d324373e33c6505b992fe2a
F20110330_AABWHG schutte_j_Page_072.tif
4a10605194045526f59c72435ff9f11e
0db3f2480e62d157d6c2d4b06a4b25c34b41ecf5
F20110330_AABWGS schutte_j_Page_058.tif
8c50deabb631fc458fda230f367cb8e2
e24d7c0ae063d0c35b55c39fa16cdd7c78e57a55
93730 F20110330_AABXKJ schutte_j_Page_094.jp2
a8d13a4eb9635ec8e47a0f89787bf1f4
409cfd73d6bd53712522aa6a7935578f85521f14
106405 F20110330_AABXJV schutte_j_Page_080.jp2
8d98fd64fd83dacf419b58b809417192
643754c2bec52afbdaf487d8092930277a2c07b2
F20110330_AABWHH schutte_j_Page_073.tif
9a41094eb1ff8893100a4b0d6a27b82f
463821670bafb84e18f63932baf981e0b6fab5e3
F20110330_AABWGT schutte_j_Page_059.tif
0032bd8dca23725e22bc9392b600cd5a
4ded457a11f34055a2e3d142d5dd5c30119a418b
90423 F20110330_AABXKK schutte_j_Page_095.jp2
8056c0712af2490c6417eb00900a8831
5075b9f6ba81dfe317ba7710ce9f4f92e3f40c8a
98538 F20110330_AABXJW schutte_j_Page_081.jp2
b2b8a936c66a347f5af01ccb31084225
8ceeba45cbde525d862cc86530f2063a6a2179e0
F20110330_AABWHI schutte_j_Page_074.tif
9d9c849dee9715652792354eec9b9fe4
3b36f3b7caa8bcced13f07efd0df1d7b05f5e6fb
F20110330_AABWGU schutte_j_Page_060.tif
87bf273ea74911d43abf47b0c1341a37
8046e174cefeca3fcb4be2b5b146b6ca31594a6b
99568 F20110330_AABXKL schutte_j_Page_096.jp2
305be6d10026ae2cf07c112309ef2a8d
3c0282b3d3247be40f068a229555d07462ec951c
348545 F20110330_AABXJX schutte_j_Page_082.jp2
5cfa18ad3c38c6ac5226e038dac049b7
9eb329b51325fdeb75217ed6586dbb90f7e0e537
F20110330_AABWHJ schutte_j_Page_075.tif
eec28bb239c59a4d9128f35a987f2aa9
1c1fd70e0bd36ac76e346a9e0d92f0ea35a4851d
F20110330_AABWGV schutte_j_Page_061.tif
eb481bd629206f5b6d3ccd979a5a12d2
1038d870b2c4292fbf503c7cf1f64423cdd3d4e2
25634 F20110330_AABXLA schutte_j_Page_113.jp2
5e81fde15a60a99d4e4b1e4c54681638
a6868360ac26e61eedccb5a906bf5584c850dc6a
71704 F20110330_AABXKM schutte_j_Page_097.jp2
87613d4d4b6d8d656a5514b422650c8e
37ad40116e4bb68150f0b3755dc4dcdff5b1019d
F20110330_AABWHK schutte_j_Page_076.tif
b48bc33e1d5613cdcabc07bf8d74f3f3
77b00b5e80052e51b28e955a48674cdf92d2ef7e
63068 F20110330_AABXLB schutte_j_Page_114.jp2
8cf79540755cfc0bb0b98d38452dd8dc
1235f67d8d7b785463e24dcb70014566b90ec14d
85181 F20110330_AABXKN schutte_j_Page_099.jp2
8cc9a29262fe82bb139efe531e535637
c988f2e671c0a38aeae54b922da2306174cf5063
667293 F20110330_AABXJY schutte_j_Page_083.jp2
09a7fb1870b239d21a1955690fc127e5
2d0e7b3bb0d20faab1e89ea80930e76996464c16
F20110330_AABWHL schutte_j_Page_077.tif
f3684ce26b8a539578b329738f16300d
7a452758a0997cdf7ebc83a671b3f441d36afe77
F20110330_AABWGW schutte_j_Page_062.tif
f2a16f633942b7ed055b6abed26d0bab
486942eeb1ee288cf9970927e9c5207b2b2190b3
47381 F20110330_AABXLC schutte_j_Page_115.jp2
5d612747d1b2dff3d1af13abb55a2e56
76a519170cf30acd7eb8850e1cce6be136103c56
86119 F20110330_AABXKO schutte_j_Page_100.jp2
d0045ce88dff7772114c9aef9c65700b
53f3a17b4c5602027cc07a4d29c920ecc523efb7
111447 F20110330_AABXJZ schutte_j_Page_084.jp2
f88860e9d872bbd3bd4e338c9e2bfa4f
7a0ad854412930fe9acc030a0b677da885b9b17e
F20110330_AABWIA schutte_j_Page_093.tif
1ffa2698518815d48acc076bcbfe8686
6400f1d8ea2c92f1b029816e9291039f324fe8f0
F20110330_AABWHM schutte_j_Page_078.tif
31daf48cae21a1e61a5ef9ac76d2615e
b8093ff62eb611a6cf65e8fcb6cf6fb708ebea40
F20110330_AABWGX schutte_j_Page_063.tif
621d0a8db5e986dc8e49cbab70564282
fb1aae614787ab40a20b84eb915da67552ef93cd
89282 F20110330_AABXLD schutte_j_Page_116.jp2
4b84d71acc4d950db80a78c674a507ed
79acfd5e52b76ed8bced6220ace91221659cf34f
90666 F20110330_AABXKP schutte_j_Page_101.jp2
23dd82f116a7c934b48fca9b716084e2
2f1f5ee92bab693453b3cbf9e00dc188197d5c42
F20110330_AABWIB schutte_j_Page_094.tif
e19e2280eb8e3e596519e124c6f9a14e
0fc3744f638172a9b179e7de61350ebcbde262fb
F20110330_AABWHN schutte_j_Page_079.tif
91e611f8958dbfa9ac6fe47fed5c121c
2a7eb9bec1325220265f7d12840a4825550028ad
F20110330_AABWGY schutte_j_Page_064.tif
b17583f7d7041eba2c1a5d7f4835799c
84442dffc6444a75625ee91c3104d7746d84fd11
895519 F20110330_AABXLE schutte_j_Page_117.jp2
a90eab75d863a9049c25a42f77b285ac
2ae009f75c69e649aa115be4205dc5887fea35b3
82188 F20110330_AABXKQ schutte_j_Page_102.jp2
6d68dc0a3fe671abde32915c72cb6fa5
62502987c2b7555b7e7d36c25681d7f5f6904366
F20110330_AABWIC schutte_j_Page_095.tif
66d444c4d6897a58bb86f5a90a9fe651
13b7353f5ed183faf376465a9a201619ba510434
F20110330_AABWHO schutte_j_Page_080.tif
7c78e80f86b4f5319c97a916671baeda
9dc4ba3a3d0f8f0df61132f4a81eb260b126165b
F20110330_AABWGZ schutte_j_Page_065.tif
870e4214bd928f275d34dcaa4227f051
cbf87a2995075ef9b9e2eb8c700fc97e80842f27
101720 F20110330_AABXLF schutte_j_Page_118.jp2
8c0773be94cf77ea5648634f2dc6def9
60f86bbcc501cf9199a938eada2c5ed33a435386
110277 F20110330_AABXKR schutte_j_Page_103.jp2
ba3961ca797e9f1b064eb376094e0c6c
ee80cb4d2f9993501eeb3510cf3579b7af5299ff
F20110330_AABWID schutte_j_Page_096.tif
3cef92b3b2cfb40d2fd9d06133ccf9aa
f8a32f3e42b9c03752e4e770431106a8cdd4e387
F20110330_AABWHP schutte_j_Page_082.tif
1762d61f078f627dd85a550462a94f3b
e7685a4746f0d7ac6a91ddac0a168f5abca8b4ad
69753 F20110330_AABXLG schutte_j_Page_119.jp2
d31a39131b29970397288012ee86a5aa
2774dc0a5fcde309299a10bf3dceed036b8b82da
83103 F20110330_AABXKS schutte_j_Page_104.jp2
5666eb69a4aa56c931ee7de0627599e7
d16d15b52fe7a90a9177933167e6053d50d0f62d
F20110330_AABWIE schutte_j_Page_097.tif
d149673b85af54677066819916c25114
bbe788ef2ecae1514a024dbf86fc5fdcc0afccff
F20110330_AABWHQ schutte_j_Page_083.tif
56205ecb4d0fe2318e48fa63095d27f9
5661c9235721c0ba314f855909a49d68d84ee5d7
70078 F20110330_AABXLH schutte_j_Page_120.jp2
7720bb08ae04531e6a163c9047a14257
a007d2d681a9eb786e782af12c1a06b163b6264a
52046 F20110330_AABXKT schutte_j_Page_105.jp2
19f809da225a41953e078fd8c4553560
f50d21767a8f3321290d8db654bd54442e4a99df
F20110330_AABWIF schutte_j_Page_098.tif
6566dcb045d2970cd111a94a4dc0c24e
eed937bc291b26628e86e35d67d42cb416cbfc81
F20110330_AABWHR schutte_j_Page_084.tif
06d7fff14cdcae21f6fee61733387328
f7916b4d519793b3331f64ac8fb5ccff12bcff2f



PAGE 1

APPLICATIONS OF PARALLEL GLOBAL OPTIMIZATION TO MECHANICS PROBLEMS By JACO FRANCOIS SCHUTTE A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL OF THE UNIVERSITY OF FLOR IDA IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY UNIVERSITY OF FLORIDA 2005

PAGE 2

Copyright 2005 by JACO FRANCOIS SCHUTTE

PAGE 3

This work is dedicated to my parents and my wife Lisa.

PAGE 4

iv ACKNOWLEDGMENTS First and foremost, I would like to thank Dr. Raphael T. Haftka, chairman of my advisory committee, for the opportunity he pr ovided me to complete my doctoral studies under his exceptional guidance. Without his une nding patience, constant encouragement, guidance and expertise, this work would not have been possible. Dr. Haftka's mentoring has made a lasting impression on both my academic and personal life. I would also like to thank the members of my advisory committee, Dr. Benjamin Fregly, Dr. Alan D. George, Dr Panos M. Pardalos, and Dr. Nam Ho Kim. I am grateful for their willingness to serve on my committee, for the help they provided, for their involvement with my oral examination, and for reviewing this di ssertation. Special thanks go to Dr. Benjamin Fregly, who provide d a major part of the financial support for my studies. Special thanks also go to Dr. Alan George whose parallel processing graduate course provided much of the inspiration for th e research presented in this manuscript, and for reviewing some of my publications. Thanks go also to Dr. Nielen Stander who provided me with the wonderful opportunity to do an internship at the Livermore Software Technology Corporation. My colleagues in the Structural and Multidisciplinary Optimization Research Group at the University of Florida also de serve many thanks for their support and the many fruitful discussions. Speci al thanks go to Tushar Goel, Erdem Acar, and also Dr. Satchi Venkataraman and his wife Beth, who took me in on my arrival in the USA and provided me a foothold for which I will forever be grateful.

PAGE 5

v The financial support provided by AFOSR grant F49620-09-1-0070 to R.T.H. and the NASA Cooperative Agreement NCC3-994, the Institute for Future Space Transport University Research, Engineering and Technol ogy Institute is grat efully acknowledged. I would also like to express my deepest appreciation to my parents. Their limitless love, support and understanding are the main stay of my achievements in life. Lastly, I would like to thank my wife, Lisa Without her love, pa tience and sacrifice I would never have been able to finish this dissertation.

PAGE 6

vi TABLE OF CONTENTS page ACKNOWLEDGMENTS.................................................................................................iv LIST OF TABLES.............................................................................................................ix LIST OF FIGURES.............................................................................................................x ABSTRACT.....................................................................................................................xiii CHAPTER 1 INTRODUCTION........................................................................................................1 Statement of Problem...................................................................................................1 Purpose of Research.....................................................................................................1 Significance of Research..............................................................................................1 Parallelism by Exploitation of Op timization Algorithm Structure........................2 Parallelism through Multiple Indepe ndent Concurrent Optimizations.................3 Parallelism through Concurrent Opti mization of Decomposed Problems............3 Roadmap.......................................................................................................................4 2 BACKGROUND..........................................................................................................5 Population-based Global Optimization.........................................................................5 Parallel Processing in Optimization..............................................................................6 Decomposition in Large Scale Optimization................................................................7 Literature Review: Proble m Decomposition Strategies...............................................8 Collaborative Optimization (CO)..........................................................................8 Concurrent SubSpace Optimization (CSSO).......................................................11 Analytical Target Cascading (ATC)....................................................................15 Quasiseparable Decomposition and Optimization..............................................17 3 GLOBAL OPTIMIZATION THRO UGH THE PARTICLE SWARM ALGORITHM............................................................................................................18 Overview.....................................................................................................................18 Introduction.................................................................................................................19 Theory.........................................................................................................................21 Particle Swarm Algorithm...................................................................................21

PAGE 7

vii Analysis of Scale Sensitivity...............................................................................24 Methodology...............................................................................................................28 Optimization Algorithms.....................................................................................28 Analytical Test Problems....................................................................................30 Biomechanical Test Problem...............................................................................32 Results.................................................................................................................37 Discussion...................................................................................................................41 Conclusions.................................................................................................................46 4 PARALLELISM BY EXPLOITING PO PULATION-BASED ALGORITHM STRUCTURES...........................................................................................................47 Overview.....................................................................................................................47 Introduction.................................................................................................................48 Serial Particle Swarm Algorithm................................................................................50 Parallel Particle Swarm Algorithm.............................................................................53 Concurrent Operation and Scalability.................................................................53 Asynchronous vs. Synchronous Implementation................................................54 Coherence............................................................................................................55 Network Communication....................................................................................56 Synchronization and Implementation..................................................................58 Sample Optimization Problems..................................................................................59 Analytical Test Problems....................................................................................59 Biomechanical System Identification problems..................................................60 Speedup and Parallel Efficiency..........................................................................63 Numerical Results.......................................................................................................65 Discussion...................................................................................................................67 Conclusions.................................................................................................................73 5 IMPROVED GLOBAL CONVERGENCE USING MULTIPLE INDEPENDENT OPTIMIZATIONS.....................................................................................................74 Overview.....................................................................................................................74 Introduction.................................................................................................................74 Methodology...............................................................................................................77 Analytical Test Set..............................................................................................77 Multiple-run Methodology..................................................................................78 Exploratory run and budgeting scheme........................................................81 Bayesian convergence probability estimation..............................................84 Numerical Results.......................................................................................................85 Multi-run Approach for Predetermi ned Number of Optimizations.....................85 Multi-run Efficiency............................................................................................87 Bayesian Convergence Probability Estimation...................................................89 Monte Carlo Convergence Probability Estimation..............................................92 Conclusions.................................................................................................................92 6 PARALLELISM BY DECOMPOSITION METHODOLOGIES.............................94

PAGE 8

viii Overview.....................................................................................................................94 Introduction.................................................................................................................94 Quasiseparable Decomposition Theory......................................................................96 Stepped Hollow Cantilever Beam Example...............................................................98 Stepped hollow beam optimization...................................................................102 Quasiseparable Optimization Approach............................................................104 Results.......................................................................................................................106 All-at-once Approach........................................................................................106 Hybrid all-at-once Approach.............................................................................107 Quasiseparable Approach..................................................................................108 Approximation of Constraint Margins..............................................................109 Discussion.................................................................................................................112 Conclusions...............................................................................................................115 7 CONCLUSIONS......................................................................................................116 Parallelism by Exploitation of Op timization Algorithm Structure...........................116 Parallelism through Multiple Independent Optimizations........................................116 Parallelism through Concurrent Opti mization of Decomposed Problems...............117 Future Directions......................................................................................................117 Summary...................................................................................................................118 APPENDIX A ANALYTICAL TEST PROBLEM SET..................................................................119 Griewank...................................................................................................................119 Hartman 6.................................................................................................................119 Shekel 10..................................................................................................................120 B MONTE CARLO VERIFICATION OF GLOBAL CONVERGENCE PROBABILITY........................................................................................................122 LIST OF REFERENCES.................................................................................................126 BIOGRAPHICAL SKETCH...........................................................................................138

PAGE 9

ix LIST OF TABLES Table page 1 Standard PSO algorithm para meters used in the study............................................24 2 Fraction of successful optimizer runs for the analytical test problems....................37 3 Final cost function values and associat ed marker distance and joint parameter root-mean-square (RMS) errors after 10,000 function evaluations performed by multiple unscaled and scaled PSO, GA, SQP, and BFGS runs................................40 4 Parallel PSO results for the biomechanic al system identification problem using synthetic marker trajectories wi thout and with numerical noise..............................66 5 Parallel PSO results for the biomechanic al system identification problem using synthetic marker trajectories wit hout and with numerical noise..............................67 6 Particle swarm algorithm parameters.......................................................................77 7 Problem convergence tolerances..............................................................................78 8 Theoretical convergence probability results for Hartman problem.........................87 9 Minimum, maximum and median fitness evaluations when applying ratio of change stopping criteria on pool of 1,000 optimizations for Griewank, Hartman and Shekel problems................................................................................................89 10 Beam material properties a nd end load configuration...........................................101 11 Stepped hollow beam global optimum...................................................................104 12 All-at-once approach median solution...................................................................107 13 Hybrid, all-at-once median solution.......................................................................108 14 Quasiseparable optimization result........................................................................110 15 Surrogate lower level approxi mation optimization results....................................112 16 Hartman problem constants....................................................................................120 17 Shekel problem constants.......................................................................................121

PAGE 10

x LIST OF FIGURES Figure page 1 Collaborative optimization flow diagram................................................................10 2 Collaborative optimization subspace constr aint satisfaction procedure (taken from [6])...................................................................................................................10 3 Concurrent subspace optimization methodology flow diagram...............................13 4 Example hierarchical problem structure...................................................................16 5 Sub-problem information flow.................................................................................16 6 Joint locations and orient ations in the parametric ankle kinematic model...............33 7 Comparison of convergence history results for the analytical test problems...........38 8 Final cost function values for ten unscal ed (dark bars) and scaled (gray bars) parallel PSO, GA, SQP, and BFGS runs for the biomechanical test problem.........39 9 Convergence history for unscaled (dark li nes) and scaled (g ray lines) parallel PSO, GA, SQP, and BFGS runs for the biomechanical test problem......................40 10 Sensitivity of gradient calculations to selected finite difference step size for one design variable..........................................................................................................43 11 Serial implementation of PSO algorithm.................................................................54 12 Parallel implementation of the PSO algorithm........................................................57 13 Surface plots of the (a) Griewank and (b ) Corana analytical test problems showing the presence of multiple local minima.......................................................61 14 Average fitness convergen ce histories for the (a) Griewank and (b) Corana analytical test problems for swar m sizes of 16,32,64,and 128 particles and 10000 swarm iterations.......................................................................................................64 15 Fitness convergence and parameter erro r plots for the biomechanical system identification problem using synthetic data with noise............................................68 16 (a) Speedup and (b) parallel efficiency for the analytical and biomechanical optimization problems..............................................................................................69

PAGE 11

xi 17 Multiple local minima for Griewank anal ytical problem surface plot in two dimensions................................................................................................................75 18 Cumulative convergence probability Pc as a function of the number of optimization runs with assumed equal Pi values......................................................81 19 Fitness history and convergence probability Pc plots for Griewank, Hartman and Shekel problems.......................................................................................................82 20 Typical Shekel fitness history plots of 20 optimizations (sampled out of 1000).....83 21 Shekel convergence probability for an individual optimizatio n as a function of fitness evaluations and population size....................................................................85 22 Theoretical cumulative convergence probability Pc as a function of the number of optimization runs with constant Pi for the Hartman problem..............................86 23 Theoretical convergence probability Pc with sets of multiple runs for the Griewank problem....................................................................................................88 24 Theoretical convergence probability Pc using information from exploratory optimizations which are stopped using a ra te of change stopping condition for the Griewank, Hartman and Shekel problems..........................................................90 25 Bayesian Pc estimation comparison to using ex trapolated and randomly sampled optimizations out of pool of 1000 runs for Griewank problem................................91 26 Bayesian Pc estimation comparison to using ex trapolated and randomly sampled optimizations out of pool of 1000 runs for Hartman problem.................................91 27 Bayesian Pc estimation comparison to using ex trapolated and randomly sampled optimizations out of pool of 1000 runs for Shekel problem....................................92 28 Stepped hollow cantilever beam..............................................................................99 29 Dimensional parameters of each cross section.........................................................99 30 Projected displacement in direction ....................................................................100 31 Tip deflection contour plot as a function of beam section 5 with height h and width w with yield stress and aspect ratio constraints indicated by dashed and dash dotted lines respectively.................................................................................103 32 Quasiseperable optimization flow chart.................................................................105 33 Results for 1000 all-at-once optimizations.............................................................106 34 Hybrid PSO-fmincon strategy for 100 optimizations............................................108

PAGE 12

xii 35 Repeated optimizations of section 1 subproblem using fmincon function.............110 36 Summed budget value and constraint margins for individual sections..................111 37 Global and local optimum in section 1 sub-optimization. Scale is 0.1:1...............111 38 Decomposed cross section solution. Scale is 0.1:1................................................112 39 Target tip deflection valu e histories as a functi on of upper-level fitness evaluations..............................................................................................................113 40 Constraint margin value histories as a function of upper-level function evaluations..............................................................................................................114 41 Predicted and Monte Carlo sampled convergence probability Pc for 5 independent optimization runs for the Griewank problem.....................................122 42 Predicted and Monte Carlo sampled convergence probability Pc for 12 independent optimization runs for the Griewank problem.....................................123 43 Monte Carlo sampled convergence probability Pc with sets of multiple runs for the Griewank problem............................................................................................123 44 Monte Carlo sampled convergence probability Pc using information from exploratory optimizations stopped using a rate of change stopping condition for the Griewank, Hartman and Shekel problems........................................................124 45 Bayesian Pc comparison for Griewank, Hart man and Shekel problem..................125

PAGE 13

xiii Abstract of Dissertation Pres ented to the Graduate School of the University of Florida in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy APPLICATIONS OF PARALLEL GLOBAL OPTIMIZATION TO MECHANICS PROBLEMS By Jaco Francois Schutte December 2005 Chair: Raphael T. Haftka Cochair: Benjamin J. Fregly Major Department: Mechanic al and Aerospace Engineering Global optimization of complex engineer ing problems, with a high number of variables and local minima, requires sophi sticated algorithms with global search capabilities and high computati onal efficiency. With the grow ing availability of parallel processing, it makes sense to address these requirements by increasing the parallelism in optimization strategies. This study proposes th ree methods of concurrent processing. The first method entails exploiting the structure of population-based global algorithms such as the stochastic Particle Swarm Optimizati on (PSO) algorithm and the Genetic Algorithm (GA). As a demonstration of how such an algorithm may be adapted for concurrent processing we modify and apply the PSO to several mechanical optimization problems on a parallel processing machine. Desirable PSO al gorithm features such as insensitivity to design variable scaling and modest sensitivit y to algorithm parameters are demonstrated. A second approach to parallelism and impr oving algorithm efficiency is by utilizing multiple optimizations. With this method a bud get of fitness evaluations is distributed

PAGE 14

xiv among several independent sub-optimizations in place of a single extended optimization. Under certain conditions this strategy obt ains a higher combined probability of converging to the global optimum than a single optimization wh ich utilizes the full budget of fitness evaluations. The third and fina l method of parallelism addressed in this study is the use of quasiseparable decompositi on, which is applied to decompose loosely coupled problems. This yields several subproblems of lesser dimensionality which may be concurrently optimized with reduced effort.

PAGE 15

1 CHAPTER 1 INTRODUCTION Statement of Problem Modern large scale problems often require high-fidelity analyses for every fitness evaluation. In addition to this, these optimiza tion problems are of a global nature in the sense that many local minima exist. These two factors combine to form exceptionally demanding optimization problems which requ ire many hours of computation on high-end single-processor computers. In order to e fficiently solve such challenging problems parallelism may be employed for improve d optimizer throughput on computational clusters or multi-core processors. Purpose of Research The research presented in this manuscrip t is targeted on the investigation of methods of implementing parallelism in gl obal optimization. These methods are (i) parallel processing through the optimization algorithm, (ii) multiple independent concurrent optimizations, and (iii) para llel processing by decomposition. Related methods in the literature are reported and will be compared to the approaches formulated in this study. Significance of Research Parallel processing is becoming a rapidl y growing resource in the engineering community. Large processor farms or Be owulf clusters are becoming increasingly common at research and commercial engineering facilities. In addition to this, processor manufacturers are encounte ring physical limitations such as heat dissipation and

PAGE 16

2 constraints on processor dimensions at cu rrent clock frequencies because of the upper limit on signal speeds. These place an upper li mit on the clock frequencies that can be attained and have forced manufacturers to look at other altern atives to improve processing capability. Both Intel and AMD are currently developing methods of putting multiple processors on a single die and will be releasing multiprocessor cores in the consumer market in the near future. This mu lti-core technology will enable even users of desktop computers to utilize concurrent processing and ma ke it an increasingly cheap commodity in the future. The engineering community is facing more complex and computationally demanding problems as the fide lity of simulation software is improved every day. New methods that can take advantage of the increasing availability of parallel processing will give the engineer powerful tool s to solve previously intractable problems. In this manuscript the specific problem of the optimization of large-scale global engineering problems is addressed by utilizi ng three different avenues of parallelism. Any one of these methods and even combin ations of them may utilize concurrent processing to its advantage. Parallelism by Exploitation of Op timization Algorithm Structure Population-based global optimizers such as the Particle Swarm Optimizer (PSO) or Genetic Algorithms (GAs) coordinate th eir search effort in the design space by evaluating a population of indivi duals in an iterative fashi on. These iterations take the form of discrete time steps for the PSO and generations in the case of the GA. Both the PSO and the GA algorithm structures allow f itness calculations of individuals in the population to be evaluated independently and concurrently. This opens up the possibility of assigning a computational node or processo r in a networked group of machines to each

PAGE 17

3 individual in the population, and calculating the fitness of each indi vidual concurrently for every iteration of the optimization algorithm. Parallelism through Multiple Inde pendent Concurrent Optimizations A single optimization of a large-scale problem will have significant probability of becoming entrapped in a local minimum. This risk is alleviated by utilizing population based algorithms such as PSO and GA's. These global optimizers have the means of escaping from such a local minima if e nough iterations are allowed. Alternatively, a larger population may be used, allowing fo r higher sampling densities of the design space, which also reduces the risk of entrap ment. Both these options require significant additional computation effort, with no guarantee of improvement in global convergence probability. A more effective stra tegy can be followed while utilizing the same amount of resources. By running several independent but limited optimizations it will be shown that in most cases the combined probability of finding the global optimum is greatly improved. The limited optimization runs ar e rendered independent by applying a population based optimizer with different sets of initial population distributions in the design space. Parallelism through Concurrent Optimi zation of Decomposed Problems Some classes of such problems may be s ubdivided into several more tractable subproblems by applying decomposition strategies. This process of decomposition generally involves identifying groups of variables and constraints with minimal influence on one another. The choice of which decomposition strategy to apply depends largely on the original problem structure, and the interac tion among variables. Th e objective is to find an efficient decomposition strategy to separa te such large scale global optimization

PAGE 18

4 problems into smaller sub-problems without introducing spurious local minima, and to apply an efficient optimizer to so lve the resulting sub-problems. Roadmap A background on the global optimization of large scale optimization problems, appropriate optimization algorithms and techni ques, and parallelism will be presented in Chapter 2. Chapter 3 presents an evaluation of the global, stocha stic population based algorithm, the Particle Swarm Optimizer, thr ough several analytical and biomechanical system identification problems. In Chapter 4 the parallelization of this population based algorithm is demonstrated and applied. Chapter 5 details the use of multiple independent concurrent optimizations for significant improvements in combined convergence probability. Chapter 6 shows how complex stru ctural problems with a large number of variables may be decomposed into multiple independent sub-problems which can be optimized concurrently using a two level optimization scheme. In Chapter 7 some conclusions are drawn and avenues for future research are proposed.

PAGE 19

5 CHAPTER 2 BACKGROUND Population-based Global Optimization Global optimization often requires specia lized robust approaches. These include stochastic and/or population-ba sed optimizers such as GAs and the PSO. The focus of this research is on exploring avenues of parallelism in population based optimization algorithms. We demonstrate these methods using the stochastic Particle Swarm Optimizer, which is a stochastic search al gorithm suited to con tinuous problems. Other merits to the PSO include low sensitivity to algorithm parameters, and insensitivity to scaling of design variables. These qualitie s will be investigated in Chapter 3. This algorithm does not require gradients, which is an important consideration when solving problems of high dimensionality, often the cas e in large scale optimization. The PSO has a performance comparable to GAs, which are also candidates for any of the methods of parallelism proposed in this manuscript, and may be more suitable for discrete or mixed variable types of problems. In the research presented in this manuscript the PSO is applied to a biomechanical problem with a large number of continuous variables. This problem has several local minima, and, when attempting to solve it w ith gradient based optimizers, demonstrated high sensitivity to the scaling of design vari ables. This made it an ideal candidate to demonstrate the desirable quantities of the algorithm. Other application problems include structural sizing problems and com posite laminate angle optimization.

PAGE 20

6 Parallel Processing in Optimization There are five approaches which may be utilized to decompose a single computational task into smaller problems whic h may then be solved concurrently. These are geometric decomposition, iterative decomposition, recursive decomposition, speculative decomposition and functional deco mposition, or a combination of these [1,2]. Among these, functional decomposition is most commonly applied and will also be the method of implementation presented in Chapte r 4. The steps followed in parallelizing a sequential program consist of the following from [1]. 1. Decompose the sequential program or data into smaller tasks. 2. Assign the tasks to processes. A process is an abstract entity that performs tasks. 3. Orchestrate the necessary data access, communicati on, and synchronization. 4. Map or assign the processes to computational nodes. The functional decomposition method is ba sed on the premise that applications such as an optimization algorithm may be br oken into many distinct phases, each of which interacts with some or all of the ot hers. These phases can be implemented as coroutines, each of which will execute for as long as it is able and then invoke another and remain suspended until it is again needed. Functional decomposition is the simplest way of parallelization if it can be implemented by turning its hi gh-level descrip tion into a set of cooperating processes [2]. When using this method, ba lancing the throughput of the different computational stages will be highly problematic when there are dependencies between stages, for example, when data requi res sequential processing by several stages. This limits the parallelism that may be achieved using functional decomposition. Any further parallelism must be achieved through using geometric, iterative or speculative decomposition within a functional unit [2].

PAGE 21

7 When decomposing a task into concurrent processes some additional communication among these routines is required for coordination and the interchange of data. Among the methods of communication fo r parallel programming th e parallel virtual machine (PVM) and the message-passing interf ace (MPI) are the most widely used. For the research undertaken in Chapter 4 a portable implementation of the MPI library [3,4] containing a set of parallel communication functions [5] is used. Decomposition in Large Scale Optimization The optimization community developed se veral formalized decomposition methods such as Collaborative Optimization (CO) [6], Concurrent SubSpace Optimization (CSSO) [7], and Quasiseparable decomposition to d eal with the challenges presented by large scale engineering problems. The problems a ddressed using these schemes include multidisciplinary optimization in aerospace design, or large scale structural and biomechanical problems. Decomposition methodologies in large scal e optimization are currently intensely studied because increasingly a dvanced higher fidelity simula tion methods result in large scale problems becoming intractable. Problem decomposition allows for: 1. Simplified decomposed subsystems. In most cases the decomposed sub-problems are of reduced dimensionality, and th erefore less demanding on optimization algorithms. An example of this is the numb er of gradient calc ulations required per optimization iteration, which in the case of gradient based algorithms, scales directly with problem dimensionality. 2. A broader work front to be attacked si multaneously, which results in a problem being solved in less time if the proces sing resources are available. Usually computational throughput is limited in a sequential fashion, i.e., the FLOPS limit for computers. However, if multiple processing units are available this limit can be circumvented by using an array of networ ked computers, for example, a Beowulf cluster.

PAGE 22

8 Several such decomposition strategies have been proposed (see next section for a short review), all differing in the manner in which they address some or all of the following. 1. Decomposition boundaries, which may be disc iplinary, or component interfaces in a large structure. 2. Constraint handling 3. Coordination among decomposed sub-problems. Literature Review: Problem Decomposition Strategies Here follows a summary of methodologi es used for the decomposition and optimization of large scale global problems. This review forms the background for the study proposed in Chapter 6 of this manuscript. Collaborative Optimization (CO) Overview. The Collaborative Optimization (CO) strategy was first introduced by Kroo et al., [8]. Shortly after its introduction this bi-level optimization scheme was extended by Tappeta and Renaud [9,10] to three distinct formulations to address multiobjective optimization of large-scale systems. The CO paradigm is based on the concept that the interaction among several different disciplinary experts optimizing a design is minimal for local changes in each discipline in a MDO problem. This allows a large scale system to be decomposed into sub-systems along domain specific boundaries. These subsystems are optimized through local design variables specific to each subsystem, subject to the domain specific constraints. Th e objective of each subsystem optimization is to maintain agreement on interdisciplinary design variables. A system level optimizer enforces this interdisciplinary compatib ility while minimizing the overall objective function. This is achieved by combining the sy stem level fitness with the cumulative sum

PAGE 23

9 of all discrepancies between in terdisciplinary design variables. This strategy is extremely suited for parallel computation because of th e minimal interaction between the different design disciplines, which results in reduced communication overhead during the course of the optimization. Methodology. This decomposition strategy is desc ribed with the flow diagram in Figure 1. As mentioned previously, the CO st rategy is a bi-level method, in which the system level optimization sets and adjusts a interdisciplinary design variables during the optimization. The subspace optimizer attemp ts both to satisfy local constraints by adjusting local parameters, and to meet the interdisciplinary design variable targets set by the system level optimizer. Departures from th e target interdisciplinary design parameters are allowed, which may occur because of insu fficient local degrees of freedom, but is to be minimized. The system level optimizer attempts to adjust the interdisciplinary parameters such that the objective func tion is minimized, while maximizing the agreement between subsystems. This process of adjusting the system level target design, and the subsystems attempting to match it whils t satisfying local constraints, is repeated until convergence. This procedure can be graphically illustrated in Figure 2 (taken from Kroo and Manning [6]). The system level optimizer sets a design target P, and each subspace optimization attempts to satisfy local constr aints while matching the target design P as closely as possible by moving in directions 1 and 2. During the next system level optimization cycle the target design P will be moved in direction 3 in order to maximize the agreement between the target design and subspace designs that satisfy local constraints.

PAGE 24

10 Figure 1 Collaborative optimization flow diagram Figure 2 Collaborative optimization subspace c onstraint satisfaction procedure (taken from [6]) Refinements on method. Several enhancements to this method have been proposed, among which are the integration of this architecture into a decision based design framework as proposed by Hazelrigg [11,12], the use of response surfaces [13] to Subspace 2 constraint Subspace 1 constraint P Z1 Z2 1 2 3 Subproblem optimization (local variables) Disciplinary analysis Subproblem optimization (local variables) Disciplinary analysis Subproblem optimization (local variables) Disciplinary analysis Converged ? Start Sto p Optimize system approximation problem and set target design Attempt to match system level mandated target design yes no

PAGE 25

11 model disciplinary analyses, and genetic algorithms with scheduling for increased efficiency [14]. Solution quality and computational efficiency. Sobieski and Kroo [15] report very robust performance on their CO scheme with identical solutions being found on both collaborative and single level optimizati ons of 45 design variable, 26 constraint problems. Braun and Kroo [16] showed CO to be unsuited for small problems with strong coupling, but for large scale problems with weak coupli ng the CO methodology becomes more computationally efficient. They also found that the amount of system level iterations is dependent on the level of coup ling of sub-systems, and that the required number of sub-optimizations scales in pr oportion to the overall problem size. Similar findings are reported by Alexandrov [17]. Braun et al. [18] evaluated the performance of CO on a set of quadratic problem s presented by Shankar et al. [19] to evaluate the CSSO method. Unlike CSSO, the CO method did not require an increased amount of iterations for QP problems with strong coupling, and converged successfully in all cases. Applications. This decomposition architecture has been extensively demonstrated using analytical test problems [18,20] and aerospace optimization problems such as trajectory optimization [16,18], vehicle design [13,20-22], and satellite constellation configurations [23]. Concurrent SubSpace Optimization (CSSO) Overview. This method was proposed by Sobieszczanski-Sobieski [24], and, like CO, divides the MDO problem along disc iplinary boundaries. The main difference however is the manner in which the CSSO framework coordinates the subsystem optimizations. A bi-level optimization scheme is used in which the upper optimization

PAGE 26

12 problem consists of a linear [7] or second order [25] system approximation created with the use of Global Sensitivity Equations (G SE's). This system approximation reflects changes in constraints and the objective f unction as a function of design variables. Because of the nonlinearities this approximation is only accurate in the immediate neighborhood of the current design state, and needs to be updated after every upper-level optimization iteration. After establishing a sy stem level approximation the subsystems are independently optimized using only de sign variables local to the subspace. The system level approximation is then updated by a sensitivity analysis to reflect changes in the subspace design. The last two steps of subspace optimization and system approximation is repeated through the uppe r level optimizer until convergence is achieved. Methodology. The basic steps taken to optimize a MDO problem with the CSSO framework are as follows: 1. Select initial set of designs for each subsystem. 2. Construct system approximation using GSE's 3. a) Subspace optimization through local variables and objective b) Update system approximation by performing sensitivity analysis 4. Optimize design variables according to system approximation 5. Stop if converged; otherwise go to 3) where step (3) contains the lower level optim ization in this bi-level framework. This process is illustrated in Figure 3. Refinements on method. As previously mentioned, early CSSO strategies used linear system level approximations obtained with the Global Sensitivity Equations (GSE). Coordination of subspace or disciplinary optim izations is achieved through system level sensitivity information.

PAGE 27

13 Figure 3 Concurrent subspace optimization methodology flow diagram This imposes limits on the allowable deviation from the current design, and requires a new approximation to be constructe d at every system iteration to maintain reasonable accuracy. Recent research focuse d on alternate methods for acquiring system level approximations for the coor dination effort. Several authors [25-28] modified the system approximation to utilize a second orde r response surface approximation. This is combined with a database of previous fitn ess evaluation points, which can be used to create and update the response surface. This response surface then serves to couple the subsystem optimizations and coordinate system level design. Solution quality and computational efficiency. Shankar et al. [19] investigated the robustness of the CSSO on a set of analytical problems. Several quadratic programming problems with weak and st rong coupling between subsystems were Initial design selection System approximation using GSEs Sub-problem optimzation (local variables) Sub-problem sensitivity analysis Sub-problem optimization (local variables) Sub-problem sensitivity analysis Sub-problem optimization (local variables) Sub-problem sensitivity analysis Converged ? Start Optimize system approximation problem Approximated g(x) yes no Stop

PAGE 28

14 evaluated with a modification of Sobieszc zanski-Sobieski's nonhierarchical subspace optimization scheme [7]. Results indicated reasonable performance for problems with weak coupling between subsystems. For large problems with strong interactions between subsystems, this decomposition scheme prove d unreliable in terms of finding global sensitivities, leading to poor solutions. Tappeta et al., using th e iSIGHT software. [29-31], analyzed two analytical and two structural problems, a welding design a nd a stepped beam weight minimization. In this work it is reported that the Karush-KuhnTucker conditions were met in some of the cases, and that most problems converged cl osely to the original problem solution. Lin and Renaud compared the commercial software package LANCELOT [32] which compares the BroydonFletcher-Goldfarb-Shanno (B FGS) method to the CSSO strategy, the latter incorporati ng response surfaces. In this st udy the authors show similar computational efficiencies for small uncoupled analytical problems. For large scale MDO problems however the CSSO method c onsistently outperformed the LANCELOT optimizer in this area. Sellar et al. [26] compared a CSSO with neur al network based response surface enhancements with a full (all at once) sy stem optimization. The CSSO-NN algorithm showed a distinct advantage in computational efficiency over the all-at-once approach, while maintaining a high level of robustness. Applications. This decomposition methodology has been applied to large scale aerospace problems like high temperature and pressure aircraft engine components [29,33], aircraft brake component optimization [34], and aerospace vehicle design [26,35].

PAGE 29

15 Analytical Target Cascading (ATC) Overview. Analytical Target Cascading was introduced by Michelena et al [36] in 1999, and developed further by Kim [37] as a product development tool. This method is typically used to solve object based d ecomposed system optimization problems. Tosserams et al. [38] introduced a Lagrangian relaxa tion strategy which in some cases improves the computational efficiency of th is method by several orders of magnitude. Methodology. ATC is a strategy which coordina tes hierarchically decomposed systems or elements of a problem (see Figure 4) by the introd uction of target and response coupling variables. Targets are set by parent elements which are met by responses from the children elements in the hierarchy (see Figure 5 obtained from [338]). At each element an optimization problem is fo rmulated to find local variables, parent responses and child targets which minimi ze a penalized discrepancy function, while meeting local constraints. The responses are rebalanced up to higher levels by iteratively changing targets in a nested loop in order to obtain consistenc y. Several coordination strategies are available to de termine the sequence of solvi ng the sub-problems and order of exchange in targets and responses [39]. Proof of convergence is also presented for some of these classes of approaches in [39]. Refinements on method. Tosserams et al. [38] introduced the use of augmented Lagrangian relaxation in order to reduce the computational cost associated with obtaining very accurate agreement between sub-problems, and the coordination effort at the inner loop of the method. Allison et al. exploited th e complimentary nature of ATC and CO to obtain an optimization formula tion called nested ATC-MDO [40]. Kokkolaras et al. extended the formulation of ATC to include the design of product families [41].

PAGE 30

16 Figure 4 Example hierarchical problem structure Figure 5 Sub-problem information flow Solution quality and computational efficiency. Michelena et al. [39] prove, under certain convexity assumptions that the ATC pr ocess will yield the optimal solution of the original design target problem. The original ATC formulation had the twofold problem of requiring large penalty weights to accurate solutions, and the excessive repeat in the inner loop which solves sub-problems before the outer loop can proceed. Both these problems are addressed by using augmented Lagrangian relaxation [38] by Tosserams et al which reported a decrease in computational effort on the order of between orders 10 and 10000. rij from parent : responses from children : r(i+1) k responses from parent : tij targets Elementary sub-problem Pij local variables xij local objective fij local constraints gij, hij Optimization inputs Optimization outputs t(i+1) k to children : responses i = 1 i = 2 i = 3 Level index i j = 1 j = 1 j = 1 j = 1 j = 1 j = 1 Element index j

PAGE 31

17 Applications. The ATC strategy is applied the de sign of structural members and a electric water pump in [40], and automotive design [42,43]. The performance of the Augmented Lagrangian Relaxation ATC enhan cement was tested using several geometric programming problems [38]. Quasiseparable Decomposition and Optimization The quasiseparable decomposition and optim ization strategy will be the focus of the research proposed in Chapter 6. This methodology addresses a class of problems common in the field of engineering and can be applied to a wide class of structural, biomechanical and other disciplinary problems. The strategy, which will be explained in detail in Chapter 6, is based on a two-level optimization approach which allows for the global search effort to be concentrated at the lower level sub-probl em optimizations. The system to be optimized is decomposed in to several lower level subsystem optimizations which are coordinated by an upper leve l optimization. A Sequential Quadratic Programming (SQP) based optimizer is applie d in this optimization infrastructure to solve a example structural sizing problem. This example problem entails the maximization of the tip displacement of a hollow stepped cantilever beam with 5 sections. The quasiseparable decomposition methodology is applied to decompose the structure into several sub-problems of redu ced dimensionality. The parallelism with this strategy is achieved by optimizing the independe nt sub-problems (sections of the stepped beam) concurrently, allowing for the uti lization of parallel processing resources.

PAGE 32

18 CHAPTER 3 GLOBAL OPTIMIZATION THROUGH TH E PARTICLE SWARM ALGORITHM Overview This chapter introduces the population based algorithm which will be the target for investigating parallelism throughout the manuscr ipt. This stochastic algorithm mimics swarming or flocking behavior found in animal groups such as bees, fish and birds. The swarm of particles is basically several parallel individual sear ches that are influenced by individual and swarm memory of regions in the design space with high fitness. These positions of the regions are constantly updated and reported to the individuals in the swarm through a simple communication model. Th is model allows for the algorithm to be easily decomposed into concurre nt processes, each representi ng an individual particle in the swarm. This approach to allow para llelism will be detailed in Chapter 4. For the purpose of illustrating the performance of the PSO, it is compared to several other algorithms commonly used when solving problems in biomechanics. This comparison is made through optimization of several analytical problems, and a biomechanical system identification problem. Th e focus of the research presented in this chapter is to demonstrate that the PSO ha s good properties such as insensitivity to the scaling of design variables a nd very few algorithm parameters to fine tune. These make the PSO a valuable addition in the arsenal of optimization methods in biomechanical optimization, in which it ha s never been applied before. The work presented in this Chapter wa s in collaboration with Jeff Reinbold, who supplied the biomechanical test problem [58] and Byung Il Koh, who developed the

PAGE 33

19 parallel SQP and BFGS algorithms [77] used to establish a computational efficiency comparison. The research in this Chapter was also published in [58,76,129]. Thanks goes to Soest and Casius for their willingness to share their numerical results published in [48]. EQUATION CHAPTER 3 SECTION 1 Introduction Optimization methods are used extensivel y in biomechanics research to predict movement-related quantities that cannot be measured experimentally. Forward dynamic, inverse dynamic, and inverse static optimiza tions have been used to predict muscle, ligament, and joint contact forces during e xperimental or predicted movements (e.g., see references [44-55]). System identification optimizatio ns have been employed to tune a variety of musculoskeletal model parameters to experimental movement data (e.g., see references [56-60]). Image matching optimizations have been performed to align implant and bone models to in vivo fluoroscopic images collect ed during loaded functional activities (e.g., see references [61-63]). Since biomechanical optimization problem s are typically nonlin ear in the design variables, gradient-based nonlinear programming has been the most widely used optimization method. The increasing size and co mplexity of biomechanical models has also led to parallelization of gradient-based algorithms, since gradient calculations can be easily distributed to multiple processors [44-46]. However, gradient-based optimizers can suffer from several important limitations. They are local rather than global by nature and so can be sensitive to the initial guess. Experimental or numerical noise can exacerbate this problem by introducing multiple local minima into the problem. For some problems, multiple local minima may exist due to the nature of the problem itself. In most situations, the necessary gradient values cannot be obtained analytically, and finite

PAGE 34

20 difference gradient calculations can be sensitiv e to the selected finite difference step size. Furthermore, the use of design variables with different length scales or units can produce poorly scaled problems that conve rge slowly or not at all [64,65], necessitating design variable scaling to improve performance. Motivated by these limitations and improveme nts in computer speed, recent studies have begun investigating the use of non-grad ient global optimizer s for biomechanical applications. Neptune [47] compared the performance of a simulated annealing (SA) algorithm with that of downhill simplex (D S) and sequential quadratic programming (SQP) algorithms on a forward dynamic optimi zation of bicycle pe daling utilizing 27 design variables. Simulated annealing found a better optimum than the other two methods and in a reasonable amount of CPU time. More recently, Soest and Casius [48] evaluated a parallel implementation of a genetic algorith m (GA) using a suite of analytical tests problems with up to 32 design variables and forward dynamic optimizations of jumping and isokinetic cycling with up to 34 design va riables. The genetic algorithm generally outperformed all other algorithms tested, incl uding SA, on both the analytical test suite and the movement optimizations. This study evaluates a recent addition to th e arsenal of global optimization methods particle swarm optimization (PSO) for use on biomechanical problems. A recentlydeveloped variant of the PSO algorithm is used for the investigation. The algorithms global search capabilities are evaluated usi ng a previously published suite of difficult analytical test problems w ith multiple local minima [48], while its insensitivity to design variable scaling is proven mathematically and verified using a biomechanical test problem. For both categories of problems, PSO robustness, performance, and scale-

PAGE 35

21 independence are compared to that of thr ee off-the-shelf optimization algorithms a genetic algorithm (GA), a sequential quadr atic programming algorithm (SQP), and the BFGS quasi-Newton algorithm. In add ition, previously pub lished results [48] for the analytical test problems permit comparison with a more complex GA algorithm (GA*), a simulated annealing algorithm (SA), a di fferent SQP algorithm (SQP*), and a downhill simplex (DS) algorithm. Theory Particle Swarm Algorithm. Particle swarm optimization is a stochastic global optimiza tion approach introduced by Kennedy and Eberhart [66]. The method's strength lies in its simplicity, being easy to code and requiring few algorithm paramete rs to define convergence behavior. The following is a brief introduction to the opera tion of the particle swarm algorithm based on a recent implementation by Groenwold and Fourie [67] incorporating dyn amic inertia and velocity reduction. Consider a swarm of p particles, where each particle's position i kx represents a possible solution point in the problem design space D For each particle i Kennedy and Eberhart [66] proposed that the position 1 i k x be updated in the following manner: 11 kkk xxv (3.1) with a pseudo-velocity 1 i k v calculated as follows: 11122 iiiii kkkkkkkwcrcr vvpx g x (3.2) Here, subscript k indicates a (unit) pseudotime increment. The point pk i is the bestfound cost location by particle i up to timestep k, which represents the cognitive contribution to the search vector 1 i k v. Each component of 1 i k v is constrained to be less

PAGE 36

22 than or equal to a maximum value defined in max 1 k v. The point k g is the global best-found position among all particles in the swarm up to time k and forms the social contribution to the velocity vector. Cost func tion values associated with i kp and k g are denoted by i best f and g best f respectively. Random numbers r1 and r2 are uniformly distributed in the interval [0,1]. Shi and Eberhart [68] proposed that the cognitive and social scaling parameters 1c and 2c be selected such that 122cc to allow the product 11cr or 22crto have a mean of 1. The result of using these proposed values is that the particles overshoot the attraction points i kp and k g half the time, thereby maintaini ng separation in the group and allowing a greater area to be searched than if the particles did not overshoot. The variable kw, set to 1 at initialization, is a modification to the original PSO algorithm [66]. By reducing its value dynamically based on the cost functi on improvement rate, the search area is gradually reduced [69]. This dynamic reduction behavior is defined by dw, the amount by which the inertia kw is reduced, dv, the amount by which the maximum velocity max 1 k v is reduced, and d, the number of iterations with no improvement in k g before these reduction take place [67] (see algorithm flow description below). Initialization of the algorithm involves se veral important steps. Particles are randomly distributed throughout the desi gn space, and particle velocities 0iv are initialized to random va lues within the limitsmax 000ivv. The particle velocity upper limit max 0v is calculated as a fr action of the distance betw een the upper and lower bound on variables in the design space max 0 UBLB vxx with 0.5 as suggested in [69]. Iteration counters k and t are set to 0. Iteration counter k is used to monitor the total

PAGE 37

23 number of swarm iterations, while iteration counter t is used to monitor the number of swarm iterations since the last improvement in k g Thus, t is periodically reset to zero during the optimization while k is not. The algorithm flow can be represented as follows: 1. Initialize a. Set constants 1c, 2c, maxk, max 0v 0w, dv, dw, and d b. Set counters k = 0, t = 0. Set random number seed. c. Randomly initialize pa rticle positions in0D xi for p i , 1 d. Randomly initialize particle velocities max 0 00 v v i for p i, 1 e. Evaluate cost function values if0 using design space coordinates i 0x for p i, 1 f. Set 0 ii best f f and i i 0 0x p for p i, 1 g. Set g bestf to best i bestf and 0g to corresponding i 0x 2. Optimize h. Update particle velocity vectorsi k 1 v using Eq. (3.2) i. If i k 1 v > max 1 kv for any component, then set that component to its maximum allowable value j. Update particle position vectors i k 1 x using Eq. (3.1) k. Evaluate cost function values i kf1 using design space coordinates i k 1 x for p i , 1 l. If i best i kf f 1, then i k i bestf f1 i k i k 1 1 x p for p i , 1

PAGE 38

24 m. If g best i kf f 1, then i k g bestf f1 i k k 1 1 x g for p i , 1 n. If g bestf was improved in (e), then reset t = 0. Else increment t o. If maximum number of function evaluations is exceeded, then go to 3 p. If t = d, then multiply 1kw by dw 1 and max 1 k vby dv 1 q. Increment k r. Go to 2(a). 3. Report results 4. Terminate This algorithm was coded in the C pr ogramming language by the author [70] and used for all PSO analyses performed in the study. A standard population size of 20 particles was used for all runs, and other al gorithm parameters were also selected based on standard recommendations (Table 1) [70-72]. The C source code for our PSO algorithm is freely available at http://www.mae.ufl.edu/~fregly/downloads/pso.zip (last accessed 12/2005). Table 1 Standard PSO algorithm parameters used in the study Analysis of Scale Sensitivity. One of the benefits of the PSO algorithm is its insensitivity to design variable scaling. To prove this characteristic, we will us e a proof by induction to show that all Parameter Description Value p Population size (number of particles) 20 c1 Cognitive trust parameter 2.0 c2 Social trust parameter 2.0 w0 Initial inertia 1 wd Inertia reduction parameter 0.01 Bound on velocity fraction 0.5 vd Velocity reduction parameter 0.01 d Dynamic inertia/velocity reduction delay (function evaluations) 200

PAGE 39

25 particles follow an identical path through the design space regardless of how the design variables are scaled. In actual PSO runs intend ed to investigate this property, use of the same random seed in scaled and unscaled cases will ensure that an identical sequence of random 1r and 2r values are produced by the comput er throughout the course of the optimization. Consider an optimization problem with n design variables. An n-dimensional constant scaling vector can be used to scale any or all dimensions of the problem design space: 1 2 3 n (3.3) We wish to show that for any time step 0k, kk v v (3.4) kk x x (3.5) where kx and kv (dropping superscript i) are the unscaled position and velocity, respectively, of an i ndividual particle and k kx x and k kv v are the corresponding scaled versions. First, we must show that our proposition is true for the base case, which involves initialization (0 k) and the first time step (1 k). Applying the scaling vector to an individual particle position 0x during initialization produces a scaled particle position0x : 00 x x (3.6) where the right hand side is a compone nt-by-component product of vectors and 0x. This implies that

PAGE 40

26 0000, p p g g (3.7) In the unscaled case, the pseudovelocity is calculated as 0 UBLB vxx (3.8) In the scaled case, this becomes 0 0 UBLB UBLB UBLB vxx x x xx v (3.9) From Eqs. (3.1) and (3.2) and these initial conditions the particle pseudo-velocity and position for the first time step can be written as 10011002200wcrcr vvpx g x (3.10) 101 xxv (3.11) in the unscaled case and 10011002200 0011002200 0011002200 1wcrcr wcrcr wcrcr vvpxgx v p x g x vpxgx v (3.12) 101 01 01 1 xxv x v xv x (3.13) in the scaled case. Thus, our propos ition is true for the base case. Next, we must show that our proposition is true for the i nductive step. If we assume our proposition holds for any time step k = j, we must prove that it also holds for time step k = j + 1. We begin by replacing subscript k with subscript j in Eqs. (3.4) and (3.5). If we then replace subscript 0 with subscript j and subscript 1 with subscript j + 1 in Eqs. (3.12) and (3.13), we arrive at Eqs. (3.4) and (3.5) where subscript k is replaced by subscript j + 1. Thus, our proposition is true for any time step j + 1.

PAGE 41

27 Consequently, since the base case is true and the inductive step is true, Eqs. (3.4) and (3.5) are true for all k 0. From Eqs. (3.4) and (3.5), we can conclude that any linear scaling of the design variables (or subset thereof) will have no effect on the final or any intermediate result of the optimization, since all velocities and positions are scaled accordingly. This fact leads to identical step intervals being taken in the design space for scaled and unscaled version of the same problem, assuming infinite precision in all calculations. In contrast, gradient-based optimization methods are often sensitive to design variable scaling due to algor ithmic issues and numerical approximations. First derivative methods are sensitive because of algorithmic issues, as illustrated by a simple example. Consider the following minimization pr oblem with two design variables (x,y) where the cost function is 2 2100 y x (3.14) with initial guess (1,1). A scaled versi on of the same problem can be created by letting ,/10 xxyy so that the cost function becomes 22 x y (3.15) with initial guess (1,10). Ta king first derivatives of each cost function with respect to the corresponding design variables and eval uating at the initial guesses, the search direction for the unscaled problem is along a line rotated 5.7 from the positive x axis and for the scaled problem along a line rotated 45. To reach the optimum in a single step, the unscaled problem requires a search direction rotated 84.3 and the scaled problem 45. Thus, the scaled problem can theoretically reach the optimum in a single step while the unscaled problem cannot due to the effect of scaling on the calculated search direction.

PAGE 42

28 Second derivative methods are sensitive to design variable scaling because of numerical issues related to approximation of the Hessian (second derivative) matrix. According to Gill et al. [64], Newton methods utilizing an exact Hessian matrix will be insensitive to design variable scaling as long as the Hessian matrix remains positive definite. However, in practice, exact Hessian calculations are almost never available, necessitating numerical approxi mations via finite differencing. Errors in these approximations result in different search direc tions for scaled versus unscaled versions of the same problem. Even a small amount of desi gn variable scaling can significantly affect the Hessian matrix so that design variable changes of similar magnitude will not produce comparable magnitude cost function changes [64]. Common gradient-based algorithms that employ an approximate Hessian in clude Newton and quasi-Newton nonlinear programming methods such as BFGS, SQP me thods, and nonlinear least-squares methods such as Levenberg-Marquardt [64]. A detailed discussion of the influence of design variable scaling on optimization algorithm performance can be found in Gill et al. [64]. Methodology Optimization Algorithms In addition to our PSO algorithm, three o ff-the-shelf optimization algorithms were applied to all test problems (analytical a nd biomechanical see below) for comparison purposes. One was a global GA algorithm developed by Deb [73-75]. This basic GA implementation utilizes one mutation operator and one crossover opera tor along with real encoding to handle continuous variables. Th e other two algorithms were commercial implementations of gradient-based SQP and BFGS algorithms (VisualDOC, Vanderplaats R & D, Colorado Springs, CO).

PAGE 43

29 All four algorithms (PSO, GA, SQP, and BFGS) were parallelized to accommodate the computational demands of the biomechan ical test problem. For the PSO algorithm, parallelization was performed by distributing indivi dual particle functi on evaluations to different processors as de tailed by the author in [76]. For the GA algorithm, individual chromosome function evaluations were parallelized as described in [48]. Finally, for the SQP and BFGS algorithms, finite difference gr adient calculations were performed on different processors as outlined by Koh et al. in [77]. A master-slave paradigm using the Message Passing Interface (MPI) [3,4] was employed for all parallel implementations. Parallel optimizations for the biomechanical test problem were run on a cluster of Linuxbased PCs in the University of Florid a High-performance Computing and Simulation Research Laboratory (1. 33 GHz Athlon CPUs with 256MB memory on a 100Mbps switched Fast Ethernet network). While the PSO algorithm used standard algorithm parameters for all optimization runs, minor algorithm tuning was performed on the GA, SQP, and BFGS algorithms for the biomechanical test problem. The goal was to give these algorithms the best possible chance for success against the PSO algorit hm. For the GA algorithm, preliminary optimizations were performed using populat ion sizes ranging from 40 to 100. It was found that for the specified maximum number of function evalua tions, a population size of 60 produced the best results. Consequen tly, this population size was used for all subsequent optimization runs (analytical and biomechanical). For the SQP and BFGS algorithms, automatic tuning of the finite di fference step size (FDSS) was performed separately for each design variable. At the st art of each gradient-bas ed run, forward and central difference gradients were calculated for each design variable beginning with a

PAGE 44

30 relative FDSS of 10-1. The step size was then incrementally decreased by factors of ten until the absolute difference between forw ard and central gradient results was a minimum. This approach was taken since the amount of noise in the biomechanical test problem prevented a single stable gradient va lue from being calculate d over a wide range of FDSS values (see Discussion). The forward difference step size automatically selected for each design variable was used for the remainder of the run. Analytical Test Problems The global search capabilities of our PSO implementation were evaluated using a suite of difficult analytical test problems previously published by Soest and Casius [48]. In that study, each problem in the suite was ev aluated using four different optimizers: SA, GA*, SQP*, and DS, where a star indicates a different version of an algorithm used in our study. One thousand optimization runs were performed with each optimizer starting from random initial guesses and using standard optimization algorithm parameters. Each run was terminated based on a pre-define d number of function evaluations for the particular problem being solved. We followe d an identical procedure with our four algorithms to permit comparison between our results and those published by Soest and Casius in [48]. Since two of the algorithms used in our study (GA and SQP) were of the same general category as algorithms used by Soest and Casius in [48] (GA* and SQP*), comparisons could be made between differe nt implementations of the same general algorithm. Failed PSO and GA runs were allo wed to use up the full number of function evaluations, whereas failed SQP and BFGS r uns were re-started from new random initial guesses until the full number of function ev aluations was completed. Only 100 rather than 1000 runs were performed with the SQ P and BFGS algorithms due to a database size problem in the VisualDOC software.

PAGE 45

31 A detailed description of the six analytic al test problems can be found in Soest and Casius [48]. Since the design variables for each problem possessed the same absolute upper and lower bound and appeared in the co st function in a similar form, design variable scaling was not an issue in these pr oblems. The six analytic al test problems are described briefly below. 1H: This simple 2-dimensional function [48] has several local maxima and a global maximum of 2 at the coordinates (8.6998, 6.7665). 22 21 12 112sinsin 88 (,) 1 x x xx Hxx d (3.16) 100 100 ,2 1 x x where 22 128.69986.7665 dxx Ten thousand function evaluations were used for this problem. 2H: This inverted version of the F6 function used by Schaffer et al. [78] has 2 dimensions with several local maxima ar ound the global maximum of 1.0 at (0,0). 222 12 212 2 22 12sin0.5 ,0.5 10.001 xx Hxx xx (3.17) 12,100,100xx This problem was solved using 20,000 f unction evaluations per optimization run. 3H : This test function fr om Corana et al. [79] was used with dimensionality n = 4, 8, 16, and 32. The function contains a larg e number of local minima (on the order of n 410) with a global minimum of 0 at 05 0ix.

PAGE 46

32 2 31 2 1sgnif ,, otherwisen iiiii n i iitzzcdxzt Hxx dx (3.18) 1000,1000ix where s x s x zi i i sgn 49999 0, 15 0 c, 2 0 s, 05 0 t, and 12 8 4 100 11 7 3 10 10 6 2 1000 9 5 1 1 i i i i di The use of the floor function in Eq. (3.18) makes the search space for this problem the most discrete of all problems tested. The number of function evalua tions used for this problem was 50,000 ( n = 4), 100,000 ( n = 8), 200,000 ( n = 16), and 400,000 ( n = 32). For all of the analytical test problems, an algorithm was considered to have succeeded if it converged to within 10-3 of the known optimum cost function value within the specified number of function evaluations [48]. Biomechanical Test Problem In addition to these analytical test problem s, a biomechanical test problem was used to evaluate the scale-independent natu re of the PSO algorithm. Though our PSO algorithm is theoretical ly insensitive to design variab le scaling, numerical round-off errors and implementation details could pot entially produce a scali ng effect. Running the other three algorithms on scaled and unscaled versions of this test problem also permitted investigation of the extent to which other algorithms are influenced by design variable scaling. The biomechanical test problem involved de termination of an ankle joint kinematic model that best matched noisy synthetic (i .e., computer generated) movement data.

PAGE 47

33 Similar to that used by van der Bogert et al. [56], the ankle was modeled as a threedimensional linkage with two non-intersecti ng pin joints defined by 12 subject-specific parameters (Figure 6). Figure 6 Joint locations and orientations in the parametric ankle kinematic model. Each pi (i = 1,...,12) represents a different position or orientation parameter in the model

PAGE 48

34 These parameters represent the positions and orientations of the talocrural and subtalar joint axes in the tibia, talus, and cal caneous. Position paramete rs were in units of centimeters and orientation parameters in units of radians, resulting in parameter values of varying magnitude. This model was part of a larger 27 degree-of-freedom (DOF) fullbody kinematic model used to optimize other joints as well [58]. Given this model structure, noisy syntheti c movement data were generated from a nominal model for which the "true" model pa rameters were known. Joint parameters for the nominal model along with a nominal motion were derived from in vivo experimental movement data using the optimization me thodology described below. Next, three markers were attached to the tibia and calcaneous segments in the model at locations consistent with the experiment, and the 27 model DOFs were moved through their nominal motions. This process created syntheti c marker trajectories consistent with the nominal model parameters and motion and also representative of the original experimental data. Finally, numerical noise was added to the synthetic marker trajectories to emulate skin and soft tissue movement artifacts. For each marker coordinate, a sinusoidal noise function was used with unifo rmly distributed random period, phase, and amplitude (limited to a maximum of 1 cm). The values of the sinusoidal parameters were based on previous studies reported in the literature [80,53]. An unconstrained optimization problem with bounds on the design variables was formulated to attempt to recover the known joint parameters from the noisy synthetic marker trajectories. The cost function was min() f pp (3.19) with

PAGE 49

35 2 5063 111 ()min,ijkijk kjifccqppq (3.20) where p is a vector of 12 design variable s containing the joint parameters, q is a vector of 27 generalized coordi nates for the kinematic model, cijk is the i th coordinate of synthetic marker j at time frame k and ,ijkcpq is the corresponding marker coordinate from the kinematic model. At each time frame, ,ijkcpq was computed from the current model parameters p and an optimized model configuration q A separate LevenbergMarquardt nonlinear least-squares optimizati on was performed for each time frame in Eq. (3.20) to determine this optimal configura tion. A relative convergence tolerance of 10-3 was chosen to achieve good accuracy with minimum computational cost. A nested optimization formulation (i.e., minimization occurs in Eqs. (3.19) and (3.20)) was used to decrease the dimensionality of the design space in Eq. (3.19). Equation (3.20) was coded in Matlab and exported as stand-alone C code using the Matlab Compiler (The Mathworks, Natick, MA). The executable read in a file containing the 12 design variables and output a file containing the resulting cost function value. This approach facilitated the use of different optimizers to solve Eq. (3.19). To investigate the influence of design variable scaling on optimization algorithm performance, two versions of Eq. (3.20) were generated. The fi rst used the original units of centimeters and radians for the position a nd orientation design va riables respectively. Bounds on the design variables were chosen to enclose a physical ly realistic region around the solution point in design space. Each position design variable was constrained to remain within a cube centered at the midpoi nt of the medial and lateral malleoli, where the length of each side was equal to the di stance between the malleoli (i.e., 11.32 cm). Each orientation design variable was constrained to remain w ithin a circular cone defined

PAGE 50

36 by varying its true value by 30. The second version normalized all 12 design variables to be within [-1,1] using 2norm UBLB UBLB xxx x xx (3.21) where UBx and LBx denote the upper and lower bounds respectively, on the design variable vector [81]. Two approaches were used to compare PSO scale sensitivity to that of the other three algorithms. For the first approach, a fixe d number of scaled a nd unscaled runs (10) were performed with each op timization algorithm starting from different random initial seeds, and the sensitivity of the final cost function value to algorithm choice and design variable scaling was evaluated. The stopping condition for PSO and GA runs was 10,000 function evaluations, while SQP and BFGS runs were terminated when a relative convergence tolerance of 10-5 or absolute convergence tolerance of 10-6 was met. For the second approach, a fixed number of function evaluations (10,000) we re performed with each algorithm to investigate unscaled ve rsus scaled convergence history. A single random initial guess was used for the PSO a nd GA algorithms, and each algorithm was terminated once it reached 10,000 function eval uations. Since indivi dual SQP and BFGS runs require much fewer than 10,000 function evaluations, repeated runs were performed with different random initial guesses until the total number of function evaluations exceeded 10,000 at the termination of a run. This approach essentially uses SQP and BFGS as global optimizers, where the separate runs are like indi vidual particles that cannot communicate with each another but have access to local gradient information. Finite difference step size tuning at the star t of each run was included in the computation of number of function evaluations. Once the total number of runs required to reach

PAGE 51

37 10,000 function evaluations was known, the lowe st cost function valu e from all runs at each iteration was used to represent the cost over a range of function evaluations equal to the number of runs. Results For the analytical test problems, our PSO algorithm was more robust than our GA, SQP, and BFGS algorithms (Table 2, top ha lf). PSO converged to the correct global solution nearly 100% of the time on four of the six test problems (H1 and H3 with n = 4, 8, and 16). It converged 67% of the time for problem H2 and only 1.5% of the time for problem H3 with n = 32. In contrast, none of the ot her algorithms converged more than 32% of the time on any of the analytical test problems. T hough our GA algorithm typically exhibited faster initial conv ergence than did our PSO algorithm (Figure 7, left column), it leveled off and rarely reached th e correct final point in design space within the specified number of function evaluations. Table 2 Fraction of successful optimizer runs for the analytical test problems. Top half: Results from the PSO, GA, SQP, and BFGS algorithms used in the present study. Bottom half: Results from the SA, GA, SQP, and DS algorithms used in Soest and Casius 448. The GA and SQP algorithms used in that study were different from the ones used in our study. Successful runs were identified by a final cost function value within of the known optimum value, consistent with [48] H3 Study Algorithm H1 H2 (n = 4) (n = 8) (n = 16) (n = 32) PSO 0.972 0.688 1.000 1.000 1.000 0.015 GA 0.000 0.034 0.000 0.000 0.000 0.002 SQP 0.09 0.11 0.00 0.00 0.00 0.00 Present BFGS 0.00 0.32 0.00 0.00 0.00 0.00 SA 1.000 0.027 0.000 0.001 0.000 0.000 GA 0.990 0.999 1.000 1.000 1.000 1.000 SQP 0.279 0.810 0.385 0.000 0.000 0.000 Soest and Casius (2003) DS 1.000 0.636 0.000 0.000 0.000 0.000

PAGE 52

38 Figure 7 Comparison of converg ence history results for the analytical test problems. Left column: Results from the PSO, GA, SQP, and BFGS algorithms used in the present study. Right column: Resu lts from the SA, GA, SQP, and DS algorithms used in Soest and Casius [5]. The GA and SQP algorithms used in that study were different from the ones used in our study. (a) Problem H1. The SA results have been updated using corrected data provided by Soest and Casius, since the results in 48 accidentally used a temperature reduction rate of 0.5 rather than the standard va lue of 0.85 as reported. (b) Problem H2 (c) Problem H3 with n = 4. (d) Problem H3 with n = 32. Error was computed using the known cost at the global op timum and represents the average of 1000 runs (100 multi-start SQP and BFGS runs in our study) with each algorithm.

PAGE 53

39 Figure 8 Final cost function values for ten unscaled (dark bars) and scaled (gray bars) parallel PSO, GA, SQP, and BFGS runs for the biomechanical test problem. Each pair of unscaled and scaled runs was started from the same initial point(s) in design space, and each run was terminated when the specified stopping criteria was met (see text). In contrast, the SQP and BFGS algorithms were highly sensitive to design variable scaling in the biomechanical test problem. Fo r the ten trials, unscaled and scaled SQP or BFGS runs rarely converged to similar poi nts in design space ( note y axis scale in Figure 8) and produced large differences in final cost function value from one trial to the next (Figure 8c and d). Scaling improved the final re sult in seven out of ten SQP trials and in five of ten BFGS trials. The best unscaled a nd scaled SQP final cost function values were 255 and 121, respectively, while those of BFGS were 355 and 102 (Table 3). Thus, scaling yielded the best result found with bot h algorithms. The best SQP and BFGS trials generally produced larger RMS marker di stance errors (up to two times worse), orientation parameter errors (up to five times worse), and position parameter errors (up to six times worse) than those found by PSO or GA.

PAGE 54

40 Table 3 Final cost function values and asso ciated marker distance and joint parameter root-mean-square (RMS) e rrors after 10,000 function evaluations performed by multiple unscaled and scaled PSO, GA, SQP, and BFGS runs. See Figure 9 for corresponding convergence histories Figure 9 Convergence history fo r unscaled (dark lines) and scaled (gray lines) parallel PSO, GA, SQP, and BFGS runs for th e biomechanical test problem. Each algorithm was run terminated after 10 ,000 function evaluations. Only one unscaled and scaled PSO and GA run we re required to reach 10,000 function evaluations, while repeated SQP and BFGS runs were required to reach that number. Separate SQP and BFGS runs we re treated like individual particles in a single PSO run for calculating convergence history (see text). RMS Error Optimizer Formulation Cost Function Marker Distances (mm) Orientation Parameters (deg) Position Parameters (mm) Unscaled 69.5 5.44 2.63 4.47 PSOt Scaled 69.5 5.44 2.63 4.47 Unscaled 77.9 5.78 2.65 6.97 GA Scaled 74.0 5.64 3.76 4.01 Unscaled 255 10.4 3.76 14.3 SQP Scaled 121 7.21 3.02 9.43 Unscaled 69.5 5.44 2.63 4.47 BFGS Scaled 69.5 5.44 2.63 4.47

PAGE 55

41 Discussion This chapter evaluated a recent variation of the PSO algorithm with dynamic inertia and velocity updating as a possibl e addition to the arsenal of methods that can be applied to difficult biomechanical optimization problems. For all problems investigated, our PSO algorithm with standard algorithm parameters pe rformed better than did three off-the-self optimizers GA, SQP, and BFGS. For the anal ytical test problems PSO robustness was found to be better than that of two other global algorithms but worse than that of a third. For the biomechanical test problem with a dded numerical noise, PSO was found to be insensitive to design variable scaling while GA was only mildly sensitive and SQP and BFGS highly sensitive. Overall, the result s suggest that our PSO algorithm is worth consideration for difficult biomechanical optimization problems, especially those for which design variable scaling may be an issue. Though our biomechanical optimization invol ved a system identification problem, PSO may be equally applicable to pr oblems involving forward dynamic, inverse dynamic, inverse static, or image matching an alyses. Other global methods such as SA and GA have already been applied successfully to such problems [47,48,62], and there is no reason to believe that PSO would not pe rform equally well. As with any global optimizer, PSO utilization would be limited by the computational cost of function evaluations given the large numbe r required for a global search. Our particle swarm implementation may also be applicable to some large-scale biomechanical optimization problems. Outside the biomechanics arena [71,72,82-91], PSO has been used to solve problems on the order of 120 design variables [89-91]. In the present study, our PSO algorithm was uns uccessful on the largest test problem, H3 with n = 32 design variables. However, in a recent study, our PSO algorithm successfully solved

PAGE 56

42 the Griewank global test problem with 128 design variables using population sizes ranging from 16 to 128 76. When the Corana test problem (H3) was attempted with 128 DVs, the algorithm exhibited worse converg ence. Since the Griewank problem possesses a bumpy but continuous search space and the Corana problem a highly discrete search space, our PSO algorithm may work best on gl obal problems with a continuous search space. It is not known how our PSO algor ithm would perfor m on biomechanical problems with several hundred DVs, such as the forward dynamic optimizations of jumping and walking performed with parallel SQP in [44-46]. One advantage of global algorithms such as PSO, GA, and SA is that they often do not require significant algorithm parameter tuni ng to perform well on difficult problems. The GA used by Soest and Casius in [48] (which is not freely available) required no tuning to perform well on all of these partic ular analytical test problems. The SA algorithm used by Soest and Casius in [48] required tuning of two parameters to improve algorithm robustness significantly on those prob lems. Our PSO algorithm (which is freely available) required tuning of one parameter (wd, which was increased from 1.0 to 1.5) to produce 100% success on the two problems where it had significant failures. For the biomechanical test problem, our PSO al gorithm required no tuning, and only the population size of our GA algor ithm required tuning to improve convergence speed. Neither algorithm was sensitive to the two sour ces of noise present in the problem noise added to the synthetic mark er trajectories, and noise due to a somewhat loose convergence tolerance in the Levenberg-Ma rquardt sub-optimizations. Thus, for many global algorithm implementations, poor perf ormance on a particular problem can be rectified by minor tuning of a sma ll number of algorithm parameters.

PAGE 57

43 Figure 10 Sensitivity of gradient calculations to selected finite difference step size for one design variable. Forward and centra l differencing were evaluated using relative convergence tolerances of 10-3 and 10-6 for the nonlinear least squares sub-optimizations performed during cost function evaluation (see Eq. (3.20)). In contrast, gradient-based algorithms such as SQP and BFGS can require a significant amount of tuning even to begin to approach global optimizer results on some problems. For the biomechanical test pr oblem, our SQP and BFGS algorithms were highly tuned by scaling the design variables and determining the optimal FDSS for each design variable separately. FDSS tuning was espe cially critical due to the two sources of noise noted above. When forward and central difference gradient results were compared for one of the design variables using tw o different Levenberg-Marquardt relative convergence tolerances (10-3 and 10-6), a "sweet spot" was found near a step size of 10-2 (Figure 10). Outside of that "sweet spot," which was automatically identified and used in generating our SQP and BFGS results, forwar d and central difference gradient results diverged quickly when the looser tolerance wa s used. Since most users of gradient-based optimization algorithms do not scale the design variables or tune the FDSS for each design variable separately, and many do not perform multiple runs, our SQP and BFGS

PAGE 58

44 results for the biomechanical test problem represent best-case rather than typical results. For this particular problem, an off-the-sh elf global algorithm such as PSO or GA is preferable due to the significant reduction in effort required to obtain repeatable and reliable solutions. Another advantage of PSO and GA algorithms is the ease with which they can be parallelized [48,76] and their resu lting high parallel efficien cy. For our PSO algorithm, Schutte et al. [76] recently reported near ideal parallel efficiency for up to 32 processors. Soest and Casius [48] reported near ideal parallel e fficiency for their GA algorithm with up to 40 processors. Though SA has historic ally been considered more difficult to parallelize [92], Higginson et al. [493]444 recently developed a new parallel SA implementation and demonstrated near ideal parallel efficiency fo r up to 32 processors. In contrast, Koh et al. [77] reported poor SQP parallel effi ciency for up to 12 processors due to the sequential nature of the line search portion of the algorithm. The caveat for these parallel efficiency resu lts is that the time required per function evaluation was approximately constant and the computational nodes were homogeneous. As shown in [76], when function evaluations take different amounts of time, parallel efficiency of our PSO algorithm (and any othe r synchronous parallel algorithm, including GA, SA, SQP, and BFGS) will degrade with increasing number of processors. Synchronization between indivi duals in the population or between individual gradient calculations requires slave computational nodes that have completed their function evaluations to sit idle unt il all nodes have returned thei r results to the master node. Consequently, the slowest computational node (whether loaded by other users, performing the slowest function evaluation, or possessing the slowest processor in a

PAGE 59

45 heterogeneous environment) will dictate the overall time for each parallel iteration. An asynchronous PSO implementation with load balancing, where the global best-found position is updated continuously as each par ticle completes a function evaluation, could address this limitation. However, the exte nt to which convergen ce characteristics and scale independence would be affected is not yet known. To put the results of our study into prope r perspective, one must remember that optimization algorithm robustness can be influenced heavily by algorithm implementation details, and no single optimiza tion algorithm will work for all problems. For two of the analytical test problems (H2 and H3 with n = 4), other studies have reported PSO results using formulations that did not include dynamic inertia and velocity updating. Comparisons are di fficult given differences in the maximum number of function evaluations and number of particles, but in general, al gorithm modifications were (not surprisingly) found to influe nce algorithm converg ence characteristics [94-496]. For our GA and SQP algorithms, results for th e analytical test problems were very different from those obtaine d by Soest and Casius in [48] using different GA and SQP implementations. With seven mutation and f our crossover operat ors, the GA algorithm used by Soest and Casius in [48] was obviously much more complex than the one used here. In contrast, both SQP algorithms were highly-developed commercial implementations. In contrast, poor performan ce by a gradient-based algorithm can be difficult to correct even with design variab le scaling and careful tuning of the FDSS. These findings indicate that specific algorithm implementations, rather than general classes of algorithms, must be evaluated to reach any conclusions about algorithm robustness and performance on a particular problem.

PAGE 60

46 Conclusions In summary, the PSO algorithm with dynamic inertia and velocity updating provides another option for difficult biomechan ical optimization problems with the added benefit of being scale independent. There are few algorithm-specific parameters to adjust, and standard recommended settings work well for most problems [70,94]. In biomechanical optimization problems, noise, multiple local minima, and design variables of different scale can limit the reliability of gradient-based algorithms. The PSO algorithm presented here provides a simple -to-use off-the-shelf alternative for consideration in such cases. The algorithms main drawback is the high cost in terms of function evaluations because of slow convergence in the final st ages of the optimization, a common trait among global search algorithms. The time requirements associat ed with the high computational cost may be circumvented by utilizing the parallelism inherent in the swarm algorithm. The development of such a parallel PSO algorithm will be detailed in the next chapter.

PAGE 61

47 CHAPTER 4 PARALLELISM BY EXPLOITING PO PULATION-BASED ALGORITHM STRUCTURES Overview The structures of population based optimi zers such as Genetic Algorithms and the Particle Swarm may be exploite d in order to enable these al gorithms to utilize concurrent processing. These algorithms require a set of fitness values for the population or swarm of individuals for each iterat ion during the search. The fitness of each individual is independently evaluated, and may be assigne d to separate computational nodes. The development of such a parallel computational in frastructure is detailed in this chapter and applied to a set of large-scale analyti cal problems and a biomechanical system identification problem for the purpose of quan tifying its efficiency. The parallelization of the PSO is achieved with a master-slave, coarse-grained implementation where slave computational nodes are associated with indi vidual particle search trajectories and assigned their fitness evaluations. Greatly enhanced computation throughput is demonstrated using this infrastructure, w ith efficiencies of 95% observed for load balanced conditions. Numerical example problem s with large load imbalances yield poor performance, which decreases in a linear fa shion as additional nodes are added. This infrastructure is based on a two-level approa ch with flexibility in terms of where the search effort can be concentrated. For the two problems presented, the global search effort is applied in the upper level.

PAGE 62

48 This work was done in collaboration with Jeff Reinbolt, who created the biomechanical kinematic analysis software [58] and evaluated the quality of the solutions found by the parallel PSO. The work presente d in this chapter wa s also published in [58,76,129]. EQUATION CHAPTER 4 SECTION 1 Introduction Present day engineering optimization pr oblems often impose large computational demands, resulting in long solution times ev en on a modern high-end processor. To obtain enhanced computational throughput and global search capability, we detail the coarse-grained paralle lization of an increasingly popular global search method, the particle swarm optimization (PSO) algorithm. Parallel PSO performance was evaluated using two categories of optimization problems possessing multiple local minima largescale analytical test problems with com putationally cheap function evaluations and medium-scale biomechanical system identi fication problems with computationally expensive function evaluations. For load-balan ced analytical test problems formulated using 128 design variables, speedup was close to ideal and parallel efficiency above 95% for up to 32 nodes on a Beowulf cluster. In c ontrast, for load-imbalanced biomechanical system identification problems with 12 desi gn variables, speedup pl ateaued and parallel efficiency decreased almost linearly with increasing number of nodes. The primary factor affecting parallel performance was the sync hronization requirem ent of the parallel algorithm, which dictated that each iteration must wait for completion of the slowest fitness evaluation. When the analytical prob lems were solved using a fixed number of swarm iterations, a single population of 128 pa rticles produced a bett er convergence rate than did multiple independent runs perfor med using sub-populations (8 runs with 16 particles, 4 runs with 32 particles, or 2 runs with 64 part icles).These results suggest that

PAGE 63

49 (1) parallel PSO exhibits ex cellent parallel performance under load-balanced conditions, (2) an asynchronous implementation would be valuable for real-lif e problems subject to load imbalance, and (3) larger population sizes should be considered when multiple processors are available. Numerical optimization has been widely used in engineering to solve a variety of NP-complete problems in areas such as structural optimization, neural network training, control system analysis and design, and la yout and scheduling problems. In these and other engineering disciplines, two major obs tacles limiting the solution efficiency are frequently encountered. First, even medi um-scale problems can be computationally demanding due to costly fitn ess evaluations. Second, engi neering optimization problems are often plagued by multiple local optima, requiring the use of global search methods such as population-based algorithms to de liver reliable results. Fortunately, recent advances in microprocessor and network techno logy have led to increased availability of low cost computational power through clusters of low to medium performance computers. To take advantage of these a dvances, communication layers such as MPI [3, 5] and PVM [97] have been used to develop paralle l optimization algorithms, the most popular being gradient-based, genetic (GA), and simulated annealing (SA) algorithms [48,98,99]. In biomechanical optimizations of human movement, for example, parallelization has allowed problems requiring days or weeks of co mputation on a singleprocessor computer to be solved in a matter of hours on a multi-processor machine [98]. The particle swarm optimization (PSO) algorithm is a recent addition to the list of global search methods [100].This derivative-free method is particularly suited to continuous variable problems and has received increasi ng attention in the optimization community. It

PAGE 64

50 has been successfully applied to large-scale problems [69,100,101] in several engineering disciplines and, being a populat ion-based approach, is readily parallelizable. It has few algorithm parameters, and generic settings for these parameters work well on most problems [70,94]. In this study, we present a para llel PSO algorithm fo r application to computationally demanding optimization problems. The algorithms enhanced throughput due to parallelization and impr oved convergence due to increased population size are evaluated using large-scale anal ytical test problems and medium-scale biomechanical system identification problem s. Both types of problems possess multiple local minima. The analytical test problems utilize 128 design variables to create a tortuous design space but with computationally cheap fitness evaluations. In contrast, the biomechanical system identification problem s utilize only 12 desi gn variables but each fitness evaluation is much more costly computationally. These two categories of problems provide a range of load balan ce conditions for evaluating the parallel formulation. Serial Particle Swarm Algorithm Particle swarm optimization was introdu ced in 1995 by Kennedy and Eberhart [66]. Although several modifications to the origin al swarm algorithm have been made to improve performance [68,102-105] and adapt it to specific types of problems [69,106,107], a parallel version ha s not been previously implemented. The following is a brief introduction to the operation of th e PSO algorithm. Consider a swarm of p particles, with each particles position representing a pos sible solution point in the design problem space D. For each particle i, Kennedy and Eberhart pr oposed that its position xi be updated in the following manner:

PAGE 65

51 11,iii kkk xxv (4.1) with a pseudo-velocity vi calculated as follows: 11122 iiiii kkkkkkkwcrcr vvpx g x (4.2) Here, subscript k indicates a (unit) pseudo-time increment, pi k represents the best ever position of particle i at time k (the cognitive contributio n to the pseudo-velocity vector vi k+1), and pg k represents the global best position in the swarm at time k (social contribution). r1 and r2 represent uniform random number s between 0 and 1.To allow the product c1r1 or c2r2 to have a mean of 1, Kennedy and Eberhart proposed that the cognitive and social scaling parameters c1 and c2 be selected such that c1 = c2 = 2 .The result of using these proposed values is th at the particles overshoot the target half the time, thereby maintain ing separation within the group and allowing for a greater area to be searched than if no over shoot occurred. A modificati on by Fourie and Groenwold [69] on the original PSO algorithm 66 allows transition to a more refined search as the optimization progresses. This operator reduces the maximum allowable velocity vmax k and particle inertia wk in a dynamic manner, as dictated by the dynamic reduction parameters d, wd. For the sake of brevity, further details of this operator are omitted, but a detailed description can be found in References [69,70]. The serial PSO algorithm as it would typically be implemented on a single CPU computer is described below, where p is the total number of particles in the swarm. The best ever fitness value of a particle at design coordinates pi k is denoted by fi best and the best ever fitness value of the overall swarm at coordinates pg k by fg best. At time step k = 0, the particle velocities vi 0 are initialized to values within the limits 0 v 0 vmax 0 .The vector vmax 0 is calculated as a fraction of the distance between the upper and lower bounds vmax 0 = (xUB xLB) [69], with = 0 .5. With this background, the PSO algorithm flow can be described as follows:

PAGE 66

52 6. Initialize a. Set constants 1c, 2c, maxk, max 0v, 0w, dv, dw, and d b. Initialize dynamic maximum velocity max kvand inertia wk c. Set counters k = 0, t = 0, i =1. Set random number seed. d. Randomly initialize pa rticle positions in0D xi for p i, 1 e. Randomly initialize particle velocities max 0 00v v i for p i, 1 f. Evaluate cost function values if0 using design space coordinates i 0x for p i , 1 g. Set i i bestf f0 and i i 0 0x p for p i, 1 h. Set g bestf to best i bestf and 0g to corresponding i 0x 7. Optimize a. Update particle velocity vectors i k 1v using Eq.(4.2) b. If i k 1 v > max 1 kv for any component, then set that component to its maximum allowable value c. Update particle position vectors i k 1 x using Eq. (4.1) d. Evaluate cost function values i kf1 using design space coordinates i k 1x for p i , 1 e. If i best i kf f 1, then i k i bestf f1 i k i k 1 1 x p for p i , 1 f. If g best i kf f 1, then i k g bestf f1, i k k 1 1 x g for p i, 1 g. If g bestf was improved in (e), then reset t = 0. Else increment t. If k > kmax go to 3 h. If t = d, then multiply 1 kw by dw 1 and max 1 kvby dv 1 i. If maximum number of function evaluations is exceeded, then go to 3 j. Increment i. If i > p then increment k, and set i = 1

PAGE 67

53 k. Go to 2(a). 8. Report results 9. Terminate The above logic is illustra ted as a flow diagram in Figure 11 without detailing the working of the dynamic reduction parameters Problem independent stopping conditions based on convergence tests are difficult to defi ne for global optimizers. Consequently, we typically use a fixed number of fitness evaluations or swarm iterations as a stopping criteria. Parallel Particle Swarm Algorithm The following issues had to be addressed in order to create a parallel PSO algorithm. Concurrent Operation and Scalability The algorithm should operate in such a fashi on that it can be easily decomposed for parallel operation on a multi-processor machine. Furthermore, it is highly desirable that it be scalable. Scalability implies that the natu re of the algorithm s hould not place a limit on the number of computational nodes that can be utilized, thereby permitting full use of available computational resources. An example of an algorithm with limited scalability is a parallel implementation of a gradient-based optimizer. This algorithm is decomposed by distributing the workload of the derivativ e calculations for a single point in design space among multiple processors. The upper limit on concurrent operations using this approach is therefore set by th e number of design variables in the problem. On the other hand, population-based methods such as the GA and PSO are better suited to parallel computing. Here the population of individuals representing designs can be increased or decreased according to the avai lability and speed of processo rs Any additional agents in the population will allow for a higher fidelity search in the design space, lowering

PAGE 68

54 susceptibility to entrapment in local minima However, this come s at the expense of additional fitness evaluations. Figure 11 Serial implementation of PSO algor ithm. To avoid complicating the diagram, we have omitted velocity/in ertia reduction operations. Asynchronous vs. Synchronous Implementation The original PSO algorithm was implemented with a synchronized scheme for updating the best remembered ind ividual and group fitness values fi k and fg k

PAGE 69

55 respectively, and their associated positions pi k and pg k This approach entails performing the fitness evaluations for th e entire swarm before updati ng the best fitness values. Subsequent experimentation re vealed that improved conve rgence rates can be obtained by updating the fi k and fg k values and their positions after each individual fitness evaluation (i.e. in an asynchronous fashion) [70,94]. It is speculated that because the updati ng occurs immediately after each fitness evaluation, the swarm reacts more quickly to an improvement in the best-found fitness value. With the parallel implementation, however, this asynchronous improvement on the swarm is lost since fitness evaluations are pe rformed concurrently. The parallel algorithm requires updating fi k and fg k for the entire swarm after all fitness evaluations have been performed, as in the original particle swar m formulation. Conseque ntly, the swarm will react more slowly to changes of the best fitness value posi tion in the design space. This behavior produces an unavoidable perfor mance loss in terms of convergence rate compared to the asynchronous implementation and can be considered part of the overhead associated with parallelization. Coherence Parallelization should have no adverse aff ect on algorithm operation. Calculations sensitive to program order should appear to have occurred in exactly the same order as in the serial synchronous formulation, leading to the exact same final answer. In the serial PSO algorithm the fitness evaluations form th e bulk of the computational effort for the optimization and can be performed independen tly. For our parallel implementation, we therefore chose a coarse decomposition sche me where the algorithm performs only the fitness evaluations concurrently on a parallel machine. Step 2 of the particle swarm optimization algorithm was modified accordi ngly to operate in a parallel manner:

PAGE 70

56 2) Optimize a) Update particle velocity vectorsi k 1 v using Eq.(4.2) b) If i k 1v > max 1kv for any component, then set that component to its maximum allowable value c) Update particle position vectors i k 1x using Eq. (4.1) d) Concurrently evaluate tness values i kf1 using design space co-ordinates i k 1 xfor p i , 1 e) If i best i kf f 1, then i k i bestf f1, i k i k 1 1 x p for p i, 1 f) If g best i kf f 1, then i k g bestf f1 i k k 1 1 x g for p i , 1 g) If g bestf was improved in (e), then reset t = 0. Else increment t. If k > kmax go to 3 h) If t = d, then multiply 1 kw by dw 1 and max 1kvby dv 1 i) If maximum number of function evaluations is exceeded, then go to 3 j) Increment k k) Go to 2(a). The parallel PSO algorithm is re presented by the flow diagram in Figure 12. Network Communication In a parallel computational environment, the main performance bottleneck is often the communication latency between processors. This issue is especially relevant to large clusters of computers where the use of high performance ne twork interfaces are limited due to their high cost. To keep communicati on between different computational nodes at a minimum, we use fitness evaluation tasks as the level of granularity for the parallel software. As previously mentioned each of these evaluations can be performed

PAGE 71

57 independently and requires no communication aside from receiving design space coordinates to be evaluated a nd reporting the fitness value at the end of the analysis. Figure 12 Parallel implementation of th e PSO algorithm. We have again omitted velocity/inertia reduction operations to avoid complicating the diagram.

PAGE 72

58 The optimization infrastructure is organi zed into a coordinating node and several computational nodes. PSO algorithm functions and task orchestrati on are performed by the coordinating node, which assigns the design co-ordinates to be evaluated, in parallel, to the computational nodes. With this a pproach, no communication is required between computational nodes as individua l fitness evaluations are inde pendent of each other. The only necessary communication is between the coordinating node and the computational nodes and encompasses passing the following information: 1) Several distinct design variable configur ation vectors assigned by coordinating node to slave nodes for fitness evaluation. 2) Fitness values reported from sl ave nodes to coordinating node. 3) Synchronization signals to ma intain program coherence. 4) Termination signals from coordinating node to slave nodes on completion of analysis to stop the program cleanly. The parallel PSO scheme and required co mmunication layer were implemented in ANSI C on a Linux operating system using the me ssage passing interface (MPI) libraries. Synchronization and Implementation From the parallel PSO algorithm, it is clea r that some means of synchronization is required to ensure that all of the particle fitness evaluations have been completed and results reported before the velocity and position calculations can be executed (steps 2a and 2b).Synchronization is done using a ba rrier function in the MPI communication library which temporarily stops the coordi nating node from proceeding with the next swarm iteration until all of the computational nodes have responded w ith a fitness value. Because of this approach, the time require d to perform a single parallel swarm fitness evaluation will be dictated by the slowes t fitness evaluation in the swarm. Two networked clusters of computers were used to obtain the numerical results. The first

PAGE 73

59 cluster was used to solve the analytical te st problems and comprised 40 1 .33 GHz Athlon PCs located in the High-performance Co mputing and Simulation (HCS) Research Laboratory at the University of Florida. The second group was used to solve the biomechanical system identification problem s and consisted of 32 2 .40 GHz Intel PCs located in the HCS Research Laboratory at Fl orida State University. In both locations, 100 Mbps switched networks were u tilized for connecting nodes. Sample Optimization Problems Analytical Test Problems Two well-known analytical test problems were used to evaluate parallel PSO algorithm performance on large-scale problems with multiple local minima (see Appendix A for mathematical description of both problems).The first was a test function (Figure 13 (a)) introduced by Griewank [108] which superimposes a high-frequency sine wave on a multi-dimensional parabola. In cont rast, the second problem used the Corana test function [109] which exhibits discrete jumps throughout the design space (Figure 13(b)).For both problems, the number of local minima increases exponentially with the number of design variables. To investigate la rge-scale optimization issues, we formulated both problems using 128 design va riables. Since fitness evalua tions are extremely fast for these test problems, a delay of approximatel y half a second was built into each fitness evaluation so that total computation time would not be swamped by communication time. Since parallelization opens up the possibility of utilizing large numbers of processors, we used the analytical test problems to inves tigate how convergence ra te and final solution are affected by the number of particles employed in a parallel PSO run. To ensure that all swarms were given equally fair starting pos itions, we generated a pool of 128 initial

PAGE 74

60 positions using the Latin Hypercube Sampler (LHS ). Particle positions selected with this scheme will be distributed uniformly throughout the design space [110]. This initial pool of 128 pa rticles was divided into th e following sub-swarms: one swarm of 128 particles, two sw arms of 64 particles, four swarms of 32 particles, and eight swarms of 16 particles. Each sub-swar m was used independen tly to solve the two analytical test problems. This approach allowed us to investigate whether is it more efficient to perform multiple parallel optimi zations with smaller population sizes or one parallel optimization with a larger po pulation size given a sufficient number of processors. To obtain comparisons for conve rgence speed, we allowe d all PSO runs to complete 10,000 iterations before the search was terminated. This number of iterations corresponded to between 160,000 and 1,280,000 fitness evaluations depending on the number of particles em ployed in the swarm. Biomechanical System Iden tification problems In addition to the analytical test prob lems, medium-scale biomechanical system identification problems were used to eval uate parallel PSO performance under more realistic conditions. These problems were variati ons of a general problem that attempts to find joint parameters (i.e. positions and orientations of joint axes) that match a kinematic ankle model to experimental surface marker data [56]. The data are collected with an optoelectronic system that uses multiple cam eras to record the positions of external markers placed on the body segments. To permit measurement of three-dimensional motion, we attach three non-colinear markers to the foot and lower leg. The recordings are processed to obtain marker trajectories in a laboratory-fixed co-ordinate system [55111, 112] The general problem possesses 12 design variables and requires approximately 1 minute for each fitness evaluation. Thus, wh ile the problem is only medium-scale in

PAGE 75

61 terms of number of design variables, it is still computationally costly due to the time required for each fitness evaluation.. Figure 13 Surface plots of the (a) Griewank a nd (b) Corana analytical test problems showing the presence of multiple lo cal minima. For both plots, 126 design variables were fixed at their optimal values a nd the remaining 2 design variables varied in a small re gion about the global minimum. The first step in the system identification procedure is to formulate a parametric ankle joint model that can emulate a patient s movement by possessing sufficient degrees of freedom. For the purpose of this paper, we approximate the talocrural and subtalar

PAGE 76

62 joints as simple 1 degree of freedom revolute joints. The resulting ankle joint model (Figure 6) contains 12 adjustable parameters that define its kinematic structure [56]. The model also has a set of virtual markers fixed to the limb segments in positions corresponding to the locations of real marker s on the subject. The linkage parameters are then adjusted via optimization until marker s on the model follow the measured marker trajectories as closely as possible. To qua ntify how closely the kinematic model with specified parameter values can follow meas ured marker trajectories, we define a cumulative marker error e as follows: 2 11 nm ij jie (4.3) where 2222 ,,,, ijijijij x yz (4.4) where xi,j, yi,j and zi,j are the spatial displacement errors for marker i at time frame j in the x ,y ,and z directions as measured in the laboratory-fixed coordinate system, n = 50 is the number of time frames, and m = 6 (3 on the lower leg and 3 on the foot) is the number of markers. These errors are calculated between the experimental marker locations on the human subject and the virtual marker locations on the kinematic model. For each time frame, a non-linear least s quares sub-optimization is performed to determine the joint angles that minimize i,j 2 given the current set of model parameters. The first sub-optimization is started from an initial guess of zero for all joint angles. The sub-optimization for each subsequent time frame is started with the solution from the previous time frame to speed convergence. By performing a separate sub-optimization for each time frame and then calculating the sum of the squares of the marker co-ordinate errors, we obtain an estimate of how well th e model fits the data for all time frames

PAGE 77

63 included in the analysis. By varying the model parameters and repeating the suboptimization process, the parall el PSO algorithm finds the best set of model parameters that minimize e over all time frames. For numerical testing, three variations of this general problem were analyzed as described below. In all cases the number of particles used by the parallel PSO algorithm was set to a recommended value of 20 [94]. 1) Synthetic data without numerical noise: Synthetic (i.e. computer generated) data without numerical noise were ge nerated by simulating marker movements using a lower body kinematic model with virtual markers. The synthetic motion was based on an experimentally meas ured ankle motion (see 3 below).The kinematic model used anatomically realistic joint positions and orientations. Since the joint parameters associated with the synthetic data were known, his optimization was used to verify that the parallel PSO algorithm could accurately recover the original model. 2) Synthetic data with numerical noise: Numerical noise was superimposed on each synthetic marker coordinate traject ory to emulate the effect of marker displacements caused by skin movement artifacts [53]. A previously published noise model requiring three random parameters was used to generate a perturbation N in each marker coordinate [80]: sinNAt (4.5) where A is the amplitude, the frequency, and the phase angle of the noise. These noise parameters were treated as uniform random variables within the bounds 0 A 1cm, 0 25 rad/s, and 0 2 (obtained from [80]). 3) Experimental data: Experimental marker trajec tory data were obtained by processing three-dimensional recordings from a subject performing movements with reflective markers attached to the f oot and lower leg as previously described. Institutional review board approval was obtained for the experiments and data analysis, and the subject gave informed consent prior to participation. Marker positions were reported in a laboratory-fixed coordinate system. Speedup and Parallel Efficiency Parallel performance for both classes of problems was quantified by calculating speedup and parallel efficiency for different numbers of pr ocessors. Speedup is the ratio of sequential execution time to parallel execution time and ideally should equal the

PAGE 78

64 number of processors. Parallel efficiency is the ratio of speedup to number of processors and ideally should equal 100%.For the analytical test proble ms, only the Corana problem was run since the half second delay adde d to both problems makes their parallel performance identical. For the biomechanical system identification problems, only the synthetic data with numerical noise case wa s reported since experimentation with the other two cases produced sim ilar parallel performance. Figure 14 Average fitness convergence histor ies for the (a) Griewank and (b) Corana analytical test problems for swarm sizes of 16,32,64,and 128 particles and 10000 swarm iterations. Triangles indicat e the location on ea ch curve where 160 000 fitness evaluations were completed.

PAGE 79

65 The number of particles and nodes used fo r each parallel evaluation was selected based on the requirements of the problem. Th e Corana problem with 128 design variables was solved using 32 particles and 1, 2, 4, 8, 16, and 32 nodes. The biomechanical problem with 12 design variables was solved using 20 particles and 1, 2, 5, 10, and 20 nodes. Both problems were allowed to run unt il 1000 fitness evaluations were completed. Numerical Results Convergence rates for the two analytical test problems differed significantly with changes in swarm size. For the Griewank problem (Figure 14(a)), individual PSO runs converged to within 1e-6 of the global minimum after 10 000 optimizer iterations, regardless of the swarm size. R un-to-run variations in final fi tness value (not shown) for a fixed swarm size were small compared to va riations between swarm sizes. For example, no runs with 16 particles produced a better final fitness value than any of the runs with 32 particles, and similarly for the 16, 32, and 64 combinations. When number of fitness evaluations was considered instead of number of swarm iterations, runs with a smaller swarm size tended to converge more qui ckly than did runs with a larger swarm size (see triangles in Figure 14). However, two of the eight runs with the smallest number of particles failed to show continued improvement near the maximum number of iterations, indicating pos sible entrapment in a local mini mum. Similar results were found for the Corana problem (Figure 14(b)) with two exceptions. First, the optimizer was unable obtain the global minimum for any swar m size within the specified number of iterations (Figure 14(b)), and second, overlapping in results between different swarm sizes was observed. For example, some 16 par ticle results were better than 32 particles results, and similarly for the other neighbor ing combinations. On average, however, a larger swarm size tended to produce better results for both problems.

PAGE 80

66 Table 4 Parallel PSO results for the biom echanical system identification problem using synthetic marker trajectories without and with numerical noise. Optimizations on synthetic with noise and without noise were with 20 particles and were terminated after 40000 fitness evaluations. The parallel PSO algorithm found ankle joint parameters consistent with the known solution or results in the literature [61-63].The algorithm had no difficulty recovering the original parameters fr om the synthetic date set without noise (Table 4), producing a final cumulative error e on the order of 10 13.The original model was recovered with mean orientation errors less than 0.05 and mean position errors less than 0.008cm. Furthermore, the parallel implementation produced identical fitness and parameter histories as did a synchronous serial implementation. For th e synthetic data set with superimposed noise, a RMS marker distance e rror of 0.568cm was found, which is on the order of the imposed numerical noise w ith maximum amplitude of 1cm. For the experimental data set, the RMS ma rker distance error was 0.394cm (Table 5), comparable to the error for the synthetic data with noise. Conve rgence characteristics were similar for the three data sets consider ed in this study. The initial convergence rate Model Upper Lower SyntheticSynthetic data parameter bound bound solution Without noise With noise p1 (deg) 48.67 11.6318.3718.36 15.13 p2 (deg) 30.00 30.000.00 0.01 8.01 p3 (deg) 70.2310.2340.2340.26 32.97 p4 (deg) 53.00 7.0023.0023.03 23.12 p5 (deg) 72.0012.0042.0042.00 42.04 p6 (cm) 6.27 6.270.000.00 0.39 p7 (cm) 33.70 46.24 39.97 39.97 39.61 p8 (cm) 6.27 6.270.00 0.00 0.76 p9 (cm) 0.00 6.27 1.00 1.00 2.82 p10 (cm) 15.272.729.009.00 10.21 p11 (cm) 10.42 2.124.154.15 3.03 p12 (cm) 6.89 5.650.620.62 0.19

PAGE 81

67 was quite high (Figure 15(a)), where after it slowed when the approximate location of the global minimum was found. Table 5 Parallel PSO results for the biom echanical system identification problem using synthetic marker trajectories without and with numerical noise. As the solution process proceeded, the optim izer traded off increases in RMS joint orientation error (Figure 15(b)) for decreases in RMS joint position error (Figure 15(c)) to achieve further minor reductions in the fitness value. The analytical and biomechanical pr oblems exhibited different parallel performance characteristics. The analytical problem demonstrated al most perfectly linear speedup (Figure 16(a), squares) resulti ng in parallel efficienci es above 95% for up to 32 nodes (Figure 16(b), squares). In contrast, the biomechanical problem exhibited speedup results that plateaued as the number of nodes was increased (Figure 16(a), circles), producing parallel efficiencies that decreased almost linea rly with increasing number of nodes (Figure 16(b), circles). Each additional node produced roughly a 3% reduction in parallel efficiency. Discussion This study presented a parallel implemen tation of the particle swarm global optimizer. The implementation was evaluate d using analytical test problems and biomechanical system identification problems. Speedup and parallel efficiency results were excellent when each fitness eval uation took the same amount of time. Synthetic Data Experimental RMS errors Without noiseWith noise data Marker distances (cm) 3.58E-04 0.568 0.394 Orientation parameters (deg)1.85E-02 5.010 N/A Position parameters (cm) 4.95E-04 1.000 N/A

PAGE 82

68 Figure 15 Fitness convergence and parameter error plots for the biomechanical system identification problem using synthetic data with noise

PAGE 83

69 Figure 16 (a) Speedup and (b) parallel effici ency for the analytical and biomechanical optimization problems. For problems with large numbers of design variables and multiple local minima, maximizing the number of particles produced bette r results than repeated runs with fewer particles. Overall, parallel PSO makes e fficient use of comput ational resources and provides a new option for computationa lly demanding engineering optimization problems. The agreement between optimized and known orientation parameters p1 p4 for the biomechanical problem using synthetic da ta with noise was poor er than initially expected. This finding was the direct result of the sensitivity of orie ntation calculations to errors in marker positions caused by the inj ected numerical noise. Because of the close proximity of the markers to each other, even relatively small amplitude numerical noise

PAGE 84

70 in marker positions can result in large fluctuat ions in the best-fit joint orientations. While more time frames could be used to offset the effects of noise, this approach would increase the cost of each fitness evalua tion due to an increased number of suboptimizations. Nonetheless, the fitness value for the optimized parameters was lower than that for the parameters used to genera te the original noise less synthetic data. Though the biomechanical optimizati on problems only involved 12 design variables, multiple local minima existed wh en numerical or experimental noise was present. When the noisy synthetic data set wa s analyzed with a gradient-based optimizer using 20 random starting points, the optimi zer consistently found distinct solutions, indicating a large number of local minima. Si milar observations were made for a smaller number of gradient-based runs performed on th e experimental data set. To evaluate the parallel PSOs ability to avoid entrapment in these local minima, we performed 10 additional runs with the algorithm. All 10 r uns converged to the same solution, which was better than any of the soluti ons found by gradient-based runs. Differences in parallel PSO performance be tween the analytical test problem and the biomechanical system identification pr oblem can be explained by load balancing issues. The half second delay added to the analytical test prob lem made all fitness evaluations take approximately the same amount of time and substant ially less time than communication tasks. Consequently, load imbala nces were avoided and little degradation in parallel performance was observed with in creasing number of processors. In contrast, for the biomechanical system identification problem, the time required to complete the 50 sub-optimizations was sensitive to the sele cted point in design space, thereby producing load imbalances. As the number of processors increased, so did the likelihood that at least

PAGE 85

71 one fitness evaluation would take much longer than the others. Due to the synchronization requirement of the current pa rallel implementation, the resulting load imbalance caused by even one slow fitness ev aluation was sufficient to degrade parallel performance rapidly with increasing numb er of nodes. An asynchronous parallel implementation could be developed to addre ss this problem with the added benefit of permitting high parallel efficiency on inhom ogeneous clusters. Our results for the analytical and biomechanical optimization problems suggest that PSO performs best on problems with continuous rather than discrete noise. The al gorithm consistently found the global minimum for the Griewank problem, even when the number of particles was low. Though the global minimum is unknown for the biomechanical problem using synthetic data with noise, multiple PSO runs consisten tly converged to the sa me solution. Both of these problems utilized continuous, sinusoidal noise functions. In contrast, PSO did not converge to the global minima for the Corana problem with its disc rete noise function. Thus, for large-scale problems with multiple local minima and discrete noise, other optimization algorithms such as GA may provide better results [48]. Use of a LHS rather than uniform random sampling to generate initial points in design space may be a worthwhile PSO algor ithm modification. Experimentation with our random number generator indicated that in itial particle positions can at times be grouped together. This motivated our use of LH S to avoid re-sampling the same region of design space when providing initial guesses to sub-swarms. To inve stigate the influence of sampling method on PSO convergence rate, we performed multiple runs with the Griewank problem using uniform random sampling and a LHS with the default design variable bounds ( 600 to +600) and with th e bounds shifted by 200 ( 400 to +800).We

PAGE 86

72 found that when the bounds were shifted, convergence rate with uniform random sampling changed while it did not with a LH S. Thus, swarm behavior appears to be influenced by sampling method, and a LHS may be helpful for minimizing this sensitivity. A secondary motivation for running the anal ytical test problems with different numbers of particles was to determine whet her the use of sub-swarms would improve convergence. The question is whether a larg er swarm where all particles communicate with each other is more efficient than multiple smaller swarms where particles communicate within each sub-swarm but not be tween sub-swarms. It is possible that the global best position found by a large swarm may unduly in fluence the motion of all particles in the swarm. Creating sub-swarms that do not communicate eliminates this possibility. In our approach, we performed th e same number of fitness evaluations for each population size. Our results for both analyt ical test problems suggest that when a large numbers of processors are available, increasing the swarm size will increase the probability of finding a better solution. Analys is of PSO convergence rate for different numbers of particles also suggests an interest ing avenue for future investigation. Passing an imaginary curve through the triangles in Figure 14 reveals that for a fixed number of fitness evaluations, convergence rate increase s asymptotically with decreasing number of particles. While the soluti on found by a smaller number of particles may be a local minimum, the final particle positions may s till identify the general region in design space where the global minimum is located. Consequently, an adaptive PSO algorithm that periodically adjusts the number of particle s upward during the course of an optimization may improve convergence speed. For example, an initial run with 16 particles could be

PAGE 87

73 performed for a fixed number of fitness evalua tions. At the end of that phase, the final positions of those 16 particles would be kept but 16 new particles would be added to bring the total up to 32 particles. The algor ithm would continue using 32 particles until the same number of fitness evaluations wa s completed. The process of gradually increasing the number of particles would c ontinue until the maximum specified swarm size (e.g.128 particles) was analyzed. To ensu re systematic sampling of the design space, a LHS would be used to generate a pool of sample points equal to the maximum number of particles and from which sub-samples woul d be drawn progressively at each phase of the optimization. In the scenar io above with a maximum of 128 particles, the first phase with 16 particles would remove 16 sampled points from the LHS pool, the second phase another 16 points, the third phase 32 points, and the final phase the remaining 64 points. Conclusions In summary, the parallel Particle Swarm Optimization algorithm presented in this chapter exhibits excellent pa rallel performance as long as in dividual fitness evaluations require the same amount of time. For optimi zation problems where the time required for each fitness evaluation varies substantially, an asynchr onous implementation may be needed to reduce wasted CPU cycles and main tain high parallel efficiency. When large numbers of processors are available, us e of larger population sizes may result in improved convergence rates to the global so lution. An adaptive PSO algorithm that increases population size incrementally may also improve algorithm convergence characteristics.

PAGE 88

74 CHAPTER 5 IMPROVED GLOBAL CONVERGENCE USING MULTIPLE INDEPENDENT OPTIMIZATIONS Overview This chapter presents a methodology fo r improving the global convergence probability in large scale global optimizati on problems in cases where several local minima are present. The optimizer applied in this methodology is the PSO, but the strategy outlined here is applic able to any type of algorith m. The controlling idea behind this optimization approach is to utilize se veral independent optimizations, each using a fraction of a budget of computational reso urces. Although optimizations may have a limited probability of converg ence individually as compared to a single optimization utilizing the full budget, it is shown that wh en they are combined they will have a cumulative convergence probability far in excess of the single optimization. Since the individual limited optimizations are independent they may be executed concurrently on separate computation nodes w ith no interaction. Th is exploitation of parallelism allows us to vastly increase the probability of c onvergence to the global minimum while simultaneously reducing the required wall clock time if a parallel machine is used. EQUATION CHAPTER 5 SECTION 1 Introduction If we consider the general global uncons trained optimization problem for the real valued function fx defined on the set D xin n one cannot state that a global solution has been found unless an ex haustive search of the set is D x is performed

PAGE 89

75 [115]. With a finite number of function evaluations, at best we can only estimate the probability of arriving at or near the gl obal optimum. To solve global optimization problems reliably the optimizer needs to achieve an efficient balance between sampling the entire design space and directing progressi vely more densely spaced sampling points towards promising regions for a more refined search [116]. Many algorithms achieve this balance, such as the deterministic DIRECT optimizer [117] or stochastic algorithms such as genetic algorithms [118], simulated annealing [119,120], clustering [121], and the particle swarm optimizer [122]. Although these population-based global optimi zation algorithms are fairly robust, they can be attracted, at least temporarily, towards local optima whic h are not global (see, for example, the Griewank problem in Figure 17). Figure 17 Multiple local minima for Griewank analytical problem surface plot in two dimensions This difficulty can be addressed by allo wing longer optimization runs or an increased population size. Both these options often result in a decr ease in the algorithm efficiency, with no guarantee that the optim izer will escape from the local optimum.

PAGE 90

76 It is possible that restarting the algorith m when it gets stuck in a local minimum and allowing multiple optimization runs may be a more efficient approach. This follows from the hypothesis that several limited indepe ndent optimization runs, each with a small likelihood of finding the global op timum, may be combined in a synergistic effort which yields a vastly improved global convergence pr obability. This approach is routinely used for global search using multi-start local optimizers [123]. Le Riche and Haftka have also suggested the use of this approach with genetic algorithms for solving complex composite laminate optimization problems [124]. The main difficulty in the application of such a multi-run strategy is deciding when the optimizer should be stopped. The objective of this manuscript is to solve this problem by developing an efficient and robust scheme by which to allocate computational resources to individual optimizations in a set of multiple optimizations. The organization of this manuscript is as follows: First a brief description of the optimization algorithm applied in this study, th e PSO algorithm is given. Next a set of analytical problems are described, along w ith details on calculating global convergence probabilities. After that, the multiple run methodology is outlined and a general budget strategy is presented for dividing a fixed number of fitness evaluations among multiple searches on a single processor. The use of this method on a parallel processing machine is also discussed. Then, numerical results base d on the multiple run strategy for both single and multi-processor machines are reported and discussed. Finally, general conclusions about the multi-run methodology are presented.

PAGE 91

77 Methodology Analytical Test Set The convergence behavior of the PSO al gorithm was analyzed with the Griewank [108], Shekel [114] and Hartman [114] analytical problems (see Appendix A for problem definitions), each of which possess multiple lo cal minima. Analytical test problems were used because global solutions f* are known a-priori. The known solution value allows us to ascertain if an optimization has converg ed to the global minimum. To estimate the probability of converging to the global opt imum, we performed 1000 optimization runs for each problem, with each run limited to 500,000 fitness evaluations. These optimization runs are performed with identical parameter settings, with the exception of a different random number seed for each optimi zation run in order to start the population at different initial points in the design space. To evaluate the global convergence probability of the PSO algorithm as a function of populat ion size, we solved each problem using a swarm of 10, 20, 50 and 100 particles. A standa rd set of parameters were used for the other algorithm parameters (Table 6). Table 6 Particle swarm algorithm parameters ParameterDescription Value c1 Cognitive trust parameter 2.0 c2 Social trust parameter 2.0 w0 Initial inertia 1 wd Inertia reduction parameter 0.01 Bound on velocity fraction 0.5 vd Velocity reduction parameter 0.01 d Dynamic inertia/velocity reduction delay (function evaluations) 200

PAGE 92

78 We assumed that convergence to the globa l optimum was achieved when the fitness value was within a predetermined fixed tolerance t (see Table 7) of the known global optimum* f tff (5.1) For the Shekel and Hartman problems, the tolerance ensures that the minimum corresponding to the global optimum for the problem has been found. That is, because 0 the exact optimum is not obtained, but if a local optimizer is started from the PSO solution found with the give n tolerance it will converge to the global optimum. For the Griewank problem, however, starting a local optimizer at the PSO solution will not guarantee convergence to the gl obal optimum, since this noisy, shallow convex problem has several local minima grouped around the gl obal optimum that wi ll defeat a local optimizer Table 7 Problem convergence tolerances Multiple-run Methodology The use of multiple optimizations using a global optimizer such as a GA was first proposed by Le Riche and Haftka [124]. However, no criterio n was given on the division of computational resources between the multiple optimizations, and the efficiency of the approach was not investigated. The method en tails running multiple optimizations with a reduced number of fitness evaluations, either by limiting the number of algorithm iterations or reducing the popul ation while keeping the numbe r of iterations constant. Individually, the convergence probability of such a limited optimization may only be a Problem Convergence tolerance t Griewank 0.1 Shekel 0.001 Hartman 0.001

PAGE 93

79 fraction of a single traditi onal optimization run. Howeve r, the cumulative convergence probability obtained by combining the limited ru ns can be significantly higher than that of the single run. Previously similar studies have been undertaken to investigate the efficiency of repeated optimizations using simple search algorithms such as pure random search, grid search, and random walk [130,131]. The use of multiple local optimizations or clustering [132] is a common practice but for some algorithms the efficiency of this approach decreases rapidly when problems with a high number of local minima are encountered [130]. For estimating the efficiency of the pr oposed strategy and for comparison with optimizations with increased populations/allow ed iterations, we are required to calculate the probability of convergence to the global op timum for an individual optimization run, Pi. This convergence probability cannot be eas ily calculated for pr actical engineering problems with unknown solutions. For the se t of analytical problems however, the solutions are known and a large number of optimizations of these problems can be performed at little computational cost. With some reasonable assumptions these two facts allow us to estimate the probability of converg ence to the global optimum for individual optimization runs. The efficiency and explor ation run considerations derived from the theoretical analytical results are equally applicable to practical engineering problems where solutions are not known a prio ri. The first step in calculating Pi is using the convergence ratio, Cr, which is calculated as follows: cN C r N (5.2) where Nc is the number of globally converged optimizations and N is the number of optimizations, in this case 1000. For a very la rge number of optimizations the probability

PAGE 94

80 Pi that any individual run converges to the global optimum approaches Cr. For a finite number of runs however the standard error se in Pi can be quantified using: 1ii ePP s N (5.3) which is an estimate of th e standard deviation of Cr. For example, if we obtain a convergence probability of Pi = 0.5 with N = 1000 optimizations, the standard error would be se = 0.016. To obtain the combined cumulative probability of finding the global optimum by multiple independent optimizations we appl y the statistical law for calculating the probability for success with repeated indepe ndent events. We denote the combined or cumulative probability of N multiple independent optimization runs converging to the solution as Pc, and using the fact that the conve rgence events are uncorrelated then 11N ciPP (5.4) where Pi is the probability of the i th single individual optimiza tion run converging to the global optimum. If we assume that individua l optimization runs with similar parameter settings, as in the case of the following st udy, have equal probability of convergence we can simplify Eq. (5.4) to 11N ciPP (5.5) The increase in cumulative probability Pc with fixed values of Pi for increasing number of optimization runs N is illustrated in Figure 18. It must be stressed that the above relations are only valid for uncorrelate d optimizations, which may not be the case when a poor quality random number generator is used to generate initial positions in the design space. Certain generato rs can exhibit a tendency to favor regions in the design space, biasing the search and probability of convergence to a minimum in these regions.

PAGE 95

81 1 2 3 4 5 6 7 8 9 10 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 Number of runsConvergence reliability0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 Figure 18 Cumulative convergence probability Pc as a function of the number of optimization runs with assumed equal Pi values To verify the cumulative probability va lues predicted in theory with Eq. (5.5), the Monte Carlo method is applied, sampling random pairs, trip lets, quintuplets etc. of optimizations in the pool of 1000 runs. For exam ple, to estimate the experimental global convergence probability of two runs, we sele cted a large number of random pairs of optimizations among the 1000 runs. Applying Eq. (5.2), the number of cases in which either or both of the pairs converged Nc for the limited optimization run, divided by N (total number of pairs selected) will yi eld the experimental global convergence probability. Exploratory run and budgeting scheme Using the multiple run methodology requires a budget strategy by which to divide a fixed budget of fitness evaluations among the independent optimization runs. The budget of fitness evaluations nb is usually dictated by how mu ch time the user is willing to allocate on a machine in order to solve a problem divided by how long a single fitness evaluation takes to execute. An explor atory optimization utilizing a fraction, nl of this

PAGE 96

82 budget is required to determine the interac tion between the optimizer and problem. The fitness history of this optimization is used to obtain an estimate of the number of fitness evaluations to be allocated to each run, ni. This strategy is based on the assumption that a single fitness history will be sufficient to quantify the opt imizer behavior on a problem. For the set of problems it is observed that a co rrelation exists between the point where the fitness history levels off and the convergence history levels off (Figure 19). 0 5 10 x 104 0 0.2 0.4 0.6 0.8 1 Fitness EvaluationsConvergence probability 0 5 10 x 104 100 102 Griewank, 20 particles Fitness EvaluationsFitness Value 0 0.5 1 1.5 2 x 104 0 0.1 0.2 0.3 0.4 Fitness EvaluationsConvergence probability 0 0.5 1 1.5 2 x 104 -3 -2 -1 0 Hartman, 20 particles Fitness EvaluationsFitness Value 0 5 10 x 104 0 0.2 0.4 0.6 0.8 1 Fitness EvaluationsConvergence probability 0 5 10 x 104 -10 -8 -6 -4 -2 0 Shekel, 20 particles Fitness EvaluationsFitness Value Figure 19 Fitness history a nd convergence probability Pc plots for Griewank, Hartman and Shekel problems We hypothesize that the algorithm will converg e quickly to the optimum or stall at a similar number of fitness evaluations (Figure 20). The explorator y run is stopped using a stopping criterion which monitors the rate of change of the objective fitness value as a function of the number of fitness evaluations. As soon as this rate of improvement drops below a predetermined value (i.e. the fitne ss value plot levels off), the exploratory optimization is stopped and the number of fitness evaluations noted as nl. The stopping criterion parameters used for obtaining the numer ical results is a change of less than 0.01 in fitness value for at least 500 functions evaluations.

PAGE 97

83 0 0.2 0.4 0.6 0.8 1 1.2 1.4 1.6 1.8 2 x 105 -12 -10 -8 -6 -4 -2 0 Fitness evaluationsFitness value Figure 20 Typical Shekel fitnes s history plots of 20 optimiza tions (sampled out of 1000) The remainder of the budgeted fitness eval uations is distribu ted among a number of N independent optimizations, whic h may be calculated as follows bl lnn N n (5.6) with an allowed number of fitne ss evaluations per run calculate by ,bl iilnn nnn N (5.7) If a multi-processor machine is available, very high Pc values may be reached using a multiple run strategy. If we take the si mple case where each independent optimization run is assigned to a separate node, the multiple run approach will be constrained somewhat differently than the previous singl e-processor case. Rather than the number of multiple optimizations being limited by a fixed budget of fitness evaluations (which is divided equally among the set of multiple independent optimizations using Eq.(5.6)), the number of optimization runs will be defi ned by the number of computational nodes and the wall clock time available to the user. A similar method to that followed for a single

PAGE 98

84 processor machine for determining algorithm/problem behavior must still be followed to determine the optimal number of fitness evaluations for a single independent optimization run. This exploratory ru n can, however, be done using a parallel implementation of the population-based algorithm under consideration, in which concurrent processing is achieved through functional decomposition [125]. Bayesian convergence probability estimation If the amount of time and the global probability of convergence are competing considerations a Bayesian convergence proba bility estimation method may be used, as proposed by Groenwold et al. [134,135]. This criterion states that the optimization is stopped once a certain confidence level is reached, which is that the best solution found among all optimizations f will be the global solution* f This probability or confidence measure is given in [135] as *!2! Pr1 2!!NaNb ffq NaNb (5.8) where q is the predetermined confidence level se t by the user, usually 0.95. N is the total number of optimizations performed up to the time of evaluating the stopping criteria, and 1aab 1cbbNwith a, b suitable parameters of a Beta distribution ,ab. The number of optimizations among the total N which yield a final value of f is defined as Nc. Values of parameter a and b were chosen as 1 and 5 respectively, as recommended by Groenwold et. al.[135].

PAGE 99

85 Numerical Results Multi-run Approach for Predetermined Number of Optimizations For the three problems under consideration only a limited improvement in global convergence probability is achieved by applying the traditional approaches of increasing the number of fitness evaluations or the population size (Figure 21). 0 0.2 0.4 0.6 0.8 1 1.2 1.4 1.6 1.8 2 x 105 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 Fitness evaluationsConvergence probability Pi 10 particles 20 particles 50 particles 100 particles Figure 21 Shekel convergence probability for an individual optimizat ion as a function of fitness evaluations and population size For the Shekel problem, us ing larger swarm sizes and/ or allowing an increased number of fitness evaluations yielded higher co nvergence probabilities only up to a point. Similar results were obtained for the Griewa nk and Hartman problem cases. On the other hand, optimizations with a small number of particles reach ed moderate global convergence probabilities at significantly fewer fitness evaluations than did optimizations with large swarms. This behavior was obser ved for all the problems in the test set (Figure 19). To exploit this behavior we replace a single optimization with several PSO runs, each with a limited population and number of iterations. These individual optimizations

PAGE 100

86 utilize the same amount of resources allocated to the original single optimization (in this case the number of fitness evaluations). To illustrate the merit of such an approach we optimize the Hartman analytical problem with and without multiple limited optimizations. We observe that for a single optimization the probability of convergen ce is not significantly improved by allowing more fitness evaluations, or by increasing the population size (Figure 22). 0 0.2 0.4 0.6 0.8 1 1.2 1.4 1.6 1.8 2 x 105 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 Fitness evaluationsConvergence ratio 10 particles 20 particles 50 particles 100 particles Pc with 10 indep. optimizations Figure 22 Theoretical cumulative convergence probability Pc as a function of the number of optimization runs with constant Pi for the Hartman problem. Multiple independent runs with 10 particles. We also observe that an optimization with 10 particles quickly attains a probability of convergence of Pi = 0.344 after only 10,000 fitness evaluations. Using a multiple run strategy with 10 independent optimizations of 10,000 fitness eval uations yields the theoretical Pc values reported in Table 8 (calculated using Eq. (5.5) with Pi = 0.344 and n = 1,,10). These values are indicated as circ led data points at a sum equivalent number of fitness evaluations in Figure 22 for comparison with a single extended optimization.

PAGE 101

87 The cumulative conve rgence probability Pc using multiple optimizations is far superior to that of using a single optimiza tion run of up to 100 particles. Table 8 Theoretical convergence proba bility results for Hartman problem Multi-run Efficiency To investigate the most efficient manner in which to divide a budget of fitness evaluations among multiple independent optimiza tions, we compare the efficiency of the 2, 5 10 and 12 independent optimiza tions of the Griewank problem (Figure 23). A budget of 200,000 fitness evaluations is allowed for op timizing this problem. This results in each optimization in the set being stopped at ni = 100,000, 40,000, 20,000 and 16,500 fitness evaluations. It can be seen that using a combination of independent runs with high Pi values (with a high number of the associated of fitness evaluations) or multiple runs with low Pi values will not yield the same efficiency (as defined by Pc per fitness evaluation). If the predicted Pc values are plotted on the same graph (Figure 23) it is observed that the two combinations of 5 a nd 10 optimizations yield the highest Pc values for a given number of fitness evalua tions. In both cases the indepe ndent runs are stopped at a number of fitness evaluations close to the point where Pi levels off. The dependence of Number of runs n Cumulative convergence probability Pc Cumulative fitness evaluations 1 0.344 10000 2 0.570 20000 3 0.718 30000 4 0.815 40000 5 0.879 50000 6 0.920 60000 7 0.948 70000 8 0.966 80000 9 0.978 90000 10 0.985 100000

PAGE 102

88 efficiency on the choice of nl can be explained as follows: If the independent optimization is stopped prematurely it will result in very low values of Pi. 0 0.2 0.4 0.6 0.8 1 1.2 1.4 1.6 1.8 2 x 105 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 Fitness evaluationsConvergence probability Pc 1 optimization run 2 optimization runs 5 optimization runs 10 optimization runs 12 optimization runs Figure 23 Theoretical convergence probability Pc with sets of multiple runs for the Griewank problem Although a greater number of independent optimizations may then be performed within the single processor budget (Eq. (5.6)) it may still result in very poor efficiency, such as the 12 independent run case in Figure 23. If on the other hand an excessive amount of fitness evaluations are allowed for each independent run the strategy also suffer since only a reduced number of optimi zations can then be performed (see the 2 independent run case in Figure 23). To maximize the effici ency of the multi-run strategy it is therefore desirable to allow only a number of fitness evaluations corresponding to the point where Pi starts leveling off. From the above we can conclude that to obtain maximum efficiency, the individual runs should be terminated after reaching a num ber of fitness evaluations corresponding to

PAGE 103

89 the point where Pi starts leveling off. The exact Pi convergence probability data, however, is not available unless a great number of optimizations are performed, and must be estimated. This estimation is done by obser ving where the fitness history of a single optimization starts leveling off. The impact and robustness of using a single exploratory optimization fitness history to determine ni for each optimization was investigated for all three analytical problems. The exploratory optimization was stopped using a rate of change stopping criteria, and N and ni was calculated using (5.6)-(5.7). To verify that a single optimization will be sufficient to give robust re sults the rate of change stopping criteria was applied to the pool of 1000 optimizations for each of the three analytical test problems, and yielded minimum, maximum and median values of nl reported in Table 9. Table 9 Minimum, maximum and median fitn ess evaluations when applying ratio of change stopping criteria on pool of 1,000 optimizations for Griewank, Hartman and Shekel problems The minimum and maximum nl fitness evaluation hist ory plots and corresponding convergence probability plots are given in Figure 24. It can be seen that applying the rate of change stopping criteria on a single run to estimate nl gives fairly robust results, with only a small variation in Pc efficiency for all problems. Bayesian Convergence Probability Estimation The Bayesian budgeting scheme (Eq. (5.8)) indicates a consistently conservative estimation of the cumulative convergence probability for the Griewank (Figure 25), Hartman (Figure 26), and Shekel (Figure 27) problems. In order to show the accuracy of this method as the Pc values approach 1 the probability of failure (1 Pc) is used on a Problem Minimum nl Median nl Maximum nl Griewank 23,480 31,810 45,780 Hartman 14,180 16,440 23,360 Shekel 15,460 18,660 35,860

PAGE 104

90 logarithmic ordinate axis. The Bayesian es timation method is sens itive to the problem and type of optimization algorithm u nder consideration, and values of a and b parameters in the Beta distribution require fine t uning to obtain a more accurate estimation. 0 1 2 3 4 5 x 104 -12 -10 -8 -6 -4 -2 0 Shekel 0 1 2 3 4 5 x 104 -3.4 -3.3 -3.2 -3.1 -3.0 -2.9 -2.8 Hartman 0 0.5 1 1.5 2 x 104 0 10 20 30 40 50 FitnessGriewank 0 0.5 1 1.5 2 x 105 0 0.2 0.4 0.6 0.8 1 Fitness evaluationsConvergence probability 0 0.5 1 1.5 2 x 105 0 0.2 0.4 0.6 0.8 1 Fitness evaluations 0 0.5 1 1.5 2 x 105 0 0.2 0.4 0.6 0.8 1 Fitness evaluations Figure 24 Theoretical convergence probability Pc using information from exploratory optimizations which are stopped using a ra te of change stopping condition for the Griewank, Hartman and Shekel problem s. A solid line denotes the longest exploratory run in the pool of 1000 op timizations, and a dashed line denote shortest. The parameters recommended by Groenwold et.al. [135], however, yield a consistently conservative estimation of the confidence level. The results suggest the Bayesian prediction of Pc may be useful when confidence in the solution is traded of with the optimization time, on either a si ngle or parallel processor machine.

PAGE 105

91 0 0.5 1 1.5 2 2.5 3 x 105 10-3 10-2 10-1 100 Fitness evaluations1 PcGriewank a = 1, b = 5 Individual run convergence ratio Predicted multi-run Pc Sampled Bayesian Pc estimate Figure 25 Bayesian Pc estimation comparison to usi ng extrapolated and randomly sampled optimizations out of pool of 1000 runs for Griewank problem. 0 0.5 1 1.5 2 2.5 3 x 105 10-3 10-2 10-1 100 Fitness evaluations1 PcHartman a = 1, b = 5 Individual run convergence ratio Predicted multi-run Pc Sampled Bayesian Pc estimate Figure 26 Bayesian Pc estimation comparison to usi ng extrapolated and randomly sampled optimizations out of pool of 1000 runs for Hartman problem.

PAGE 106

92 0 0.5 1 1.5 2 2.5 3 x 105 10-3 10-2 10-1 100 Fitness evaluations1 PcShekel a = 1, b = 5 Individual run convergence ratio Predicted multi-run Pc Sampled Bayesian Pc estimate Figure 27 Bayesian Pc estimation comparison to usi ng extrapolated and randomly sampled optimizations out of pool of 1000 runs for Shekel problem. Monte Carlo Convergence Probability Estimation To verify the cumulative probability va lues predicted in theory with Eq. (5.5), the Monte Carlo method is applied, sampling random pairs, trip lets, quintuplets etc. of optimizations in the pool of 1000 runs. For exam ple, to estimate the experimental global convergence probability of two runs, we sele cted a large number of random pairs of optimizations among the 1000 runs. Applying Eq. (5.2), the number of cases in which either or both of the pairs converged Nc for the limited optimization run, divided by N (total number of pairs selected) will yi eld the experimental global convergence probability. This was done for Figure 22 to Figure 24, and presented in Appendix B. Conclusions For the set of large scale optimization problems evaluated with the PSO, the multirun strategy with small PSO populations de livers higher global c onvergence probability than a single run with a large population and an equal number of fitness evaluations. On

PAGE 107

93 both serial and parallel machin es, a fraction of the allocated budget of fitness evaluations or computer time is required to evaluate the optimizer/probl em behavior. This exploratory optimization is terminated usi ng a rate of change stopping criteria. The number of fitness evaluations required by expl oratory run is used to calculate the total number of runs and the remainder of the budget of evaluations is divided among them. This approach allows the strategy to util ize the computational resources efficiently, preventing premature termination or wasted fitness evaluations on already converged optimizations. Close correlation between theoretically predicted cumulative convergence probability and the experimentally sampled pr obability is obtained for the strategy on a single processor machine. A Bayesian convergence probability estimation method may be used to stop a serial or parallel optimization when optimization re liability is traded off with time for optimization. This Bayesian prediction of the cumulative convergence probability is consistently conservative for all the pr oblems tested when using parameters recommended in the literature. Very high global convergence probabilities can be achieved in a limited time span on a massively parallel machine using the mu lti-run strategy, making this method a useful tool to solve difficult and computationally intensive large scale engineering problems.

PAGE 108

94 CHAPTER 6 PARALLELISM BY DECOMPOSITION METHODOLOGIES Overview Some classes of engineering problems may be subdivided into several more tractable sub-problems by applying decom position strategies. This process of decomposition generally involves identifying gr oups of variables and constraints with minimal influence on each other. The choi ce of which decompositi on strategy to apply depends largely on the original problem stru cture and the interaction among variables. This chapter details the application of one such methodology -the quasiseparable decomposition strategyto a structural sizing problem. A structural optimization problem is used to illustrate the methodology of this strategy, and the resulting two-level optimization required to solve it. The research detailed in this chap ter was also published as [138] in collaboration with R.T. Haftka and L.T. Watson. Introduction Several decomposition strategies have been proposed in the past in order to allow the solution of problems that would otherwise be too demanding computationally. One such method, the Quasiseparable decompos ition strategy addresses a wide class of problems found in structural engineering. These problems allow themselves to be subdivided into lesser dimensional subsyste ms containing system level and component level variables. Interactions among these subsystems occur only through system level variables. Optimizing a several lower dimensi onal problems is desi rable over the all-atonce approach for the reason that the number of local minima ma y increase exponentially

PAGE 109

95 for certain classes of problems. An additional bonus is that parallel processing may also be utilized. EQUATION CHAPTER 6 SECTION 1 Throughout this chapter the te rms system level and component level refer to the upper and lower level of the bi-level decomposed problem. The terms global and local optimizations are used exclusively to refer to the optimization techni que used, e.g. global PSO vs. local SQP. The method outlining the decomposition and optimization approach followed for solving the above types of problems, the Quasiseparable Decomposition strategy, was recently proposed by Haftka and Watson [136]. This approach intr oduces the concept of a "budget" of objective function leeway assigned to each of the decomposed component subsystems. This leeway allows for a global search to be performed for each subsystem. A two-level optimization scheme is applied to the decomposed problem, with subsystem optimizations at the lower level, and coordination achie ved through an upper-level optimization. Each of the subsystems is i ndependently optimized by adjusting subsystem variables, while constrained within its a llowed budget, by maximizing the constraint margins. The upper-level optimi zer coordinates the subsyste m optimizations by adjusting the system variables and assigned budgets. It is formally proven in [136] that a decomposition of the system using this approach will not induce spurious local minima. Previously th is strategy was applied to a portal beam example with both continuous [137] and discrete [138] variables. The numerical example in this work cons ists of maximizing the deflection of an end loaded stepped cantilever beam Figure 28. This problem has its roots in structural applications where large deformations without plastic strain are desirable, for example

PAGE 110

96 MEMS micro actuation devices. In this ex ample the heights and widths and wall thicknesses of 5 sections of a stepped cantilever beam are optimized subject to aspect ratio and stress constraints in order to attain maximum tip deflection. Due to the nature of the problem there are several local minima pr esent in the 20 dimensional design space. This problem serves to illustrate the methodology followed to decompose a structure for solution by the quasiseparable optim ization strategy. It al so demonstrates the reduced search effort required to obtain the solution by the quas iseparable method as compared to the traditional all-at-once a pproach where all variables are optimized simultaneously. Quasiseparable Decomposition Theory The method considers the special case where a system problem may be decomposed with the objectiv e and constraints separable in to a function containing only global variables s and one or more global and local variables l(i) in the form 0 ,,min,i i slfslfsfsl (6.1) subject to 00 ,0,1,,iigs gsliN (6.2) In terms of global optimization the objective functions ,i i f sl should have as much leeway as possible. This can be done by introducing a budget ibE for each objective function ,i i f sl and attacking the subsystem constraints ,0iigsl by maximizing the constraint margin i for each subsystem. This decomposition strategy can then be formulated as follows: 0 1,minN i sl i f slfsb (6.3)

PAGE 111

97 subject to 00 ,0,1,,igs sbiN (6.4) where ,isb is the (global) solution to the i th lower-level problem given by minii l (6.5) subject to 1max,0 ,,0,1,,iii ji jm i iiigsl f slbsbiN (6.6) The above strategy is only appropriate to minimization however. Since the optimization in the following example is ta rgeted towards maximization, with the standard form 0 ,,max,i i sl f slfsfsl (6.7) subject to 00 ,0,1,,iigs gsliN (6.8) We need to recast the above decompositi on and optimization method into a strategy where a set of performance measures di is maximized, in place of the minimization of budgets bi. Again, in terms of global optim ization the objective functions ,i i f sl should have as much leeway as possible. In place of using minimization of budgets we establish performance targets idE for each objective function ,i i f sl. Similar to the minimization strategy we attack the subsystem constraints ,0iigsl by maximizing the constraint margin i for each subsystem. This yields the following 0 1,maxN i sl i f slfsd (6.9) subject to

PAGE 112

98 00 ,0,1,,igs sdiN (6.10) where ,isd is the (global) solution to the ith lower level problem given by maxii l (6.11) which may be computed by selecting the ma ximum constraint margin and maximizing 1max,0 ,,0,1,,iii ji jm i iiigsl dfslsdiN (6.12) alternatively formulated as 1minmin,,,,1,,iiii ijii jmgslfsldiN (6.13) Stepped Hollow Cantilever Beam Example For an illustration of the application of the quasiseparable decomposition method to a structural problem we consider a hollow stepped cantilever beam (see Figure 28) consisting of 5 sections, each section de fined by four design variables, width (w), height (h), top and bottom wall thickness (th) and left and right wall thickness (tw) (see Figure 29). A load P with y component Py and z component Pz is applied to the tip of the beam. The tip displacement of this prismatic can tilever beam is maximized subject to a material stress constraint in addition to asp ect ratio constraints. Equations describing tip displacement contributions yi and zi for each section i can be obtained using Castiglianos method. 53 4 210 22 2 545354 543 000 22 254315432 21 00 L y ll l yyy zzz ll yy zzMM dx EIP PxPxlPxll dxdxdx EIEIEI PxlllPxllll dxdx EIEI (6.14) where E is the Youngs modules and Iz1 through Iz5 are the moment of inertias about the neutral z-axis of the particular beam section under consideration.

PAGE 113

99 Figure 28 Stepped hollow cantilever beam Figure 29 Dimensional parameters of each cross section y h w z tw th y z P P L l1 l2 l3 l4 l5 y x z y y P x1 x2 x3 x4 x5 h w P z

PAGE 114

100 If we choose l1 = l2 = l3 = l4 = l5 = 1m. the above becomes 54321 543217193761 33333yyyyy y zzzzz yyyyyPPPPP EIEIEIEIEI (6.15) Similarly the displacement in the z-direction is 54321 543217193761 33333zzzzz z yyyyy zzzzzPPPPP EIEIEIEIEI (6.16) Figure 30 Projected displacement in direction The tip deflection maximizati on is subject to the stress constraints for each beam section i (with the allowable stress reduced by safety factor SF). The stresses are calculated at the root of each beam segmen t, at a maximum distance from the bending axis (four corners of each root cross section) us ing the Euler-Bernoulli beam equations. 0 0 22allow i Q allow R zy i iSF Mh Mw IISF (6.17) with 3 2 3 2 3 32 2 12122 2 22 12122hiiwi iwiiwi yiwii iwihi wiiihi zihiiwitwt htwt Ith wtt thht Itwt (6.18) In addition to the stress constraints the aspe ct ratio of each beam segment is limited to y z i yi z i

PAGE 115

101 0.25i iw h (6.19) The hollow cavity along the axial direc tion and lower bounds on wall thicknesses are accommodated through the fo llowing geometric constraints 0.01, 4 0.01 4i hi i wih t w t (6.20) The above constraints are then normalized, 1 2 3 4 5 6 7/10 4 10 4 10 10 0.01 10 0.01 /510 /510allow i hi i wi i hi wi i i i ig SF t g h t g w t g t g w g h h g w (6.21) This yields seven constraint s per section, for a total of 35 constraints. The beam material properties and applied load is given in Table 10 Table 10 Beam material propertie s and end load configuration. Material property Value Young's Modulus E 72e9 Pa Safety factor SF 3 Allowable stress allow 505e6 Pa l1, l2,l3,l4,l5 1m. Py 3535.5 N Pz 3535.5 N

PAGE 116

102 Stepped hollow beam optimization. For comparative purposes the above optimi zation problem is first solved using the traditional all-at-once fashion (single level optimization) with the entire set of 20 design parameters as input variables. Th e system is then decomposed using the quasiseparable strategy, and optimized as 5 se ts of 4 dimensional sub-problems. For both approaches the tip displacement is maximized subject to the stated stress and geometric constraints. Using the principle of supe rposition and considering contributions by individual sections to the tip displacements in the z and y directions separately, we can formulate the tip displacement projected in a specified direction (see Figure 30) as follows: 5 1cossinyizi i (6.22) where yi and zi are the section tip displacement contributions in the y and z directions respectively. The maximum tip displacement may be calculated using Eq. (6.22) 1111225555 ,,,,,,, 11maxcossinwhwhyizi whttwhtt ii (6.23) The maximization of the tip deflection (Eq. (6.22)) can then be reformulated as follows 111122555 ,,,,,,, 1maxwhwhi whttwhtt i (6.24) with cossiniyizi (6.25) There are 2 local optima for each beam s ection when considering a solid beam cross section. This is explaine d by the fact that either the be am width or height will be the dominantly thin optimized dimension, which is primarily constrained by the yield stress and aspect ratio constraints (Eq. (6.19)). The maximum tip deflection may therefore be

PAGE 117

103 achieved in two equally possible directions, if is chosen as 4 As an example, these two possible local minima are shown in Figure 31 for a solid cross section beam. 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 hw 0.2972 0.2972 Figure 31 Tip deflection contour plot as a function of b eam section 5 with height h, and width w with yield stress and aspect ratio constraints indicated by dashed and dash dotted lines respectively. Crosse s indicate the two solutions (one dominated by bending in y-axis, and the other in the z-axis) for maximum tip displacement. This yields a total of 25 = 32 possible local minima for a solid beam, which are the possible combinations of tall thin sections (dominant bending in z-direction) and flat thin sections (dominant bending in y-direction). For the purpose of clearly identifying the global minimum in this work, we choose 3 which biases the tip deflection in the y direction. With the addi tional two design variables twi and thi for each section, used to define the hollow beam, several additional lo cal minima are induced in the system, but the global minimum remains biased in the z di rection. The global solution to the hollow beam and the associated design vectors and normalized constraint values are given in Table 11. The total tip deflection of this op timal design is 0.755m. It can be seen in Table

PAGE 118

104 11 that the constraints g4 and g5, which require the hole to be at least 10mm square are not active for all but the last section, and then only in g5. Table 11 Stepped hollow beam global optimum Quasiseparable Optimization Approach By reformulating the equations gov erning the tip displacement (Eq. (6.22)) into the summation of individual contributions (Eq (6.24)) we observed that they are of the quasiseparable form (Eq.(6.9)). This allows us to optimize the problem using the decomposed formulation in Eq. (6.9)-(6.13). 111122555 111122550 ,,,,,,, 1,,,,,,,,maxwhwhwhwhi whttwhtt i f whttwhttfd (6.26) where the di's are the target displacement cont ributions by each beam section to the total tip displacement. There is no global variable contribution (f0) in this example thus (Eq (6.26)) can be written simply as 111122555 11112255 ,,,,,,, 1,,,,,,,,maxwhwhwhwhi whttwhtt i f whttwhttd (6.27) subject to ,0,1,,5iidi (6.28) where i is the (global) solution to the ith beam section optimization Section i 1 2 3 4 5 i,, (Tip deflection contribution m.) 0.2579 0.2107 0.1587 0.1004 0.0267 w (width, m.) 0.2722 0.2527 0.2296 0.2005 0.1136 h (height m.) 0.0544 0.0505 0.0459 0.0401 0.0400 twi (thickness, m.) 0.0680 0.0632 0.0574 0.0501 0.0284 thi (thickness, m.) 0.0136 0.0126 0.0115 0.0100 0.0100 constraint g1 0.0000 0.0000 0.0000 0.0000 0.0000 constraint g2 0.0000 0.0000 0.0000 0.0000 0.0000 constraint g3 0.0000 0.0000 0.0000 0.0000 0.0000 constraint g4 -5.8044-5.3167-4.7391 -4.0136 -1.8399 constraint g5 -0.3609-0.2633-0.1478 -0.0027 0.0000 constraint g6 0.0000 0.0000 0.0000 0.0000 -0.4320 constraint g7 -0.9600-0.9600-0.9600 -0.9600 -0.9296 Constraints defined in Eq. (6.21)

PAGE 119

105 ,,,,,maxiiwihii whtt (6.29) which is obtained by selecting the maxi mum of the constraints defined in (6.21) 17maxmax,,,,,,,,,i ijiiwihiiiiiwihi jgwhttdfwhtt (6.30) Using the quasiseparable formulation of the optimization problem allows us to independently and concurrently optimize th e sub-problems (sections) on a lower level. The solutions obtained in the lower level optimization are th en coordinated in the upper level (Eq. (6.27)) optimization of the budgets in order to obtain the maximum tip deflection. The system level and component le vel optimization interact ions are illustrated in Figure 32. Figure 32 Quasiseperable optimization flow chart. di System level Optimizer 1 calculation (Eq. (6.30)) 5 1 i i d Component level optimizer f(s,l1) gj(s,l1)) Section 1 2 calculation (Eq. (6.30)) Component level optimizer f(s,l2) gj(s,l2)) Section 2 5 calculation (Eq. (6.30)) Component level optimizer f(s,l5) gj(s,l5)) Section 5 i di ,s, li

PAGE 120

106 Results All-at-once Approach For the all-at-once approach an optimi zation was performed with the Matlab fmincon function, an implementatio n of the sequential quadrat ic programming algorithm. using 20 design variables and 30 constraints. The optimization converg ed to the total tip deflection and sectional tip defl ection contributions reported in Table 12. Although all stress and geometric constraints were satisfied it can be seen that the tip deflection did not reach the optimal value of 0.7545. Re peated optimization using random starting points with the Matlab fmincon optimizer revealed that seve ral such local optima exist. This is illustrated in Figure 33, which was generated by sorting the solu tions of the 1000 optimizations in an ascending order and then plotting. 0 200 400 600 800 1000 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 OptimizationsTip deflection Median local optimum Figure 33 Results for 1000 al l-at-once optimizations. Tip deflection values sorted ascending and plotted as a function of op timizations. Flat areas in the graph represent optimizations which converg ed to the same local minimum.

PAGE 121

107 It can be seen that the SQP has severe difficulty obtaining th e global minimum due to the presence of several local minima in the design space. The tip deflection contributions of the median optimum indicated in Figure 33 are reported in Table 12. Only a single optimization out of th e 1000 converged to the global optimum. Table 12 All-at-once a pproach median solution Hybrid all-at-once Approach To attempt to improve over the result s obtained in the previous section (fmincon applied in the all-at-once optim ization) a hybrid approach is applied to the problem. This approach used a population based search method, the PSO, coupled with the fmincon function. The solution vector obtained by th e PSO is used as a starting point for the fmincon function. Due to the increase in required computational effort to solve the problem only 100 optimizations are performed. The median optimization result is given in Table 13, and the ascending fitn ess value plot is given in Figure 34. A marked increase in optimizations converging to the global optimum is observed when using the hybrid approach, as compared to fmincon only. The number of optimizations which converged to the global optimum however, is still less than 7% of the total, and applying this strategy comes at significant additional computationa l cost. Despite the improvement in the average tip deflection this method did not obtain the global optimum of 0.7545, the maximum tip deflection attained was only 0.7278 (upper plateau in Figure 34). Optimal tip deflection contributions All-at-once tip deflection contributions Section 1 0.2579 0.0001 Section 2 0.2107 0.2107 Section 3 0.1588 0.0000 Section 4 0.1004 0.0599 Section 5 0.0267 0.0000 Total tip deflection 0.7545 0.2707

PAGE 122

108 0 10 20 30 40 50 60 70 80 90 100 0.35 0.4 0.45 0.5 0.55 0.6 0.65 0.7 0.75 OptimizationsTip deflection Median local optimum Figure 34 Hybrid PSO-fmincon st rategy for 100 optimizations Table 13 Hybrid, all-at-once median solution Quasiseparable Approach The quasiseparable optimization strategy is applied to the tip deflection problem by applying the Matlab fmincon optimizer to both the upper and lower level optimization. When optimizing the sub-problem using the fmincon function we observed that a substantial fraction of optimizations converg ed to the global optim um for that section Figure 35. The other plateau indicate conve rgence to the less favorable optimum (as Optimal tip deflection contributions Hybrid all-at-once tip deflection contributions Section 1 0.2579 0.2579 Section 2 0.2107 0.1271 Section 3 0.1588 0.1587 Section 4 0.1004 0.0001 Section 5 0.0267 0.0000 Total tip deflection 0.7545 0.5439

PAGE 123

109 previously indicated in Figure 31) and has the exact same cross section design configuration, but the section is rotated 90 degrees, which favors a displacement in the zdirection. A total of 140 runs out of the 1000 converged to the gl obal optimum in this component optimization. This fraction of converged optimizations correspond to a 0.14 convergence probability, and led us to appl y the multiple run strategy proposed in Chapter 5. The local optimizer SQP in the function fmincon is applied repeatedly to the sub optimization, for a total of 50 op timizations. This repeated use of fmincon proved to be far more computationally efficient at the lower level optimization compared to using a hybrid method such as PSO-fmincon. Applying Eq (5.5) with the assumption that all runs are independent yields a failure probability of approximately 5.31e-4 for each multiple run, and a combined 0.27% for all sections during the system level optimization. This results in a 7.7% probability of failure fo r a problem optimization with 30 upper level iterations (typical requi red number of iterations for this problem). The hybrid method was also applied to th e lower level optimization sub-problem and yielded the true optimum reliably, but took considerable time because of the high number of fitness evaluations required by the PSO, the effect of which are exacerbated by multiple iterations by the upper level optimizer. From Figure 36 it can be seen that the constraint margins are reduced to at least 10-10 after only 5 iterations of the upper level optimizer. Approximation of Constraint Margins Because of the expense of repeated lower level optimizations it may be advantageous to create a surr ogate approximation for calculating constraint margin values for each beam section, similar to an approach followed by Liu et. al. [139]. This surrogate serves as a simple lookup of the response ma ximized constraint as a function of the

PAGE 124

110 section target tip displacement value at th e component level optimization. For each beam section a surrogate function is created by sa mpling a 100 target tip deflection points and fitting a one dimensional spline to the set of constraint margin response values. 0 100 200 300 400 500 600 700 800 900 1000 0 0.05 0.1 0.15 0.2 0.25 0.3 0.35 OptimizationsSection 1 tip deflection contribution Figure 35 Repeated optimizations of section 1 sub-problem using fmincon function. Table 14 Quasiseparable optimization result This reduces the cost of obtaining marg in values to a simple interpolation calculation, in place of a gl obal optimization. The reduced computational cost at the lower level allows the use of robust but expe nsive optimizers such as the PSO algorithm at the upper level. Optimal tip deflection contributions Quasiseparable tip deflection contributions Section 1 target 0.2579 0.2579 Section 2 target 0.2107 0.2107 Section 3 target 0.1588 0.1588 Section 4 target 0.1004 0.1004 Section 5 target 0.0267 0.0267 Total tip deflection 0.7545 0.7545

PAGE 125

111 0 5 10 15 20 25 30 35 10-20 10-15 10-10 10-5 100 105 Upper level iterations 1 2 3 4 5 Sum budget Figure 36 Summed budget value and constr aint margins for individual sections. The exact optimum design variab les are all found to within 10-6m tolerances. The solution is also illustrated as a sequence of cross sections in Figure 38. Figure 37 Global and local optimum in sec tion 1 sub-optimization. Scale is 0.1:1 Numerical results obtained by using the PSO with the su rrogate approach on this problem are presented in Figure 39 and Figure 40. From these fi gures it can be observed y z Global optimum, 1 = 0.258 Local optimum, 1 = 0.156

PAGE 126

112 that the target tip displacements for all sections reach steady values after approximately 4000 function evaluations at the upper level. Figure 38 Decomposed cross sect ion solution. Scale is 0.1:1 The search enters a refined stage where the constraint margins are continued to be adjusted until about 8000 fitness evaluations. The final tip displacement contributions at 10000 fitness evaluations are reported in Table 15. Values in Table 15 indicate that the tip deflection contributions ar e all slightly overestimated, due to the use of a penalty method in the upper level optimization, require d for the accommodation of constraints. Table 15 Surrogate lower level approximation optimization results Discussion Repeated application of the all-at-once a pproach obtained only a single solution out of the 1000 optimizations which converged to the global optimum, giving a probability of convergence of 0.1%. The quasiseparable optim ization strategy combined with a multiOptimal tip deflection contributions Quasiseparable tip deflection contributions Section 1 target 0.2579 0.2580 Section 2 target 0.2107 0.2107 Section 3 target 0.1588 0.1588 Section 4 target 0.1004 0.1005 Section 5 target 0.0267 0.0268 Total tip deflection 0.7545 0.7547 y z Section 1 Section 2 Section 3 Section 4 Section 5

PAGE 127

113 run approach at the component level yields a probability of convergence of 92.3%. The quasiseparable strategy requires, (for a typical optimization of this problem) 7750 component level optimizations, but thes e are of reduced dimensionality (and computational effort), when compared to the original problem formulation. 0 1000 2000 3000 4000 5000 6000 7000 8000 9000 10000 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 Coordinate valueFunction Evaluations Figure 39 Target tip deflec tion value histories as a f unction of upper-level fitness evaluations Although the component level optimizations are of re duced effort they are repeatedly performed for optimization iterations of the upper level. It may therefore be advantageous for some types of problems to approximate the response margin calculated by the lower level optimization using a surrogate model. For this particular structural problem a very simple surrogate model was constructed, a one-dimensional spline for each section of the beam.

PAGE 128

114 0 2000 4000 6000 8000 10000 10-10 10-5 100 1 0 2000 4000 6000 8000 10000 10-10 10-5 100 2 0 2000 4000 6000 8000 10000 10-10 10-5 100 3 0 2000 4000 6000 8000 10000 10-10 10-5 100 4 0 2000 4000 6000 8000 10000 10-10 10-5 100 5Fitness evaluations Figure 40 Constraint margin value histories as a func tion of upper-level function evaluations

PAGE 129

115 These models, in place of the component le vel optimizations maintain a high level of reliability, and deliver a reasonably accura te solution. The deviation for the solution obtained using the surrogate approach (Table 15) was caused by the penalty multiplier in the constraint handling scheme being set too low, and not by the surrogate model itself. This was proven by using set of increasingly higher penalty multipliers which improved the solution to within the same accuracy as the surrogate model (which was calculated to within 10e-6). Higher accurac ies however, required additional computational effort. Conclusions This method shows promise in the fact that it can decompose weakly coupled structural problems and attack the sub-probl ems on a wider work front, using parallel processing and a two-level optimization scheme The results obtained for this approach show that the strategy can recover the same solution as for the original problem formulation. It is also shown that the stra tegy transfers the global search effort to the lower-level component optimization, which is of lower dimensionality, thereby making the problem more tractable to the optimizer A vastly improved probability of obtaining the global optimum is made possible by util izing the multi-run approach presented in Chapter 5 at the component-level global optimization. To further reduce computational effort it is demonstrated that for some problem types the component level optim izations may be replaced with a surrogate model. This approach reliably found the global optimum to the example problem with high accuracy. The quasiseparable decompos ition and optimization approach presented in this chapter is shown to be a robust and efficien t means of optimizing structural problems.

PAGE 130

116 CHAPTER 7 CONCLUSIONS The approaches investigated in this dissertation yield si gnificantly increased throughput in global optimization by means of exploiting parallelism at different levels. This allows the user to solve more co mplex and computationally demanding problems than previously possible by fully utilizing parallel resources. These strategies are increasingly relevant because of the rapid development of parallel processing infrastructures, and its ever more common availability. Parallelism by Exploitation of Op timization Algorithm Structure The PSO algorithm applied in the concurrent processing strategies outlined in Chapters 3-5 is a useful candidate gl obal optimization method when considering continuous variable problems. The PSO also has the advantage of insensitivity to the scale of the design variables. The technique of exploiting parallelism in this algorithm presented in Chapter 4 is also applicable to other population based methods such as Genetic algorithms. The parallel Particle Swarm Optimization algorithm exhibits excellent parallel performance when indivi dual fitness evaluations require the same amount of time. For optimization problems where the time required for each fitness evaluation varies substantially, an as ynchronous implementation may be more computationally efficient. Parallelism through Multiple Independent Optimizations The decomposition and multiple independent optimization methods presented in Chapter 5 may be applied to problems using any optimization method appropriate to the

PAGE 131

117 type of problem. For the set of large scale optimization problems evaluated with the PSO, the multi-run strategy with small PSO popul ations delivers higher global convergence probability than a single run with a larg e population and an equal number of fitness evaluations. On both serial and parallel mach ines, a fraction of the allocated budget of fitness evaluations or computer time is re quired to evaluate the optimizer/problem behavior. This fraction of the budget is used to calculate the total number of runs which will yield the most efficient distribution of the remainder of the budget. A Bayesian convergence estimation which is updated as th e optimizations complete may be used to stop a serial or parallel optimization when the available time is traded of with an confidence measure. Very high global conve rgence probabilities can be achieved in a limited time span on a massively parallel machine using the multi-run strategy, making this method a useful tool to solve difficult and computationally intensive large scale engineering problems Parallelism through Concurrent Optimi zation of Decomposed Problems The application of the quas iseparable decomposition me thod yields an efficient manner of distributing the global search effort required by a problem into several reduced and manageable portions. These decomposed el ements of the problems may be optimized concurrently, allowing faster optimization times, as compared to an all at once solution of the problem. The use of multiple independent optimizations at the component level optimizations also improved the reliability and efficiency of the strategy, and demonstrates that two of the methods of parallelism may be ef fectively combined. Future Directions The multiple independent optimization run methodology should be further refined by studying multiple stopping criteria that may be used to terminate the exploratory

PAGE 132

118 optimization run. The Bayesian global conve rgence estimation may also be improved by selecting different distributi ons than the Beta distributi on used in this work, or by performing a parameter sensitivity study on and which may yield a more robust or accurate estimation. Summary All strategies presented in this dissert ation yield improved parallelism at various aspects of structural optimization. These st rategies may be effectively combined as shown in the example presented in Chapter 6, obtaining increased levels of efficiency, robustness, utilization of para llel resources in the optimiza tion of large scale problems. This will allow the use of large numbers of processors to solve optimization problems, further increasing throughput.

PAGE 133

119 APPENDIX A ANALYTICAL TEST PROBLEM SET Griewank Objective function: EQUATION CHAPTER 8 SECTION 1 2 1 1cos1n n ii i ixx f d i x (1) with n = 10 and d = 4000 Search domain 10 1210,,:600600,1,2,10iDxxxRxi Solution: **0.0,0.0,0.0,0.0f x Hartman 6 Objective function: 2 11expmn iijjij ijfcaxp x (2) Search domain 6 16,,:01,1,,6iDxxRxi Solution (with m = 4): *0.2017,0.1500,0.4769,0.2753,0.3117,0.6573,3.322368f x See Table 16 for values of aij, ci, and pi,j.

PAGE 134

120 Shekel 10 Objective function: 11m T i iiif aac x xx (3) Search domain 4:010,1,,4iiDxRxi Solution (with m = 10): *4.00074671,4.00059326,3.99966290,3.99950981,10.536410f x See Table 17 for values of aij and ci. Table 16 Hartman problem constants aij ci 10.0 3.0 17.0 3.5 1.7 8.0 1.0 0.05 10.0 17.0 0.1 8.0 14.0 1.2 3.0 3.5 1.7 10.0 17.0 8.0 3.0 17.0 8.0 0.05 10.0 0.1 14.0 3.2 pij 0.1312 0.1696 0.5569 0.0124 0.8283 0.5886 0.2329 0.4135 0.8307 0.3736 0.1004 0.9991 0.2348 0.1451 0.3522 0.2883 0.3047 0.6650 0.4047 0.8828 0.8732 0.5743 0.1091 0.0381

PAGE 135

121 Table 17 Shekel problem constants m ai ci 1 4.0 4.0 4.0 4.0 0.1 2 1.0 1.0 1.0 1.0 0.2 3 8.0 8.0 8.0 8.0 0.2 4 6.0 6.0 6.0 6.0 0.4 5 3.0 7.0 3.0 7.0 0.4 6 2.0 9.0 2.0 9.0 0.6 7 5.0 5.0 3.0 3.0 0.3 8 8.0 1.0 8.0 1.0 0.7 9 6.0 2.0 6.0 2.0 0.5 10 7.0 3.6 7.0 3.6 0.5

PAGE 136

122 APPENDIX B MONTE CARLO VERIFICATION OF GL OBAL CONVERGENCE PROBABILITY 0 0.2 0.4 0.6 0.8 1 1.2 1.4 1.6 1.8 2 x 105 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 Fitness evaluationsConvergence ratio Convergence ratio Calculated convergence ratio Sampled convergence ratio Figure 41 Predicted and Monte Carlo sampled convergence probability Pc for 5 independent optimization runs for the Griewank problem. Each optimization run is limited to 40,000 fitness evaluations with 20 particles.

PAGE 137

123 0 0.2 0.4 0.6 0.8 1 1.2 1.4 1.6 1.8 2 x 105 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 Fitness evaluationsConvergence ratio Convergence ratio Calculated convergence ratio Sampled convergence ratio Figure 42 Predicted and Monte Carlo sampled convergence probability Pc for 12 independent optimization runs for the Griewank problem. Each optimization run is limited to 16,000 fitness evaluations with 20 particles. 0 0.2 0.4 0.6 0.8 1 1.2 1.4 1.6 1.8 2 x 105 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 Fitness evaluationsConvergence probability Pc 1 optimization run 2 optimization runs 5 optimization runs 10 optimization runs 12 optimization runs Figure 43 Monte Carlo sampled convergence probability Pc with sets of multiple runs for the Griewank problem.

PAGE 138

124 0 1 2 3 4 5 x 104 0 10 20 30 40 50 GriewankFitness 0 1 2 3 4 5 x 104 -3.4 -3.3 -3.2 -3.1 -3 -2.9 -2.8 Hartman 0 1 2 3 4 5 x 104 -12 -10 -8 -6 -4 -2 0 Shekel 0 0.5 1 1.5 2 x 105 0 0.2 0.4 0.6 0.8 1 Fitness evaluations 0 0.5 1 1.5 2 x 105 0 0.2 0.4 0.6 0.8 1 Fitness evaluations 0 0.5 1 1.5 2 x 105 0 0.2 0.4 0.6 0.8 1 Fitness evaluationsConvergence probability Figure 44 Monte Carlo sampled convergence probability Pc using information from exploratory optimizations stopped usi ng a rate of change stopping condition for the Griewank, Hartman and Shekel problems. A solid line denotes the longest exploratory run in the pool of 1000 optimizations, and a dashed line denote shortest.

PAGE 139

125 0 1 2 3 x 105 10-4 10-3 10-2 10-1 100 Fitness evaluations Hartman a = 1, b = 5 0 1 2 3 x 105 10-4 10-3 10-2 10-1 100 Fitness evaluations1 Convergence probabilityGriewank a = 1, b = 5 0 1 2 3 x 105 10-4 10-3 10-2 10-1 100 Fitness evaluations Shekel a = 1, b = 5 Individual run convergence ratio Bayesian Pc estimate Monte Carlo sampled Pc Figure 45 Bayesian Pc comparison for Griewank, Ha rtman and Shekel problem.

PAGE 140

126 LIST OF REFERENCES 1. Culler, D. E. and Singh, J. P., 1998 "Parallel Computer Architecture: A Hardware/Software Approach," Morgan Kaufman Publishers Inc., San Francisco, CA. 2. Wilson, G.V., 1995, "Practical Parallel Programming," The MIT press, Cambridge, Massachusetts. 3. Gropp, W. and Lusk, E., 1996, "Users Guide for MPICH, A Portable Implementation of MPI," Argonne Na tional Laboratory, Mathematics and Computer Science Division, http://www.mcs.anl.gov/mpi/mpiuserguide/paper.html last accessed 12/2005. 4. Gropp, W., Lusk, E., Doss, N., and Sk jellum, A., 1996, "A High Performance, Portable Implementation of the MPI Message Passing Interface Standard," Parallel Computing, 22, pp. 789. 5. Snir, M., Otto, S., Huss-Lederman, S., Wa lker, D., and Dongarra, J., 1996, "MPI: The Complete Reference," Massachusetts Institute of Technology: Cambridge, MA 6. Kroo, I., and Manning, V., 2000, "Co llaborative Optimization: Status and Directions," 8th AIAA/US AF/NASA/ISSMO Symposium on Multidisciplinary Analysis and Optimization, Long Beach, CA, September 6-8. 7. Sobieszczanski-Sobieski, J., "Optimization by Decomposition: A Step from Hierarchic to Non-Hierachi c Systems," NASA Conference Publication 3031, Part 1, Second NASA/Air Force Symposium On Recent Advances in Multidisciplinary Analysis and Optimization, Hampton, Virginia September 1988, pp. 28-30. 8. Kroo, I., Altus, S., Braun, R., Gage, P. a nd Sobieski, I.M., 1994, "Multidisciplinary Optimization Methods for Aircraft Preliminary Design," Proceedings of the 5th AIAA/USAF/NASA/OAI Symposium on Multidisciplinary Analysis and Optimization, Panama City, FL, AIAA-94-4325-CP. 9. Tappeta, R.V., 1996, "An Investigation of Alternative Problem Formulations for Multidisciplinary Optimization," M.S. thesis, University of Notre Dame. 10. Tappeta, R.V. and Renaud, J.E., 1997, "Multi-objective Collaborative Optimization," Journal of Mechanical De sign, Transactions Of the ASME vol 119 no 3. pp. 403-411.

PAGE 141

127 11. Gu, X., and Renaud, J.E., 2000, "Deci sion-Based Collaborative Optimization," Proceedings of the 8th ASCE Joint Sp ecialty Conference on Probabilistic Mechanics and Structural Reliabil ity, PMC2000-217, pp. 1-6, CD-ROM Proceedings, Eds. A. Kareem, A. Halder, B.F. Spencer, E.A., Johnson, July 24-26, Notre Dame, IN. 12. Gu, X., Renaud, J.E., Ashe, L.M., Batill, S.M., Budhiraja, A.S., and Krajewski, L.J., 2002, "Decision Based Collaborative Optimi zation," ASME Journal of Mechanical Design, Vol. 124, No. 1, pp. 1-13, Published by the American Society of Mechanical Engineers, USA. 13. Sobieski, I. P., and Kroo, I. M., 2000, "Collaborative Optimization using Response Surface Estimation," AIAA Journal, Vol. 38, No. 10, pp. 1931-1938. 14. Altus, S.S.; Kroo, I. M.; and Gage, P. J., 1996, "A Genetic Algorithm for Scheduling and Decomposition of Multidisciplinary Design Problems," Journal of Mechanical Design, Transactions of the ASME, Vol. 118 No 4, pp. 486-489. 15. Sobieski, I. and Kroo, I., 1996, "Aircraft Design using Collaborative Optimization," AIAA, Aerospace Sciences Meeting and Ex hibit, 34th, Reno, NV, Jan. 15-18. 16. Braun, R.D., and Kroo, I.M., 1997, "Development and Application of the Collaborative Optimization Architecture in a Multidisciplinary Design Environment," Multidisciplinary Design Optimization State of the Art, N. Alexandrov and M.Y. Hussaini (Ed.), SIAM Series: Proceedings in Applied Mathematics 80, pp. 98116. 17. Alexandrov, N. M. and Lewis, R.M., 1999, "Comparative Properties of Collaborative Optimization and other Approaches to MDO," Proceedings of the First ASMO UK/ISSMO Conference on En gineering Design Optimization. MCB University Press, 1999. Availabl e as ICASE technical report 99-24. 18. Braun, R.D., Gage, P., Kroo, I.M. and Sobieski, I., 1996, "Implementation and Performance Issues in Collaborative Optimization," In 6th Annual AIAA/NASA/ISSMO Symposium on Multidisciplinary Analysis and Optimization. 19. Shankar, J., Ribbens, C.J., Haftka, R.T ., and Watson, L.T., 1993, "Computational Study of a Nonhierarchical Decomposition Algorithm," Journal of Computational Optimization and Applications, Vol. 2, No. 3, pp. 273-293. 20. Braun, R., 1996, "Collaborative Optimiza tion: An Architecture for Large Scale Distributed Design," Ph.D. Thesis, Dept. of Aeronautics and Astronautics, Stanford Univ., Stanford, CA. 21. Cornier, T., Scott, A., Ledsinger, L., Mc Cormick, D., Way, W. and Olds, J., 2000, "Comparison of Collaborativ e Optimization to Conventio nal Design Techniques," 8th AIAA/USAF/NASA/ISSMO Symposium on Multidisciplinary Analysis and Optimization, Long Beach, CA.

PAGE 142

128 22. Braun, R.D., A.A. Moore and Kroo, I.M., 1997, "Collaborative Approach to Launch Vehicle Design," Journal of Spacecraft and Rockets Vol. 34 No. 4, pp 478-486. 23. Budianto, I., and Olds, J., 2000, "A Collabo rative Optimization Approach to Design and Deployment of a Space Based Infrar ed System Constellation," IEEE P335E, 2000 IEEE Aerospace Conference, Big Sky, MT, March 18-25. 24. Sobieszczanski-Sobieski, J., 1982, "A Linear Decomposition Method for Large Scale Optimization problems," NASA TM 83248. 25. Renaud, J.E. and Gabriele, G.A., 1994, "A pproximation in Non-hierarchic System Optimition," AIAA Journal, Vol. 32, No. 1, pp. 198-205. 26. Sellar, R.S., Batill, S.M., and Renaud, J.E., 1996, "Response Surface Based, Concurrent Subspace Optimization for Multidisciplinary System Design," AIAA Paper 96-0714, AIAA Aerospace Sciences Meeting and Exhibit, Reno, Nevada. 27. Renaud, J.E. and Gabriele, G.A., 1991, "Sequential Global Approximation in Nonhierarchic System Decomposition and Op timization," Adv. Design Automation, Vol. 1, pp. 191-200. 28. Renaud, J.E. and Gabriele, G.A., 1993, "Improved Coordination in Nonhierarchic System Optimization," AIAA Journal, Vol. 31, No. 12, pp. 2367-2373. 29. Tappeta, R., Nagendra, S., Renaud, J.E ., and Badhrinath, K., 1998, "Concurrent Sub-Space Optimization (CSSO) MDO Algor ithms in iSIGHT," General Electric Corporate Research and Developmen t Technical Report 97CRD187, January, Class1, General Electric Corporate Res earch and Development, Niskayuna, NY. 30. Tappeta, R., Nagendra, S., Renaud, J.E ., and Badhrinath, K., 1998, "Concurrent Sub-Space Optimization (CSSO) Code Us age in iSIGHT," General Electric Corporate Research and Developmen t Technical Report 97CRD188, January, Class1, General Electric Corporate Res earch and Development, Niskayuna, NY. 31. Tappeta, R., Nagendra, S., Renaud, J.E ., and Badhrinath, K., 1998, "Concurrent Sub-Space Optimization (CSSO) MDO Algor ithms in iSIGHT, CSSO in iSIGHT: Validation and Testing," General Electric Corporate Research and Development Technical Report 97CRD186, January, Class1 General Electric Corporate Research and Development, Niskayuna, NY. 32. Lin, W. and Renaud, J.E., 2001, "A Co mparative Study of Trust Region Managed Approximate Optimization," In Proc. AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conf erence and Exhibit, 42nd, Seattle, WA, Apr. 16-19. 33. Tappeta, R. V., Nagendra, S., and Renaud, J.E, 1999, "A Multidisciplinary Design Optimization Approach for High Temper ature Aircraft Engine Components," Structural Optimization, Vol. 18, No. 2/3, pp. 134-145.

PAGE 143

129 34. Stelmack, M.A., Batill, S.M., Beck, B.C., and Flask, D.J., 1998, "Application of the Concurrent Subspace Design Framework to Aircraft Brake Component Design Optimization," AIAA Paper 98-2033, In Proc. AIAA/ASME/ASCE/AHS/ASC 38th Structures, Structural Dynamics and Mate rials Conference, Long Beach, California. 35. Yu, X.Q., Stelmack, M. and Batill, S.M., 1998, "An Application of Concurrent Subspace Design(CSD) to the Preliminar y Design of a Low-Reynolds Number UAV," In Proceedings AIAA Paper 98-4917, AIAA/USAF/NASA/ISSMO Symposium on Multidisciplinary Analysis and Optimization, St. Louis, Missouri. 36. Michelena, N., Kim, H.M., and Papalamb ros, P.Y., 1999, "A System Partitioning and Optimization Approach to Target Cascading," In Proc. of the 12th International Conference on Engineering Design, Munich, Germany. 37. Kim, H.M., "Target Cascading in Optimal System Design, 2001, "Ph.D. Thesis, University of Michigan. 38. Tosserams, S., Etman, L.F.P., Papalambros, P.Y. and Rooda, J.E., 2005, "An Augmented Lagrangian Relaxation for Analytical Target Cascading using Alternating Directions Method of Multipliers," In Proc. 6th World Congress of Structural and Multidisciplinary Optimization, Rio de Janeiro. 39. Michelena, N., Park, H., and Papalambro s, P., 2002, "Convergence Properties of Analytical Target Cascading," In Proc. 9th AIAA/ISSMO Symposium on Multidisciplinary Analysis and Optimization, Atlanta, Georgia, September 4-6. 40. Allison, J., Kokkolaras, M., Zawislak, M ., and Papalambros, P.Y., 2005, "On the Use of Analytical Target Cascading an Collaborative Optimization for Complex System Design," In 6th World Congress of Structural and Multidisciplinary Optimization, Rio de Janeiro. 41. Kokkolaras, M., Fellini, R., Kim, H. M., Mi chelena, N., and Papalambros, P., 2002, "Extension of the Target Cascading Fo rmulation to the Design of Product Families," Structural and Multidisciplinary Optimization, Vol. 24, No. 4, pp. 293301. 42. Louca, L.S., Kokkolaras, M, Delagrammatikas, G.J., Michelena, N.F. ,Filipi, Z.S., Papalambros, P.Y., and Assanis, D.N., 2002, "Analytical Target Cascading for the Design of an Advanced Technology Heavy Truck," In Proceedings ASME International Mechanical Engineering Congress and Exposition, Paper ASME2002-32860, New Orleans, Louisiana. 43. Kim, H. M., Kokkolaras, M., Louca, L., Delagrammatikas, G., Michelena, N., Filipi, Z., Papalambros, P., and Assanis, D., 2002, "Target Cascading in Vehicle Redesign: A Class VI Truck Study, Interna tional Journal of Vehicle Design," Vol. 29, No. 3, pp. 199-225.

PAGE 144

130 44. Anderson, F. C. and Pandy, M. G., 1999, "A Dynamic Optimization Solution for Vertical Jumping in Three Dimensions ," Comp. Meth. Biomech. Biomed. Eng., Vol. 2, pp. 201-231. 45. Anderson, F. C. and Pandy, M. G., 2001, "Dynamic Optimization of Human Walking," J. Biomech. Eng., Vol. 123, pp. 381-390. 46. Pandy, M. G., 2001, "Computer Modeli ng and Simulation of Human Movement," Ann. Rev. Biomed. Eng., Vol. 3, pp. 245-273, 2001. 47. Neptune, R.R., 1999, "Optimization Algorithm Performance in Determining Optimal Controls in Human Movement Anal yses," J. Biomech. Eng., Vol. 121, pp. 249-252. 48. Soest, A. J. and Casius, L. J. R., 2003, "The Merits of a Parallel Genetic Algorithm in Solving Hard Optimization Problems," J. Biomech. Eng., Vol. 125, pp. 141-146. 49. Buchanan, T. S. and Shreeve, D. A., 1996, "An Evaluation of Optimization Techniques for the Prediction of Muscle Activation Patterns During Isometric Tasks," J. Biomech. Eng., Vol. 118, pp. 565-574. 50. Crowninshield, R. D. and Brand, R. D ., 1981, "A Physiologically Based Criterion of Muscle Force Prediction in Locomotion," J. Biomech., Vol. 14, pp. 793-801. 51. Glitsch, U. and Baumann, W., 1997, "The Three-Dimensional Determination of Internal Loads in the Lower Extremity," J. Biomech., Vol. 11, pp. 1123-1131. 52. Kaufman, K. R., An, K.-N., Litchy, W. J., and Chao, E. Y. S., 1991, "Physiological Prediction of Muscle Forces I. Theoreti cal Formulation," Neuroscience, Vol. 40, pp. 781-792. 53. Lu, T.-W. and OConnor, J. J., 1999, "Bone Position Estimation from Skin Marker Co-ordinates using Global Optimisation with Joint Constraints," J. Biomech., Vol. 32, pp. 129-124. 54. Raasch, C. C., Zajac, F. E., Ma, B., and Levine, W. S., 1997, "Muscle Coordination of Maximum-Speed Pedaling," J. Biomech., Vol. 30, pp. 595-602. 55. Prilutsky, B. I., Herzog, W., and Allinger, T. L., 1997, "Forces of Individual Cat Ankle Extensor Muscles During Locomotio n Predicted Using Static Optimization," J. Biomech., Vol. 30, pp. 1025-1033. 56. van den Bogert, A. J., Smith, G. D., and Nigg B. M., 1994, "In Vivo Determination of the Anatomical Axes of the Ankle Joint Complex: An Optimization Approach," J. Biomech., Vol. 12, pp. 1477-88.

PAGE 145

131 57. Mommerstag, T. J. A., Blankevoort, L., Huiskes, R., Kooloos, J. G. M., and Kauer, J. M. G., 1996, "Characterization of the Mechanical Behavior of Human Knee Ligaments: A Numerical-Experimental Appr oach," J. Biomech., Vol. 29, pp. 151160. 58. Reinbolt, J. A., Schutte, J. F., Fregly, B. J., Haftka, R. T., George, A. D., and Mitchell, K. H., 2005, "Determination of Patient-specific Mult i-joint Kinematic Models through Two-level Optimization, J. Biomech., Vol. 38, pp.621-626. 59. Sommer, H. J. III, and Miller, N.R., 1980, "Technique for Kinematic Modeling of Anatomical Joints," J. Biomech. Eng., Vol. 102, pp. 311-317. 60. Vaughan, C. L., Andrews, J. G., and Hay, J. G., 1982, "Selection of Body Segment Parameters by Optimization Methods," J. Biomech. Eng., Vol. 104, pp. 38-44. 61. Kaptein, B. L., Valstar, E. R., Stoel, B. C., Rozing, P. M., and Reiber, J. H. C., 2003, "A New Model-Based RSA Method Validated Using CAD Models and Models from Reversed Engineering," J. Biomech., Vol. 36, pp. 873-882. 62. Mahfouz, M. R., Hoff, W. A., Komistek, R. D., and Dennis, D. A., 2003, "A Robust Method for Registration of Three-Dimensional Knee Implant Models to TwoDimensional Fluoroscopy Images," IEEE T Med. Imaging, Vol. 22, pp. 1561-1574. 63. You, B.-M., Siy, P., Anderst, W., and Tashman, S., 2001, "In Vivo Measurement of 3-D Skeletal Kinematics from Sequences of Biplane Radiographs: Application to Knee Kinematics," IEEE T. Med. Imaging, Vol. 20, pp. 514-525. 64. Gill, P. E., Murray, W., and Wright, M. H., 1986, "Practical Optimization," Academic Press, New York. 65. Wilcox, K. and Wakayama, S., 2003, Si multaneous Optimization of a MultipleAircraft Family. J. Aircraft, Vol. 40, pp. 616-622. 66. Kennedy J., and Eberhart R.C., 1995, "P article Swarm Optimization," Proc. IEEE Intl. Conf. Neural Networks, Perth, Australia, Vol. 4, pp. 1942-1948. 67. Groenwold, A. A. and Fourie, P. C., 20 02, "The Particle Swarm Optimization in Size and Shape Optimization," J. Struct. Mu ltidiscip. Opt., Vol. 23, pp. 259-267. 68. Shi, Y. and Eberhart, R.C., 1998, "Par ameter Selection in Particle Swarm Optimization," Lect. Notes Comput. Sc. Vol. 1447, Springer-Verlag, Berlin, pp. 591-600. 69. Fourie, P.C. and Groenwold, A.A., 200 1, "The Particle Swarm Algorithm in Topology Optimization," In Proc. of the Fourth World Congress of Struct. Multidiscip. Opt., Dalian, China. pp. 52-53.

PAGE 146

132 70. Schutte, J.F., 2001, "Particle Swarms in Sizing and Global Optimization," Masters thesis, University of Pretoria, South Africa. 71. Schutte, J.F. and Groenwold, A.A., 2003, "Sizing Design of Truss Structures using Particle Swarms," J. Struct. Multidiscip. Opt., Vol. 25, pp. 261-269. 72. Schutte, J.F. and Groenwold, A.A., 2005, "A Study of Global Optimization using Particle Swarms," J. Global Opt., J. Global Opt., Vol. 31 pp 93-108. 73. Deb, K., 2001, "Multi-Objective Optimiza tion using Evolutionary Algorithms," Wiley-Interscience Series in Systems and Optimization, Chapter 4. 74. Deb, K. and Agrawal, R.B., 1995, "Simulated Binary Crossover for Continuous Search Space," Complex Systems, Vol. 9, pp. 115-148. 75. Deb, K. and Goyal, M., 1996, "A Combined Genetic Adaptive Search (GeneAS) for Engineering Design," Comp. Sci. Informatics, Vol. 26, pp. 30-45. 76. Schutte, J. F., Reinbolt, J. A., Fregly, B. J., Haftka, R. T., and George, A. D., 2004, "Parallel Global Optimization with the Part icle Swarm Algorithm," Int. J. Num. Methods Eng., Vol. 61 No. 13, pp. 2296-2315. 77. Koh, B. I., Reinbolt, J. A., Fregly, B. J., and George, A. D., 2004, "Evaluation of Parallel Decomposition Methods for Biomechanical Optimizations," Comp. Meth. Biomech. Biomed. Eng., (in press). 78. Schaffer, J. D., Caruana, R. A., Eshelman, L. J., and Das, R., 1989, "A Study of Control Parameters Affecting Online Performance of Genetic Algorithms for Function Optimizing," Proc. 3rd Int. Conf. Genetic Alg., David J. D., ed., Morgan Kaufmann Publishers, San Ma teo, California, pp. 51. 79. Corana, A., Marchesi, M., Martini, C ., and Ridella, S., 1987, "Minimizing Multimodal Functions of Continuous Vari ables with the "Simulated Annealing" Algorithm," ACM Trans. Math. Softw., Vol. 13, pp. 262. 80. Chze, L., Fregly, B. J., and Dimnet, J. 1995, "A Solidification Procedure to Facilitate Kinematic Analyses Based on Vi deo System Data," J. Biomech., Vol. 28, pp. 879-884. 81. Reference Manual for VisualDOC C/C++ AP I, 2001, Vanderplaats Research and Development, Inc., Colorado Springs, CO. 82. Boeringer, D. W. and Werner, D. H., 20 04, "Particle Swarm Optimization Versus Genetic Algorithms for Phased Array Synthesis," IEEE Trans. Antennas Propagation, Vol. 52, pp. 771-779. 83. Brandstatter, B. and Baumgartner, U., 2 002, "Particle Swarm Optimization MassSpring System Analogon," IEEE T Magn., Vol. 38, pp. 997-1000.

PAGE 147

133 84. Cockshott, A. R. and Hartman, B. E., 2001, "Improving the Fermentation Medium for Echinocandin B Production Part II: Particle Swarm Optimization," Process Biochem., Vol. 36, pp. 661-669. 85. Costa, E. F. J., Lage, P. L. C., and Biscaia, E. C. Jr., 2003, "On the Numerical Solution and Optimization of Sstyrene Polymerization in Tubular Reactors," Computers Chem. Eng., Vol. 27, pp. 1591-1604. 86. Lu, W. Z., Fan, H.-Y., and Lo, S. M., 2 003, "Application of Evolutionary Neural Network Method in Predicting Pollutant Levels in Downtown Area of Hong Kong," Neurocomputing, Vol. 51, pp. 387-400. 87. Pidaparti, R. M. and Jayanti, S., 2003, "Corrosion Fatigue through Particle Swarm Optimization," AIAA J., Vol. 41, pp. 1167-1171. 88. Tandon, V., El-Mounayri, H., and Kishawy, H., 2002, "NC End Milling Optimization using Evolutionary Computati on," Int. J Mach. Tool Manu., Vol. 42, pp. 595-605. 89. Abido, M. A., 2002, "Optimal Power Flow Using Particle Swarm Optimization," Int. J. Elec. Power Energy Sys., Vol. 24, pp. 563-571. 90. Abido, M. A., 2002, "Optimal Design of Power System Stabilizers using Particle Swarm Optimization," IEEE Trans. Energy Conv., Vol. 17, pp. 406-413. 91. Gies, D. and Rahmat-Samii, Y., 2003 "Particle Swarm Optimization for Reconfigurable Phase-Differentiated Array Design," Microwave Opt. Tech. Letters, Vol. 38, pp. 168-175. 92. Leite, J. P. B. and Topping, B. H. V., 1999, "Parallel Simulated Annealing for Structural Optimization," Comp Struct., Vol. 73, pp. 545-564. 93. Higginson, J. S., Neptune, R. R., and An derson, F. C., 2004, "Simulated Parallel Annealing within a Neighborhood for Optim ization of Biomechan ical Systems," J. Biomech, (in press). 94. Carlisle, A., and Dozier, G., 2001, "An Off-the-Shelf PSO," Proc. Workshop on Particle Swarm Optimization, Indianapolis, USA. 95. Parsopoulos, K.E. and Vrahatis, M.N., 2002, "Recent Approaches to Global Optimization Problems through Particle Swarm Optimization." Nat. Comp., Vol. 1, pp. 235-306. 96. Trelea, I.C., 2002, "The Particle Swar m Optimization Algorithm: Convergence Analysis and Parameter Selection," Info rm. Process. Lett., Vol. 85, pp. 317.

PAGE 148

134 97. Geist, A., Beguelin, A., Dongarra, J., Manc hek, R., Jiang, W., and Sunderam, V., 1994, "PVM 3 Users Guide and Reference Manual," Technical Report ORNL/TM12187, Oak Ridge National Laboratory, Knoxville,TN. 98. Anderson, F.C., Ziegler, J., Pandy, M.G ., and Whalen, R.T., 1995, "Application of High-Performance Computing to Numeri cal Simulation of Human Movement," Journal of Biomechanical Engineering Vol. 117, pp.300. 99. Monien, B., Ramme, F., and Salmen, H., 1995, "A Parallel Simulated Annealing Algorithm for Generating 3D Layouts of Un directed Graphs," In Proc. of the 3rd International Symposium on Graph Drawing GD ,Franz JB (ed.). Springer, Berlin, Germany, pp. 396. 100. Eberhart, R.C., Shi, Y., 2001, "Partic le Swarm Optimization: Developments, Applications, and Resources," In Proc. of the 2001 Congress on Evolutionary Computation CEC2001, Seoul, Korea, IEEE Press: New York, pp. 81. 101. Venter, G., Sobieszczanski-Sobieski, J., 2002, "Multidisciplinary Optimization of a Transport Aircraft Wing using Particle Swarm Optimization," In Proc. 9th AIAA/ISSMO Symposium on Multidiscip linary Analysis and Optimization, Atlanta, GA. 102. Shi, Y., Eberhart, R.C., 1998, "A Modified Particle Swarm Optimizer," In Proc. of the IEEE International Conference on Evolutionary computation, Anchorage, Alaska. IEEE Press: Piscataway, USA, pp.69. 103. Eberhart, R.C. and Shi, Y., 2000, "Com paring Inertia Weights and Constriction Factors in Particle Swarm Optimizatio n," In Proc. of the 2000 Congress on Evolutionary Computation .IEEE Servi ce Center: Piscataway, NJ, pp. 84. 104. Clerc, M., 1999, "The Swarm and the Queen: towards a Deterministic and Adaptive Particle Swarm Optimization," In Proc. of the Congress of Evolutionary Computation, Angeline P.J., Michalewicz, Z ., Schoenauer, M., Yao, X., Zalzala, A. (eds), Vol. 3, Washington DC, US A. IEEE Press: New York, pp.1951. 105. L vbjerg, M., Rasmussen, T.K., Krink, T., 2001, "Hybrid Particle Swarm Optimizer with Breeding and Subpopulations ," In Proc. of the Third Genetic and Evolutionary Computation Conference (GECCO-2001), San Francisco, CA. 106. Carlisle, A. and Dozier, G., 2000, "Ada pting Particle Swarm Optimization to Dynamic Environments," In Proc. In ternational Conference on Artificial Intelligence, Vol. 1, Las Vegas, NV, pp. 429. 107. Kennedy, J. and Eberhart, R.C., 1997, "A Discrete Binary Version of the Particle Swarm Algorithm," In Proc. of the 1997 Conference on Systems, Man and Cybernetics .IEEE Service Cent er: Piscataway, NJ, pp. 4104.

PAGE 149

135 108. Griewank, A.O., 1981, "Generalized Descent for Global Optimization," J. Optim. Theory and Appl., Vol. 34, pp.11. 109. Corana, A., Marchesi, M., Martini, C., Ridella, S., 1987, "Minimizing Multimodal Functions of Continuous Variables with the Simulated Annealing Algorithm," ACM Trans. in Mathematical Software, Vol. 13, No. 3, pp.262. 110. Wyss, G.D. and Jorgensen, K.H., 1998, "A User s Guide to LHS: Sandias Latin Hypercube Sampling Software," Sandia National Laboratories Technical Report SAND98-0210, Albuquerque, NM. 111. Soderkvist, I., Wedin, P.A., 1993, "Det ermining the Movements of the Skeleton using Well-con gured Markers," J. Biomech., Vol. 26 pp.1473. 112. Spoor, C.W. and Veldpaus, F.E., 1980. "Rigid Body Motion Calculated from Spatial Co-ordinates of Markers, J. Biomech., Vol. 13, pp. 391. 113. Le Riche, R. and Haftka, R.T., 1993, "Op timization of Laminate Stacking Sequence for Buckling Load Minimization by Genetic Algorithm," In Proc. AIAA/ASME/AHS/ASCE/ASC 33rd Structures Structural Dynamics and Materials Conference, Dallas, TX. Also AIAA Jo urnal Vol. 31, No. 5, pp. 951-956. 114. Dixon, L.C.W. and Szeg, G.P., 1975, "Towards Global Optimization," N-Holland Publ. Co. 115. T rn,A., and Zilinskas, A.,1989, "Globa l Optimization," Springer-Verlag, New York. 116. Kemenade, C.H.M. 1995, "A Two-level Ev olution Strategy (Balancing Global and Local Search," Technical report CS-R 9559 1995, Centrum voor Wiskunde en Informatica, Netherlands. 117. Jones, D.R. Perttunen, C.D., and Stuckman, B.E., 1993 "Lipschitzian Optimization without the Lipschitz Constant," J. Opt. Theory and Appl., Vol. 79, pp. 157-181. 118. Srinivas, M. and Patnaik, L. M., 1994, "Genetic algorithms: A Survey," Computer, Vol. 27, No. 6, pp. 17-26. 119. Kirkpatrick, S. Gelatt, C.D. and Vecchi, M.P., 1993, "Optimization by Simulated Annealing," Science, Vol. 220, pp. 621-680. 120. Geman, S. and Hwang, C.R., 1986, "Diffu sions for Global Optimization," SIAM J. on Control and Opt., Vol. 24, pp. 1031-1043. 121. Kan, A.H.G.R and Timmer, G.T., 1987, "Stochastic Global Optimization Methods. Part I: Clustering methods," Mathematical Programming. Vol. 39 No. 1, pp. 27-56.

PAGE 150

136 122. Kennedy J., and Eberhart R.C., 1995, "P article Swarm Optimization," Proc. IEEE Intl. Conference on Neural Networks, Pe rth, Australia, Vol. 4, pp. 1942-1948. 123. Loureno, H. R., Martin, O., and Sttzle, T., 2002, "Iterated Local Search," In Handbook of Meta-heuristics, F. Glover and G. Kochenberger, Eds. International Series in Operations Research & Manageme nt Science, Vol. 57. Kluwer Academic Publishers, Norwell, MA, pp. 321-353. 124. Le Riche, R. and Haftka, R.T., 1993, "Op timization of Laminate Stacking Sequence for Buckling Load Minimization by Genetic Algorithm,", In Proc. AIAA/ASME/AHS/ASCE/ASC 33rd Structures Structural Dynamics and Materials Conference, Dallas, TX. Also AIAA Jo urnal, Vol. 31, No. 5, pp. 951-956. 125. Schutte, J.F., Reinbolt, J.A., Fregly B.J ., and Haftka, R.T., 2004, "Parallel Global Optimization with the Particle Swarm Algorithm," J. Numer. Meth. Eng., Vol. 61, pp. 2296-2315. 126. Shi, Y. and Eberhart, R.C., 1998, "Par ameter Selection in Particle Swarm Optimization," Lecture Notes in Computer Science Vol. 1447, Springer-Verlag, Berlin, pp. 591-600. 127. Fourie, P.C. and Groenwold, A.A., 2002, "The Particle Swarm Optimization Algorithm in Size and Shape Optimization," Struct Multidisc Optim, Vol. 23, No. 4, pp. 259-267. 128. Carlisle A., Dozier G., 2001, "An off-the -shelf PSO," Proc. Workshop on Particle Swarm Optimization, Purdue School of En gineering and Technology, Indianapolis, USA. 129. Schutte, J.F., Koh, B.I., Reinbolt, J.A., Fr egly, B.J., Haftka R.T. and George, A.D. 2005, "Evaluation of a Particle Swarm Algorithm for Biomechanical Optimization," J. Biomech., Vol. 127, pp. 465-474. 130. Muselli, M., 1996, "A theoretical Approach to Restart in Global Optimization," J Global Opt. Vol. 10, pp. 1-16. 131. Muselli, M., and Rabbia, M., 1991, "Paral lel Trials versus Single Search in Supervised Learning," Proc. of the S econd International Conference on Artificial Neural Networks, Bournemouth, The Institu tion of Electrical Engineers, pp. 24-28. 132. Trn, A.A., 1976, "Probabilistic Global Optimization, a Cluster Analysis Approach," In Proc. of the Second Eu ropean Congress on Operations Research, Stockholm, pp. 521-527. 133. Sugarthan, P.N., 1999, "Particle Swarm Optimizer with Neighborhood Operator," Congress on Evolutionary Computing, Vol. 3, pp. 1958-1964.

PAGE 151

137 134. Groenwold, A.A. and Snyman, J.A., 20 02, "Global Optimization using Dynamic Search Trajectories," J. Global. Opt., Vol. 24, pp. 51-60. 135. Groenwold, A.A. and Hindley, M.P., 2002, "Competing Parallel Algorithms in Structural Optimization," Struct. Multidisc. Optim., Vol. 24, pp. 343-350. 136. Haftka, R.T. and Watson, L.T., 2002, "Mul tidisciplinary Design Optimization with Quasiseparable Subsystems," Technical Report TR-02-09, Computer Science, Virginia Tech. 137. Liu, B., Haftka, R.T., and Watson, L. T., 2004, "Global-local Structural Optimization using Response Surfaces of Lo cal Optimization Margins," Structural and Multidisciplinary Optimizatio n, Vol. 27, No. 5, pp. 352. 138. Schutte, J.F. Haftka, R.T., Watson, L. T., 2004, "Decomposition and Two-level Optimization of Structures with Discrete Sizing Variables," 45th AIAA/ASME/ASCE/AHS/ASC St ructures, Structural Dynamics, and Materials Conference, April, pp. 19-22. 139. Liu, B., Haftka, R.T., and Akgun, M.A., 2000, "Two-level Composite Wing Structural Optimization using Response Su rfaces", Structural Optimization Vol. 20, No. 2, pp. 87.

PAGE 152

138 BIOGRAPHICAL SKETCH Jaco Francois Schutte was born in Pretor ia, South Africa, on 12 November 1975. He attended the University of Pretoria wh ere he received both bachelors (1999) and masters (2001) degrees in mechanical engin eering. He also received the Sasol award for best masters student from the Faculty of En gineering at the University of Pretoria. Mr. Schutte continued his graduate stud ies at the University of Florida in Gainesville, Florida, in the pursuit of a doctoral degree.


Permanent Link: http://ufdc.ufl.edu/UFE0012932/00001

Material Information

Title: Applications of Parallel Global Optimization to Mechanics Problems
Physical Description: Mixed Material
Copyright Date: 2008

Record Information

Source Institution: University of Florida
Holding Location: University of Florida
Rights Management: All rights reserved by the source institution and holding location.
System ID: UFE0012932:00001

Permanent Link: http://ufdc.ufl.edu/UFE0012932/00001

Material Information

Title: Applications of Parallel Global Optimization to Mechanics Problems
Physical Description: Mixed Material
Copyright Date: 2008

Record Information

Source Institution: University of Florida
Holding Location: University of Florida
Rights Management: All rights reserved by the source institution and holding location.
System ID: UFE0012932:00001


This item has the following downloads:


Full Text












APPLICATIONS OF PARALLEL GLOBAL OPTIMIZATION TO MECHANICS
PROBLEMS

















By

JACO FRANCOIS SCHUTTE


A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL
OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT
OF THE REQUIREMENTS FOR THE DEGREE OF
DOCTOR OF PHILOSOPHY

UNIVERSITY OF FLORIDA


2005

































Copyright 2005

by

JACO FRANCOIS SCHUTTE
































This work is dedicated to my parents and my wife Lisa.















ACKNOWLEDGMENTS

First and foremost, I would like to thank Dr. Raphael T. Haftka, chairman of my

advisory committee, for the opportunity he provided me to complete my doctoral studies

under his exceptional guidance. Without his unending patience, constant encouragement,

guidance and expertise, this work would not have been possible. Dr. Haftka's mentoring

has made a lasting impression on both my academic and personal life.

I would also like to thank the members of my advisory committee, Dr. Benjamin

Fregly, Dr. Alan D. George, Dr. Panos M. Pardalos, and Dr. Nam Ho Kim. I am grateful

for their willingness to serve on my committee, for the help they provided, for their

involvement with my oral examination, and for reviewing this dissertation. Special

thanks go to Dr. Benjamin Fregly, who provided a major part of the financial support for

my studies. Special thanks also go to Dr. Alan George whose parallel processing graduate

course provided much of the inspiration for the research presented in this manuscript, and

for reviewing some of my publications. Thanks go also to Dr. Nielen Stander who

provided me with the wonderful opportunity to do an internship at the Livermore

Software Technology Corporation.

My colleagues in the Structural and Multidisciplinary Optimization Research

Group at the University of Florida also deserve many thanks for their support and the

many fruitful discussions. Special thanks go to Tushar Goel, Erdem Acar, and also Dr.

Satchi Venkataraman and his wife Beth, who took me in on my arrival in the USA and

provided me a foothold for which I will forever be grateful.









The financial support provided by AFOSR grant F49620-09-1-0070 to R.T.H. and

the NASA Cooperative Agreement NCC3-994, the "Institute for Future Space Transport"

University Research, Engineering and Technology Institute is gratefully acknowledged.

I would also like to express my deepest appreciation to my parents. Their limitless

love, support and understanding are the mainstay of my achievements in life.

Lastly, I would like to thank my wife, Lisa. Without her love, patience and sacrifice

I would never have been able to finish this dissertation.
















TABLE OF CONTENTS



A C K N O W L E D G M E N T S ................................................................................................. iv

L IST O F TA B L E S .................... ...... ..... ....................... .. .. ..... ............ .. ix

LIST OF FIGURES ............................... ... ...... ... ................. .x

A B S T R A C T .......................................... ..................................................x iii

CHAPTER

1 IN TR OD U CTION ............................................... .. ......................... ..

State ent of Problem ....................................... ...... .......... .............. .. 1
Purpose of R research .................................. .. .. .. ........ .... ............. .
Significance of R research .......................... ................ ... .... .... .... ............... .... .
Parallelism by Exploitation of Optimization Algorithm Structure........................2
Parallelism through Multiple Independent Concurrent Optimizations ...............3
Parallelism through Concurrent Optimization of Decomposed Problems ............3
R o ad m ap ................................................................................................... . 4

2 B A C K G R O U N D ............................................................ ........ .......... .... ....

Population-based Global O ptim ization..................................... ......... ............... 5
Parallel Processing in Optimization .................................... ... ..................... ....... 6
D ecom position in Large Scale Optim ization..................................... .....................7
Literature Review: Problem Decomposition Strategies ..........................................8
Collaborative O ptim ization (C O ) .................................. ..................................... 8
Concurrent SubSpace Optimization (CSSO) ................................................11
A nalytical Target Cascading (A TC)................................................................ 15
Quasiseparable Decomposition and Optimization ...........................................17

3 GLOBAL OPTIMIZATION THROUGH THE PARTICLE SWARM
ALGORITHM .................................. ... .. .......... .......... .... 18

O overview ............. ..... ..... ... ............................................. ............... 18
In tro du ctio n ......... ....................... ........................... ... .. ................19
Theory .................................................... ............... 21
Particle Swarm Algorithm ........................... ...............21










A analysis of Scale Sensitivity. ........................................ ......................... 24
M eth odology ................................................................................. ....................2 8
Optimization Algorithms......... .......................................... 28
A nalytical Test Problem s ............................................................................. 30
Biom echanical Test Problem ........................................ ........................... 32
R e su lts ........................................................................................................... 3 7
D iscu ssio n .................41..............................................
C o n c lu sio n s........................................................................................................... 4 6

4 PARALLELISM BY EXPLOITING POPULATION-BASED ALGORITHM
S T R U C T U R E S ..................................................................................................... 4 7

O v e rv iew .........................................................................................4 7
Introduction ........................................................................................................ 48
Serial Particle Sw arm A lgorithm ...................................................................... ..... 50
Parallel Particle Sw arm A lgorithm ................................................................... 53
Concurrent Operation and Scalability ........................................ ......53
Asynchronous vs. Synchronous Implementation .................................... 54
C o h eren c e ...................................................................................... ............. 5 5
Network Communication ................................................56
Synchronization and Implementation....................... .............. 58
Sample Optimization Problems .................................................59
Analytical Test Problems .................................................. 59
Biomechanical System Identification problems ............. .... ............ 60
Speedup and Parallel Efficiency .................................................................. 63
N u m eric al R e su lts................................................................................................. 6 5
D isc u ssio n ............................................................................................................. 6 7
C o n c lu sio n s........................................................................................................... 7 3

5 IMPROVED GLOBAL CONVERGENCE USING MULTIPLE INDEPENDENT
O P T IM IZ A T IO N S ............................................................................................... 74

Overview .............. ..................................................74
Introdu action ............. ..................................................................................... 74
M methodology ............. ....................................................................................77
A nalytical T est Set ............................................................77
M ultiple-run M ethodology ........................................ ...... ........ 78
Exploratory run and budgeting scheme ................................................81
Bayesian convergence probability estimation......................... ...84
N um erical R esults................................ .... .. .................................. 85
Multi-run Approach for Predetermined Number of Optimizations .....................85
M ulti-run Efficiency ........................... .... ..................... .. .............. 87
Bayesian Convergence Probability Estimation ...................................... 89
Monte Carlo Convergence Probability Estimation......................... ...92
C o n c lu sio n s........................................................................................................... 9 2

6 PARALLELISM BY DECOMPOSITION METHODOLOGIES ..........................94









O v e rv iew .................................................................................. 9 4
Introdu action ...........................................................................................94
Quasiseparable Decomposition Theory .......................... ......... ............... 96
Stepped H ollow Cantilever Beam Exam ple ........................................ ....................98
Stepped hollow beam optimization. ...................................... ............... 102
Quasiseparable Optimization Approach ............. ............. .............. ........ 104
R e su lts .................. ....................... ..........................................................1 0 6
All-at-once Approach ............................... ................................. 106
Hybrid all-at-once Approach ..............................................107
Quasiseparable Approach ....... ............................................................. 108
Approximation of Constraint Margins ............................. ............. 109
D iscu ssion ......... .... ................................................ ............ ...... .... ... 112
C o n clu sio n s.................................................... ................ 1 15

7 CONCLUSIONS ...................... ........ ................... 116

Parallelism by Exploitation of Optimization Algorithm Structure ...........................116
Parallelism through Multiple Independent Optimizations......................................116
Parallelism through Concurrent Optimization of Decomposed Problems .............17
Future Directions .................. ................................................. ........ 117
S u m m a ry ........................................................................................1 1 8

APPENDIX

A ANALYTICAL TEST PROBLEM SET ..............................119

G riew an k ......... ... .............................................................................. 1 19
H artm an 6 ................................ ... ................................. ........... 119
S h e k e l 1 0 ......................................................................... 12 0

B MONTE CARLO VERIFICATION OF GLOBAL CONVERGENCE
PR O B A B IL ITY ...... ... .............. .. .... .................................... ........ 122

LIST OF REFEREN CES ........................................... ........................ ............... 126

BIOGRAPHICAL SKETCH ............... ................. ............... 138
















LIST OF TABLES


Table page

1 Standard PSO algorithm parameters used in the study ........................................24

2 Fraction of successful optimizer runs for the analytical test problems ..................37

3 Final cost function values and associated marker distance and joint parameter
root-mean-square (RMS) errors after 10,000 function evaluations performed by
multiple unsealed and scaled PSO, GA, SQP, and BFGS runs.............................40

4 Parallel PSO results for the biomechanical system identification problem using
synthetic marker trajectories without and with numerical noise.............................66

5 Parallel PSO results for the biomechanical system identification problem using
synthetic marker trajectories without and with numerical noise..............................67

6 Particle swarm algorithm parameters ............................................... ............... 77

7 Problem convergence tolerances................................... .............................. ....... 78

8 Theoretical convergence probability results for Hartman problem .........................87

9 Minimum, maximum and median fitness evaluations when applying ratio of
change stopping criteria on pool of 1,000 optimizations for Griewank, Hartman
and Shekel problem s .......................... ........................ .. ............ .... 89

10 Beam material properties and end load configuration. ........................................101

11 Stepped hollow beam global optimum ................. ......... ............ ..... ............. 104

12 All-at-once approach median solution ....................................... ............... 107

13 Hybrid, all-at-once median solution.... ................... ........................108

14 Quasiseparable optimization result .......................... ......... .... ............... 110

15 Surrogate lower level approximation optimization results ..................................112

16 H artm an problem constants......................................................... ............... 120

17 Shekel problem constants..................... ................. ........................... 121















LIST OF FIGURES


Figure page

1 Collaborative optimization flow diagram ..................................... .................10

2 Collaborative optimization subspace constraint satisfaction procedure (taken
from [6]) ...................................... .................................. ........... 10

3 Concurrent subspace optimization methodology flow diagram.............................13

4 Example hierarchical problem structure ..................... .............................16

5 Sub-problem inform ation flow .......................................................... ............... 16

6 Joint locations and orientations in the parametric ankle kinematic model ..............33

7 Comparison of convergence history results for the analytical test problems........... 38

8 Final cost function values for ten unsealed (dark bars) and scaled (gray bars)
parallel PSO, GA, SQP, and BFGS runs for the biomechanical test problem.........39

9 Convergence history for unsealed (dark lines) and scaled (gray lines) parallel
PSO, GA, SQP, and BFGS runs for the biomechanical test problem...................40

10 Sensitivity of gradient calculations to selected finite difference step size for one
d esig n v ariab le .............................................................................. ............... 4 3

11 Serial implementation of PSO algorithm ...................................... ............... 54

12 Parallel implementation of the PSO algorithm ........... .......... ............... 57

13 Surface plots of the (a) Griewank and (b) Corana analytical test problems
showing the presence of multiple local minima ....................................................61

14 Average fitness convergence histories for the (a) Griewank and (b) Corana
analytical test problems for swarm sizes of 16,32,64,and 128 particles and 10000
sw arm iteration s .................................................... ................ 64

15 Fitness convergence and parameter error plots for the biomechanical system
identification problem using synthetic data with noise................ .............. ....68

16 (a) Speedup and (b) parallel efficiency for the analytical and biomechanical
optim ization problem s .............................................................. .... 69









17 Multiple local minima for Griewank analytical problem surface plot in two
dim en sion s ........... ............ ....................................... ................... 75

18 Cumulative convergence probability Pc as a function of the number of
optimization runs with assumed equal P, values....................................................81

19 Fitness history and convergence probability Pc plots for Griewank, Hartman and
S h ek el p rob lem s .................................................... ................ 82

20 Typical Shekel fitness history plots of 20 optimizations (sampled out of 1000).....83

21 Shekel convergence probability for an individual optimization as a function of
fitness evaluations and population size ........................................ ............... 85

22 Theoretical cumulative convergence probability Pc as a function of the number
of optimization runs with constant P, for the Hartman problem ...........................86

23 Theoretical convergence probability Pc with sets of multiple runs for the
G riew ank prob lem .............................................................................. ...... 88

24 Theoretical convergence probability Pc using information from exploratory
optimizations which are stopped using a rate of change stopping condition for
the Griewank, Hartman and Shekel problems............................... ............... 90

25 Bayesian Pc estimation comparison to using extrapolated and randomly sampled
optimizations out of pool of 1000 runs for Griewank problem.............................91

26 Bayesian Pc estimation comparison to using extrapolated and randomly sampled
optimizations out of pool of 1000 runs for Hartman problem. .............................91

27 Bayesian Pc estimation comparison to using extrapolated and randomly sampled
optimizations out of pool of 1000 runs for Shekel problem ................................92

28 Stepped hollow cantilever beam ........................................ ......................... 99

29 Dimensional parameters of each cross section...................................................... 99

30 Projected displacement in direction 0 ........................................ ............... 100

31 Tip deflection contour plot as a function of beam section 5 with height h, and
width w with yield stress and aspect ratio constraints indicated by dashed and
dash dotted lines respectively .......................................................... ............... 103

32 Quasiseperable optimization flow chart. ...................................... ............... 105

33 Results for 1000 all-at-once optimizations..................... ..................... 106

34 Hybrid PSO-fmincon strategy for 100 optimizations ........................................108









35 Repeated optimizations of section 1 subproblem using fnincon function............110

36 Summed budget value and constraint margins for individual sections ................111

37 Global and local optimum in section 1 sub-optimization. Scale is 0.1:1 ..............111

38 Decomposed cross section solution. Scale is 0.1:1 .............. ...... ....................112

39 Target tip deflection value histories as a function of upper-level fitness
ev alu atio n s ...................................... ......... .................. ................ 1 13

40 Constraint margin value histories as a function of upper-level function
evaluations ............ ... ....................... ......... .............. ......... 114

41 Predicted and Monte Carlo sampled convergence probability Pc for 5
independent optimization runs for the Griewank problem.................................122

42 Predicted and Monte Carlo sampled convergence probability Pc for 12
independent optimization runs for the Griewank problem................................... 123

43 Monte Carlo sampled convergence probability Pc with sets of multiple runs for
the Griewank problem ...................................... ......... .................... 123

44 Monte Carlo sampled convergence probability Pc using information from
exploratory optimizations stopped using a rate of change stopping condition for
the Griewank, Hartman and Shekel problems............................................124

45 Bayesian Pc comparison for Griewank, Hartman and Shekel problem................ 125















Abstract of Dissertation Presented to the Graduate School
of the University of Florida in Partial Fulfillment of the
Requirements for the Degree of Doctor of Philosophy

APPLICATIONS OF PARALLEL GLOBAL OPTIMIZATION TO MECHANICS
PROBLEMS

By

Jaco Francois Schutte

December 2005

Chair: Raphael T. Haftka
Cochair: Benjamin J. Fregly
Major Department: Mechanical and Aerospace Engineering

Global optimization of complex engineering problems, with a high number of

variables and local minima, requires sophisticated algorithms with global search

capabilities and high computational efficiency. With the growing availability of parallel

processing, it makes sense to address these requirements by increasing the parallelism in

optimization strategies. This study proposes three methods of concurrent processing. The

first method entails exploiting the structure of population-based global algorithms such as

the stochastic Particle Swarm Optimization (PSO) algorithm and the Genetic Algorithm

(GA). As a demonstration of how such an algorithm may be adapted for concurrent

processing we modify and apply the PSO to several mechanical optimization problems on

a parallel processing machine. Desirable PSO algorithm features such as insensitivity to

design variable scaling and modest sensitivity to algorithm parameters are demonstrated.

A second approach to parallelism and improving algorithm efficiency is by utilizing

multiple optimizations. With this method a budget of fitness evaluations is distributed









among several independent sub-optimizations in place of a single extended optimization.

Under certain conditions this strategy obtains a higher combined probability of

converging to the global optimum than a single optimization which utilizes the full

budget of fitness evaluations. The third and final method of parallelism addressed in this

study is the use of quasiseparable decomposition, which is applied to decompose loosely

coupled problems. This yields several sub-problems of lesser dimensionality which may

be concurrently optimized with reduced effort.














CHAPTER 1
INTRODUCTION

Statement of Problem

Modem large scale problems often require high-fidelity analyses for every fitness

evaluation. In addition to this, these optimization problems are of a global nature in the

sense that many local minima exist. These two factors combine to form exceptionally

demanding optimization problems which require many hours of computation on high-end

single-processor computers. In order to efficiently solve such challenging problems

parallelism may be employed for improved optimizer throughput on computational

clusters or multi-core processors.

Purpose of Research

The research presented in this manuscript is targeted on the investigation of

methods of implementing parallelism in global optimization. These methods are (i)

parallel processing through the optimization algorithm, (ii) multiple independent

concurrent optimizations, and (iii) parallel processing by decomposition. Related

methods in the literature are reported and will be compared to the approaches formulated

in this study.

Significance of Research

Parallel processing is becoming a rapidly growing resource in the engineering

community. Large "processor farms" or Beowulf clusters are becoming increasingly

common at research and commercial engineering facilities. In addition to this, processor

manufacturers are encountering physical limitations such as heat dissipation and









constraints on processor dimensions at current clock frequencies because of the upper

limit on signal speeds. These place an upper limit on the clock frequencies that can be

attained and have forced manufacturers to look at other alternatives to improve

processing capability. Both Intel and AMD are currently developing methods of putting

multiple processors on a single die and will be releasing multiprocessor cores in the

consumer market in the near future. This multi-core technology will enable even users of

desktop computers to utilize concurrent processing and make it an increasingly cheap

commodity in the future. The engineering community is facing more complex and

computationally demanding problems as the fidelity of simulation software is improved

every day. New methods that can take advantage of the increasing availability of parallel

processing will give the engineer powerful tools to solve previously intractable problems.

In this manuscript the specific problem of the optimization of large-scale global

engineering problems is addressed by utilizing three different avenues of parallelism.

Any one of these methods and even combinations of them may utilize concurrent

processing to its advantage.

Parallelism by Exploitation of Optimization Algorithm Structure

Population-based global optimizers such as the Particle Swarm Optimizer (PSO) or

Genetic Algorithms (GA' s) coordinate their search effort in the design space by

evaluating a population of individuals in an iterative fashion. These iterations take the

form of discrete time steps for the PSO and generations in the case of the GA. Both the

PSO and the GA algorithm structures allow fitness calculations of individuals in the

population to be evaluated independently and concurrently. This opens up the possibility

of assigning a computational node or processor in a networked group of machines to each









individual in the population, and calculating the fitness of each individual concurrently

for every iteration of the optimization algorithm.

Parallelism through Multiple Independent Concurrent Optimizations

A single optimization of a large-scale problem will have significant probability of

becoming entrapped in a local minimum. This risk is alleviated by utilizing population

based algorithms such as PSO and GA's. These global optimizers have the means of

escaping from such a local minima if enough iterations are allowed. Alternatively, a

larger population may be used, allowing for higher sampling densities of the design

space, which also reduces the risk of entrapment. Both these options require significant

additional computation effort, with no guarantee of improvement in global convergence

probability. A more effective strategy can be followed while utilizing the same amount of

resources. By running several independent but limited optimizations it will be shown that

in most cases the combined probability of finding the global optimum is greatly

improved. The limited optimization runs are rendered independent by applying a

population based optimizer with different sets of initial population distributions in the

design space.

Parallelism through Concurrent Optimization of Decomposed Problems

Some classes of such problems may be subdivided into several more tractable sub-

problems by applying decomposition strategies. This process of decomposition generally

involves identifying groups of variables and constraints with minimal influence on one

another. The choice of which decomposition strategy to apply depends largely on the

original problem structure, and the interaction among variables. The objective is to find

an efficient decomposition strategy to separate such large scale global optimization









problems into smaller sub-problems without introducing spurious local minima, and to

apply an efficient optimizer to solve the resulting sub-problems.

Roadmap

A background on the global optimization of large scale optimization problems,

appropriate optimization algorithms and techniques, and parallelism will be presented in

Chapter 2. Chapter 3 presents an evaluation of the global, stochastic population based

algorithm, the Particle Swarm Optimizer, through several analytical and biomechanical

system identification problems. In Chapter 4 the parallelization of this population based

algorithm is demonstrated and applied. Chapter 5 details the use of multiple independent

concurrent optimizations for significant improvements in combined convergence

probability. Chapter 6 shows how complex structural problems with a large number of

variables may be decomposed into multiple independent sub-problems which can be

optimized concurrently using a two level optimization scheme. In Chapter 7 some

conclusions are drawn and avenues for future research are proposed.














CHAPTER 2
BACKGROUND

Population-based Global Optimization

Global optimization often requires specialized robust approaches. These include

stochastic and/or population-based optimizers such as GAs and the PSO. The focus of

this research is on exploring avenues of parallelism in population based optimization

algorithms. We demonstrate these methods using the stochastic Particle Swarm

Optimizer, which is a stochastic search algorithm suited to continuous problems. Other

merits to the PSO include low sensitivity to algorithm parameters, and insensitivity to

scaling of design variables. These qualities will be investigated in Chapter 3. This

algorithm does not require gradients, which is an important consideration when solving

problems of high dimensionality, often the case in large scale optimization. The PSO has

a performance comparable to GAs, which are also candidates for any of the methods of

parallelism proposed in this manuscript, and may be more suitable for discrete or mixed

variable types of problems.

In the research presented in this manuscript the PSO is applied to a biomechanical

problem with a large number of continuous variables. This problem has several local

minima, and, when attempting to solve it with gradient based optimizers, demonstrated

high sensitivity to the scaling of design variables. This made it an ideal candidate to

demonstrate the desirable quantities of the algorithm. Other application problems include

structural sizing problems and composite laminate angle optimization.









Parallel Processing in Optimization

There are five approaches which may be utilized to decompose a single

computational task into smaller problems which may then be solved concurrently. These

are geometric decomposition, iterative decomposition, recursive decomposition,

speculative decomposition and functional decomposition, or a combination of these [1,2].

Among these, functional decomposition is most commonly applied and will also be the

method of implementation presented in Chapter 4. The steps followed in parallelizing a

sequential program consist of the following from [1].

1. Decompose the sequential program or data into smaller tasks.

2. Assign the tasks to processes. A process is an abstract entity that performs tasks.

3. Orchestrate the necessary data access, communication, and synchronization.

4. Map or assign the processes to computational nodes.

The functional decomposition method is based on the premise that applications

such as an optimization algorithm may be broken into many distinct phases, each of

which interacts with some or all of the others. These phases can be implemented as co-

routines, each of which will execute for as long as it is able and then invoke another and

remain suspended until it is again needed. Functional decomposition is the simplest way

of parallelization if it can be implemented by turning its high-level description into a set

of cooperating processes [2]. When using this method, balancing the throughput of the

different computational stages will be highly problematic when there are dependencies

between stages, for example, when data requires sequential processing by several stages.

This limits the parallelism that may be achieved using functional decomposition. Any

further parallelism must be achieved through using geometric, iterative or speculative

decomposition within a functional unit [2].









When decomposing a task into concurrent processes some additional

communication among these routines is required for coordination and the interchange of

data. Among the methods of communication for parallel programming the parallel virtual

machine (PVM) and the message-passing interface (MPI) are the most widely used. For

the research undertaken in Chapter 4 a portable implementation of the MPI library [3,4]

containing a set of parallel communication functions [5] is used.

Decomposition in Large Scale Optimization

The optimization community developed several formalized decomposition methods

such as Collaborative Optimization (CO) [6], Concurrent SubSpace Optimization (CSSO)

[7], and Quasiseparable decomposition to deal with the challenges presented by large

scale engineering problems. The problems addressed using these schemes include multi-

disciplinary optimization in aerospace design, or large scale structural and biomechanical

problems.

Decomposition methodologies in large scale optimization are currently intensely

studied because increasingly advanced higher fidelity simulation methods result in large

scale problems becoming intractable. Problem decomposition allows for:

1. Simplified decomposed subsystems. In most cases the decomposed sub-problems
are of reduced dimensionality, and therefore less demanding on optimization
algorithms. An example of this is the number of gradient calculations required per
optimization iteration, which in the case of gradient based algorithms, scales
directly with problem dimensionality.

2. A broader work front to be attacked simultaneously, which results in a problem
being solved in less time if the processing resources are available. Usually
computational throughput is limited in a sequential fashion, i.e., the FLOPS limit
for computers. However, if multiple processing units are available this limit can be
circumvented by using an array of networked computers, for example, a Beowulf
cluster.









Several such decomposition strategies have been proposed (see next section for a

short review), all differing in the manner in which they address some or all of the

following.

1. Decomposition boundaries, which may be disciplinary, or component interfaces in
a large structure.

2. Constraint handling

3. Coordination among decomposed sub-problems.

Literature Review: Problem Decomposition Strategies

Here follows a summary of methodologies used for the decomposition and

optimization of large scale global problems. This review forms the background for the

study proposed in Chapter 6 of this manuscript.

Collaborative Optimization (CO)

Overview. The Collaborative Optimization (CO) strategy was first introduced by

Kroo et al., [8]. Shortly after its introduction this bi-level optimization scheme was

extended by Tappeta and Renaud [9,10] to three distinct formulations to address multi-

objective optimization of large-scale systems. The CO paradigm is based on the concept

that the interaction among several different disciplinary experts optimizing a design is

minimal for local changes in each discipline in a MDO problem. This allows a large scale

system to be decomposed into sub-systems along domain specific boundaries. These

subsystems are optimized through local design variables specific to each subsystem,

subject to the domain specific constraints. The objective of each subsystem optimization

is to maintain agreement on interdisciplinary design variables. A system level optimizer

enforces this interdisciplinary compatibility while minimizing the overall objective

function. This is achieved by combining the system level fitness with the cumulative sum









of all discrepancies between interdisciplinary design variables. This strategy is extremely

suited for parallel computation because of the minimal interaction between the different

design disciplines, which results in reduced communication overhead during the course

of the optimization.

Methodology. This decomposition strategy is described with the flow diagram in

Figure 1. As mentioned previously, the CO strategy is a bi-level method, in which the

system level optimization sets and adjusts a interdisciplinary design variables during the

optimization. The subspace optimizer attempts both to satisfy local constraints by

adjusting local parameters, and to meet the interdisciplinary design variable targets set by

the system level optimizer. Departures from the target interdisciplinary design parameters

are allowed, which may occur because of insufficient local degrees of freedom, but is to

be minimized. The system level optimizer attempts to adjust the interdisciplinary

parameters such that the objective function is minimized, while maximizing the

agreement between subsystems. This process of adjusting the system level target design,

and the subsystems attempting to match it whilst satisfying local constraints, is repeated

until convergence.

This procedure can be graphically illustrated in Figure 2 (taken from Kroo and

Manning [6]). The system level optimizer sets a design target P, and each subspace

optimization attempts to satisfy local constraints while matching the target design P as

closely as possible by moving in directions 1 and 2. During the next system level

optimization cycle the target design P will be moved in direction 3 in order to maximize

the agreement between the target design and subspace designs that satisfy local

constraints.














Attempt to match system level Optimize system
mandated target design approximation problem
and set target design




I Subproblem Subproblem
optimization optimization
(local variables) (local variables)
I -- L- .


Figure 1 Collaborative optimization flow diagram


Subspace 2
constraint


Subspace 1
constraint


Figure 2 Collaborative optimization subspace constraint satisfaction procedure (taken
from [6])

Refinements on method. Several enhancements to this method have been

proposed, among which are the integration of this architecture into a decision based

design framework as proposed by Hazelrigg [11,12], the use of response surfaces [13] to









model disciplinary analyses, and genetic algorithms with scheduling for increased

efficiency [14].

Solution quality and computational efficiency. Sobieski and Kroo [15] report

very robust performance on their CO scheme, with identical solutions being found on

both collaborative and single level optimizations of 45 design variable, 26 constraint

problems.

Braun and Kroo [16] showed CO to be unsuited for small problems with strong

coupling, but for large scale problems with weak coupling the CO methodology becomes

more computationally efficient. They also found that the amount of system level

iterations is dependent on the level of coupling of sub-systems, and that the required

number of sub-optimizations scales in proportion to the overall problem size. Similar

findings are reported by Alexandrov [17]. Braun et al. [18] evaluated the performance of

CO on a set of quadratic problems presented by Shankar et al. [19] to evaluate the CSSO

method. Unlike CSSO, the CO method did not require an increased amount of iterations

for QP problems with strong coupling, and converged successfully in all cases.

Applications. This decomposition architecture has been extensively demonstrated

using analytical test problems [18,20] and aerospace optimization problems such as

trajectory optimization [16,18], vehicle design [13,20-22], and satellite constellation

configurations [23].

Concurrent SubSpace Optimization (CSSO)

Overview. This method was proposed by Sobieszczanski-Sobieski [24], and, like

CO, divides the MDO problem along disciplinary boundaries. The main difference

however is the manner in which the CSSO framework coordinates the subsystem

optimizations. A bi-level optimization scheme is used in which the upper optimization









problem consists of a linear [7] or second order [25] system approximation created with

the use of Global Sensitivity Equations (GSE's). This system approximation reflects

changes in constraints and the objective function as a function of design variables.

Because of the nonlinearities, this approximation is only accurate in the immediate

neighborhood of the current design state, and needs to be updated after every upper-level

optimization iteration. After establishing a system level approximation the subsystems

are independently optimized using only design variables local to the subspace. The

system level approximation is then updated by a sensitivity analysis to reflect changes in

the subspace design. The last two steps of subspace optimization and system

approximation is repeated through the upper level optimizer until convergence is

achieved.

Methodology. The basic steps taken to optimize a MDO problem with the CSSO

framework are as follows:

1. Select initial set of designs for each subsystem.
2. Construct system approximation using GSE's
3. a) Subspace optimization through local variables and objective
b) Update system approximation by performing sensitivity analysis
4. Optimize design variables according to system approximation
5. Stop if converged; otherwise go to 3)

where step (3) contains the lower level optimization in this bi-level framework. This

process is illustrated in Figure 3.

Refinements on method. As previously mentioned, early CSSO strategies used linear

system level approximations obtained with the Global Sensitivity Equations (GSE).

Coordination of subspace or disciplinary optimizations is achieved through system level

sensitivity information.



































Figure 3 Concurrent subspace optimization methodology flow diagram

This imposes limits on the allowable deviation from the current design, and

requires a new approximation to be constructed at every system iteration to maintain

reasonable accuracy. Recent research focused on alternate methods for acquiring system

level approximations for the coordination effort. Several authors [25-28] modified the

system approximation to utilize a second order response surface approximation. This is

combined with a database of previous fitness evaluation points, which can be used to

create and update the response surface. This response surface then serves to couple the

subsystem optimizations and coordinate system level design.

Solution quality and computational efficiency. Shankar et al. [19] investigated

the robustness of the CSSO on a set of analytical problems. Several quadratic

programming problems with weak and strong coupling between subsystems were









evaluated with a modification of Sobieszczanski-Sobieski's nonhierarchical subspace

optimization scheme [7]. Results indicated reasonable performance for problems with

weak coupling between subsystems. For large problems with strong interactions between

subsystems, this decomposition scheme proved unreliable in terms of finding global

sensitivities, leading to poor solutions.

Tappeta et al., using the iSIGHT software. [29-31], analyzed two analytical and

two structural problems, a welding design and a stepped beam weight minimization. In

this work it is reported that the Karush-Kuhn-Tucker conditions were met in some of the

cases, and that most problems converged closely to the original problem solution.

Lin and Renaud compared the commercial software package LANCELOT [32]

which compares the Broydon-Fletcher-Goldfarb-Shanno (BFGS) method to the CSSO

strategy, the latter incorporating response surfaces. In this study the authors show similar

computational efficiencies for small uncoupled analytical problems. For large scale MDO

problems however the CSSO method consistently outperformed the LANCELOT

optimizer in this area.

Sellar et al. [26] compared a CSSO with neural network based response surface

enhancements with a full (all at once) system optimization. The CSSO-NN algorithm

showed a distinct advantage in computational efficiency over the all-at-once approach,

while maintaining a high level of robustness.

Applications. This decomposition methodology has been applied to large scale

aerospace problems like high temperature and pressure aircraft engine components

[29,33], aircraft brake component optimization [34], and aerospace vehicle design

[26,35].









Analytical Target Cascading (ATC)

Overview. Analytical Target Cascading was introduced by Michelena et al. [36] in

1999, and developed further by Kim [37] as a product development tool. This method is

typically used to solve object based decomposed system optimization problems.

Tosserams et al. [38] introduced a Lagrangian relaxation strategy which in some cases

improves the computational efficiency of this method by several orders of magnitude.

Methodology. ATC is a strategy which coordinates hierarchically decomposed

systems or elements of a problem (see Figure 4) by the introduction of target and

response coupling variables. Targets are set by parent elements which are met by

responses from the children elements in the hierarchy (see Figure 5 obtained from [38]).

At each element an optimization problem is formulated to find local variables, parent

responses and child targets which minimize a penalized discrepancy function, while

meeting local constraints. The responses are rebalanced up to higher levels by iteratively

changing targets in a nested loop in order to obtain consistency. Several coordination

strategies are available to determine the sequence of solving the sub-problems and order

of exchange in targets and responses [39]. Proof of convergence is also presented for

some of these classes of approaches in [39].

Refinements on method. Tosserams et al. [38] introduced the use of augmented

Lagrangian relaxation in order to reduce the computational cost associated with obtaining

very accurate agreement between sub-problems, and the coordination effort at the inner

loop of the method. Allison et al. exploited the complimentary nature of ATC and CO to

obtain an optimization formulation called nested ATC-MDO [40]. Kokkolaras et al.

extended the formulation of ATC to include the design of product families [41].







16



Element index
----


i=I



/=2


i= 3 l j= L

Level index i
Figure 4 Example hierarchical problem structure


Optimization inputs



from parent:
targets






from children:
responses


r(+l) k


t(+1) k


Optimization outputs



from parent:
responses






to children:
responses


Figure 5 Sub-problem information flow

Solution quality and computational efficiency. Michelena et al. [39] prove, under

certain convexity assumptions that the ATC process will yield the optimal solution of the

original design target problem. The original ATC formulation had the twofold problem of

requiring large penalty weights to accurate solutions, and the excessive repeat in the inner

loop which solves sub-problems before the outer loop can proceed. Both these problems

are addressed by using augmented Lagrangian relaxation [38] by Tosserams et al., which

reported a decrease in computational effort on the order of between orders 10 and 10000.









Applications. The ATC strategy is applied the design of structural members and a

electric water pump in [40], and automotive design [42,43]. The performance of the

Augmented Lagrangian Relaxation ATC enhancement was tested using several geometric

programming problems [38].

Quasiseparable Decomposition and Optimization

The quasiseparable decomposition and optimization strategy will be the focus of

the research proposed in Chapter 6. This methodology addresses a class of problems

common in the field of engineering and can be applied to a wide class of structural,

biomechanical and other disciplinary problems. The strategy, which will be explained in

detail in Chapter 6, is based on a two-level optimization approach which allows for the

global search effort to be concentrated at the lower level sub-problem optimizations. The

system to be optimized is decomposed in to several lower level subsystem optimizations

which are coordinated by an upper level optimization. A Sequential Quadratic

Programming (SQP) based optimizer is applied in this optimization infrastructure to

solve a example structural sizing problem. This example problem entails the

maximization of the tip displacement of a hollow stepped cantilever beam with 5

sections. The quasiseparable decomposition methodology is applied to decompose the

structure into several sub-problems of reduced dimensionality. The parallelism with this

strategy is achieved by optimizing the independent sub-problems (sections of the stepped

beam) concurrently, allowing for the utilization of parallel processing resources.














CHAPTER 3
GLOBAL OPTIMIZATION THROUGH THE PARTICLE SWARM ALGORITHM

Overview

This chapter introduces the population based algorithm which will be the target for

investigating parallelism throughout the manuscript. This stochastic algorithm mimics

swarming or flocking behavior found in animal groups such as bees, fish and birds. The

swarm of particles is basically several parallel individual searches that are influenced by

individual and swarm memory of regions in the design space with high fitness. These

positions of the regions are constantly updated and reported to the individuals in the

swarm through a simple communication model. This model allows for the algorithm to be

easily decomposed into concurrent processes, each representing an individual particle in

the swarm. This approach to allow parallelism will be detailed in Chapter 4.

For the purpose of illustrating the performance of the PSO, it is compared to

several other algorithms commonly used when solving problems in biomechanics. This

comparison is made through optimization of several analytical problems, and a

biomechanical system identification problem. The focus of the research presented in this

chapter is to demonstrate that the PSO has good properties such as insensitivity to the

scaling of design variables and very few algorithm parameters to fine tune. These make

the PSO a valuable addition in the arsenal of optimization methods in biomechanical

optimization, in which it has never been applied before.

The work presented in this Chapter was in collaboration with Jeff Reinbold, who

supplied the biomechanical test problem [58] and Byung II Koh, who developed the









parallel SQP and BFGS algorithms [77] used to establish a computational efficiency

comparison. The research in this Chapter was also published in [58,76,129]. Thanks goes

to Soest and Casius for their willingness to share their numerical results published in

[48].

Introduction

Optimization methods are used extensively in biomechanics research to predict

movement-related quantities that cannot be measured experimentally. Forward dynamic,

inverse dynamic, and inverse static optimizations have been used to predict muscle,

ligament, and joint contact forces during experimental or predicted movements (e.g., see

references [44-55]). System identification optimizations have been employed to tune a

variety of musculoskeletal model parameters to experimental movement data (e.g., see

references [56-60]). Image matching optimizations have been performed to align implant

and bone models to in vivo fluoroscopic images collected during loaded functional

activities (e.g., see references [61-63]).

Since biomechanical optimization problems are typically nonlinear in the design

variables, gradient-based nonlinear programming has been the most widely used

optimization method. The increasing size and complexity of biomechanical models has

also led to parallelization of gradient-based algorithms, since gradient calculations can be

easily distributed to multiple processors [44-46]. However, gradient-based optimizers can

suffer from several important limitations. They are local rather than global by nature and

so can be sensitive to the initial guess. Experimental or numerical noise can exacerbate

this problem by introducing multiple local minima into the problem. For some problems,

multiple local minima may exist due to the nature of the problem itself. In most

situations, the necessary gradient values cannot be obtained analytically, and finite









difference gradient calculations can be sensitive to the selected finite difference step size.

Furthermore, the use of design variables with different length scales or units can produce

poorly scaled problems that converge slowly or not at all [64,65], necessitating design

variable scaling to improve performance.

Motivated by these limitations and improvements in computer speed, recent studies

have begun investigating the use of non-gradient global optimizers for biomechanical

applications. Neptune [47] compared the performance of a simulated annealing (SA)

algorithm with that of downhill simplex (DS) and sequential quadratic programming

(SQP) algorithms on a forward dynamic optimization of bicycle pedaling utilizing 27

design variables. Simulated annealing found a better optimum than the other two methods

and in a reasonable amount of CPU time. More recently, Soest and Casius [48] evaluated

a parallel implementation of a genetic algorithm (GA) using a suite of analytical tests

problems with up to 32 design variables and forward dynamic optimizations of jumping

and isokinetic cycling with up to 34 design variables. The genetic algorithm generally

outperformed all other algorithms tested, including SA, on both the analytical test suite

and the movement optimizations.

This study evaluates a recent addition to the arsenal of global optimization methods

- particle swarm optimization (PSO) for use on biomechanical problems. A recently-

developed variant of the PSO algorithm is used for the investigation. The algorithm's

global search capabilities are evaluated using a previously published suite of difficult

analytical test problems with multiple local minima [48], while its insensitivity to design

variable scaling is proven mathematically and verified using a biomechanical test

problem. For both categories of problems, PSO robustness, performance, and scale-









independence are compared to that of three off-the-shelf optimization algorithms a

genetic algorithm (GA), a sequential quadratic programming algorithm (SQP), and the

BFGS quasi-Newton algorithm. In addition, previously published results [48] for the

analytical test problems permit comparison with a more complex GA algorithm (GA*), a

simulated annealing algorithm (SA), a different SQP algorithm (SQP*), and a downhill

simplex (DS) algorithm.

Theory

Particle Swarm Algorithm.

Particle swarm optimization is a stochastic global optimization approach introduced

by Kennedy and Eberhart [66]. The method's strength lies in its simplicity, being easy to

code and requiring few algorithm parameters to define convergence behavior. The

following is a brief introduction to the operation of the particle swarm algorithm based on

a recent implementation by Groenwold and Fourie [67] incorporating dynamic inertia and

velocity reduction.

Consider a swarm ofp particles, where each particle's position x' represents a

possible solution point in the problem design space D. For each particle i, Kennedy and

Eberhart [66] proposed that the position x'k+ be updated in the following manner:

xk+1 = Xk + Vk+1 (3.1)
with a pseudo-velocity vk+1 calculated as follows:


vk +1= kVkck cl ( -x) +cr k (g -X ) (3.2)
Here, subscript k indicates a (unit) pseudo-time increment. The point p' is the best-

found cost location by particle i up to timestep k, which represents the cognitive

contribution to the search vector vk+. Each component of vk+1 is constrained to be less









than or equal to a maximum value defined in vma The point gk is the global best-found

position among all particles in the swarm up to time k and forms the social contribution to

the velocity vector. Cost function values associated with p' and gk are denoted by fb,

and fbg respectively. Random numbers rl and r2 are uniformly distributed in the interval

[0,1]. Shi and Eberhart [68] proposed that the cognitive and social scaling parameters c,

and c2 be selected such that c, = c2 = 2 to allow the product czr, or c2r2 to have a mean of

1. The result of using these proposed values is that the particles overshoot the attraction

points p' and gk half the time, thereby maintaining separation in the group and allowing

a greater area to be searched than if the particles did not overshoot. The variable wk, set

to 1 at initialization, is a modification to the original PSO algorithm [66]. By reducing its

value dynamically based on the cost function improvement rate, the search area is

gradually reduced [69]. This dynamic reduction behavior is defined by wd, the amount

by which the inertia wk is reduced, vd, the amount by which the maximum velocity vm+a

is reduced, and d, the number of iterations with no improvement in gk before these

reduction take place [67] (see algorithm flow description below).

Initialization of the algorithm involves several important steps. Particles are

randomly distributed throughout the design space, and particle velocities v' are

initialized to random values within the limits 0 < v' < vm The particle velocity upper

limit vmax is calculated as a fraction of the distance between the upper and lower bound

on variables in the design space vmax = K(x, x,) with K = 0.5 as suggested in [69].

Iteration counters k and t are set to 0. Iteration counter k is used to monitor the total









number of swarm iterations, while iteration counter t is used to monitor the number of

swarm iterations since the last improvement in g Thus, t is periodically reset to zero

during the optimization while k is not.



The algorithm flow can be represented as follows:


1. Initialize

a. Set constants KC, C1, C2, kmax, Ivmax 0V, v wd, and d

b. Set counters k = 0, t = 0. Set random number seed.

c. Randomly initialize particle positions x' E D in 91 for i = 1,..., p

d. Randomly initialize particle velocities 0 < v, < vmax for i = 1,...,p

e. Evaluate cost function values fj using design space coordinates x' for
i=l,...,p

f Set fb,,, = j and p0 = x0 for i = ,..., p

g. Set f,, to best fe,, and go to corresponding xo

2. Optimize

h. Update particle velocity vectors vk1 using Eq. (3.2)

i. If vk+ > vm for any component, then set that component to its maximum
allowable value

j. Update particle position vectors xk,+ using Eq. (3.1)

k. Evaluate cost function values fk' using design space coordinates xk,+ for
i= ,...,p

1. If f+l < fdest, then f/st = f/+, p.+, = xk+ for i= 1,..., p









m. If f < f~st, then fbst = f-+, gk+ = xk+' for i = 1..., p

n. If fg, was improved in (e), then reset t = 0. Else increment t

o. If maximum number of function evaluations is exceeded, then go to 3

p. If t= d, then multiply wk+ by ( wd) and vk by (1- v )

q. Increment k

r. Go to 2(a).

3. Report results

4. Terminate

This algorithm was coded in the C programming language by the author [70] and

used for all PSO analyses performed in the study. A standard population size of 20

particles was used for all runs, and other algorithm parameters were also selected based

on standard recommendations (Table 1) [70-72]. The C source code for our PSO

algorithm is freely available at http://www.mae.ufl.edu/-fregly/downloads/pso.zip (last

accessed 12/2005).

Table 1 Standard PSO algorithm parameters used in the study
Parameter Description Value
p Population size (number of particles) 20
cl Cognitive trust parameter 2.0
c2 Social trust parameter 2.0
wo Initial inertia 1
Wd Inertia reduction parameter 0.01
K Bound on velocity fraction 0.5
Vd Velocity reduction parameter 0.01
d Dynamic inertia/velocity reduction delay (function 200
evaluations)


Analysis of Scale Sensitivity.

One of the benefits of the PSO algorithm is its insensitivity to design variable

scaling. To prove this characteristic, we will use a proof by induction to show that all









particles follow an identical path through the design space regardless of how the design

variables are scaled. In actual PSO runs intended to investigate this property, use of the

same random seed in scaled and unsealed cases will ensure that an identical sequence of

random r, and r2 values are produced by the computer throughout the course of the

optimization.

Consider an optimization problem with n design variables. An n-dimensional

constant scaling vector C can be used to scale any or all dimensions of the problem

design space:



C2
C= 3 (3.3)



We wish to show that for any time step k > 0,

vk = vk, (3.4)
xk = xk (3.5)
where xk and vk (dropping superscript i) are the unsealed position and velocity,

respectively, of an individual particle and xk = Cxk and vk = 'vk are the corresponding

scaled versions.

First, we must show that our proposition is true for the base case, which involves

initialization (k = 0) and the first time step (k = 1). Applying the scaling vector C to an

individual particle position xo during initialization produces a scaled particle position x :

x' = x0 (3.6)
where the right hand side is a component-by-component product of vectors 4 and

x0 This implies that









P' = Po, g' = g, (3.7)
In the unsealed case, the pseudo-velocity is calculated as

v0 = K(X xLB) (3.8)
In the scaled case, this becomes

v, = (x' x'
= 0 (X' xLB)
=K(XUB XLB)(3.9)

= [(XUB -XLB)]
= 5v,
-V0
From Eqs. (3.1) and (3.2) and these initial conditions, the particle pseudo-velocity

and position for the first time step can be written as

v,= wo vo+c1ir (po-Xo)+c2r (g- Xo) (3.10)
x1 = x0 +V (3.11)
in the unsealed case and

v' = w v + c1 (p' -x')+Cr, (g' -x)

= wWoVo + cir (5po Xo ) + Czrz (go Xo )


= 5v,
=x + v

= x0 + Iv1
= 4[XO + v1]
= ;X

in the scaled case. Thus, our proposition is true for the base case.

Next, we must show that our proposition is true for the inductive step. If we assume

our proposition holds for any time step k =j, we must prove that it also holds for time

step k =j + 1. We begin by replacing subscript k with subscriptj in Eqs. (3.4) and (3.5). If

we then replace subscript 0 with subscriptj and subscript 1 with subscriptj + 1 in Eqs.

(3.12) and (3.13), we arrive at Eqs. (3.4) and (3.5) where subscript k is replaced by

subscripts + 1. Thus, our proposition is true for any time step + 1.









Consequently, since the base case is true and the inductive step is true, Eqs. (3.4)

and (3.5) are true for all k> 0. From Eqs. (3.4) and (3.5), we can conclude that any linear

scaling of the design variables (or subset thereof) will have no effect on the final or any

intermediate result of the optimization, since all velocities and positions are scaled

accordingly. This fact leads to identical step intervals being taken in the design space for

scaled and unsealed version of the same problem, assuming infinite precision in all

calculations.

In contrast, gradient-based optimization methods are often sensitive to design

variable scaling due to algorithmic issues and numerical approximations. First derivative

methods are sensitive because of algorithmic issues, as illustrated by a simple example.

Consider the following minimization problem with two design variables (x,y) where the

cost function is

2
x2 + (3.14)
100
with initial guess (1,1). A scaled version of the same problem can be created by

letting x = x,y = y /10 so that the cost function becomes

X2 +y2 (3.15)
with initial guess (1,10). Taking first derivatives of each cost function with respect

to the corresponding design variables and evaluating at the initial guesses, the search

direction for the unsealed problem is along a line rotated 5.70 from the positive x axis and

for the scaled problem along a line rotated 45. To reach the optimum in a single step, the

unsealed problem requires a search direction rotated 84.3 and the scaled problem 45.

Thus, the scaled problem can theoretically reach the optimum in a single step while the

unsealed problem cannot due to the effect of scaling on the calculated search direction.









Second derivative methods are sensitive to design variable scaling because of

numerical issues related to approximation of the Hessian (second derivative) matrix.

According to Gill et al. [64], Newton methods utilizing an exact Hessian matrix will be

insensitive to design variable scaling as long as the Hessian matrix remains positive

definite. However, in practice, exact Hessian calculations are almost never available,

necessitating numerical approximations via finite differencing. Errors in these

approximations result in different search directions for scaled versus unsealed versions of

the same problem. Even a small amount of design variable scaling can significantly affect

the Hessian matrix so that design variable changes of similar magnitude will not produce

comparable magnitude cost function changes [64]. Common gradient-based algorithms

that employ an approximate Hessian include Newton and quasi-Newton nonlinear

programming methods such as BFGS, SQP methods, and nonlinear least-squares methods

such as Levenberg-Marquardt [64]. A detailed discussion of the influence of design

variable scaling on optimization algorithm performance can be found in Gill et al. [64].

Methodology

Optimization Algorithms

In addition to our PSO algorithm, three off-the-shelf optimization algorithms were

applied to all test problems (analytical and biomechanical see below) for comparison

purposes. One was a global GA algorithm developed by Deb [73-75]. This basic GA

implementation utilizes one mutation operator and one crossover operator along with real

encoding to handle continuous variables. The other two algorithms were commercial

implementations of gradient-based SQP and BFGS algorithms (VisualDOC,

Vanderplaats R & D, Colorado Springs, CO).









All four algorithms (PSO, GA, SQP, and BFGS) were parallelized to accommodate

the computational demands of the biomechanical test problem. For the PSO algorithm,

parallelization was performed by distributing individual particle function evaluations to

different processors as detailed by the author in [76]. For the GA algorithm, individual

chromosome function evaluations were parallelized as described in [48]. Finally, for the

SQP and BFGS algorithms, finite difference gradient calculations were performed on

different processors as outlined by Koh et al. in [77]. A master-slave paradigm using the

Message Passing Interface (MPI) [3,4] was employed for all parallel implementations.

Parallel optimizations for the biomechanical test problem were run on a cluster of Linux-

based PCs in the University of Florida High-performance Computing and Simulation

Research Laboratory (1.33 GHz Athlon CPUs with 256MB memory on a 100Mbps

switched Fast Ethernet network).

While the PSO algorithm used standard algorithm parameters for all optimization

runs, minor algorithm tuning was performed on the GA, SQP, and BFGS algorithms for

the biomechanical test problem. The goal was to give these algorithms the best possible

chance for success against the PSO algorithm. For the GA algorithm, preliminary

optimizations were performed using population sizes ranging from 40 to 100. It was

found that for the specified maximum number of function evaluations, a population size

of 60 produced the best results. Consequently, this population size was used for all

subsequent optimization runs (analytical and biomechanical). For the SQP and BFGS

algorithms, automatic tuning of the finite difference step size (FDSS) was performed

separately for each design variable. At the start of each gradient-based run, forward and

central difference gradients were calculated for each design variable beginning with a









relative FDSS of 10-1. The step size was then incrementally decreased by factors often

until the absolute difference between forward and central gradient results was a

minimum. This approach was taken since the amount of noise in the biomechanical test

problem prevented a single stable gradient value from being calculated over a wide range

of FDSS values (see Discussion). The forward difference step size automatically selected

for each design variable was used for the remainder of the run.

Analytical Test Problems

The global search capabilities of our PSO implementation were evaluated using a

suite of difficult analytical test problems previously published by Soest and Casius [48].

In that study, each problem in the suite was evaluated using four different optimizers: SA,

GA*, SQP*, and DS, where a star indicates a different version of an algorithm used in

our study. One thousand optimization runs were performed with each optimizer starting

from random initial guesses and using standard optimization algorithm parameters. Each

run was terminated based on a pre-defined number of function evaluations for the

particular problem being solved. We followed an identical procedure with our four

algorithms to permit comparison between our results and those published by Soest and

Casius in [48]. Since two of the algorithms used in our study (GA and SQP) were of the

same general category as algorithms used by Soest and Casius in [48] (GA* and SQP*),

comparisons could be made between different implementations of the same general

algorithm. Failed PSO and GA runs were allowed to use up the full number of function

evaluations, whereas failed SQP and BFGS runs were re-started from new random initial

guesses until the full number of function evaluations was completed. Only 100 rather

than 1000 runs were performed with the SQP and BFGS algorithms due to a database

size problem in the VisualDOC software.









A detailed description of the six analytical test problems can be found in Soest and

Casius [48]. Since the design variables for each problem possessed the same absolute

upper and lower bound and appeared in the cost function in a similar form, design

variable scaling was not an issue in these problems. The six analytical test problems are

described briefly below.

H1: This simple 2-dimensional function [48] has several local maxima and a global

maximum of 2 at the coordinates (8.6998, 6.7665).


8 8
HI(x,,x2) =2sin2 2 (3.16)
d+1
x,, x2 [- 100,100]
where

d = j(x 8.6998)2+ (x2 -6.7665)2
Ten thousand function evaluations were used for this problem.

H, : This inverted version of the F6 function used by Schaffer et al. [78] has 2

dimensions with several local maxima around the global maximum of 1.0 at (0,0).

sin2( X2 + 0.5
H2(x,,x2) =0.5 -(X- 0- (3.17)
1+0001 (x+x

x,,x,2 [-100,100]
This problem was solved using 20,000 function evaluations per optimization run.

H3 : This test function from Corana et al. [79] was used with dimensionality n = 4,

8, 16, and 32. The function contains a large number of local minima (on the order of

104") with a global minimum of 0 at x, <0.05.










H ) (t'sgn(z,)+z+)2 x c.d ifoeiX-Z H .x...x..,' Y(3.18)
di x, otherwise
x, [-1000,1000]
where


z = +0.49999 sgn(x s, c = 0.15,

1
1 i = 1,5,9,...
1000 i = 2,6,10,...
s= 0.2, t =0.05, and d, = 0 ,,1,.
10 i 3,7,11,...
100 i =4,8,12,...

The use of the floor function in Eq. (3.18) makes the search space for this problem

the most discrete of all problems tested. The number of function evaluations used for this

problem was 50,000 (n = 4), 100,000 (n = 8), 200,000 (n = 16), and 400,000 (n = 32).

For all of the analytical test problems, an algorithm was considered to have

succeeded if it converged to within 10-3 of the known optimum cost function value within

the specified number of function evaluations [48].

Biomechanical Test Problem

In addition to these analytical test problems, a biomechanical test problem was used

to evaluate the scale-independent nature of the PSO algorithm. Though our PSO

algorithm is theoretically insensitive to design variable scaling, numerical round-off

errors and implementation details could potentially produce a scaling effect. Running the

other three algorithms on scaled and unsealed versions of this test problem also permitted

investigation of the extent to which other algorithms are influenced by design variable

scaling.

The biomechanical test problem involved determination of an ankle joint kinematic

model that best matched noisy synthetic (i.e., computer generated) movement data.











Similar to that used by van der Bogert et al. [56], the ankle was modeled as a three-


dimensional linkage with two non-intersecting pin joints defined by 12 subject-specific


parameters (Figure 6).


Y


Tibia

r Pe


^ -


Center


N 7


Y
(superior)




z x
(lateral) Lab (anterior)


Figure 6 Joint locations and orientations in the parametric ankle kinematic model. Each
pi (i = 1,...,12) represents a different position or orientation parameter in the
model









These parameters represent the positions and orientations of the talocrural and

subtalar joint axes in the tibia, talus, and calcaneous. Position parameters were in units of

centimeters and orientation parameters in units of radians, resulting in parameter values

of varying magnitude. This model was part of a larger 27 degree-of-freedom (DOF) full-

body kinematic model used to optimize other joints as well [58].

Given this model structure, noisy synthetic movement data were generated from a

nominal model for which the "true" model parameters were known. Joint parameters for

the nominal model along with a nominal motion were derived from in vivo experimental

movement data using the optimization methodology described below. Next, three

markers were attached to the tibia and calcaneous segments in the model at locations

consistent with the experiment, and the 27 model DOFs were moved through their

nominal motions. This process created synthetic marker trajectories consistent with the

nominal model parameters and motion and also representative of the original

experimental data. Finally, numerical noise was added to the synthetic marker trajectories

to emulate skin and soft tissue movement artifacts. For each marker coordinate, a

sinusoidal noise function was used with uniformly distributed random period, phase, and

amplitude (limited to a maximum of+ 1 cm). The values of the sinusoidal parameters

were based on previous studies reported in the literature [80,53].

An unconstrained optimization problem with bounds on the design variables was

formulated to attempt to recover the known joint parameters from the noisy synthetic

marker trajectories. The cost function was

min f(p) (3.19)
p
with









50 6 3
f(p) = min (ck k (p, q)) (3.20)
k=l q j=1 1=1
where p is a vector of 12 design variables containing the joint parameters, q is a

vector of 27 generalized coordinates for the kinematic model, cijk is the ith coordinate of

synthetic marker at time frame k, and yk (p, q) is the corresponding marker coordinate

from the cinematic model. At each time frame, ck (p, q) was computed from the current

model parameters p and an optimized model configuration q. A separate Levenberg-

Marquardt nonlinear least-squares optimization was performed for each time frame in Eq.

(3.20) to determine this optimal configuration. A relative convergence tolerance of 10-3

was chosen to achieve good accuracy with minimum computational cost. A nested

optimization formulation (i.e., minimization occurs in Eqs. (3.19) and (3.20)) was used to

decrease the dimensionality of the design space in Eq. (3.19). Equation (3.20) was coded

in Matlab and exported as stand-alone C code using the Matlab Compiler (The

Mathworks, Natick, MA). The executable read in a file containing the 12 design variables

and output a file containing the resulting cost function value. This approach facilitated the

use of different optimizers to solve Eq. (3.19).

To investigate the influence of design variable scaling on optimization algorithm

performance, two versions of Eq. (3.20) were generated. The first used the original units

of centimeters and radians for the position and orientation design variables respectively.

Bounds on the design variables were chosen to enclose a physically realistic region

around the solution point in design space. Each position design variable was constrained

to remain within a cube centered at the midpoint of the medial and lateral malleoli, where

the length of each side was equal to the distance between the malleoli (i.e., 11.32 cm).

Each orientation design variable was constrained to remain within a circular cone defined









by varying its "true" value by + 300. The second version normalized all 12 design

variables to be within [-1,1] using

norm 2x XUB XLB (3.21)
XUB XLB
where x and xL denote the upper and lower bounds, respectively, on the design

variable vector [81].

Two approaches were used to compare PSO scale sensitivity to that of the other

three algorithms. For the first approach, a fixed number of scaled and unsealed runs (10)

were performed with each optimization algorithm starting from different random initial

seeds, and the sensitivity of the final cost function value to algorithm choice and design

variable scaling was evaluated. The stopping condition for PSO and GA runs was 10,000

function evaluations, while SQP and BFGS runs were terminated when a relative

convergence tolerance of 10-5 or absolute convergence tolerance of 10-6 was met. For the

second approach, a fixed number of function evaluations (10,000) were performed with

each algorithm to investigate unsealed versus scaled convergence history. A single

random initial guess was used for the PSO and GA algorithms, and each algorithm was

terminated once it reached 10,000 function evaluations. Since individual SQP and BFGS

runs require much fewer than 10,000 function evaluations, repeated runs were performed

with different random initial guesses until the total number of function evaluations

exceeded 10,000 at the termination of a run. This approach essentially uses SQP and

BFGS as global optimizers, where the separate runs are like individual particles that

cannot communicate with each another but have access to local gradient information.

Finite difference step size tuning at the start of each run was included in the computation

of number of function evaluations. Once the total number of runs required to reach









10,000 function evaluations was known, the lowest cost function value from all runs at

each iteration was used to represent the cost over a range of function evaluations equal to

the number of runs.

Results

For the analytical test problems, our PSO algorithm was more robust than our GA,

SQP, and BFGS algorithms (Table 2, top half). PSO converged to the correct global

solution nearly 100% of the time on four of the six test problems (H1 and H3 with n = 4,

8, and 16). It converged 67% of the time for problem H2 and only 1.5% of the time for

problem H3 with n = 32. In contrast, none of the other algorithms converged more than

32% of the time on any of the analytical test problems. Though our GA algorithm

typically exhibited faster initial convergence than did our PSO algorithm (Figure 7, left

column), it leveled off and rarely reached the correct final point in design space within

the specified number of function evaluations.

Table 2 Fraction of successful optimizer runs for the analytical test problems. Top
half: Results from the PSO, GA, SQP, and BFGS algorithms used in the
present study. Bottom half: Results from the SA, GA, SQP, and DS
algorithms used in Soest and Casius 48. The GA and SQP algorithms used in
that study were different from the ones used in our study. Successful runs
were identified by a final cost function value within of the known optimum
value, consistent with [48]
H3
Study Algorithm H1 H2 (n = 4) (n = 8) (n = 16) (n = 32)
PSO 0.972 0.688 1.000 1.000 1.000 0.015
GA 0.000 0.034 0.000 0.000 0.000 0.002
Present
SQP 0.09 0.11 0.00 0.00 0.00 0.00
BFGS 0.00 0.32 0.00 0.00 0.00 0.00
Soest SA 1.000 0.027 0.000 0.001 0.000 0.000
and GA 0.990 0.999 1.000 1.000 1.000 1.000
Casius SQP 0.279 0.810 0.385 0.000 0.000 0.000
(2003) DS 1.000 0.636 0.000 0.000 0.000 0.000












a Present Study
o100n -_ _


1U-
-- PSO
S--GA
10-1 SQP
.---.BFGS
10-2 -' _-- --- ----


0 5 10 15 20
xC 103
41C


1 2 3 4 5
_ x 104


0 1 2 3 4
x 105
Number of Function Evaluations


Soest & Casius (2003)








1 2 4 6 8 10
x103


Iu I


x 104









3 1 2 3 4
x 10'
Number of Function Evaluations


Figure 7 Comparison of convergence history results for the analytical test problems.
Left column: Results from the PSO, GA, SQP, and BFGS algorithms used in
the present study. Right column: Results from the SA, GA, SQP, and DS
algorithms used in Soest and Casius [5]. The GA and SQP algorithms used in
that study were different from the ones used in our study. (a) Problem H1. The
SA results have been updated using corrected data provided by Soest and
Casius, since the results in 48 accidentally used a temperature reduction rate
of 0.5 rather than the standard value of 0.85 as reported. (b) Problem H-2. (c)
Problem H3 with n = 4. (d) Problem H3 with n = 32. Error was computed
using the known cost at the global optimum and represents the average of
1000 runs (100 multi-start SQP and BFGS runs in our study) with each
algorithm.



















3000 8000
SQP BFGS
2250 6000
m Unsealed
S1500 u Scaled 4000
750 i n 2000
0 0
12345678910 12345678910
Run Run


Figure 8 Final cost function values for ten unsealed (dark bars) and scaled (gray bars)
parallel PSO, GA, SQP, and BFGS runs for the biomechanical test problem.
Each pair of unsealed and scaled runs was started from the same initial
points) in design space, and each run was terminated when the specified
stopping criteria was met (see text).

In contrast, the SQP and BFGS algorithms were highly sensitive to design variable

scaling in the biomechanical test problem. For the ten trials, unsealed and scaled SQP or

BFGS runs rarely converged to similar points in design space (note y axis scale in Figure

8) and produced large differences in final cost function value from one trial to the next

(Figure 8c and d). Scaling improved the final result in seven out often SQP trials and in

five often BFGS trials. The best unsealed and scaled SQP final cost function values were

255 and 121, respectively, while those of BFGS were 355 and 102 (Table 3). Thus,

scaling yielded the best result found with both algorithms. The best SQP and BFGS trials

generally produced larger RMS marker distance errors (up to two times worse),

orientation parameter errors (up to five times worse), and position parameter errors (up to

six times worse) than those found by PSO or GA.







40


Table 3 Final cost function values and associated marker distance and joint parameter
root-mean-square (RMS) errors after 10,000 function evaluations performed
by multiple unsealed and scaled PSO, GA, SQP, and BFGS runs. See Figure 9
for corresponding convergence histories

RMS Error
Cost Marker Orientation Position
Optimizer Formulation Function Distances Parameters Parameters
(mm) (deg) (mm)
Unscaled 69.5 5.44 2.63 4.47
PSOt
Scaled 69.5 5.44 2.63 4.47
GA Unscaled 77.9 5.78 2.65 6.97
Scaled 74.0 5.64 3.76 4.01
Unscaled 255 10.4 3.76 14.3
Scaled 121 7.21 3.02 9.43
Unscaled 69.5 5.44 2.63 4.47
BFGS 69.5 5.44 2.634.47
Scaled 69.5 5.44 2.63 4.47


2000 4000 6000
Number of Function Evaluation


d: Black lines
3ray lines

8000 10000
s


Figure 9 Convergence history for unsealed (dark lines) and scaled (gray lines) parallel
PSO, GA, SQP, and BFGS runs for the biomechanical test problem. Each
algorithm was run terminated after 10,000 function evaluations. Only one
unsealed and scaled PSO and GA run were required to reach 10,000 function
evaluations, while repeated SQP and BFGS runs were required to reach that
number. Separate SQP and BFGS runs were treated like individual particles in
a single PSO run for calculating convergence history (see text).


-- PSO
----GA
------ SQP
S- BFGS

-- -------- ------ --

\- - - -


Unscalec
Scaled:


1011
0









Discussion

This chapter evaluated a recent variation of the PSO algorithm with dynamic inertia

and velocity updating as a possible addition to the arsenal of methods that can be applied

to difficult biomechanical optimization problems. For all problems investigated, our PSO

algorithm with standard algorithm parameters performed better than did three off-the-self

optimizers GA, SQP, and BFGS. For the analytical test problems, PSO robustness was

found to be better than that of two other global algorithms but worse than that of a third.

For the biomechanical test problem with added numerical noise, PSO was found to be

insensitive to design variable scaling while GA was only mildly sensitive and SQP and

BFGS highly sensitive. Overall, the results suggest that our PSO algorithm is worth

consideration for difficult biomechanical optimization problems, especially those for

which design variable scaling may be an issue.

Though our biomechanical optimization involved a system identification problem,

PSO may be equally applicable to problems involving forward dynamic, inverse

dynamic, inverse static, or image matching analyses. Other global methods such as SA

and GA have already been applied successfully to such problems [47,48,62], and there is

no reason to believe that PSO would not perform equally well. As with any global

optimizer, PSO utilization would be limited by the computational cost of function

evaluations given the large number required for a global search.

Our particle swarm implementation may also be applicable to some large-scale

biomechanical optimization problems. Outside the biomechanics arena [71,72,82-91],

PSO has been used to solve problems on the order of 120 design variables [89-91]. In the

present study, our PSO algorithm was unsuccessful on the largest test problem, H3 with n

= 32 design variables. However, in a recent study, our PSO algorithm successfully solved









the Griewank global test problem with 128 design variables using population sizes

ranging from 16 to 128 76. When the Corana test problem (H3) was attempted with 128

DVs, the algorithm exhibited worse convergence. Since the Griewank problem possesses

a bumpy but continuous search space and the Corana problem a highly discrete search

space, our PSO algorithm may work best on global problems with a continuous search

space. It is not known how our PSO algorithm would perform on biomechanical

problems with several hundred DVs, such as the forward dynamic optimizations of

jumping and walking performed with parallel SQP in [44-46].

One advantage of global algorithms such as PSO, GA, and SA is that they often do

not require significant algorithm parameter tuning to perform well on difficult problems.

The GA used by Soest and Casius in [48] (which is not freely available) required no

tuning to perform well on all of these particular analytical test problems. The SA

algorithm used by Soest and Casius in [48] required tuning of two parameters to improve

algorithm robustness significantly on those problems. Our PSO algorithm (which is freely

available) required tuning of one parameter (wd, which was increased from 1.0 to 1.5) to

produce 100% success on the two problems where it had significant failures. For the

biomechanical test problem, our PSO algorithm required no tuning, and only the

population size of our GA algorithm required tuning to improve convergence speed.

Neither algorithm was sensitive to the two sources of noise present in the problem noise

added to the synthetic marker trajectories, and noise due to a somewhat loose

convergence tolerance in the Levenberg-Marquardt sub-optimizations. Thus, for many

global algorithm implementations, poor performance on a particular problem can be

rectified by minor tuning of a small number of algorithm parameters.










2



[ 10

S\ -- Forward le-3
S o -0- Central le-3
-i- Forward le-6
-0- Central le-6

10-6 10-5 10 10-3 10-2 10- 100
Finite Difference Step Size


Figure 10 Sensitivity of gradient calculations to selected finite difference step size for
one design variable. Forward and central differencing were evaluated using
relative convergence tolerances of 10-3 and 10-6 for the nonlinear least squares
sub-optimizations performed during cost function evaluation (see Eq. (3.20)).

In contrast, gradient-based algorithms such as SQP and BFGS can require a

significant amount of tuning even to begin to approach global optimizer results on some

problems. For the biomechanical test problem, our SQP and BFGS algorithms were

highly tuned by scaling the design variables and determining the optimal FDSS for each

design variable separately. FDSS tuning was especially critical due to the two sources of

noise noted above. When forward and central difference gradient results were compared

for one of the design variables using two different Levenberg-Marquardt relative

convergence tolerances (10-3 and 10-6), a "sweet spot" was found near a step size of 10-2

(Figure 10). Outside of that "sweet spot," which was automatically identified and used in

generating our SQP and BFGS results, forward and central difference gradient results

diverged quickly when the looser tolerance was used. Since most users of gradient-based

optimization algorithms do not scale the design variables or tune the FDSS for each

design variable separately, and many do not perform multiple runs, our SQP and BFGS









results for the biomechanical test problem represent best-case rather than typical results.

For this particular problem, an off-the-shelf global algorithm such as PSO or GA is

preferable due to the significant reduction in effort required to obtain repeatable and

reliable solutions.

Another advantage of PSO and GA algorithms is the ease with which they can be

parallelized [48,76] and their resulting high parallel efficiency. For our PSO algorithm,

Schutte et al. [76] recently reported near ideal parallel efficiency for up to 32 processors.

Soest and Casius [48] reported near ideal parallel efficiency for their GA algorithm with

up to 40 processors. Though SA has historically been considered more difficult to

parallelize [92], Higginson et al. [93] recently developed a new parallel SA

implementation and demonstrated near ideal parallel efficiency for up to 32 processors.

In contrast, Koh et al. [77] reported poor SQP parallel efficiency for up to 12 processors

due to the sequential nature of the line search portion of the algorithm.

The caveat for these parallel efficiency results is that the time required per function

evaluation was approximately constant and the computational nodes were homogeneous.

As shown in [76], when function evaluations take different amounts of time, parallel

efficiency of our PSO algorithm (and any other synchronous parallel algorithm, including

GA, SA, SQP, and BFGS) will degrade with increasing number of processors.

Synchronization between individuals in the population or between individual gradient

calculations requires slave computational nodes that have completed their function

evaluations to sit idle until all nodes have returned their results to the master node.

Consequently, the slowest computational node (whether loaded by other users,

performing the slowest function evaluation, or possessing the slowest processor in a









heterogeneous environment) will dictate the overall time for each parallel iteration. An

asynchronous PSO implementation with load balancing, where the global best-found

position is updated continuously as each particle completes a function evaluation, could

address this limitation. However, the extent to which convergence characteristics and

scale independence would be affected is not yet known.

To put the results of our study into proper perspective, one must remember that

optimization algorithm robustness can be influenced heavily by algorithm

implementation details, and no single optimization algorithm will work for all problems.

For two of the analytical test problems (H2 and H3 with n = 4), other studies have

reported PSO results using formulations that did not include dynamic inertia and velocity

updating. Comparisons are difficult given differences in the maximum number of

function evaluations and number of particles, but in general, algorithm modifications

were (not surprisingly) found to influence algorithm convergence characteristics [94-96].

For our GA and SQP algorithms, results for the analytical test problems were very

different from those obtained by Soest and Casius in [48] using different GA and SQP

implementations. With seven mutation and four crossover operators, the GA algorithm

used by Soest and Casius in [48] was obviously much more complex than the one used

here. In contrast, both SQP algorithms were highly-developed commercial

implementations. In contrast, poor performance by a gradient-based algorithm can be

difficult to correct even with design variable scaling and careful tuning of the FDSS.

These findings indicate that specific algorithm implementations, rather than general

classes of algorithms, must be evaluated to reach any conclusions about algorithm

robustness and performance on a particular problem.









Conclusions

In summary, the PSO algorithm with dynamic inertia and velocity updating

provides another option for difficult biomechanical optimization problems with the added

benefit of being scale independent. There are few algorithm-specific parameters to adjust,

and standard recommended settings work well for most problems [70,94]. In

biomechanical optimization problems, noise, multiple local minima, and design variables

of different scale can limit the reliability of gradient-based algorithms. The PSO

algorithm presented here provides a simple-to-use off-the-shelf alternative for

consideration in such cases.

The algorithm's main drawback is the high cost in terms of function evaluations

because of slow convergence in the final stages of the optimization, a common trait

among global search algorithms. The time requirements associated with the high

computational cost may be circumvented by utilizing the parallelism inherent in the

swarm algorithm. The development of such a parallel PSO algorithm will be detailed in

the next chapter.














CHAPTER 4
PARALLELISM BY EXPLOITING POPULATION-BASED ALGORITHM
STRUCTURES

Overview

The structures of population based optimizers such as Genetic Algorithms and the

Particle Swarm may be exploited in order to enable these algorithms to utilize concurrent

processing. These algorithms require a set of fitness values for the population or swarm

of individuals for each iteration during the search. The fitness of each individual is

independently evaluated, and may be assigned to separate computational nodes. The

development of such a parallel computational infrastructure is detailed in this chapter and

applied to a set of large-scale analytical problems and a biomechanical system

identification problem for the purpose of quantifying its efficiency. The parallelization of

the PSO is achieved with a master-slave, coarse-grained implementation where slave

computational nodes are associated with individual particle search trajectories and

assigned their fitness evaluations. Greatly enhanced computation throughput is

demonstrated using this infrastructure, with efficiencies of 95% observed for load

balanced conditions. Numerical example problems with large load imbalances yield poor

performance, which decreases in a linear fashion as additional nodes are added. This

infrastructure is based on a two-level approach with flexibility in terms of where the

search effort can be concentrated. For the two problems presented, the global search

effort is applied in the upper level.









This work was done in collaboration with Jeff Reinbolt, who created the

biomechanical kinematic analysis software [58] and evaluated the quality of the solutions

found by the parallel PSO. The work presented in this chapter was also published in

[58,76,129].

Introduction

Present day engineering optimization problems often impose large computational

demands, resulting in long solution times even on a modern high-end processor. To

obtain enhanced computational throughput and global search capability, we detail the

coarse-grained parallelization of an increasingly popular global search method, the

particle swarm optimization (PSO) algorithm. Parallel PSO performance was evaluated

using two categories of optimization problems possessing multiple local minima -large-

scale analytical test problems with computationally cheap function evaluations and

medium-scale biomechanical system identification problems with computationally

expensive function evaluations. For load-balanced analytical test problems formulated

using 128 design variables, speedup was close to ideal and parallel efficiency above 95%

for up to 32 nodes on a Beowulf cluster. In contrast, for load-imbalanced biomechanical

system identification problems with 12 design variables, speedup plateaued and parallel

efficiency decreased almost linearly with increasing number of nodes. The primary factor

affecting parallel performance was the synchronization requirement of the parallel

algorithm, which dictated that each iteration must wait for completion of the slowest

fitness evaluation. When the analytical problems were solved using a fixed number of

swarm iterations, a single population of 128 particles produced a better convergence rate

than did multiple independent runs performed using sub-populations (8 runs with 16

particles, 4 runs with 32 particles, or 2 runs with 64 particles).These results suggest that









(1) parallel PSO exhibits excellent parallel performance under load-balanced conditions,

(2) an asynchronous implementation would be valuable for real-life problems subject to

load imbalance, and (3) larger population sizes should be considered when multiple

processors are available.

Numerical optimization has been widely used in engineering to solve a variety of

NP-complete problems in areas such as structural optimization, neural network training,

control system analysis and design, and layout and scheduling problems. In these and

other engineering disciplines, two major obstacles limiting the solution efficiency are

frequently encountered. First, even medium-scale problems can be computationally

demanding due to costly fitness evaluations. Second, engineering optimization problems

are often plagued by multiple local optima, requiring the use of global search methods

such as population-based algorithms to deliver reliable results. Fortunately, recent

advances in microprocessor and network technology have led to increased availability of

low cost computational power through clusters of low to medium performance

computers. To take advantage of these advances, communication layers such as MPI [3,

5] and PVM [97] have been used to develop parallel optimization algorithms, the most

popular being gradient-based, genetic (GA), and simulated annealing (SA) algorithms

[48,98,99]. In biomechanical optimizations of human movement, for example,

parallelization has allowed problems requiring days or weeks of computation on a single-

processor computer to be solved in a matter of hours on a multi-processor machine [98].

The particle swarm optimization (PSO) algorithm is a recent addition to the list of global

search methods [100].This derivative-free method is particularly suited to continuous

variable problems and has received increasing attention in the optimization community. It









has been successfully applied to large-scale problems [69,100,101] in several engineering

disciplines and, being a population-based approach, is readily parallelizable. It has few

algorithm parameters, and generic settings for these parameters work well on most

problems [70,94]. In this study, we present a parallel PSO algorithm for application to

computationally demanding optimization problems. The algorithm's enhanced

throughput due to parallelization and improved convergence due to increased population

size are evaluated using large-scale analytical test problems and medium-scale

biomechanical system identification problems. Both types of problems possess multiple

local minima. The analytical test problems utilize 128 design variables to create a

tortuous design space but with computationally cheap fitness evaluations. In contrast, the

biomechanical system identification problems utilize only 12 design variables but each

fitness evaluation is much more costly computationally. These two categories of

problems provide a range of load balance conditions for evaluating the parallel

formulation.

Serial Particle Swarm Algorithm

Particle swarm optimization was introduced in 1995 by Kennedy and Eberhart [66].

Although several modifications to the original swarm algorithm have been made to

improve performance [68,102-105] and adapt it to specific types of problems

[69,106,107], a parallel version has not been previously implemented. The following is a

brief introduction to the operation of the PSO algorithm. Consider a swarm ofp particles,

with each particle's position representing a possible solution point in the design problem

space D. For each particle i, Kennedy and Eberhart proposed that its position x, be

updated in the following manner:









+1 = x + v, (4.1)
with a pseudo-velocity v, calculated as follows:

vk = kVk + C1 (P + C2 (k X ) (4.2)
Here, subscript k indicates a (unit) pseudo-time increment, p,k represents the best

ever position of particle i at time k (the cognitive contribution to the pseudo-velocity

vector v,k+1), andpgk represents the global best position in the swarm at time k (social

contribution). rl and r2 represent uniform random numbers between 0 and 1.To allow the

product clrl or c2r2 to have a mean of 1, Kennedy and Eberhart proposed that the

cognitive and social scaling parameters cl and c2 be selected such that cl = c2 = 2 .The

result of using these proposed values is that the particles overshoot the target half the

time, thereby maintaining separation within the group and allowing for a greater area to

be searched than if no overshoot occurred. A modification by Fourie and Groenwold [69]

on the original PSO algorithm 66 allows transition to a more refined search as the

optimization progresses. This operator reduces the maximum allowable velocity Vmak and

particle inertia wk in a dynamic manner, as dictated by the dynamic reduction parameters

K d, Wd. For the sake of brevity, further details of this operator are omitted, but a detailed

description can be found in References [69,70]. The serial PSO algorithm as it would

typically be implemented on a single CPU computer is described below, where p is the

total number of particles in the swarm. The best ever fitness value of a particle at design

coordinates pk is denoted byfbest and the best ever fitness value of the overall swarm at

coordinates pk byfgbest. At time step k = 0, the particle velocities v,O are initialized to

values within the limits 0 < v 0 < Vmax0 .The vector Vmax is calculated as a fraction of the

distance between the upper and lower bounds Vmax = ((xUB XLB) [69], with = 0 .5.

With this background, the PSO algorithm flow can be described as follows:










6. Initialize

a.

b.

c.

d.

e.

f.



g.

h.

7. Optimize

a.

b.



c.


Set constants K c, cl, kmax, V ax, w0, Vd, Wd, and d

Initialize dynamic maximum velocity vx and inertia wk

Set counters k = 0, t = 0, i =1. Set random number seed.

Randomly initialize particle positions x' E D in 91 for i = 1,..., p

Randomly initialize particle velocities 0 < v, < vmax for i = 1,..., p

Evaluate cost function values fj using design space coordinates x' for
i=l,...,p

Set fb,t = fo and p, = x, for i = ,..., p

Set fg, to best fb,,t and go to corresponding x'


Update particle velocity vectors v'k+ using Eq.(4.2)

If vk+~ > vm~ for any component, then set that component to its
maximum allowable value

Update particle position vectors x'~+ using Eq. (4.1)


d. Evaluate cost function values fk' using design space coordinates x'k+ for
i=l,...,p

e. If f, < fb~,t, then fb~, = fk~+, pk+1 = Xk+1 for i = l,...,p

f If f 1 < fb,, then fb, = f~ ', gk+1 = Xk+ for i = ,...,p

g. If fb, was improved in (e), then reset t = 0. Else increment t. If k> kmax
go to 3

h. If t = d, then multiply wk+ by ( wd) and vkm by (1 vd)

i. If maximum number of function evaluations is exceeded, then go to 3

j. Increment i. If i > p then increment k, and set i = 1









k. Go to 2(a).

8. Report results

9. Terminate

The above logic is illustrated as a flow diagram in Figure 11 without detailing the

working of the dynamic reduction parameters. Problem independent stopping conditions

based on convergence tests are difficult to define for global optimizers. Consequently, we

typically use a fixed number of fitness evaluations or swarm iterations as a stopping

criteria.

Parallel Particle Swarm Algorithm

The following issues had to be addressed in order to create a parallel PSO algorithm.

Concurrent Operation and Scalability

The algorithm should operate in such a fashion that it can be easily decomposed for

parallel operation on a multi-processor machine. Furthermore, it is highly desirable that it

be scalable. Scalability implies that the nature of the algorithm should not place a limit on

the number of computational nodes that can be utilized, thereby permitting full use of

available computational resources. An example of an algorithm with limited scalability is

a parallel implementation of a gradient-based optimizer. This algorithm is decomposed

by distributing the workload of the derivative calculations for a single point in design

space among multiple processors. The upper limit on concurrent operations using this

approach is therefore set by the number of design variables in the problem. On the other

hand, population-based methods such as the GA and PSO are better suited to parallel

computing. Here the population of individuals representing designs can be increased or

decreased according to the availability and speed of processors Any additional agents in

the population will allow for a higher fidelity search in the design space, lowering











susceptibility to entrapment in local minima. However, this comes at the expense of

additional fitness evaluations.






Initialize :0l','rilhm /
constants kfmai. W,'' '. CI,, ce, c2


Setk = 1i = 1


Randomly initialize all
particle positions .xT


Randomly initialize all
particle velocities vt


Evaluate objective function
f (x) for particle i
Set i = 1,
Increment k Update particle i and swarm
yes best values fbies fbgest

no i > total number I pj.iic velocity v,
of particles? for particle i


Update velocity x,
Increment i for particle i



no- Si)pniL, criterion
satisfied?

yes

Output results




Figure 11 Serial implementation of PSO algorithm. To avoid complicating the diagram,
we have omitted velocity/inertia reduction operations.

Asynchronous vs. Synchronous Implementation

The original PSO algorithm was implemented with a synchronized scheme for

updating the best 'remembered 'individual and group fitness values fk andfgk,









respectively, and their associated positions p,k and pgk This approach entails performing

the fitness evaluations for the entire swarm before updating the best fitness values.

Subsequent experimentation revealed that improved convergence rates can be obtained

by updating thefk andfgk values and their positions after each individual fitness

evaluation (i.e. in an asynchronous fashion) [70,94].

It is speculated that because the updating occurs immediately after each fitness

evaluation, the swarm reacts more quickly to an improvement in the best-found fitness

value. With the parallel implementation, however, this asynchronous improvement on the

swarm is lost since fitness evaluations are performed concurrently. The parallel algorithm

requires updatingfk andfgk for the entire swarm after all fitness evaluations have been

performed, as in the original particle swarm formulation. Consequently, the swarm will

react more slowly to changes of the best fitness value 'position' in the design space. This

behavior produces an unavoidable performance loss in terms of convergence rate

compared to the asynchronous implementation and can be considered part of the

overhead associated with parallelization.

Coherence

Parallelization should have no adverse affect on algorithm operation. Calculations

sensitive to program order should appear to have occurred in exactly the same order as in

the serial synchronous formulation, leading to the exact same final answer. In the serial

PSO algorithm the fitness evaluations form the bulk of the computational effort for the

optimization and can be performed independently. For our parallel implementation, we

therefore chose a coarse decomposition scheme where the algorithm performs only the

fitness evaluations concurrently on a parallel machine. Step 2 of the particle swarm

optimization algorithm was modified accordingly to operate in a parallel manner:









2) Optimize

a) Update particle velocity vectors vk+1 using Eq.(4.2)

b) If vk+ > vm for any component, then set that component to its maximum
allowable value

c) Update particle position vectors x'k+ using Eq. (4.1)

d) Concurrently evaluate fitness values f+, using design space co-ordinates x'k+ for
i =l,...,p

e) If f+, < f/ then fbst = fk+l, Pk+ = xkl for i= 1,..., p

f) If fk, < fbg,,, then fbgt = fk+l, gk+l = Xk+l for i = 1,..., p

g) If f ,, was improved in (e), then reset t = 0. Else increment t. If k> kmax go to 3

h) If t = d, then multiply wk+1 by (1 wd) and vk+ by (1- v )

i) If maximum number of function evaluations is exceeded, then go to 3

j) Increment k

k) Goto2(a).


The parallel PSO algorithm is represented by the flow diagram in Figure 12.

Network Communication

In a parallel computational environment, the main performance bottleneck is often

the communication latency between processors. This issue is especially relevant to large

clusters of computers where the use of high performance network interfaces are limited

due to their high cost. To keep communication between different computational nodes at

a minimum, we use fitness evaluation tasks as the level of granularity for the parallel

software. As previously mentioned each of these evaluations can be performed











independently and requires no communication aside from receiving design space co-


ordinates to be evaluated and reporting the fitness value at the end of the analysis.






hiniihlize ali,-ritlniii /
constalln ..... i'I i. I ', Ci, C2 /
I s2
Set k = 1


Randomly initialize all
particle positions xa


Randomly initialize all
particle velocities vk




f(.I) f) f() f(4) e' f(cx)



Barrier
synchronize


Update particle and swarm
Increment k best values fst, fest


Update velocity v[
for all particles


Update position 4X
for all particles



no Stopping criterion
satisfied?

yes
Output results


( Stop

Figure 12 Parallel implementation of the PSO algorithm. We have again omitted
velocity/inertia reduction operations to avoid complicating the diagram.









The optimization infrastructure is organized into a coordinating node and several

computational nodes. PSO algorithm functions and task orchestration are performed by

the coordinating node, which assigns the design co-ordinates to be evaluated, in parallel,

to the computational nodes. With this approach, no communication is required between

computational nodes as individual fitness evaluations are independent of each other. The

only necessary communication is between the coordinating node and the computational

nodes and encompasses passing the following information:

1) Several distinct design variable configuration vectors assigned by coordinating node
to slave nodes for fitness evaluation.

2) Fitness values reported from slave nodes to coordinating node.

3) Synchronization signals to maintain program coherence.

4) Termination signals from coordinating node to slave nodes on completion of analysis
to stop the program cleanly.

The parallel PSO scheme and required communication layer were implemented in

ANSI C on a Linux operating system using the message passing interface (MPI) libraries.

Synchronization and Implementation

From the parallel PSO algorithm, it is clear that some means of synchronization is

required to ensure that all of the particle fitness evaluations have been completed and

results reported before the velocity and position calculations can be executed (steps 2a

and 2b). Synchronization is done using a barrier function in the MPI communication

library which temporarily stops the coordinating node from proceeding with the next

swarm iteration until all of the computational nodes have responded with a fitness value.

Because of this approach, the time required to perform a single parallel swarm fitness

evaluation will be dictated by the slowest fitness evaluation in the swarm. Two

networked clusters of computers were used to obtain the numerical results. The first









cluster was used to solve the analytical test problems and comprised 40 1 .33 GHz Athlon

PCs located in the High-performance Computing and Simulation (HCS) Research

Laboratory at the University of Florida. The second group was used to solve the

biomechanical system identification problems and consisted of 32 2 .40 GHz Intel PCs

located in the HCS Research Laboratory at Florida State University. In both locations,

100 Mbps switched networks were utilized for connecting nodes.

Sample Optimization Problems

Analytical Test Problems

Two well-known analytical test problems were used to evaluate parallel PSO

algorithm performance on large-scale problems with multiple local minima (see

Appendix A for mathematical description of both problems).The first was a test function

(Figure 13 (a)) introduced by Griewank [108] which superimposes a high-frequency sine

wave on a multi-dimensional parabola. In contrast, the second problem used the Corana

test function [109] which exhibits discrete jumps throughout the design space (Figure

13(b)).For both problems, the number of local minima increases exponentially with the

number of design variables. To investigate large-scale optimization issues, we formulated

both problems using 128 design variables. Since fitness evaluations are extremely fast for

these test problems, a delay of approximately half a second was built into each fitness

evaluation so that total computation time would not be swamped by communication time.

Since parallelization opens up the possibility of utilizing large numbers of processors, we

used the analytical test problems to investigate how convergence rate and final solution

are affected by the number of particles employed in a parallel PSO run. To ensure that all

swarms were given equally 'fair' starting positions, we generated a pool of 128 initial









positions using the Latin Hypercube Sampler (LHS). Particle positions selected with this

scheme will be distributed uniformly throughout the design space [110].

This initial pool of 128 particles was divided into the following sub-swarms: one

swarm of 128 particles, two swarms of 64 particles, four swarms of 32 particles, and

eight swarms of 16 particles. Each sub-swarm was used independently to solve the two

analytical test problems. This approach allowed us to investigate whether is it more

efficient to perform multiple parallel optimizations with smaller population sizes or one

parallel optimization with a larger population size given a sufficient number of

processors. To obtain comparisons for convergence speed, we allowed all PSO runs to

complete 10,000 iterations before the search was terminated. This number of iterations

corresponded to between 160,000 and 1,280,000 fitness evaluations depending on the

number of particles employed in the swarm.

Biomechanical System Identification problems

In addition to the analytical test problems, medium-scale biomechanical system

identification problems were used to evaluate parallel PSO performance under more

realistic conditions. These problems were variations of a general problem that attempts to

find joint parameters (i.e. positions and orientations of joint axes) that match a kinematic

ankle model to experimental surface marker data [56]. The data are collected with an

optoelectronic system that uses multiple cameras to record the positions of external

markers placed on the body segments. To permit measurement of three-dimensional

motion, we attach three non-colinear markers to the foot and lower leg. The recordings

are processed to obtain marker trajectories in a laboratory-fixed co-ordinate system [111,

112] The general problem possesses 12 design variables and requires approximately 1

minute for each fitness evaluation. Thus, while the problem is only medium-scale in









terms of number of design variables, it is still computationally costly due to the time

required for each fitness evaluation..


Y2 Q8~4"


Figure 13 Surface plots of the (a) Griewank and (b) Corana analytical test problems
showing the presence of multiple local minima. For both plots, 126 design
variables were fixed at their optimal values and the remaining 2 design
variables varied in a small region about the global minimum.

The first step in the system identification procedure is to formulate a parametric

ankle joint model that can emulate a patient's movement by possessing sufficient degrees

of freedom. For the purpose of this paper, we approximate the talocrural and subtalar









joints as simple 1 degree of freedom revolute joints. The resulting ankle joint model

(Figure 6) contains 12 adjustable parameters that define its kinematic structure [56]. The

model also has a set of virtual markers fixed to the limb segments in positions

corresponding to the locations of real markers on the subject. The linkage parameters are

then adjusted via optimization until markers on the model follow the measured marker

trajectories as closely as possible. To quantify how closely the kinematic model with

specified parameter values can follow measured marker trajectories, we define a

cumulative marker error e as follows:


e = Z A, (4.3)
J=1 1-l
where

A Ax2 + Ayl + Az (4.4)
where Ax,,, Ay,, and Az,, are the spatial displacement errors for marker i at time framej in

the x y ,and z directions as measured in the laboratory-fixed coordinate system, n = 50 is

the number of time frames, and m = 6 (3 on the lower leg and 3 on the foot) is the number

of markers. These errors are calculated between the experimental marker locations on the

human subject and the virtual marker locations on the kinematic model.

For each time frame, a non-linear least squares sub-optimization is performed to

determine the joint angles that minimize 4,,2 given the current set of model parameters.

The first sub-optimization is started from an initial guess of zero for all joint angles. The

sub-optimization for each subsequent time frame is started with the solution from the

previous time frame to speed convergence. By performing a separate sub-optimization for

each time frame and then calculating the sum of the squares of the marker co-ordinate

errors, we obtain an estimate of how well the model fits the data for all time frames









included in the analysis. By varying the model parameters and repeating the sub-

optimization process, the parallel PSO algorithm finds the best set of model parameters

that minimize e over all time frames.

For numerical testing, three variations of this general problem were analyzed as

described below. In all cases the number of particles used by the parallel PSO algorithm

was set to a recommended value of 20 [94].

1) Synthetic data without numerical noise: Synthetic (i.e. computer generated)
data without numerical noise were generated by simulating marker movements
using a lower body kinematic model with virtual markers. The synthetic motion
was based on an experimentally measured ankle motion (see 3 below).The
kinematic model used anatomically realistic joint positions and orientations. Since
the joint parameters associated with the synthetic data were known, his
optimization was used to verify that the parallel PSO algorithm could accurately
recover the original model.

2) Synthetic data with numerical noise: Numerical noise was superimposed on
each synthetic marker coordinate trajectory to emulate the effect of marker
displacements caused by skin movement artifacts [53]. A previously published
noise model requiring three random parameters was used to generate a
perturbation Nin each marker coordinate [80]:

N= Asin(cot+q+) (4.5)
where A is the amplitude, co the frequency, and p the phase angle of the noise.
These noise parameters were treated as uniform random variables within the
bounds 0 < A < Icm, 0 < co < 25 rad/s, and 0 < p < 2 (obtained from [80]).

3) Experimental data: Experimental marker trajectory data were obtained by
processing three-dimensional recordings from a subject performing movements
with reflective markers attached to the foot and lower leg as previously described.
Institutional review board approval was obtained for the experiments and data
analysis, and the subject gave informed consent prior to participation. Marker
positions were reported in a laboratory-fixed coordinate system.

Speedup and Parallel Efficiency

Parallel performance for both classes of problems was quantified by calculating

speedup and parallel efficiency for different numbers of processors. Speedup is the ratio

of sequential execution time to parallel execution time and ideally should equal the






64


number of processors. Parallel efficiency is the ratio of speedup to number of processors

and ideally should equal 100%.For the analytical test problems, only the Corana problem

was run since the half second delay added to both problems makes their parallel

performance identical. For the biomechanical system identification problems, only the

synthetic data with numerical noise case was reported since experimentation with the

other two cases produced similar parallel performance.


b


0 2000 4000 6000 8000 10000
Iterations
Figure 14 Average fitness convergence histories for the (a) Griewank and (b) Corana
analytical test problems for swarm sizes of 16,32,64,and 128 particles and
10000 swarm iterations. Triangles indicate the location on each curve where
160 000 fitness evaluations were completed.


s.





Corana 128 DVs


I









The number of particles and nodes used for each parallel evaluation was selected

based on the requirements of the problem. The Corana problem with 128 design variables

was solved using 32 particles and 1, 2, 4, 8, 16, and 32 nodes. The biomechanical

problem with 12 design variables was solved using 20 particles and 1, 2, 5, 10, and 20

nodes. Both problems were allowed to run until 1000 fitness evaluations were completed.

Numerical Results

Convergence rates for the two analytical test problems differed significantly with

changes in swarm size. For the Griewank problem (Figure 14(a)), individual PSO runs

converged to within le-6 of the global minimum after 10 000 optimizer iterations,

regardless of the swarm size. Run-to-run variations in final fitness value (not shown) for

a fixed swarm size were small compared to variations between swarm sizes. For example,

no runs with 16 particles produced a better final fitness value than any of the runs with 32

particles, and similarly for the 16-32, 32-64, and 64-128 combinations. When number of

fitness evaluations was considered instead of number of swarm iterations, runs with a

smaller swarm size tended to converge more quickly than did runs with a larger swarm

size (see triangles in Figure 14). However, two of the eight runs with the smallest number

of particles failed to show continued improvement near the maximum number of

iterations, indicating possible entrapment in a local minimum. Similar results were found

for the Corana problem (Figure 14(b)) with two exceptions. First, the optimizer was

unable obtain the global minimum for any swarm size within the specified number of

iterations (Figure 14(b)), and second, overlapping in results between different swarm

sizes was observed. For example, some 16 particle results were better than 32 particles

results, and similarly for the other neighboring combinations. On average, however, a

larger swarm size tended to produce better results for both problems.









Table 4 Parallel PSO results for the biomechanical system identification problem
using synthetic marker trajectories without and with numerical noise.
Optimizations on synthetic with noise and without noise were with 20
particles and were terminated after 40000 fitness evaluations.
Model Upper Lower Synthetic Synthetic data
parameter bound bound solution Without noise With noise
pi(deg) 48.67 -11.63 18.37 18.36 15.13
p2 (deg) 30.00 -30.00 0.00 -0.01 8.01
p3 (deg) 70.23 10.23 40.23 40.26 32.97
p4(deg) 53.00 -7.00 23.00 23.03 23.12
p5 (deg) 72.00 12.00 42.00 42.00 42.04
p6 (cm) 6.27 -6.27 0.00 0.00 -0.39
p7 (cm) -33.70 -46.24 -39.97 -39.97 -39.61
p8 (cm) 6.27 -6.27 0.00 -0.00 0.76
p9 (cm) 0.00 -6.27 -1.00 -1.00 -2.82
pio (cm) 15.27 2.72 9.00 9.00 10.21
pii (cm) 10.42 -2.12 4.15 4.15 3.03
p12 (cm) 6.89 -5.65 0.62 0.62 -0.19


The parallel PSO algorithm found ankle joint parameters consistent with the known

solution or results in the literature [61-63].The algorithm had no difficulty recovering the

original parameters from the synthetic date set without noise (Table 4), producing a final

cumulative error e on the order of 10 13.The original model was recovered with mean

orientation errors less than 0.05 and mean position errors less than 0.008cm.

Furthermore, the parallel implementation produced identical fitness and parameter

histories as did a synchronous serial implementation. For the synthetic data set with

superimposed noise, a RMS marker distance error of 0.568cm was found, which is on the

order of the imposed numerical noise with maximum amplitude of 1cm. For the

experimental data set, the RMS marker distance error was 0.394cm (Table 5),

comparable to the error for the synthetic data with noise. Convergence characteristics

were similar for the three data sets considered in this study. The initial convergence rate









was quite high (Figure 15(a)), where after it slowed when the approximate location of the

global minimum was found.

Table 5 Parallel PSO results for the biomechanical system identification problem
using synthetic marker trajectories without and with numerical noise.

Synthetic Data Experimental
RMS errors Without noise With noise data
Marker distances (cm) 3.58E-04 0.568 0.394
Orientation parameters (deg) 1.85E-02 5.010 N/A
Position parameters (cm) 4.95E-04 1.000 N/A

As the solution process proceeded, the optimizer traded off increases in RMS joint

orientation error (Figure 15(b)) for decreases in RMS joint position error (Figure 15(c))

to achieve further minor reductions in the fitness value.

The analytical and biomechanical problems exhibited different parallel

performance characteristics. The analytical problem demonstrated almost perfectly linear

speedup (Figure 16(a), squares) resulting in parallel efficiencies above 95% for up to 32

nodes (Figure 16(b), squares). In contrast, the biomechanical problem exhibited speedup

results that plateaued as the number of nodes was increased (Figure 16(a), circles),

producing parallel efficiencies that decreased almost linearly with increasing number of

nodes (Figure 16(b), circles). Each additional node produced roughly a 3% reduction in

parallel efficiency.

Discussion

This study presented a parallel implementation of the particle swarm global

optimizer. The implementation was evaluated using analytical test problems and

biomechanical system identification problems. Speedup and parallel efficiency results

were excellent when each fitness evaluation took the same amount of time.

























100 200


0 100 200


300 400


300 400


500 600


500 600


0 100 200 300 400 500 600 700 8C






1.5



1-


100 200


300 400
Swarm iterations


600 700


Figure 15 Fitness convergence and parameter error plots for the biomechanical system
identification problem using synthetic data with noise


I 600

" 400

200


30

25

20

15

10
rI_











a

30

S20-


10- -0- Analytical
-0- Biomechanical

0 I


b




60-

S40-

I. 20

0 I I I I I
0 5 10 15 20 25 30 3
Number of Nodes
Figure 16 (a) Speedup and (b) parallel efficiency for the analytical and biomechanical
optimization problems.

For problems with large numbers of design variables and multiple local minima,

maximizing the number of particles produced better results than repeated runs with fewer

particles. Overall, parallel PSO makes efficient use of computational resources and

provides a new option for computationally demanding engineering optimization

problems. The agreement between optimized and known orientation parameters pi-p4 for

the biomechanical problem using synthetic data with noise was poorer than initially

expected. This finding was the direct result of the sensitivity of orientation calculations to

errors in marker positions caused by the injected numerical noise. Because of the close

proximity of the markers to each other, even relatively small amplitude numerical noise









in marker positions can result in large fluctuations in the best-fit joint orientations. While

more time frames could be used to offset the effects of noise, this approach would

increase the cost of each fitness evaluation due to an increased number of sub-

optimizations. Nonetheless, the fitness value for the optimized parameters was lower than

that for the parameters used to generate the original noiseless synthetic data.

Though the biomechanical optimization problems only involved 12 design

variables, multiple local minima existed when numerical or experimental noise was

present. When the noisy synthetic data set was analyzed with a gradient-based optimizer

using 20 random starting points, the optimizer consistently found distinct solutions,

indicating a large number of local minima. Similar observations were made for a smaller

number of gradient-based runs performed on the experimental data set. To evaluate the

parallel PSOs ability to avoid entrapment in these local minima, we performed 10

additional runs with the algorithm. All 10 runs converged to the same solution, which

was better than any of the solutions found by gradient-based runs.

Differences in parallel PSO performance between the analytical test problem and

the biomechanical system identification problem can be explained by load balancing

issues. The half second delay added to the analytical test problem made all fitness

evaluations take approximately the same amount of time and substantially less time than

communication tasks. Consequently, load imbalances were avoided and little degradation

in parallel performance was observed with increasing number of processors. In contrast,

for the biomechanical system identification problem, the time required to complete the 50

sub-optimizations was sensitive to the selected point in design space, thereby producing

load imbalances. As the number of processors increased, so did the likelihood that at least









one fitness evaluation would take much longer than the others. Due to the

synchronization requirement of the current parallel implementation, the resulting load

imbalance caused by even one slow fitness evaluation was sufficient to degrade parallel

performance rapidly with increasing number of nodes. An asynchronous parallel

implementation could be developed to address this problem with the added benefit of

permitting high parallel efficiency on inhomogeneous clusters. Our results for the

analytical and biomechanical optimization problems suggest that PSO performs best on

problems with continuous rather than discrete noise. The algorithm consistently found the

global minimum for the Griewank problem, even when the number of particles was low.

Though the global minimum is unknown for the biomechanical problem using synthetic

data with noise, multiple PSO runs consistently converged to the same solution. Both of

these problems utilized continuous, sinusoidal noise functions. In contrast, PSO did not

converge to the global minima for the Corana problem with its discrete noise function.

Thus, for large-scale problems with multiple local minima and discrete noise, other

optimization algorithms such as GA may provide better results [48].

Use of a LHS rather than uniform random sampling to generate initial points in

design space may be a worthwhile PSO algorithm modification. Experimentation with

our random number generator indicated that initial particle positions can at times be

grouped together. This motivated our use of LHS to avoid re-sampling the same region of

design space when providing initial guesses to sub-swarms. To investigate the influence

of sampling method on PSO convergence rate, we performed multiple runs with the

Griewank problem using uniform random sampling and a LHS with the default design

variable bounds (-600 to +600) and with the bounds shifted by 200 (-400 to +800).We









found that when the bounds were shifted, convergence rate with uniform random

sampling changed while it did not with a LHS. Thus, swarm behavior appears to be

influenced by sampling method, and a LHS may be helpful for minimizing this

sensitivity.

A secondary motivation for running the analytical test problems with different

numbers of particles was to determine whether the use of sub-swarms would improve

convergence. The question is whether a larger swarm where all particles communicate

with each other is more efficient than multiple smaller swarms where particles

communicate within each sub-swarm but not between sub-swarms. It is possible that the

global best position found by a large swarm may unduly influence the motion of all

particles in the swarm. Creating sub-swarms that do not communicate eliminates this

possibility. In our approach, we performed the same number of fitness evaluations for

each population size. Our results for both analytical test problems suggest that when a

large numbers of processors are available, increasing the swarm size will increase the

probability of finding a better solution. Analysis of PSO convergence rate for different

numbers of particles also suggests an interesting avenue for future investigation. Passing

an imaginary curve through the triangles in Figure 14 reveals that for a fixed number of

fitness evaluations, convergence rate increases asymptotically with decreasing number of

particles. While the solution found by a smaller number of particles may be a local

minimum, the final particle positions may still identify the general region in design space

where the global minimum is located. Consequently, an adaptive PSO algorithm that

periodically adjusts the number of particles upward during the course of an optimization

may improve convergence speed. For example, an initial run with 16 particles could be









performed for a fixed number of fitness evaluations. At the end of that phase, the final

positions of those 16 particles would be kept, but 16 new particles would be added to

bring the total up to 32 particles. The algorithm would continue using 32 particles until

the same number of fitness evaluations was completed. The process of gradually

increasing the number of particles would continue until the maximum specified swarm

size (e.g. 128 particles) was analyzed. To ensure systematic sampling of the design space,

a LHS would be used to generate a pool of sample points equal to the maximum number

of particles and from which sub-samples would be drawn progressively at each phase of

the optimization. In the scenario above with a maximum of 128 particles, the first phase

with 16 particles would remove 16 sampled points from the LHS pool, the second phase

another 16 points, the third phase 32 points, and the final phase the remaining 64 points.

Conclusions

In summary, the parallel Particle Swarm Optimization algorithm presented in this

chapter exhibits excellent parallel performance as long as individual fitness evaluations

require the same amount of time. For optimization problems where the time required for

each fitness evaluation varies substantially, an asynchronous implementation may be

needed to reduce wasted CPU cycles and maintain high parallel efficiency. When large

numbers of processors are available, use of larger population sizes may result in

improved convergence rates to the global solution. An adaptive PSO algorithm that

increases population size incrementally may also improve algorithm convergence

characteristics.














CHAPTER 5
IMPROVED GLOBAL CONVERGENCE USING MULTIPLE INDEPENDENT
OPTIMIZATIONS

Overview

This chapter presents a methodology for improving the global convergence

probability in large scale global optimization problems in cases where several local

minima are present. The optimizer applied in this methodology is the PSO, but the

strategy outlined here is applicable to any type of algorithm. The controlling idea behind

this optimization approach is to utilize several independent optimizations, each using a

fraction of a budget of computational resources. Although optimizations may have a

limited probability of convergence individually as compared to a single optimization

utilizing the full budget, it is shown that when they are combined they will have a

cumulative convergence probability far in excess of the single optimization.

Since the individual limited optimizations are independent they may be executed

concurrently on separate computation nodes with no interaction. This exploitation of

parallelism allows us to vastly increase the probability of convergence to the global

minimum while simultaneously reducing the required wall clock time if a parallel

machine is used.

Introduction

If we consider the general global unconstrained optimization problem for the real

valued function f(x) defined on the set x E D in D one cannot state that a global

solution has been found unless an exhaustive search of the set is x e D is performed










[115]. With a finite number of function evaluations, at best we can only estimate the

probability of arriving at or near the global optimum. To solve global optimization

problems reliably the optimizer needs to achieve an efficient balance between sampling

the entire design space and directing progressively more densely spaced sampling points

towards promising regions for a more refined search [116]. Many algorithms achieve this

balance, such as the deterministic DIRECT optimizer [117] or stochastic algorithms such

as genetic algorithms [118], simulated annealing [119,120], clustering [121], and the

particle swarm optimizer [122].

Although these population-based global optimization algorithms are fairly robust,

they can be attracted, at least temporarily, towards local optima which are not global (see,

for example, the Griewank problem in Figure 17).

Griewank










10
-50
X2 -10 -10

Figure 17 Multiple local minima for Griewank analytical problem surface plot in two
dimensions

This difficulty can be addressed by allowing longer optimization runs or an

increased population size. Both these options often result in a decrease in the algorithm

efficiency, with no guarantee that the optimizer will escape from the local optimum.









It is possible that restarting the algorithm when it gets stuck in a local minimum

and allowing multiple optimization runs may be a more efficient approach. This follows

from the hypothesis that several limited independent optimization runs, each with a small

likelihood of finding the global optimum, may be combined in a synergistic effort which

yields a vastly improved global convergence probability. This approach is routinely used

for global search using multi-start local optimizers [123]. Le Riche and Haftka have also

suggested the use of this approach with genetic algorithms for solving complex

composite laminate optimization problems [124].

The main difficulty in the application of such a multi-run strategy is deciding when

the optimizer should be stopped. The objective of this manuscript is to solve this

problem by developing an efficient and robust scheme by which to allocate

computational resources to individual optimizations in a set of multiple optimizations.

The organization of this manuscript is as follows: First a brief description of the

optimization algorithm applied in this study, the PSO algorithm is given. Next a set of

analytical problems are described, along with details on calculating global convergence

probabilities. After that, the multiple run methodology is outlined and a general budget

strategy is presented for dividing a fixed number of fitness evaluations among multiple

searches on a single processor. The use of this method on a parallel processing machine is

also discussed. Then, numerical results based on the multiple run strategy for both single

and multi-processor machines are reported and discussed. Finally, general conclusions

about the multi-run methodology are presented.









Methodology

Analytical Test Set

The convergence behavior of the PSO algorithm was analyzed with the Griewank

[108], Shekel [114] and Hartman [114] analytical problems (see Appendix A for problem

definitions), each of which possess multiple local minima. Analytical test problems were

used because global solutions are known a-priori. The known solution value allows us

to ascertain if an optimization has converged to the global minimum. To estimate the

probability of converging to the global optimum, we performed 1000 optimization runs

for each problem, with each run limited to 500,000 fitness evaluations. These

optimization runs are performed with identical parameter settings, with the exception of a

different random number seed for each optimization run in order to start the population at

different initial points in the design space. To evaluate the global convergence probability

of the PSO algorithm as a function of population size, we solved each problem using a

swarm of 10, 20, 50 and 100 particles. A standard set of parameters were used for the

other algorithm parameters (Table 6).

Table 6 Particle swarm algorithm parameters
Parameter Description Value
cl Cognitive trust parameter 2.0
c2 Social trust parameter 2.0
wo Initial inertia 1
Wd Inertia reduction parameter 0.01
K Bound on velocity fraction 0.5
vd Velocity reduction parameter 0.01

d Dynamic inertia/velocity reduction delay (function 200
evaluations)









We assumed that convergence to the global optimum was achieved when the fitness

value was within a predetermined fixed tolerance E, (see Table 7) of the known global

optimum f*.

f < f + (5.1)
For the Shekel and Hartman problems, the tolerance ensures that the minimum

corresponding to the global optimum for the problem has been found. That is, because c i

0 the exact optimum is not obtained, but if a local optimizer is started from the PSO

solution found with the given tolerance it will converge to the global optimum. For the

Griewank problem, however, starting a local optimizer at the PSO solution will not

guarantee convergence to the global optimum, since this noisy, shallow convex problem

has several local minima grouped around the global optimum that will defeat a local

optimizer

Table 7 Problem convergence tolerances
Problem Convergence
tolerance et
Griewank 0.1
Shekel 0.001
Hartman 0.001

Multiple-run Methodology

The use of multiple optimizations using a global optimizer such as a GA was first

proposed by Le Riche and Haftka [124]. However, no criterion was given on the division

of computational resources between the multiple optimizations, and the efficiency of the

approach was not investigated. The method entails running multiple optimizations with a

reduced number of fitness evaluations, either by limiting the number of algorithm

iterations or reducing the population while keeping the number of iterations constant.

Individually, the convergence probability of such a limited optimization may only be a









fraction of a single traditional optimization run. However, the cumulative convergence

probability obtained by combining the limited runs can be significantly higher than that

of the single run. Previously similar studies have been undertaken to investigate the

efficiency of repeated optimizations using simple search algorithms such as pure random

search, grid search, and random walk [130,131]. The use of multiple local optimizations

or clustering [132] is a common practice but for some algorithms the efficiency of this

approach decreases rapidly when problems with a high number of local minima are

encountered [130].

For estimating the efficiency of the proposed strategy and for comparison with

optimizations with increased populations/allowed iterations, we are required to calculate

the probability of convergence to the global optimum for an individual optimization run,

P,. This convergence probability cannot be easily calculated for practical engineering

problems with unknown solutions. For the set of analytical problems however, the

solutions are known and a large number of optimizations of these problems can be

performed at little computational cost. With some reasonable assumptions these two facts

allow us to estimate the probability of convergence to the global optimum for individual

optimization runs. The efficiency and exploration run considerations derived from the

theoretical analytical results are equally applicable to practical engineering problems

where solutions are not known a priori. The first step in calculating P, is using the

convergence ratio, Cr, which is calculated as follows:

C =- (5.2)
r N
where Nc is the number of globally converged optimizations and Nis the number of

optimizations, in this case 1000. For a very large number of optimizations the probability









P, that any individual run converges to the global optimum approaches Cr. For a finite

number of runs however the standard error se in P, can be quantified using:


e = (5.3)
V N
which is an estimate of the standard deviation of Cr. For example, if we obtain a

convergence probability of Pi = 0.5 with N = 1000 optimizations, the standard error

would be se = 0.016.

To obtain the combined cumulative probability of finding the global optimum by

multiple independent optimizations we apply the statistical law for calculating the

probability for success with repeated independent events. We denote the combined or

cumulative probability of N multiple independent optimization runs converging to the

solution as Pc, and using the fact that the convergence events are uncorrelated then

N
P = 1- I (5.4)

where P, is the probability of the ith single individual optimization run converging to the

global optimum. If we assume that individual optimization runs with similar parameter

settings, as in the case of the following study, have equal probability of convergence we

can simplify Eq. (5.4) to

4 =1-(1- (5.5)
The increase in cumulative probability Pc with fixed values ofP, for increasing

number of optimization runs Nis illustrated in Figure 18. It must be stressed that the

above relations are only valid for uncorrelated optimizations, which may not be the case

when a poor quality random number generator is used to generate initial positions in the

design space. Certain generators can exhibit a tendency to favor regions in the design

space, biasing the search and probability of convergence to a minimum in these regions.













0.9 a*'

0.8
0.9 6 - - --

4 03
o 0.7 '
> 0.6

o 0.5

0.4

o 0.3

0.2

0.1
0 00oo
1 2 3 4 5 6 7 8 9 10
Number of runs
Figure 18 Cumulative convergence probability Pc as a function of the number of
optimization runs with assumed equal P, values

To verify the cumulative probability values predicted in theory with Eq. (5.5), the

Monte Carlo method is applied, sampling random pairs, triplets, quintuplets etc. of

optimizations in the pool of 1000 runs. For example, to estimate the experimental global

convergence probability of two runs, we selected a large number of random pairs of

optimizations among the 1000 runs. Applying Eq. (5.2), the number of cases in which

either or both of the pairs converged N, for the limited optimization run, divided by N

(total number of pairs selected) will yield the experimental global convergence

probability.

Exploratory run and budgeting scheme

Using the multiple run methodology requires a budget strategy by which to divide a

fixed budget of fitness evaluations among the independent optimization runs. The budget

of fitness evaluations nb is usually dictated by how much time the user is willing to

allocate on a machine in order to solve a problem divided by how long a single fitness

evaluation takes to execute. An exploratory optimization utilizing a fraction, nl of this











budget is required to determine the interaction between the optimizer and problem. The

fitness history of this optimization is used to obtain an estimate of the number of fitness

evaluations to be allocated to each run, n,. This strategy is based on the assumption that a

single fitness history will be sufficient to quantify the optimizer behavior on a problem.

For the set of problems it is observed that a correlation exists between the point where the

fitness history levels off and the convergence history levels off (Figure 19).

Griewank, 20 particles Hartman, 20 particles Shekel, 20 particles
0 0
102 -2
i -4
S100 -2 -6
I_ -8
-3 -10

0 5 10 0 0.5 1 1.5 2 0 5 10
Fitness Evaluationsx 104 Fitness Evaluationsx 104 Fitness Evaluationsx 104
1 0.4 1
0.8 0.3 0.8
0.6 0.6
8 0.2
S0.4 0.4
a) a)
0.2 0.2
>o o 0o
o 0 0 0 0
01-l-------- 1 --- <------ 0 0 1---------
0 5 10 0 0.5 1 1.5 2 0 5 10
Fitness Evaluationsx 104 Fitness Evaluationsx 104 Fitness Evaluationsx 104
Figure 19 Fitness history and convergence probability Pc plots for Griewank, Hartman
and Shekel problems

We hypothesize that the algorithm will converge quickly to the optimum or stall at


a similar number of fitness evaluations (Figure 20). The exploratory run is stopped using


a stopping criterion which monitors the rate of change of the objective fitness value as a

function of the number of fitness evaluations. As soon as this rate of improvement drops

below a predetermined value (i.e. the fitness value plot levels off), the exploratory

optimization is stopped and the number of fitness evaluations noted as n1. The stopping

criterion parameters used for obtaining the numerical results is a change of less than 0.01

in fitness value for at least 500 functions evaluations.







83



0 --


-2 -


-4
(D
-2
>
n) -6
(D
u-
-8


-10


-12


0 0.2 0.4 0.6 0.8 1 1.2 1.4 1.6 1.8 2
Fitness evaluations x105


Figure 20 Typical Shekel fitness history plots of 20 optimizations (sampled out of 1000)

The remainder of the budgeted fitness evaluations is distributed among a number of


N independent optimizations, which may be calculated as follows



N= --n 1 (5.6)
Sn,
with an allowed number of fitness evaluations per run calculate by



n N= n > n (5.7)

If a multi-processor machine is available, very high Pc values may be reached using


a multiple run strategy. If we take the simple case where each independent optimization


run is assigned to a separate node, the multiple run approach will be constrained


somewhat differently than the previous single-processor case. Rather than the number of


multiple optimizations being limited by a fixed budget of fitness evaluations (which is


divided equally among the set of multiple independent optimizations using Eq.(5.6)), the


number of optimization runs will be defined by the number of computational nodes and


the wall clock time available to the user. A similar method to that followed for a single


7rrT11









processor machine for determining algorithm/problem behavior must still be followed to

determine the optimal number of fitness evaluations for a single independent

optimization run. This exploratory run can, however, be done using a parallel

implementation of the population-based algorithm under consideration, in which

concurrent processing is achieved through functional decomposition [125].

Bayesian convergence probability estimation

If the amount of time and the global probability of convergence are competing

considerations a Bayesian convergence probability estimation method may be used, as

proposed by Groenwold et al. [134,135]. This criterion states that the optimization is

stopped once a certain confidence level is reached, which is that the best solution found

among all optimizations I will be the global solution f*. This probability or confidence

measure is given in [135] as

(N+t)!(2N+b)!
Prq=-(N + ) (2N + (5.8)
Pr[f=f' q (2N+U)! N+b)!(5.8)
where q is the predetermined confidence level set by the user, usually 0.95. N is the total

number of optimizations performed up to the time of evaluating the stopping criteria, and

= a + b -1, b =b N, with a, b suitable parameters of a Beta distribution f(a, b) .

The number of optimizations among the total Nwhich yield a final value of f is defined

as Nc. Values of parameter a and b were chosen as 1 and 5 respectively, as recommended

by Groenwold et. al.[135].










Numerical Results

Multi-run Approach for Predetermined Number of Optimizations

For the three problems under consideration only a limited improvement in global

convergence probability is achieved by applying the traditional approaches of increasing

the number of fitness evaluations or the population size (Figure 21).


TI 7 ---------- T--- 7---------- I --- 7II I
09 -

0.8 ,,..-

0. - T 1 p......... r
I I ----------

(D 0.5 Ti I ....... 10 particles -
20 particles
S0.4 --- 50 particles
> --- 100 particles
0 03- A- -- --- - -

0.2 -
0.1 r - -

0 0.2 0.4 0.6 0.8 1 1.2 1.4 1.6 1.8 2
Fitness evaluations x 10


Figure 21 Shekel convergence probability for an individual optimization as a function of
fitness evaluations and population size

For the Shekel problem, using larger swarm sizes and/or allowing an increased

number of fitness evaluations yielded higher convergence probabilities only up to a point.

Similar results were obtained for the Griewank and Hartman problem cases. On the other

hand, optimizations with a small number of particles reached moderate global

convergence probabilities at significantly fewer fitness evaluations than did optimizations

with large swarms. This behavior was observed for all the problems in the test set (Figure

19). To exploit this behavior we replace a single optimization with several PSO runs,

each with a limited population and number of iterations. These individual optimizations











utilize the same amount of resources allocated to the original single optimization (in this

case the number of fitness evaluations).

To illustrate the merit of such an approach we optimize the Hartman analytical

problem with and without multiple limited optimizations. We observe that for a single

optimization the probability of convergence is not significantly improved by allowing

more fitness evaluations, or by increasing the population size (Figure 22).





0.8 ----- --..
0.9 8- -' .-- -


0.7 -- - 10 particles
0.7 ...... 10 particles
o
S-' -- 20 particles
S06 50 particles
S/ ---. 100 particles
0.5 -- P with 10 indep. optimizations

S0.4 - :- ---- I -
0 .......... ..... I-.
0.3-
o -r----------
0. -- i.--al------------ r- --------------
0.2

I I I I
0 0.2 0.4 0.6 0.8 1 1.2 1.4 1.6 1.8 2
Fitness evaluations x 10


Figure 22 Theoretical cumulative convergence probability Pc as a function of the
number of optimization runs with constant P, for the Hartman problem.
Multiple independent runs with 10 particles.

We also observe that an optimization with 10 particles quickly attains a probability

of convergence of P, = 0.344 after only 10,000 fitness evaluations. Using a multiple run

strategy with 10 independent optimizations of 10,000 fitness evaluations yields the

theoretical Pc values reported in Table 8 (calculated using Eq. (5.5) with P, = 0.344 and n

= 1,...,10). These values are indicated as circled data points at a sum equivalent number

of fitness evaluations in Figure 22 for comparison with a single extended optimization.