Parallel Computational Mechanics with a Cluster of Workstations

40009 F20101122_AAAGCW johnson_p_Page_024.pro
f5b2a53ff64c33be9b6b7a9064131eb2
0703b83dd5178f986601e2050e538cec047a9c2b
F20101122_AAAGVA johnson_p_Page_007.jp2
c3f426ee834d520ffa2ba2a09464ee9f
f3a8f27205a867c35d915372670fb67eeab0785b
4457 F20101122_AAAIJU johnson_p_Page_125thm.jpg
f52da583eccd1a6474ccd3d5449346c6
b48226e604a9357e021729ec29a1991c75e78482
18989 F20101122_AAAGCX johnson_p_Page_215.QC.jpg
a07e28c6d96e6f358bc63e749658d77d
083dfbb22fdae0e0778f9f1d49ea2bef93285966
911242 F20101122_AAAGVB johnson_p_Page_008.jp2
37d0582b61636cbf93a9cee4ca2d876d
8759aa8d53053b5fd05083967c9a4c651899e755
20579 F20101122_AAAIJV johnson_p_Page_227.QC.jpg
cafbd18b041768362b85a8460bd896e6
027552e7124be2d06dff40324b1d3a8eb487df0e
10780 F20101122_AAAGCY johnson_p_Page_198.QC.jpg
e1c36a14f1c5a1f08b7b3f15880ff787
8b54413f5ffaaeb1770d5ea483b705a7f432b5ff
964485 F20101122_AAAGVC johnson_p_Page_009.jp2
d7b002c38a3c6b6144fdea006e2937a2
fba3137add4760570adef08413a03f02eda99211
5410 F20101122_AAAIJW johnson_p_Page_162thm.jpg
1659b1e2a7e052fb82a01ee688e01e75
d0ef44cb1fe5b727e680430f8fae27f3fad797a2
22698 F20101122_AAAGCZ johnson_p_Page_083.pro
e2ddaaff2ccee15c4fae7ff79292cf0b
8e2ebb43e5c111e96b48bd110b96be48d7689734
900554 F20101122_AAAGVD johnson_p_Page_010.jp2
45acee62b7d86f2ef526f7185856fb69
9a0e6cee0a07cd32cb3f864d7c96f6b28221f48b
26558 F20101122_AAAIJX johnson_p_Page_133.QC.jpg
83ff3aa59bcd0b608b3a32e05693eefa
df737062c71c2bf0051c152aee53802f70bc1fd4
F20101122_AAAGVE johnson_p_Page_011.jp2
f7cfbafa74fe3365ec0cee22dd7c536b
be7e83197951085a13061d8210907bbfb7fceb0b
8313 F20101122_AAAIJY johnson_p_Page_016thm.jpg
9181fb1fecdceec808bafb617077cf99
301584c94a273d2dea53bb47e1a9486ce3988aa2
923184 F20101122_AAAGVF johnson_p_Page_012.jp2
32a6fc2cb78512c483750c4e870e4c14
05d7391e20d9770ff7f914fb4b85d5ae6fce3161
4238 F20101122_AAAIJZ johnson_p_Page_107thm.jpg
3cf610fb0fa75879522a160ee48694a6
16636aee7129b651d337fee7ce63a31b48425ba4
767368 F20101122_AAAGVG johnson_p_Page_013.jp2
09da0e6ea1fee66a166eba589d1c0a90
1ce45b983c0205f0b7b3e786482d31dfa03647c1
839446 F20101122_AAAGVH johnson_p_Page_014.jp2
77bf00d449c51539bb9f5b5cbbf307cc
8fb194956b03f559ba12a4a3b33119022a8e04f9
2063 F20101122_AAAHSA johnson_p_Page_054.txt
16cff801021d9c1fe9edfb88146e1dca
1578bc8aba394e59f5c91d6281f1f6a90c6e17cc
1051974 F20101122_AAAGVI johnson_p_Page_016.jp2
753ac03821c8cd7314c162d363c16ac6
6a820197aa04f06b53639511d9602593fe04b97c
1745 F20101122_AAAHSB johnson_p_Page_055.txt
a859482b36a0b68c767a95cdadbd56ed
c44a0d56eb369bc9bb29e0babb0e23b72619ff64
1051945 F20101122_AAAGVJ johnson_p_Page_017.jp2
0ea715fac63da44c45bef6647ee05e0c
435cc44e4b683387bc80cb1d50350d25652bd20b
2176 F20101122_AAAHSC johnson_p_Page_056.txt
ca4fe2c4fc64a5b0d13b8223664246ac
663a4a14b54eaddc386b25f42f894bf3bb624190
339838 F20101122_AAAGVK johnson_p_Page_018.jp2
9f2368967f2e88243502909584d5fc99
afe3f21f17c978842862819f8ab758e68c451fc3
1185 F20101122_AAAHSD johnson_p_Page_057.txt
d57f1d28cf4950e1397ea264aba7f780
d76ef296346b39f9f55a0be65d7983811f0c50b9
952630 F20101122_AAAGVL johnson_p_Page_019.jp2
860b9623464fe22d0a9b6ad8201232ad
0679228c92230b8259565b2cb284bcd994c54695
1928 F20101122_AAAHSE johnson_p_Page_058.txt
553b0d4bfd79150969e0e99067bcf44c
94a8eb67d912763754d5831e012b93b623c44f53
7780 F20101122_AAAGIA johnson_p_Page_231thm.jpg
cfd45a4a0bdb55e50f287eb6155e0683
0a9bb3853017394573ef16dffa37d8b78c7ee08a
F20101122_AAAGVM johnson_p_Page_022.jp2
182f874a5d2ca6ec20b6c958f2c68e94
4fa095f3c687310e02e0dfd62127dc860aa82058
1323 F20101122_AAAHSF johnson_p_Page_059.txt
3ff7fe1ea9e71f9e40dbd4f4d0dd4971
c1b6b6f015fc1819151a3c0d02b09886bd87600d
3709 F20101122_AAAGIB johnson_p_Page_105thm.jpg
e2ff29a232e59348c5d4d8aa4371704c
0ea7b04bbe62bfa5f65ce9df1b165b0ef11ca75a
1009132 F20101122_AAAGVN johnson_p_Page_023.jp2
fcaf3ad213136ee76f6b9fe6f1b6f864
4278b44feedaf994893773be54a7986985cba3f0
493 F20101122_AAAHSG johnson_p_Page_060.txt
a1725555a5c41262e74fcec8d054436f
f5aeec1a552fd4a0576412ec85606f2b76de1e37
9982 F20101122_AAAGIC johnson_p_Page_194.txt
678266c3ce6e88747a0d1c9e04afa44f
10e4566ae215565f756bf3860f4323a46aef8509
895970 F20101122_AAAGVO johnson_p_Page_024.jp2
accf67ef8f61c91bd810d02d4e62c729
a87548b36c9e1fa3925147eb3725f589da356959
923 F20101122_AAAHSH johnson_p_Page_061.txt
667cafccea0ecb4be70e6661f204cd88
77fa95a107c8ee5f93dbf7e216a239136ee032f0
1244 F20101122_AAAGID johnson_p_Page_111.txt
ee22d97b075006adb1400b47edc8704c
79adf94c2a0340836e11b01a8b5e2a6acb5576a8
986065 F20101122_AAAGVP johnson_p_Page_025.jp2
92a7301560bff7999d54fbed9658ec34
09193e86b37be838c4eeda7d7ac5fba6ccbc9ae8
495 F20101122_AAAHSI johnson_p_Page_062.txt
55996f69fff0405437e40954ed0e5d24
8edd79c98ad9f8846ad0bcc6335d129ba3ee77c4
6724 F20101122_AAAGIE johnson_p_Page_024thm.jpg
7b194415d18db3f523793a73c62429be
45629247f0d0eec8f33bd7975e6bfb4d7dd152b5
1012126 F20101122_AAAGVQ johnson_p_Page_026.jp2
f39ba567a768b84ede7c0379f7d1ee20
b16d119801c53c8540a2c16a2b5544886bcb2be6
2120 F20101122_AAAHSJ johnson_p_Page_063.txt
17123088e556025dbb4c828115797f52
3972cd405be6c33dcba139056d2773e6224faead
4106 F20101122_AAAGIF johnson_p_Page_208thm.jpg
f5b1e8e69f719bb295bea2f01201ccd4
b55426e7721149ec1d3c158b6c9c1dd6cf5664be
758522 F20101122_AAAGVR johnson_p_Page_027.jp2
30eff1375604f889aa54c93b18cbe12d
78465de318622b97fc4b7f17513aa9669c3c3674
1528 F20101122_AAAHSK johnson_p_Page_065.txt
568318ed95092f88cfec6769fe576730
ca780b3ae1d02a206f1ddd70723988d8da09e1bd
F20101122_AAAGVS johnson_p_Page_028.jp2
bd5368842b92074fb9bced4d70796c9a
7ae240c1bb12ca7819bd39a4d1cb3998d91addb0
551 F20101122_AAAHSL johnson_p_Page_067.txt
dfdabbe8cfefb17b81a81d83726649f4
84d36dcf9dba2f0586f7b4ad8fd86f461cd5234c
97122 F20101122_AAAGIG johnson_p_Page_058.jpg
3a50751cc32091973e45b3bae775e58f
b096712f24e5c2af20d72bad044ae3401ba6c847
F20101122_AAAHFA johnson_p_Page_105.tif
4fc677c8d67b029c14a49e094eca81a4
0d45671700f8c8ec85467fae61c1ca6cbaafe06c
1148 F20101122_AAAHSM johnson_p_Page_068.txt
03a41a2c3782e74a219949e9ad75b299
f94ccff1e30efcfa135675a9947c66195146ae3e
F20101122_AAAGIH johnson_p_Page_218.tif
9c8b9a1a9bfc4e2272f8d329498b2ff6
e5453ab5a97836867704b8d3dfcd223f8df0d1fe
F20101122_AAAHFB johnson_p_Page_106.tif
41c1e8e4194a67bab4975b3ea56efb61
1c624feab2505a99ba19d3361559a808774bc854
921220 F20101122_AAAGVT johnson_p_Page_030.jp2
d6d43737bf160870690b051b3fc7d469
c72e23ee1999b3ea94e5b83689d01862c15e6839
797 F20101122_AAAHSN johnson_p_Page_070.txt
b3843289d2c27e7fb18d75a8f9f78a2f
e36aa2304c6853b76aa30ee036c10c1ac009d423
16632 F20101122_AAAGII johnson_p_Page_236.pro
45c75956224fdf298fddd69020e90fd2
5a6a512b550f3fe1e0a6f27671dfad20878b7354
927568 F20101122_AAAGVU johnson_p_Page_032.jp2
1d58894010696fa3e45fe548cc803146
680350a71cb8a40cb34bfde49783b51fef1f197a
1275 F20101122_AAAHSO johnson_p_Page_071.txt
dec71ec3ba6e1012269e5c5c733edb65
4f22adab716f014095e8597b6d736b978bd20dfa
7354 F20101122_AAAGIJ johnson_p_Page_032thm.jpg
243fb2437e877f39cb37dd3931a6ebdb
0e56fe93c04a9a3628e19d585768a5b313b25176
F20101122_AAAHFC johnson_p_Page_107.tif
ffc7d79f2380ccbfac0878fd10b9a019
d568f041a825a52cee3c4277721d00bb3ba4c4b8
742674 F20101122_AAAGVV johnson_p_Page_033.jp2
ce5ed01e15b124e8e6bba8fcae901037
f37b909c24cb6ba7fc7db4cc01a1d6a57e2abd78
37624 F20101122_AAAGIK johnson_p_Page_022.QC.jpg
3848471eedc9630e09d3bbdf0426dadf
3853490ba273bd2c34a32b61567f8c19306df318
F20101122_AAAHFD johnson_p_Page_108.tif
e78569bd803f450a90bf2a9fcd4a4387
ff02bcddef139fcfc9de36c7f6c813244305127a
F20101122_AAAGVW johnson_p_Page_036.jp2
552e3fe23ce500bb9a9fa5e91a438787
e157c4cbd14c21fb06ad922f3f84ce64ee7cbe35
1451 F20101122_AAAHSP johnson_p_Page_072.txt
63e63b41b5f600555697b661e9ea50dc
c1677f6245311b312d415fd0e25ea3a8422b98c0
1483 F20101122_AAAGIL johnson_p_Page_066.txt
dbfc41f22ec74dd345d42641b3916ed0
cc8b28312b39d3b907305960d83f2f9d3f081fce
F20101122_AAAHFE johnson_p_Page_111.tif
4b4a4f0439e0f5580c686b0e08188b8b
90a19b3c502380829e8881cece0d2e6f5abe7722
929399 F20101122_AAAGVX johnson_p_Page_038.jp2
66339c76927b97bdbb998ebf006df11e
ba67ae302d2ad5bc17ee7190a4122e951ecb0e5e
994 F20101122_AAAHSQ johnson_p_Page_073.txt
2529acc726e0b4989d435e887a498302
571f8cbc07cba60fd86b9f09d65e0b73a85a9318
4260 F20101122_AAAGIM johnson_p_Page_222thm.jpg
4fb5771f87cbf253b9e16c0d880286e0
88721125918213b81538afeceebe3200d3050d5f
F20101122_AAAHFF johnson_p_Page_113.tif
25804f813b2c34d193182d6853434ab2
652ad15489085c0b8899b199c4bba7049392f44f
1051892 F20101122_AAAGVY johnson_p_Page_039.jp2
9d6aec82d6f70d825d710911e407e97b
f2487c869111bf3b99da8e9dbe58873084111026
2301 F20101122_AAAHSR johnson_p_Page_074.txt
2e437bd79b80676b733fbf88ee938ea3
f428b86ebfdc936503380af36b00baf16c80d7ca
2138 F20101122_AAAGIN johnson_p_Page_064.txt
1ec1daf1616bb4ac40672fb7015cc5ba
30983133dacfce089623894f7511828c8755186c
F20101122_AAAHFG johnson_p_Page_114.tif
2c624843bf0ded1bf295d0644c22c5c8
2871d97c376ae0e69970f4d068b6c415e8271b47
1032469 F20101122_AAAGVZ johnson_p_Page_041.jp2
fb588f3b7697fc466ca0ca9b9e396baa
0de6d42c4c7ffdb01af96f2fc0b316e0ac643ef4
28056 F20101122_AAAICA johnson_p_Page_151.QC.jpg
9c8e395dc95a741630f86134cb913202
bb744765c0891be06edc879a9fbae1625b80321f
1525 F20101122_AAAHSS johnson_p_Page_075.txt
e7bd62dc057be57b074a77ac344d12ac
3bb3ce22db6a68ec48527485634890fb18d45823
497 F20101122_AAAGIO johnson_p_Page_127.txt
02c78df9d3258a5130b22083eb42e1bd
1654032266390ebf4fe4b24b8e562939547d4d10
F20101122_AAAHFH johnson_p_Page_115.tif
e0393b0999100441cf2645dc830cf311
8ac4373192b77a394d1187d9b856e9657a1c58ce
6802 F20101122_AAAICB johnson_p_Page_168thm.jpg
39bfbd138efd68d552059d7a824e5e95
57607be48ed91716c05a0309a64c51df3ea58ed5
668 F20101122_AAAHST johnson_p_Page_076.txt
d10dbf22d8725b01172d8b82d6d05d19
a9711f1c95e0c020d36b4bc7872c6d67c8899aae
F20101122_AAAGIP johnson_p_Page_059.tif
a07e2cb06715ff965c957f1ece55f0d0
2f815c808ddd1447aef701efb0661551a3cb3d1c
F20101122_AAAHFI johnson_p_Page_116.tif
b03ef1b114a26af39a809f0e1e595abc
12385cb8e82a8eb49bd4cbdb54497f038b11660a
5284 F20101122_AAAICC johnson_p_Page_060.QC.jpg
572cbed6cb322b47f72ed4f7f173c377
fc99fab5a1967bde7949f96835dd1f9050b13789
1639 F20101122_AAAHSU johnson_p_Page_077.txt
5861b36b8dbbeb829718ea7b9c16ad18
700b5fe0848d101896a5871fb7ad1abd69f42596
528721 F20101122_AAAGIQ johnson_p_Page_061.jp2
484bd0cc3a0d3837933b2d1c7eaf4af1
3e3496bd7565eb1607794849add32577f034f60d
F20101122_AAAHFJ johnson_p_Page_118.tif
5c5f9dfd77715d00415db4b65dd230df
25d5a549ce921bb84cc0c864f7fd090937d90f06
9986 F20101122_AAAICD johnson_p_Page_071.QC.jpg
46d94aa8d8c0967e59f8b078f1886087
1025b5ced3c22687a042149c6481bc03d9f74ca0
1842 F20101122_AAAHSV johnson_p_Page_078.txt
4db87247cae0a50b061f5300786470da
5e001bd1501f033b1793a0c8e999219c496666c3
20681 F20101122_AAAGIR johnson_p_Page_206.pro
af6c940006a19ceca0bb51afbf656f8e
54efff09f4ed31637688475590c6bdbead17a84e
F20101122_AAAHFK johnson_p_Page_119.tif
ecc2e4bb664516c9c4b9f32db408d514
6c6780a1c995edb04254f6c11d07c38a6850bfc4
8887 F20101122_AAAICE johnson_p_Page_017thm.jpg
409cb507cb94904865a3e8045d380c57
1ce6d6279408d6ee764501367b9eb2d6b84406d8
1647 F20101122_AAAHSW johnson_p_Page_079.txt
69f5be51a5b3b5fc0b9985e9bc8137bd
e26bc7b31545ef41927b47191fc91e5fca3d506a
404905 F20101122_AAAGIS johnson_p_Page_128.jp2
a713831fa823605af75db0a86df117aa
349a9b5c7d74b159635d5752055ec80c155b22a6
F20101122_AAAHFL johnson_p_Page_121.tif
b7d7510f7a9a26d3770b1f23d20f8461
9cdaf00f459c7de4e2a5d1e866c2a3639b5ca0a6
1732 F20101122_AAAICF johnson_p_Page_198thm.jpg
15289075f79f38242bceedadba8c22ac
3dfea6770784444a53ece11a98ac0d65c39fdae3
1405 F20101122_AAAHSX johnson_p_Page_080.txt
046508fddd25ae0d0f804b62c8175226
9150ef7d571653c0a8aa0f828b6505cbb1e525be
1025086 F20101122_AAAGIT johnson_p_Page_029.jp2
23a983da10bf6f675ba7ad1e16eb23bf
247723cf2418b0980b7c51196f6d9d4e4b86786d
F20101122_AAAHFM johnson_p_Page_122.tif
7fb3569e677a99081fdbf906d6693e4c
537d0c24f105a72e4a7bb940fc3216de54027953
8364 F20101122_AAAICG johnson_p_Page_056thm.jpg
155003fb37d264b1351bb40873dbfb1b
b4cf833a4cc45aa5ce7ad4211b795b33ce6f382c
1688 F20101122_AAAHSY johnson_p_Page_081.txt
af186604f394d189a2a4ffd578273e3f
931ed8ed4fe7dfad1fd607ec196a066f6508773a
16585 F20101122_AAAGIU johnson_p_Page_108.QC.jpg
0dace3818379908235abcf7283f7b436
67217596974080826df527a17f32cfa327178cfc
F20101122_AAAHFN johnson_p_Page_123.tif
e787d12e662387f89fecba0b23acc41f
9123f76e1d944e7a9717adafc308ecb8def438a0
6989 F20101122_AAAICH johnson_p_Page_144thm.jpg
5406f3809a5ce8965c8e8139fd7a868d
3719a759637a4275d763845ac38b9fdece5df164
1011 F20101122_AAAHSZ johnson_p_Page_083.txt
3b569016704f4f1bbb5cc41e502cd8c5
5d354bf4e3397271178a73c92f46baba676750ed
36633 F20101122_AAAGIV johnson_p_Page_234.QC.jpg
bed2b4dc0795ce167c26ea13fb05e1a7
a1390b7407fd81f9abe33821281b16878b0758b1
F20101122_AAAHFO johnson_p_Page_124.tif
8151cc2b9aeddb071cca31a973a77e0a
902fa7cc8f6ebab9cddff9fbc61cc12bb3d6a3c2
9101 F20101122_AAAICI johnson_p_Page_177.QC.jpg
99733a6cbe19bcbd71dfcf5238d979ab
d2204b9edd87b383b4c787a200d02599f6d53f65
859451 F20101122_AAAGIW johnson_p_Page_031.jp2
4c90d7343d005e8c68bbc8c3c8ce5f26
11104318abf3092f10433f1a912ac6d132efc5c4
F20101122_AAAHFP johnson_p_Page_127.tif
1bcf7d695e601ecfa385804dfcf793e3
4d6679cc1980978bd230098bec775998b15b8aca
19415 F20101122_AAAICJ johnson_p_Page_140.QC.jpg
368bddf94f864323861718c67eb8bea3
e319a9542c7075b777b8836c1f2ec45881df6592
19056 F20101122_AAAGIX johnson_p_Page_073.pro
76d29a899a287460375be04786347be7
52f0725cbe19776450cf8d9c661ece084d155394
F20101122_AAAHFQ johnson_p_Page_128.tif
596d6c4f47210676272bdd5212abeb4e
6dcdc3c1dc99c00095cfbe07bae246c786430569
34156 F20101122_AAAICK johnson_p_Page_047.QC.jpg
4c8ad3d82d631ea7d69c703cbdc8d30f
867f922bf046b15db8d28f86417e48a6cd46f894
830712 F20101122_AAAGIY johnson_p_Page_168.jp2
48be7346b7a86c6f5c818a98b5f6bc32
634d06242543e98d5ca64c87efbd51c2bc1f7d0d
F20101122_AAAHFR johnson_p_Page_129.tif
5cbfd8f4271beef3720e422bf83cbdb9
8676527109b0bd85862e5d949bfb05849639e90b
5000 F20101122_AAAICL johnson_p_Page_085thm.jpg
cb0eb20e05217af77aad75949f3f3653
565363676ef2e0d73d1008b979f559c5b8dffbe4
47973 F20101122_AAAGIZ johnson_p_Page_158.jpg
7e1b9be6f86456e11b17013dc2201f3e
3cd3f3dace3803ccfb292cefc1354267992dd791
F20101122_AAAHFS johnson_p_Page_130.tif
fbdbf844ee6272a599087d3d45060d21
fe8a19f3595c12b70d9d96351170d1d89e8fcea9
7257 F20101122_AAAICM johnson_p_Page_035thm.jpg
84cf9c3f7fe1ff3933a395f5bd96b20f
a6858bfee06723a2a7bb40a56f32756eae16ee4d
F20101122_AAAHFT johnson_p_Page_131.tif
d0add748da9e214012f067b29f48e03f
6604193ad43598499c096198418351b01f084314
7996 F20101122_AAAICN johnson_p_Page_028thm.jpg
f05990f62826189f4689a95865489809
ad02ed0b5dd8bcba03c649cf4a015dca9e820e19
F20101122_AAAHFU johnson_p_Page_133.tif
b3e057d463a09ee47e59ad8c963e57cd
15a553e74524e66dc93959f31f99e59823f6dc3f
21693 F20101122_AAAICO johnson_p_Page_109.QC.jpg
36c7a377e05d0925881b208ba533d938
958d6fd3769ac3df74f8b4e21c8d83ed13406f08
F20101122_AAAHFV johnson_p_Page_135.tif
b5a1522f668ef7caf05bade8071f5cb5
026da42e119db14bccd7a111fe9bc36eae805621
6513 F20101122_AAAICP johnson_p_Page_025thm.jpg
a11c64cc8d59719d8c8de14c1a4d786a
15ce8c499f4e5797f970c03a25cb2e250786f2a6
F20101122_AAAHFW johnson_p_Page_136.tif
96cfa1a0250fb602c6e3d4529f86a5cb
9cde3e7fe828d718b85e2c583db8c7966af70c91
13308 F20101122_AAAHYA johnson_p_Page_115.QC.jpg
bd96e42e95a8283e6494e3756b5ae364
fa619887dde262bfda4b5f5567117625ac430d82
8210 F20101122_AAAICQ johnson_p_Page_042thm.jpg
78aabe2585279aa7ddf551219213c7ed
76d2cbc6effef52619192fc63bb4603747484304
F20101122_AAAHFX johnson_p_Page_137.tif
d75b6cd3df40d532fe70fc74fb3828b3
4a00e9808c4ae0d8b62199fd8a2781d585a4db69
17836 F20101122_AAAHYB johnson_p_Page_224.QC.jpg
f3aaa4889611f05b499ca161663973d0
db144e9a2cbbe1d5fa66dffc5fb7e80bf71edfba
6431 F20101122_AAAICR johnson_p_Page_006thm.jpg
4bd5e69a2909073dc89940ca673d837f
3c243d6e631d4684306f0c4de1e3d494c1563f91
F20101122_AAAHFY johnson_p_Page_139.tif
ceaaafdae27322b218bad631eeee3881
2d174b08604935aec78289b468b770f5cfd17aef
4410 F20101122_AAAHYC johnson_p_Page_224thm.jpg
ccfc191ca7cc51c7cc26c0dfa9f64a4b
7778abd2a35c9b60badc6091964cbb9db16760b3
13132 F20101122_AAAICS johnson_p_Page_097.QC.jpg
ed4acd38302e1c7706637c9dc499434d
ab2c07fbbf2e9148daa7c0c3e350025a28fc2458
F20101122_AAAHFZ johnson_p_Page_140.tif
4d9ecd0f73321be09611b3e7824a80e7
00a464a8ac7c073276003b36cf3d72790ab03212
7617 F20101122_AAAHYD johnson_p_Page_141thm.jpg
bc1545a175315987b7f708f76b208a53
0fe28375a743df4d1f5a84a83a47c971077e742c
2447 F20101122_AAAICT johnson_p_Page_076thm.jpg
eb5361594d819f2f53c8040ab2702986
278df5e8fbb9b7c8572ed0f20fc6a0053dfae672
6178 F20101122_AAAHYE johnson_p_Page_188thm.jpg
f137d7213bf5adecb25066fbe918b470
04b1001ee685b015e16d77d49758f293fab0ce20
4497 F20101122_AAAICU johnson_p_Page_114thm.jpg
4a46e6cab901f20a85df9c87c6987e59
fa6fd9953e6c883b9d7be3246fa1a795fd651208
124395 F20101122_AAAGOA johnson_p_Page_017.jpg
b430e1bc696b8f34ac15b9646da2751a
8a8008c37d7ccc5abdf99e1f8ddcde81f4dac986
16854 F20101122_AAAHYF johnson_p_Page_083.QC.jpg
2a00c6e17a39f2513e9a43dba92db5e7
adfc2777b957aa389d198f216cf51cc690094ea6
21760 F20101122_AAAICV johnson_p_Page_072.QC.jpg
4078b5d568d7076dd8221368c1eeccda
31196b0e9c7dce4d69c3ea4cebaf41572d562272
41019 F20101122_AAAGOB johnson_p_Page_018.jpg
7fb4e7502f1753e5fefb942c7cb88cbb
36ad7dd0d6cc2e5d8c1a2e2e955cbc1dab5cf07f
17483 F20101122_AAAHYG johnson_p_Page_163.QC.jpg
28ad7f6c285cfcb65bb726e73d7f902e
9777945ac026338dc3055a96b05a8efdaa1a592a
5913 F20101122_AAAICW johnson_p_Page_106thm.jpg
b27bf01f790bca8f5ae3ceb839ab67f6
3a900189a22331bd3127ea99f2d498ec4e48139d
93161 F20101122_AAAGOC johnson_p_Page_019.jpg
dbbebc23955da74855eafab1120f39e3
1c4a16f28b3507adb17e1ead532d856584c9bafd
5665 F20101122_AAAHYH johnson_p_Page_089thm.jpg
f4d599f2d4650a51b4eaa3483bb56673
51d32db2627e508b2aad1cf5571aea1bcc0eaed0
14535 F20101122_AAAICX johnson_p_Page_209.QC.jpg
7488d36574ee79d4fa80ee9694fe4eb8
0a36f25149d3cd255b64a17689492207fb926d0a
65133 F20101122_AAAGOD johnson_p_Page_021.jpg
01e7b37db60fefa56ebde5294dcea274
55c421c8e6f0a016a3f07d06c5c7a819df914fc1
23462 F20101122_AAAHYI johnson_p_Page_220.QC.jpg
374320c2c999712914b25abbc6dc4805
c73fc43a80b0d79be54d51b716d443b43b4501ba
4293 F20101122_AAAICY johnson_p_Page_200thm.jpg
19964a8b4d2d560da1b69cb255a01888
3fa279aaed7ccaf709b4f53055a1acd43fa58c3e
123389 F20101122_AAAGOE johnson_p_Page_022.jpg
7b3e56769c1bb4d84a5e887172bef7f3
5312e02fec5852019e4f60cbae86fe62f5e0562f
9951 F20101122_AAAHYJ johnson_p_Page_095.QC.jpg
bda03f698a8e0fd0ff4e9efe2a87d1da
a43c0b88e8d049eade328582fddba1385115de04
4670 F20101122_AAAICZ johnson_p_Page_108thm.jpg
4b0d4e15b911f07d219021f06f688c16
58a492e19ef369136673598fd0ea6900cb55ecce
91798 F20101122_AAAGOF johnson_p_Page_023.jpg
a93b93fe1559aa1ba7500843cc4b2d13
a5cf1fb1b8ecf32cc61f9b3f66b3f5bdbbab1580
29401 F20101122_AAAHYK johnson_p_Page_046.QC.jpg
fd4c73bf423b97aa1b1dc3e7c3db2d8b
4037c23abcf36b70061b1d375b9bf8b4a5ac7ebb
84430 F20101122_AAAGOG johnson_p_Page_024.jpg
6b1fef4a344429368439fcb1ccc0a470
443e7cffd842b4176508577246df0a9914a7d6ee
12588 F20101122_AAAHYL johnson_p_Page_184.QC.jpg
50f2061321f15456b7d267f54660dbf6
5fd6e785c5508171b162eca6ca39ce8b066705ed
86642 F20101122_AAAGOH johnson_p_Page_025.jpg
781040cd2c5e9c92f6a09b519fd60b45
56793d3f9dcc754981baccd22ee16759c39c5f25
22454 F20101122_AAAHLA johnson_p_Page_061.pro
f07e32e4238cf57b5f7184eb08e8a0c8
b6f10be1022fbb7cf8eaaa07e8541b8c5f1f4a11
20738 F20101122_AAAHYM johnson_p_Page_131.QC.jpg
5d57a37a9716211563bacba6cdb50069
dd2533572b7b7bbffc8b899a8a1454c1ccc65b46
94740 F20101122_AAAGOI johnson_p_Page_026.jpg
de01012275c3cf109dee90ddcaec4eb2
72a06808836388f8d52a7c2061cc86ab98f2e35e
10727 F20101122_AAAHLB johnson_p_Page_062.pro
9c8ca46d1c7acd9742905d34343c51ae
b5bb372b21eae41d20f4e3bf19408d19f38edb60
10408 F20101122_AAAHYN johnson_p_Page_194.QC.jpg
18778275f1643afef65bca0a2dc1a1c7
0830a89984c6b7d52b1f6875ce2475c5f4faea14
72655 F20101122_AAAGOJ johnson_p_Page_027.jpg
8193c591d253807c7aebbd96bb17bc7d
f48573c5d6013ca9e0519875fb24d89ad38cb37a
52833 F20101122_AAAHLC johnson_p_Page_063.pro
5ee4215ed479d647187e79731825ccba
bcd7723f72286e1bfe281cd4e9b8595b6b795da1
22349 F20101122_AAAHYO johnson_p_Page_066.QC.jpg
5942b6ae88e7ade8b1caa32695c154ae
e27d0034777bfc5b37cbf433619c025013efb869
101539 F20101122_AAAGOK johnson_p_Page_028.jpg
093d611069a2e9bc3bd16f92f454b009
278f7c5680543b1b1bb9beaf26947a3808028981
52171 F20101122_AAAHLD johnson_p_Page_064.pro
747b9879aac88f7f32c6cba02c4126e3
3f08720932a16bba814a752641758b7cc291e208
19472 F20101122_AAAHYP johnson_p_Page_126.QC.jpg
d4ca7f4caaaa2ab793ab18c496563297
e347c8c03c4e4555e59e76474ffaec63ffbe4877
102665 F20101122_AAAGOL johnson_p_Page_029.jpg
dfaa578b6748a8e9e2570c64e7a45809
7ae7c93cedfb95f970e2b250621030aefca96ec7
32295 F20101122_AAAHLE johnson_p_Page_066.pro
54901d73793d224414b2196ad7e18c38
2fc669d6d9b5ce91d2a67dbdc15f1a1dfccd3d93
5379 F20101122_AAAHYQ johnson_p_Page_191thm.jpg
be85351c775f57109a0c72e7b1b7509d
5be36b8c53e9cf02ef2e65ee898a32660cb317c8
22356 F20101122_AAAHLF johnson_p_Page_068.pro
b55bf59b74db1dd3e2363e176da52d67
2d2c2f55c5a14df70b4af3adb64f0c9971d83a8d
F20101122_AAAGBA johnson_p_Page_181.tif
06f87aaa4d8fe6b472e764bc6c924b93
9ae5fe0ccf9020805a0e0726a30d7a730f36fe01
5756 F20101122_AAAHYR johnson_p_Page_213thm.jpg
d03b54e1bb12385921f91e88774c41b3
381babe15851230af5d355aa0eeb937b4679e9b6
77762 F20101122_AAAGOM johnson_p_Page_031.jpg
6a2e06d65f7deaa1a8330eaf3fa64de5
11a3f0ac4d1eeded253fdecdb1d0df99d497b55b
34979 F20101122_AAAHLG johnson_p_Page_069.pro
7054410f2f4f15ee7f4305344a26c2b4
844a26292f3686ff401781228624cdb96080332e
F20101122_AAAGBB johnson_p_Page_132.tif
0eca71cc5dc631c19059d00b69e605fb
3de608c4cf4465bd49daac66282ba81562cd3c32
16467 F20101122_AAAHYS johnson_p_Page_219.QC.jpg
7298fef4f6442be69949c147f549520d
c4c4e27558870796e9eac00850a717eda83681d1
91749 F20101122_AAAGON johnson_p_Page_032.jpg
cf37986f911b6cb94a27ff1f75dd5dae
776b4eef6dc12d897ff3df19cba0929562aa809e
17931 F20101122_AAAHLH johnson_p_Page_070.pro
78bf4bd452481d37e84e95d2e2d59ade
1c6309b8dd29da7f31b8e8a7eaefae44e4635b09
27814 F20101122_AAAIIA johnson_p_Page_155.QC.jpg
22e7c759564960ea41a49ca27d7dd371
81e9f8b021febab474021bbdc570852acd322b33
1035536 F20101122_AAAGBC johnson_p_Page_153.jp2
4202b246cf4c3ff0b6a1599f3fe84276
4a660bd2227270338401cf93569e4cf8e02fb9b0
13399 F20101122_AAAHYT johnson_p_Page_107.QC.jpg
ae5f0d7e65afb66c8222611cb086c383
b854fef7f33727d9bd24e532a029b48264f18d8d
70440 F20101122_AAAGOO johnson_p_Page_033.jpg
f4cd861663562fb72adc8c843a7b33f6
2818e825ba2073230374efc55b82bfeb1867567c
19768 F20101122_AAAIIB johnson_p_Page_110.QC.jpg
f7a9edfd960a9deb3d20651942990e96
6ab5b49bb9c7ee992d43332e0e7a05b8c4f36268
F20101122_AAAGBD johnson_p_Page_090.tif
d8c2d84f4b3223555eef42046ca7407b
7dcb26be8e64d2ee00e3ffa989bec1183ce21f22
F20101122_AAAHYU johnson_p_Page_194thm.jpg
bf45cde6410d5fdd2aff77edc838252f
01e9a8af94e67fc09814bb43f57634fc9235da96
18683 F20101122_AAAGOP johnson_p_Page_034.jpg
e52171037d5f48996740d0bb615ce4a2
3caf109415041502da34ab2639a0be7ba54e36ff
23931 F20101122_AAAHLI johnson_p_Page_071.pro
ad11dfc904ee9243ac2c9a166933199b
c0b14b420b046e29b05fb85e132aac69bbf2591c
27230 F20101122_AAAIIC johnson_p_Page_081.QC.jpg
83a417d3d3159f097f537a5d8045f23a
6e56f6490e88a579050b49ceb8d2f0785347e4cf
69535 F20101122_AAAGBE johnson_p_Page_191.pro
d8bec591d6336e0d1360027e7b8220a7
fe94ffba0c32431a215f90dcb4c8a312e7df575c
95315 F20101122_AAAGOQ johnson_p_Page_035.jpg
94d07539d430041a9b42f3a6cfa314bd
e2a0da54ec7ca0b08a6683debc3ba50f5d3b78f6
58450 F20101122_AAAHLJ johnson_p_Page_074.pro
04fa9c690c6578911a4abc1bcc1d834b
75efacfbe540eeb4bd15eae59c3b244b4c8f5ec1
9730 F20101122_AAAIID johnson_p_Page_043.QC.jpg
09541f0f9614d131738c70e308464bba
451f64b28abef4ca7d6cbfa9b8f4283a6f75aae2
4430 F20101122_AAAGBF johnson_p_Page_092thm.jpg
1ba3f4bbfd949a97bc2538ee0ddd8e40
bc8dade8236f6355f994fdf2eb89926f676c74e7
4218 F20101122_AAAHYV johnson_p_Page_097thm.jpg
4a8a6076931643e6c6ef537ebdbcf794
0f2a650c300896453616293707d296056def456f
100841 F20101122_AAAGOR johnson_p_Page_036.jpg
3bb7d52388ea752943403c412310b3d1
324ae14bcbd3bfd07f72bc440a9ffa4bb695c1e5
34105 F20101122_AAAHLK johnson_p_Page_075.pro
5f47459903c2dea19fa83fa991482995
bd3d022299c6fd7d8eb9d236ab5e51d9730a947a
F20101122_AAAGBG johnson_p_Page_112.tif
51d5d111701722bb809006195f0aab56
6fd02cbecac63934d3c481d9a685183545cdddb6
3844 F20101122_AAAHYW johnson_p_Page_086thm.jpg
a5f09b317e657607b2b235bdcb26858c
957101f5e357cab8f1138a418e237527c4fe9f28
114043 F20101122_AAAGOS johnson_p_Page_037.jpg
d4ee4fff3806149bdcf67e2f2ea8e149
93717799762c02da88ee46caf871ef5f90bd8dbf
14718 F20101122_AAAHLL johnson_p_Page_076.pro
4a8f27c7d555aea2928219effae1d92c
902fa4cc97b6aa9e1eedf4e11b785b78ab0230e2
30078 F20101122_AAAIIE johnson_p_Page_026.QC.jpg
505d225cd4eaef22dd36f6d18b02a8de
2939435bb97b1bff486f1651805c015ddf7f7e64
957165 F20101122_AAAGBH johnson_p_Page_015.jp2
8263729217b1dacebb970b0ef5b07abc
093685c82ca0b8b9a51b0cb8a587a474f1acfa17
25383 F20101122_AAAHYX johnson_p_Page_065.QC.jpg
062ec3be1fc8544b588eafd19577b703
145ea93ad6b28d106d34c7dde42e848b65f8c65b
88427 F20101122_AAAGOT johnson_p_Page_038.jpg
89f357a44feff62790b78c1167b735aa
6cdc2d609b89592ad68cc8b933284f1023a1d748
38378 F20101122_AAAHLM johnson_p_Page_077.pro
b28f96d4bf79b2e6728829b24d578306
0bc8c60739425b96c70b64a524630576b9308b1a
3899 F20101122_AAAIIF johnson_p_Page_087thm.jpg
fc181521505bb1517c787e68656bb2bb
bd5785a582d9dc531b5e636cc26c072b6b350935
F20101122_AAAGBI johnson_p_Page_012.tif
f594e6eacee73d609322e3c2fc9fb345
a3b9cdf736941db0837ca0938effb017a8a198fe
6489 F20101122_AAAHYY johnson_p_Page_069thm.jpg
09a21c53d165a0eae9b05a836bb43ed9
cbf3e681e929c1540e6ba27fd03ba23d97df1140
62629 F20101122_AAAGOU johnson_p_Page_040.jpg
61f2771739e7446aa4cf7bb4d0219b27
1892afbf0b6912de64fa02d7caf58b226fc842a0
40147 F20101122_AAAHLN johnson_p_Page_078.pro
e3c6eddb850396feeb4687f8eddac8ae
0eb8e551ac64df93247001d64ff26d4d14ce215a
24747 F20101122_AAAIIG johnson_p_Page_173.QC.jpg
fa90ab7026c61d20d1ab7895ee36189b
060d88eb4854770295cac44c6aeae28257eac06a
56209 F20101122_AAAGBJ johnson_p_Page_117.jpg
deca39b2f7a17dd9189a7820021b6fd9
7ad0c542112b91dc0415b227ebab74e0bbee2717
6767 F20101122_AAAHYZ johnson_p_Page_181thm.jpg
ecf95cb048ce996b195289913f72093f
1bf5f34321abb4914c0ad6c84a14215c2d60829f
98756 F20101122_AAAGOV johnson_p_Page_041.jpg
1e03139b1ce77d77add4c28b1f8fc85d
9d479b72d55693ad98363fa8f0ed40c50c8ae241
34315 F20101122_AAAHLO johnson_p_Page_080.pro
1d890bc39534a21f1d44aea347179400
852613041aea7f1293b9ec5c8b19a72204908f04
8394 F20101122_AAAIIH johnson_p_Page_047thm.jpg
97f85f6610700b8d0aac9b62bd883de9
9895aaeea7d32ad220c8a52c3805e9b29f157091
74536 F20101122_AAAGBK johnson_p_Page_014.jpg
ff687427fa518255c27e7797823ae758
bddbd7d49c24185c9107dbb24d0b9c073c6aeea4
108578 F20101122_AAAGOW johnson_p_Page_042.jpg
44aad53b83e7c9df854394984fc123b2
4858357ecadba42a3eb938c78bbcce715a194b87
41024 F20101122_AAAHLP johnson_p_Page_081.pro
c2f10c0de8ff8d4bd9928ace858dddb0
0a9dcf910caaf572daffddd8f16f3936413b6ea8
22919 F20101122_AAAIII johnson_p_Page_010.QC.jpg
6c635f826fe47d45f8cf09b22393f230
6c536b16fae1de632666d72982b9af80de55ab29
F20101122_AAAGBL johnson_p_Page_160.tif
1250383c0d348925968a1c50b5b71bce
4e2c124b6cee1895ed75bb24fbb2a8754c0d8bfd
80705 F20101122_AAAGOX johnson_p_Page_044.jpg
ab28aa045535678e53fec7dc9008c16b
f810fb94913c266a3500ec1935e860b8660a95c0
40777 F20101122_AAAHLQ johnson_p_Page_082.pro
1ea5982d70fccdc3b62b4994f77f0e14
397b5e31909f8df905df5e0833c744a5457dc8e6
7390 F20101122_AAAIIJ johnson_p_Page_023thm.jpg
c88ee2c07f4042b3304e9680ef24a93d
93b71e64a613f524951ac79dda818aa1f103e12e
29320 F20101122_AAAGBM johnson_p_Page_183.pro
4b64e6abe7cc198a6ae7c21b7b70f8d1
9d7e18d364ee110524a645352e38632a0aae74f3
65294 F20101122_AAAGOY johnson_p_Page_045.jpg
7720bf685a5ce0cdf061c63e281d4654
ee2ad9dfd36b84ef8c52efd7366d25c3c7ce4f9d
11544 F20101122_AAAHLR johnson_p_Page_084.pro
f67c075ff8640ce32263b844d8874882
8012ef27dfc1282e68c49bd0f7f437ef65e7666f
26461 F20101122_AAAIIK johnson_p_Page_025.QC.jpg
e450fa4df350a14d4f0648f9ec11902c
91b87c0f20bec3fa8d113af5e0f9ef2be520f0fe
40712 F20101122_AAAGBN johnson_p_Page_086.jpg
f0aba23043e5c362ff072dbdeb751afd
5e13755b2db2235ca2a517bd8c9c8a7cc72ba7cd
105904 F20101122_AAAGOZ johnson_p_Page_047.jpg
66da58c7aa3148b40045821e6bac7ac6
674d6deb6ad741e6788c5ccb5d111740b571f8a9
19230 F20101122_AAAHLS johnson_p_Page_087.pro
66583dc8413fb741470185d63079ab68
dcae59d3d14410101cb39fb86c849f12a69c6e36
8611 F20101122_AAAIIL johnson_p_Page_232thm.jpg
bf1c0dec0307775bddef0096b73801d9
f5e27cb7126b548f1c2b1296a2cfc9f72a7e03ca
F20101122_AAAGBO johnson_p_Page_148.tif
051c374e28b211e1d334040262bba95d
82719a054f69a6f2026591f7267eeaea55f7c041
21657 F20101122_AAAHLT johnson_p_Page_088.pro
c0d479de94f766f47ee9dd8f618bb63f
c81b67f24564af70a63ee9b6e6fbee29e14dd043
17673 F20101122_AAAIIM johnson_p_Page_112.QC.jpg
6b5fc699bb518d8cb37f32de3808b8ea
59f960e1975a33b845a7911420723fd299d5d505
418859 F20101122_AAAGBP johnson_p_Page_105.jp2
c79b025b1838e68b769f2ef6cd819c04
0c94c27d567e3ea2df493b70742048713ba70ec2
35383 F20101122_AAAHLU johnson_p_Page_089.pro
1240fa8d09bbc39bcaef068fbaf07e24
dadef0b5173d8dc1c17c61dcef7a16151189f097
7530 F20101122_AAAIIN johnson_p_Page_046thm.jpg
ea0d7ddb72e9d4fed0637706543f1477
39b59a01db72187b2ade2ad0b346b16821485655
1778 F20101122_AAAGBQ johnson_p_Page_207.txt
27d9a9b07179932f01319d128675bd41
6b3dcdb34590700737d13a2cf936b890c3cb5f98
38534 F20101122_AAAHLV johnson_p_Page_090.pro
624d1956d7ac4024b3d81427a5b88e0b
fb57a02d7f6e46de45e5f2acdf6b7e3ea151d657
18918 F20101122_AAAIIO johnson_p_Page_229.QC.jpg
a3be68b43e25db4e997a4ed384269948
16c77d667baac42278537110888dacd3dfbce5b6
2024 F20101122_AAAGBR johnson_p_Page_233.txt
2c5cc37a13a0b24610d20909fc08ae24
b654629606ce304fe28109bc9669b4d3f68935a5
22255 F20101122_AAAHLW johnson_p_Page_092.pro
d44ac81651c06f0013865a4e2fd2c89b
385f0bff3a4114e4d54433fe3dff6f57cad18a37
5797 F20101122_AAAIIP johnson_p_Page_040thm.jpg
7da051c9ce61e8fa61558b590522d0f3
cd031c256c503c835cf7ba086d84b00e07c80f80
7476 F20101122_AAAGBS johnson_p_Page_038thm.jpg
104f9b377ae08ebccc6137693b1a5fc6
06f4179ca98e67401659d85062b1b80046ade0e0
18586 F20101122_AAAHLX johnson_p_Page_093.pro
84b901406e5d45983b5cb31f02614366
aca3f04a44ff33dbe9f528e1e4a6ad2df5caca27
38319 F20101122_AAAIIQ johnson_p_Page_182.QC.jpg
760e9f8ad997963f41c13f28e47645d6
1774cba2040000aa5bda2a4573e23aa165f5dd7f
30255 F20101122_AAAGBT johnson_p_Page_072.pro
5a4c9d0a7b75fa294fe4dc0d248d7975
2a3541ab03981769c6bbbb02283603e64615c651
20515 F20101122_AAAHLY johnson_p_Page_094.pro
b5be19ee68269faf34e13948c1ac2261
7d4a412ef6ce91ed83c1d5646ba5712fc29a577d
1606 F20101122_AAAIIR johnson_p_Page_034thm.jpg
78ca3dbcbc2f28f9475dccc8743a8f10
9a976d8c85586d68da7fb171a8f9dacdedc89478
84029 F20101122_AAAGBU johnson_p_Page_082.jpg
5408e77c89ffcfd130192cfdda5564f6
7309a97683296b6585ca932e5684b9706fa3cfec
26707 F20101122_AAAHLZ johnson_p_Page_096.pro
c481949c45de6022dd2fc5f73ddb1403
55844d3f9494f78105854c37b00a650129801ddb
21512 F20101122_AAAIIS johnson_p_Page_221.QC.jpg
953e192550ce5aa2d498074be9ba7fca
227300ba6e002031880020e05b1f2ed43946d3db
669079 F20101122_AAAGBV johnson_p_Page_021.jp2
3fbe2a1591164714f1cfa605eaebe298
b8a479feecc1badec161dce60ce34a687732ecc0
9327 F20101122_AAAIIT johnson_p_Page_067.QC.jpg
fceed9b808403c12d5e00a921a38b58b
96bd0dd3083d35fc781e4e0b9108b8037aee2861
1966 F20101122_AAAGBW johnson_p_Page_049.txt
c6a2c7af87250bb56395a6db85baa0cc
1338de81d2b46925292b9097b3399fa212a0507e
66193 F20101122_AAAGUA johnson_p_Page_214.jpg
0eaae29bae96d640106b74923f905a44
700b24b8e115fe262124b4af85749c0d14ea5fcc
24464 F20101122_AAAIIU johnson_p_Page_044.QC.jpg
1b149c85cc7588a7c38295c923c4b353
356e462d864aa58547f7cd2d21bca2149e94fea5
18095 F20101122_AAAGBX johnson_p_Page_163.pro
8f2b3d1040482d785696c35cfdf8b3e4
33356ed5fa16fd97ae886ac79ea9239a10b248c6
66724 F20101122_AAAGUB johnson_p_Page_215.jpg
4b3bbae96c049511ff5da89cf5a35b73
28b2b4e1b9474c896cbcaf3d56121a2044c0f1a6
7977 F20101122_AAAIIV johnson_p_Page_048thm.jpg
941612d51fca133a28ae9d2ade866f04
c105755bb05678591f9c62c9206a9743dec69f8e
24426 F20101122_AAAGBY johnson_p_Page_180.QC.jpg
f221e24979e832bbb6ebe8802bae31ec
c0b8b22054a6f26a23d9724d970e15d6f371ea9c
88869 F20101122_AAAGUC johnson_p_Page_216.jpg
2a191ae758c559108c293c676b6000c9
4192bdd77d8865925ca394a6d66b5586d1f434ca
19603 F20101122_AAAIIW johnson_p_Page_138.QC.jpg
acb3ef663bb507ed21cfe1c0d88c1387
45ad25f1d29f4a9304aa7ba52220f4dd743ea9fe
22925 F20101122_AAAGBZ johnson_p_Page_085.pro
a3642a8c889cd491a40cb01df23dfaa8
cbeb638e423425ac2b5f570716c2c46f10e27225
68920 F20101122_AAAGUD johnson_p_Page_217.jpg
dc9136ee19c1323f9253c81e0a619afd
f5cd43b5c200ce784321c2ebf6cf3a8f349afcc5
7353 F20101122_AAAIIX johnson_p_Page_154thm.jpg
230666cfe650da04eb82f81d5139cbe2
8a962d8381f5796c84790018e8848d35916fc01a
72229 F20101122_AAAGUE johnson_p_Page_218.jpg
f6d79ed2f8eb0360522ecf18186df0af
4c9f542424c423f37468964a063443084e110b71
13915 F20101122_AAAIIY johnson_p_Page_018.QC.jpg
4ad6494b3493345bd794caeaa61dc807
872604b6f40a47fb384a6786a11d57e781139bcf
85390 F20101122_AAAGUF johnson_p_Page_220.jpg
3c5715a362890464c0c0ebe2c3937288
bb0ec9813fe3a011b0e89ab1758cd920c6505bd2
5160 F20101122_AAAIIZ johnson_p_Page_083thm.jpg
ae17fd7b853543824915f6b9299fb691
80989e4acf9241dc3a9207cdf56fe52fafa747cf
73509 F20101122_AAAGUG johnson_p_Page_221.jpg
133f579df62d5c3a679df165e4158dbd
c756bcd8ed7459f30e95027487f6d0dac707a75c
63913 F20101122_AAAGUH johnson_p_Page_222.jpg
1df01deef701ace3f83cc51b3b3ca6ba
3ecdf8c1d649dad96559669c544abdbb5cafd829
1246 F20101122_AAAHRA johnson_p_Page_020.txt
83fba3b739da5e7cf8e6786748c3aeb1
644d641bfec7296faff51fa29fdd1fe9fdc9041a
69348 F20101122_AAAGUI johnson_p_Page_223.jpg
9cbc4a01f67e108b1a6623658462f67f
ade2fcd24a7feb0fb35956f0e8507bd30a7ee566
928 F20101122_AAAHRB johnson_p_Page_021.txt
8c4d3ab24e2a5ec5d00ad3d37b828077
0122c3e83dd27f51748d663768682e70e6359f54
60521 F20101122_AAAGUJ johnson_p_Page_224.jpg
a47cb4c54a1ccd19bd16e5b7be1b9ab4
790e8eed17a98a3a0a4142bacabf01ca79db5e5e
1890 F20101122_AAAHRC johnson_p_Page_023.txt
3e9f2814020e4539849bd8ae6c2b6e49
773d3f613e52c5c70a452008e5531120cb196190
74085 F20101122_AAAGUK johnson_p_Page_225.jpg
72e561e372da50fa3d74ae37122a3fb5
171c99e3402bf88be001ad75318ff9fe24ca7995
1612 F20101122_AAAHRD johnson_p_Page_024.txt
e7f9930f8de9ed780531e400c21424ae
9271fc4f09dd7d2396368aeb49b53a9cfc384e8c
73679 F20101122_AAAGUL johnson_p_Page_226.jpg
1c5b1f0cc5822c7c970c9ab03ab7bed2
e3579e36a7e80c9f8ce138850142683cc5c0eade
1783 F20101122_AAAHRE johnson_p_Page_026.txt
0dc96f315d71c8fd44a73cfd07578df5
a004e307a48ac6b4f9ab0fa0104048553c765e4a
F20101122_AAAGHA johnson_p_Page_155thm.jpg
0756f993ed588db45bb0cdf6c6ccdc8d
71c4b1c91228d7cecca68c1cb270056a2fb584b1
73526 F20101122_AAAGUM johnson_p_Page_227.jpg
1cad73aa5323d22189b075cc39873927
0a3c7b222c043ab5574b1dad004f066a7554b024
1429 F20101122_AAAHRF johnson_p_Page_027.txt
4019532850728c3381b11864d20d00ef
13e12899cb7dc568a839813f84eb56d38dcbb099
4244 F20101122_AAAGHB johnson_p_Page_003thm.jpg
f78f07bef98f31a04f6dae0dbd11ac5d
f53382e723cd8586b08787c03de10f498334924c
74273 F20101122_AAAGUN johnson_p_Page_228.jpg
2ca8bd6daeb662b36b58cf8a1c2dfb8a
33b19ff92ce6b6b386571e29a6d4b42e927bba5c
1878 F20101122_AAAHRG johnson_p_Page_028.txt
ff3b41a44179ac1c604d80a2262da318
f5a1755d15f19b48023d73235c18e762c40f7345
72551 F20101122_AAAGHC johnson_p_Page_089.jpg
29c8f143d01d1e6814c79dc739422ca3
ff67aceb9417f9488826c18d9ed5740da2244470
9723 F20101122_AAAGUO johnson_p_Page_230.jpg
8cb2e25ada9babac9fe769897e881bfb
3e6d61fd00aa792330fcf1fdd4efcbcbc3804a61
1603 F20101122_AAAHRH johnson_p_Page_030.txt
b2740907dd6e30473ab9b5ff979ad67f
1552bf326be6a1939b631fac4c55e27ee6e96d02
6157 F20101122_AAAGHD johnson_p_Page_137thm.jpg
9234ec7d70754b1d7428abed7caab010
9d69603608248223a71de471a232d3ea5a4d737d
103864 F20101122_AAAGUP johnson_p_Page_231.jpg
54e94e6b2b82e68839f4a37b5234980f
f75db63f8e347a79f56e405faea760fe8e4d87a4
1502 F20101122_AAAHRI johnson_p_Page_031.txt
9b8d79ec056a2dc2d994bd62a2f86da6
b832cc1bc311bff308c4abd28b62edb9cfa1dd0b
F20101122_AAAGHE johnson_p_Page_168.tif
17c0c48077a8456f68305f76f7c2a178
672afa1e8e96f84c862bfe550fc0e07a38a3ddb8
126957 F20101122_AAAGUQ johnson_p_Page_232.jpg
cd8b66d7ac95ef396be6a688da52f185
afe3ea54e02dbd897791c84f1ce20be751f326eb
1698 F20101122_AAAHRJ johnson_p_Page_032.txt
a27a53e8803c1cbab0477db476404492
035655a2a8b17d5d0b92283ac9d1194c78ffa0c9
119409 F20101122_AAAGUR johnson_p_Page_233.jpg
8ae50dd5cbe7e8cbbab37fe39e322f3b
c42931a52b386235af04185cf2dec5eb73406d66
1572 F20101122_AAAHRK johnson_p_Page_033.txt
d753b2a875e056c2b721bf8f5aff541d
ff416a807d67338cd41a4a6cb47bdd2fd65bea4d
830927 F20101122_AAAGHF johnson_p_Page_090.jp2
4b89838cc093bf1512ee0095b84a7819
180965c2e2d7d5835256f5e73598ec2a01442cd3
370 F20101122_AAAHRL johnson_p_Page_034.txt
36020a445b1b9b249362dba5f2f61f75
7612a1100e5ed538d8ce5595b9991a4b428e8347
28317 F20101122_AAAGHG johnson_p_Page_038.QC.jpg
fec9459896ddbc152d4f33a9288a6b51
8ae6115105bf67a2adf0dc14e7248eaa79accfe0
F20101122_AAAHEA johnson_p_Page_067.tif
2973b250b3b721ff33c076b78c68e940
f8d762479080fd506fbf5215e45db29822aeb73f
129182 F20101122_AAAGUS johnson_p_Page_234.jpg
8351faacf66b3b65ca3af81f587d9d9a
f2ed05ebcd2f7a5c17913fe03d4c74bc78949f9c
1913 F20101122_AAAHRM johnson_p_Page_035.txt
c8ec1656eb967a0af1fc836a83854b40
8f422510f135e7f9df435dfc933e46cb91737561
F20101122_AAAGHH johnson_p_Page_099.tif
a9a422e94a041059479a13b72f45b240
9207f87bf4d69ce5c19fb06677f9c1fc569e64ad
14655 F20101122_AAAGUT johnson_p_Page_235.jpg
008df104cdb6be0862cb86ea2db347b0
38ecd973efa1b5696beac290aacbf1728b2aa591
1983 F20101122_AAAHRN johnson_p_Page_036.txt
1887d658017d132db82bb01551bc700a
dee1371b37227c588b1fa44443630085f03d60ea
32678 F20101122_AAAGHI johnson_p_Page_144.pro
842d0e4c924ab02a6f9dce4777554ea3
df405180efe13fbb10e49ccb02d201f9b5aab008
F20101122_AAAHEB johnson_p_Page_068.tif
075b435a92c1c4ba419de5effb5a6bc8
2ee32a582771ac5f61940df204c1d2ed78adc655
38678 F20101122_AAAGUU johnson_p_Page_236.jpg
facfc754a2a90771912c3c188c52b5ab
ad7b4fd46d1744d6ca8d70cc80a42d0665f79e0f
1258 F20101122_AAAGHJ johnson_p_Page_149.txt
9882b2fa932a6e271053b39bca05a567
27e2e5785a6d40488d55afe41d82ed907b233769
F20101122_AAAHEC johnson_p_Page_069.tif
ab0c93c18c6773c13a198e7edda5730a
b7a0983b4756cd3f79f9a5b30840e2ca57feae86
220258 F20101122_AAAGUV johnson_p_Page_001.jp2
48754383066681add9b1bcb8c5304012
c2fd1f5adc9c2d5f82dd95ae1d7c341fb6145992
2103 F20101122_AAAHRO johnson_p_Page_037.txt
b542792e4d48fce742c5b3906178be90
16662aff5bd7059f3ef09a60a77ec5868927248f
48927 F20101122_AAAGHK johnson_p_Page_122.jpg
0108739fffeae9fe4fe2e53c275a423d
198b16893e7de5f511c86ecae998d461c18bd0af
F20101122_AAAHED johnson_p_Page_070.tif
0bfa055ba9a4b9450a0a912c1495ddfb
f4a524f0433230e3ecd130731ac5854b63669296
30323 F20101122_AAAGUW johnson_p_Page_002.jp2
5d137dcea68b96ed133f91a9f0a8dd64
fbb1699bd3cbf34e12e0b303ed458ae3140bd417
1580 F20101122_AAAHRP johnson_p_Page_038.txt
e981fb3e635caed8eb92eef8d6b160cf
a07d0a3d8448066cc9f81fae1b63a53f01ed945e
948663 F20101122_AAAGHL johnson_p_Page_050.jp2
a77913c4c96dadcd2ce53a531dde571b
0db5fc6d2b60c012f2431324e568659b1feabd27
F20101122_AAAHEE johnson_p_Page_071.tif
856dffb5b3f42f392bccb8070bfbba33
f855d3cb3afd802d28f2ad2bcd38c5487920c92f
506113 F20101122_AAAGUX johnson_p_Page_003.jp2
9d4806d0327a288fb89ec945ba7a47f8
48a25fda6370217c4a6592eebe2e5d1e203b14cc
1827 F20101122_AAAHRQ johnson_p_Page_041.txt
2319355b447af5fc9fe8724a35d04e03
5f719bcc6e4fc1b7d3e011ba62b64291b627d997
34170 F20101122_AAAGHM johnson_p_Page_215.pro
72dde3b2743c14cb82ee98d2f8e2c185
30e8a1049a4203ab666aa2198c71b27ccf03d880
F20101122_AAAHEF johnson_p_Page_072.tif
03707c6ebc0a335ad8e1fd3130f8f2f4
d6f74b39b54bfee1b5f3095c653c09ab5174a737
F20101122_AAAGUY johnson_p_Page_005.jp2
8fc760a03206b66fcb34ceeda9713da0
262ee50eda23997c52db64892572c76a574da45f
650 F20101122_AAAHRR johnson_p_Page_043.txt
95c5748c6c78fb07f9e8831ab222988c
5121c447be52b814b17402d2f78dae5868bfd11f
F20101122_AAAGHN johnson_p_Page_094.tif
67770b9302b8f4f50ff470aa93f933d2
a28592d18f4bdd753beb652b733ba944ad26f44c
F20101122_AAAHEG johnson_p_Page_073.tif
6f2561a3c46f054aee0cadf05249a4a8
c315a521348935f70caeac5b0862d4e5bbdc0060
1051950 F20101122_AAAGUZ johnson_p_Page_006.jp2
3cec6784ee5bb03b094347b4ca9628fe
15f401bb6a252599ff849a6b1fc0aa775af0a1da
7334 F20101122_AAAIBA johnson_p_Page_019thm.jpg
c750de0d29af718fe96f4314276348ba
3442b2493cbc039aa964b62ef6f0fbdf011bc5e6
1721 F20101122_AAAHRS johnson_p_Page_044.txt
202cf25aa30e0fa2382140f0bf384fdf
3ecbbb4826775eb1db9c0e48e40c21fc28ae3a61
46784 F20101122_AAAGHO johnson_p_Page_157.pro
b78aa49bedb699eab29cd33dd914c27a
0ae389d5ecce987235f8f9c26908a276c73d268f
F20101122_AAAHEH johnson_p_Page_074.tif
9c96ad5ed6b5fa39031c8b2c3b0966e3
e2443d49e3a80fc9396e9296540b3ff38f4f7d4d
10858 F20101122_AAAIBB johnson_p_Page_197.QC.jpg
7f6157d823ca95907ed819bd4be27026
43396faa072162180f61c11405dedaf287391326
1412 F20101122_AAAHRT johnson_p_Page_045.txt
5c5752c932ba9740c5383a2d883cc37e
9fc5e043534ce5d48b4ce0d35a85159706037dda
66949 F20101122_AAAGHP johnson_p_Page_109.jpg
389e1cd42a0509921770ad5863b065fd
f6ccecf60c0843b122ea100884ce90e374e475ab
F20101122_AAAHEI johnson_p_Page_076.tif
6b75223e8250888cf2f84bfca6f68041
092eef1defb2c2850e5b3d1e7d1efb2bf8980459
7297 F20101122_AAAIBC johnson_p_Page_050thm.jpg
2e2db22252e0d3a2fdd303b1b9f3db98
8941b33a58489169bb05ef4812153e187635cc65
1734 F20101122_AAAHRU johnson_p_Page_046.txt
7b0bf96fa3e16019afc20b675f453438
b0aed2f73a7269453ad14f68e073de9388f401ae
F20101122_AAAGHQ johnson_p_Page_117.tif
9474619459101e7999e24846cb5469a8
9936a11551b0c25acd595b23ea3a96b697ad45c4
F20101122_AAAHEJ johnson_p_Page_077.tif
a5978a7ebb944800867fabe04ec3f324
f43088ab782f52ba87408408bc17ce537f587690
2536 F20101122_AAAIBD johnson_p_Page_071thm.jpg
530fc8dd1da9b76a4f9319c021c4d2c7
b43e94b5e7148043c6da5da408f93dfa61071c8a
1743 F20101122_AAAHRV johnson_p_Page_048.txt
061f7f856001a9b88771c0363f311c0d
72b38960f5a9df80fc847c011a2d003e7d42ab69
7904 F20101122_AAAGHR johnson_p_Page_063thm.jpg
356fd0640edd862d0683c0331b1eee0a
016137166e45faefe9fab1faa269e045aaed69fe
F20101122_AAAHEK johnson_p_Page_078.tif
d5a8f83693f70856d5bf86167a463263
37c05f476b2e94687eb09d27c9906e5bcd5c387c
14682 F20101122_AAAIBE johnson_p_Page_203.QC.jpg
ce91665e1c963a90fc8e8a28827466cd
8b659435d266ee7ff0e5dc89ef16c281f7a49912
1697 F20101122_AAAHRW johnson_p_Page_050.txt
3207d8b78a48e8e6b5d282d8235f4f6a
a634e52d9d503bf539ebe5dceb221d52f28e5417
97980 F20101122_AAAGHS johnson_p_Page_048.jpg
c5dfd8674e09acac0b46e5fe99ed6043
f9c6798cbdd5e47ec1a7f7390d3f3af48e7ea3c3
F20101122_AAAHEL johnson_p_Page_079.tif
c71151c308b5633cd4618b573fa6b6eb
6302bdfdafe352b4df9d60606993ced1732ab147
9374 F20101122_AAAIBF johnson_p_Page_068.QC.jpg
5ef933f6e5e0fa3ac5a49d716d13a33e
91089cc4c204384d6a6550185722e217b198dc88
1768 F20101122_AAAHRX johnson_p_Page_051.txt
80186923b2d8a536adfba38ece9e2635
a3798530db98af5c64028e39b6c6cb72dd814a39
48522 F20101122_AAAGHT johnson_p_Page_209.jpg
85b2068077eee1f934f5fb0a77f27130
aeacf5b893b22f2b59120c3052a603d524ceabb1
F20101122_AAAHEM johnson_p_Page_081.tif
d6d9966227f498f235a6b828c783d2b4
21521583de8fbcddc19d0561677c0780876d4c4d
9690 F20101122_AAAIBG johnson_p_Page_091.QC.jpg
a3328d90d324c94ec420611156e97873
12e87730279ef3761bc4875442b9ac2503301a0d
889 F20101122_AAAHRY johnson_p_Page_052.txt
79728d57f46a3d7cd11d7d1b084b3f5e
2348bfac4e5891a66f8884e24aeaef650269baa8
2501 F20101122_AAAGHU johnson_p_Page_134thm.jpg
878986f768c3a8442a3ba4f2d38d08d7
c060c6fd60f729014cdc67eb888dc33a45261299
F20101122_AAAHEN johnson_p_Page_082.tif
44c7169d2d63a64e59bdcb4a6ff7cd07
1c092e39b9b9181349e19a8bfde9042624f8d0fb
5242 F20101122_AAAIBH johnson_p_Page_189thm.jpg
5972f6d14a3541a10101ec7461ef6816
c1ff1cad831d30c85cbcd6ecd5f136141d404814
1997 F20101122_AAAHRZ johnson_p_Page_053.txt
0b86537242e60ec7948db6a9350d6620
1b954ddd5a1cd51ce3137704d9c3db551a5c97e0
1051949 F20101122_AAAGHV johnson_p_Page_063.jp2
4bece76f1e9426ea94d5b8aa1b98c046
32c405aed6ca264abd2fd668ca12267fa2421300
F20101122_AAAHEO johnson_p_Page_085.tif
707124cf078326613e68f025efffc089
448e4c8131264298a91ec538f1a170efb2ab088b
1727 F20101122_AAAIBI johnson_p_Page_195thm.jpg
f653625a395a2c7ce78a33b753eb4b9d
0257e785a0c02369b73c54470f616c181baefaab
1662 F20101122_AAAGHW johnson_p_Page_069.txt
fe73223d1e3d3b83d2daaaa032454e17
795840e85f77cd875b005c6e86b8c392c52794ae
8423998 F20101122_AAAHEP johnson_p_Page_086.tif
fb2adcc7b22a3529f8397cf72589e051
f34508a2efd076a840072c16871a7ce2d24cfecb
6903 F20101122_AAAIBJ johnson_p_Page_065thm.jpg
668322d8d718b802c54810d3827ecf12
cb8812ed0db10745bea29be11be16a1d3b185f8e
F20101122_AAAGHX johnson_p_Page_126.tif
a2f44da1dc7248345b8c619feb3a68e1
060cf5d18a1565ced3bde71e19be1274f2279aa7
F20101122_AAAHEQ johnson_p_Page_087.tif
50a01f0a13d96dc16bcbab14627698da
6f1c64d1d723c92104bf190ae87270f7f09660ca
30445 F20101122_AAAIBK johnson_p_Page_058.QC.jpg
8c4cb846205e0a282944d622bab01a64
2c8eedba4b87de9ff4542011d77c53a0575a267d
5926 F20101122_AAAGHY johnson_p_Page_072thm.jpg
6c82b174756ba54e3a750b9cd7c15fe6
7e55737e1b90cefbe57287cb44a9a952d388a866
F20101122_AAAHER johnson_p_Page_088.tif
bf8ce87c66c7ed5177ceaa209cd2b958
1ddc9cd127e4b6645fa5869714d8efff040338c6
7611 F20101122_AAAIBL johnson_p_Page_041thm.jpg
cabbdaf330c880448162d85e4fae505f
cdf0409da01d837a35338a9a68243928ad45ef1d
5126 F20101122_AAAGHZ johnson_p_Page_190thm.jpg
1c1cf5346dd1cae70476cc9a3c511db1
a3315d30f8fc2a574e42cdeea409ac7df3dcc97d
F20101122_AAAHES johnson_p_Page_091.tif
b04a3a1e4b2795f86c5f4a5b6002960d
e4db831a1f0c931a3c8053a55ccf3d84364f7cb5
22324 F20101122_AAAIBM johnson_p_Page_009.QC.jpg
9be79348b11e0822da76782bac5a6ebd
2c0b06be544a340a068ad91c5e06a7896a78016c
F20101122_AAAHET johnson_p_Page_096.tif
c6c26f155d4315a38f9dd14fb6c0043e
533c17a068e9684e62b10c3a7d5bf04580a9de34
31654 F20101122_AAAIBN johnson_p_Page_006.QC.jpg
775de160dea6248d53aa4563abde7dd1
f56bc6cfc5105ac6b56cb7e94d828a989bbb57a3
F20101122_AAAHEU johnson_p_Page_097.tif
ad08016e723af16e49d0c7199bf520ed
9bcc4c0d7429a41afc432f3b5d7f540150655b84
13446 F20101122_AAAIBO johnson_p_Page_103.QC.jpg
576491a31e455cbaeb58a302c3e23fbc
b5911109b86a8ef99073c024a330284e33a2f32a
F20101122_AAAHEV johnson_p_Page_098.tif
1a63c001fc028b0503c3d0e54b243211
ea31048682e35b93e2d5a07a453a8adc4b56e103
17056 F20101122_AAAIBP johnson_p_Page_124.QC.jpg
bffd34f24950e58d47f9708dba420829
68d3f5119873c27de9b99cdffca8aa4d541f33b1
F20101122_AAAHEW johnson_p_Page_100.tif
63cdabb2b1d18155db0a91ced7d197c2
8f13949ab62668821506f9b554ce00425998a737
1779 F20101122_AAAHXA johnson_p_Page_213.txt
33f06731800848e427051fb2426b46f7
b6bc68a7b3b74970a33449e6d06e12bd48c9d4cd
6548 F20101122_AAAIBQ johnson_p_Page_169thm.jpg
084b0a2bf7fc587b6a3f1af789329e0a
be2c2f240dc4eeb701c0eea99d861bb644e9c6fb
F20101122_AAAHEX johnson_p_Page_101.tif
a96ab726308c7d6a5d81adb4feb6f6a5
ba0cf185db29b458907524339ed7f108f0f3789c
1432 F20101122_AAAHXB johnson_p_Page_214.txt
40a633a11bec56350c9cb5e54fc22faa
26335c7330431ad681eb78598ba342754a8d35de
6256 F20101122_AAAIBR johnson_p_Page_031thm.jpg
b9f6c3f0236cbb032eaeabc4446c74e7
468a194256d402a85c5c39190d35f3fb871b4f63
F20101122_AAAHEY johnson_p_Page_103.tif
abf676819f8e8a9bd3d0fcf663bc4bf8
490165f096c5e9740d3999ab707f7108eb931c43
1627 F20101122_AAAHXC johnson_p_Page_215.txt
44188616e08cba11b23f2b6e039a9234
f4693eafa89ada28ffed171a02967f9f446bd682
822 F20101122_AAAIBS johnson_p_Page_230thm.jpg
01e6192256a40347468e2ee9b57e459a
3d36fec03a4844d255cd8710daac3316863cc879
F20101122_AAAHEZ johnson_p_Page_104.tif
4b48154c2f5d40189e9b4e4a2519a9ba
61a65e8ce6266b69638f3b22569caedc7ae81b6c
2164 F20101122_AAAHXD johnson_p_Page_216.txt
d1fe85be9cc03643be6d2878a7a01802
ac6c905d47b0af5f46b4bb6ba24f9186a87b923e
35800 F20101122_AAAIBT johnson_p_Page_232.QC.jpg
2a63e2a54ccd575f9b0568fcb19eaf84
13392f9877525f9b7d8e2e97673ce539e3872556
1605 F20101122_AAAHXE johnson_p_Page_217.txt
b9c8c58a2f6aff4f85eee6a505491102
d23a12c17dd66aad5ee6036b18f6986b09133c9a
8762 F20101122_AAAIBU johnson_p_Page_022thm.jpg
62646bd453af753fa1a7316046de7251
981f6e9dcb958c2e9405b448226cbd3e7f843975
F20101122_AAAGNA johnson_p_Page_180.jpg
fb62e00f2dbc74caa3f893f9a4f303a5
cd9d5fffaf9175bfd30678a4358701000b8c1b71
1610 F20101122_AAAHXF johnson_p_Page_218.txt
5f4bc0a1eb729b859e70f2f5e9e0af07
4968bd8765829010f80bb418da02900dfd0a4599
F20101122_AAAIBV johnson_p_Page_146thm.jpg
3a89094fb2019a28ab0753437ffdda86
3213de57dad12d5b81a96b2a3f510fe084bfa54f
1893 F20101122_AAAGNB johnson_p_Page_047.txt
dd87d7dafbf78870453023f52dd09c1e
f425d94057f4af6270943045f0d6ed653fc3e9f5
1279 F20101122_AAAHXG johnson_p_Page_219.txt
36ad4302893dd15fe774b1d676443a62
cde5f4e17daab0ba354963973b6cb7c473a5da5f
26855 F20101122_AAAIBW johnson_p_Page_077.QC.jpg
f8e29001902c4bd47cdeafbc61a1db52
da3afc2d60fa34e4268b42c52b38ee813319aba0
35162 F20101122_AAAGNC johnson_p_Page_016.QC.jpg
3326eaaa4e8cfdd285f5b9a67ecf6046
497e75d7a1bfedb73a113294a20c2d140cceeefa
1633 F20101122_AAAHXH johnson_p_Page_221.txt
7bf66ae02ce0988b8c139e746db40c67
f0b2ea7380971dd5d682ae86af9ed55c756632d5
32071 F20101122_AAAIBX johnson_p_Page_042.QC.jpg
fddbe82d187790d04d0e0abb9a3560ed
defbac2f63cbe17fdb40df06c05997327fb7f0fa
6470 F20101122_AAAGND johnson_p_Page_001.QC.jpg
f8236ae0129cbbd0475d5d3fa3980119
de311da99a67af96c338b34973e10376c2fe2306
1493 F20101122_AAAHXI johnson_p_Page_222.txt
aa496845f7f7621e447e4b2abc95f9e1
5a5cf0a3963b1c893cd9c4931babc5e69333f09d
8888 F20101122_AAAIBY johnson_p_Page_182thm.jpg
9c6333387e41ea99b3060ae66d298c17
f639e8f424528e8e37d69359ce5045e035c79611
35285 F20101122_AAAGNE johnson_p_Page_065.pro
3dfa7cdda8acf3ba4759eb2936782cc9
ceb19711291d07df291d8b8fedd93abc193c97a4
1748 F20101122_AAAHXJ johnson_p_Page_225.txt
fc9873ba10ca77ca29114f6ab403c813
3f8a6d2ea5c2764ac634bc3ce629b79d545cddce
23346 F20101122_AAAIBZ johnson_p_Page_190.QC.jpg
c94fc419b4742da7c817a38a5549b6bc
12058412860bae421003a70df9e19d8deb11faea
6766 F20101122_AAAGNF johnson_p_Page_015thm.jpg
4442f2c85ae57ba6b727b7663fe7ad4d
5d106705ed6ba29c765fa0d40bdbf7af8fa5efad
1765 F20101122_AAAHXK johnson_p_Page_226.txt
b92042875351b561ada16faca85ca071
74db8bfb14b1e0b6a1c8920e83d96aa316e098e3
46015 F20101122_AAAGNG johnson_p_Page_141.pro
6cafac50021721a0f72514253cfc7fcb
6f39a72b5d56087cd99e53883a74bd27812acf32
1844 F20101122_AAAHXL johnson_p_Page_227.txt
c8866130bc1626ebee1b52abdca1dbbd
f7fef785f869066cb0d756288e186d1f84ac771f
F20101122_AAAGNH johnson_p_Page_009.tif
897f7f61781760b60b11d3760c23f00a
083eafcb758710914075ce148710caac14694421
47193 F20101122_AAAHKA johnson_p_Page_028.pro
961deb8c520eaa8ba67e987058e1757a
9d3b49db3989e97a5f3e94a6e4d0f10014c3f17e
1792 F20101122_AAAHXM johnson_p_Page_228.txt
f45205ee3222d7364a67c8be8bd06663
dcccddd412260ca25f899d9d9ccf4bf80ffff66b
5015 F20101122_AAAGNI johnson_p_Page_052thm.jpg
0c37e08f6f181843ac5737f5890b14fe
6c9c7f68f61f59d20f2a873a899f87629e1bce50
43012 F20101122_AAAHKB johnson_p_Page_029.pro
e6d16a0e9bfe10f490e21d9331967b92
c1f3ef177f5233ef1b1dfff04977f57316ad273c
1471 F20101122_AAAHXN johnson_p_Page_229.txt
b4e233401b1f816d32eeff3bfc439a36
4cd4297e2a483798cf528d3d55f4e61034814516
7154 F20101122_AAAGNJ johnson_p_Page_133thm.jpg
fdffb1851e65aad5a3ea9efa9dca9860
056ae4f2ccec72c486c8917ff62b7cf3edb27dd6
40057 F20101122_AAAHKC johnson_p_Page_030.pro
b301655f94a7adbf181cfedfaf81e7df
ccded7e55bae6718d24e39e53dffab07ab4e5fd8
232 F20101122_AAAHXO johnson_p_Page_230.txt
e79e10006a4e81fc53c59ff3c2c3a5d8
f9e3d6df0c84bc688dc20cf5fb5e83a9b41d6ede
F20101122_AAAGNK johnson_p_Page_102.tif
85e84ed50fc02ad6b94d2b07038925eb
16cd3da96bf5ebe5fddad4dae92c045e05946a3d
37385 F20101122_AAAHKD johnson_p_Page_031.pro
6f10d3b6b53d0b20b309d9b86afea6e1
31a470c0cff22db958bc17eb2a6324c8f76f5ee7
1760 F20101122_AAAHXP johnson_p_Page_231.txt
d9445b412421ef4749a8cfbbfbc8aca6
459946c7680f79301bd5d7eac05ac39d241150b2
34436 F20101122_AAAHKE johnson_p_Page_033.pro
5ee328ba32e480011cf53e84af34ef09
42ce8591becd07e5c100c38eeb4aadcf534a1de6
2257 F20101122_AAAHXQ johnson_p_Page_232.txt
3149c78d047a781d1f4b7a4b02dfdd0a
b7f36d34b9c44dabd1e1e0378e100d31c1f022d7
271928 F20101122_AAAGNL UFE0012121_00001.mets
78090a201dda9f793e6594a06bb7d702
8ad248ca1adc0691338dd2095f8282087df527a5
7118 F20101122_AAAHKF johnson_p_Page_034.pro
e5f333f49a0b210b6e1b0c1ce4f34a38
4a3895ba9a97b5e258ddd629dd78c762ac74c5f0
269 F20101122_AAAHXR johnson_p_Page_235.txt
a799e371a4c69e076e9fb4355ee0611f
1db1384b4edff7cd64ffe869d4dcfac29c1065d3
45697 F20101122_AAAHKG johnson_p_Page_035.pro
84b2854022da4c627f27141562c680ae
0af8751e7ecbf90b097fd5dd1096e1a7e9406fbb
5100 F20101122_AAAIHA johnson_p_Page_228thm.jpg
60bd04f7de5ce28b5c1a100d0c19b267
430a1d48a2c768725711ff8f0231427b58dcb5b4
708 F20101122_AAAHXS johnson_p_Page_236.txt
48909852f6753c233083145a16747041
72ade35c8ff1a6069cebc956d3afe98b454a57f3
7261 F20101122_AAAIHB johnson_p_Page_055thm.jpg
4b432995518f02fdf7d575babbc3ee43
2d6afd8a14328a1fac9315e4969467d5f39cc13d
1974 F20101122_AAAHXT johnson_p_Page_001thm.jpg
97de8a098d34d21c6999a9ba1dc198a2
69f3253ad4d977c0786dd9aee8f9825629feeba3
24167 F20101122_AAAGNO johnson_p_Page_001.jpg
3ee801e77b9dbdd79abb19aa1f7d0ab4
324a9cc3ef8e909b678580acf78f3b467338989f
49227 F20101122_AAAHKH johnson_p_Page_036.pro
3e453c08ade537c63bd9d6c4328f7b0e
f4896f4683a18de50e3364519cd1a38b3c6bab85
93166 F20101122_AAAGNP johnson_p_Page_004.jpg
3484fc33c0f9e15c1847ea116bcf4a72
d8c852651b24b5908aaece8ceb2ef9dbc12946cc
38961 F20101122_AAAHKI johnson_p_Page_038.pro
7dad8d8f1a149ce666ae8196e5006dc4
9603defb549d34d375ba7f3809e4bbb7d2a9729c
20928 F20101122_AAAIHC johnson_p_Page_189.QC.jpg
7afb45d4c7622a2fc525adf288f85d20
4c86f453b45cb5c58473421350a99d69db149ffd
4355428 F20101122_AAAHXU johnson_p.pdf
324f1826132f8c3c98925c82e254f890
23173aaee1f8a3957f6524bad54ce5f1ac1e47c1
BROKEN_LINK www.scl.ameslab.gov/netpipe/
117527 F20101122_AAAGNQ johnson_p_Page_005.jpg
ba0d9a6ec8ed50738321ecca370167f8
3ca8e5cf08ead73e5a6357d06ec105ed00996cbd
56188 F20101122_AAAHKJ johnson_p_Page_039.pro
630a016445da3fd44233229415abc8fc
4b13113878eea05fbef2d7ce233eac80c6594390
7677 F20101122_AAAHXV johnson_p_Page_123thm.jpg
d4028d6a2412d3c8ea27afe71cdea866
01805846a1ed46114f9f1b7ad7c01e5803b24dc4
111863 F20101122_AAAGNR johnson_p_Page_006.jpg
aada3ac52103757e49271894242ad72a
1605622cedecd09b5064d85cfe2df34a4b6602e8
45523 F20101122_AAAHKK johnson_p_Page_041.pro
715f1b22e1c3bd437e79419b532707e2
b4332baad50e6c412e236194572f140578ed72e6
27137 F20101122_AAAIHD johnson_p_Page_004.QC.jpg
9b4a5b62ea38edcc174691689a743d45
794e56ecadbc631c0ceb1e1e5dbeba734f0ec99e
4554 F20101122_AAAHXW johnson_p_Page_160thm.jpg
3e6a2d8166d27105dcb3fb2c50c07b24
f4c08cc31cab1565decd7a0b9f171a2c08a300b2
91682 F20101122_AAAGNS johnson_p_Page_007.jpg
bb352bb354880c696197de306e83d9b8
356a3370318c7064bc52c4a59b6e13e803594da4
11612 F20101122_AAAHKL johnson_p_Page_043.pro
3fac427e2ca4ab3bbe16edae544396bb
51aa4268dd56d3a7f12401839071e05387ac441b
20877 F20101122_AAAIHE johnson_p_Page_057.QC.jpg
5b31220a1e8e4cbe986855f3d6a22452
bb81c591509926bef511d99901aa9c1ed525ce9c
3771 F20101122_AAAHXX johnson_p_Page_202thm.jpg
44b00974342e051bdf7161dbadf49888
02800785a211827498ee0c4a395741f87f38a088
60413 F20101122_AAAGNT johnson_p_Page_008.jpg
65248bfd1a4e1e3845dab7be2664ce02
b3c644d2dd01b97fb29d622c97323afe64deb7cf
36064 F20101122_AAAHKM johnson_p_Page_044.pro
adbda2c17d7bd751aaff54bb48c893a7
7f6e174e45796ca1120866b89e1fdf297b5e07b3
33769 F20101122_AAAIHF johnson_p_Page_064.QC.jpg
f325835c0a2e898182be50c15f9c396d
18167f6ab0bbe4038909e8168aab481f7f838735
7311 F20101122_AAAHXY johnson_p_Page_166thm.jpg
e10e9ab8b641ce4067eb5f414cc1fb1c
1cae82bd2246599b75391bae71e1c537487f4783
66559 F20101122_AAAGNU johnson_p_Page_009.jpg
0a6fa914efd04e5ac5935077464c79f3
bad3075bfc727688a6d65848aef29423290885f6
26886 F20101122_AAAHKN johnson_p_Page_045.pro
29d4a87f1b30a52aa764f66202489dc0
5ecae5c98ca0daeddbbaa5874efce37a33d58ce3
6129 F20101122_AAAIHG johnson_p_Page_011thm.jpg
9bd5d3427386048df4fadfd016175daa
3d4ba87bb646ba34060375b28d1199447b7f2121
30612 F20101122_AAAHXZ johnson_p_Page_123.QC.jpg
5320ef2136e8d2b9a4099df09177ad2a
da1f9d61167265aeecddbf4fae896860fa3b1449
71514 F20101122_AAAGNV johnson_p_Page_010.jpg
3b1f1e99e38c1430d93c76914a2f6e3b
b6f9fe767adc6217406a93adb0512618972e50e1
46936 F20101122_AAAHKO johnson_p_Page_047.pro
352ad0a19cf4faf8066813194137469d
894e6733221ca8efb7993ba80bc6d11affe6ca14
17878 F20101122_AAAIHH johnson_p_Page_118.QC.jpg
d9bd2431b70b59f06f5037c306061abd
435df40c740714643043fd6be5dbc637fb4e645c
71952 F20101122_AAAGNW johnson_p_Page_012.jpg
53079c450f4b03826dfabea6b4a15cec
7666e14b22ad7a6298edd816b5237641daf813a6
43296 F20101122_AAAHKP johnson_p_Page_048.pro
951ee6aa283d8ecd869b030f62359356
038f8e33c48b63cc66bb14c98d93d3595be18ede
2212 F20101122_AAAIHI johnson_p_Page_073thm.jpg
84fbda342018c763d7bf9d5fae2926e0
5d4fd64141aa5132b0118633396dfe0aaae72090
51482 F20101122_AAAGNX johnson_p_Page_013.jpg
6a540ccea7dccd61e869de4a4745ae81
bfa49bebbe9c8519164c375deb45f34a4bab0124
49414 F20101122_AAAHKQ johnson_p_Page_049.pro
2f4b04487360a35f8d59c5cc9d60f5bf
4e069b1fe6ac645f95845bdd51fdcec9b79e656e
28244 F20101122_AAAIHJ johnson_p_Page_156.QC.jpg
8d3f3006fb94932535da1c79de57b184
eabd839e4d1b79b8c88dcec0641d17c1d78b6b88
84528 F20101122_AAAGNY johnson_p_Page_015.jpg
8264459f77f47cc4457ff2dadcd4070e
52fe6a38ddd398b2599f5e111348e5bc2aae7f19
41827 F20101122_AAAHKR johnson_p_Page_050.pro
afe9c4128e57fbac79dc3f02dc8ae65e
fc8176cecf6dd2146fdbd69af11ff3357e2a7cfe
2979 F20101122_AAAIHK johnson_p_Page_236thm.jpg
b374ce234d7d4311b717322efdacd066
bfffe1ce2663f083aeca4e4f1665c2d727256f37
110402 F20101122_AAAGNZ johnson_p_Page_016.jpg
08dcffcc477eeeea696877cca6bd7265
708228c2ded86ab15af9af3aecf0cb24cd6bd68d
39834 F20101122_AAAHKS johnson_p_Page_051.pro
6ed1c86e52f9b37114ed2cc90fc4e418
0de9dd64d46f20ae44fcf62cfecb49fab7dce010
8876 F20101122_AAAIHL johnson_p_Page_175thm.jpg
e61f9de4dac4034cdf320c86004c9c7c
1e5e71569cca6b8135d988d31a1de1b7771e2f0f
19808 F20101122_AAAHKT johnson_p_Page_052.pro
7a39b4be6f51a00c6949ea329649a329
63d2689be171fc8dca705a2dfd22085efc6924cf
18170 F20101122_AAAIHM johnson_p_Page_104.QC.jpg
8e62d29e2dff510c2cb32efddd50bad5
e3e5cda1a1005baba98df85ae86d9995d3e86524
46601 F20101122_AAAHKU johnson_p_Page_053.pro
ef9e15ea1a0a8bf7b8b93854ec885a9c
830474200782557afa1cd5fdf48cddd1fbe3abad
5866 F20101122_AAAIHN johnson_p_Page_009thm.jpg
10bbbfad73a71525528b31e9d5f7f465
6bb82f705b105a2ce0a6817687e74cdb89e3f51f
50917 F20101122_AAAHKV johnson_p_Page_054.pro
50b2c0e4f79ac5b580f1de043ea9cbd2
f098a051927588163c5c9314897e44b70ec2ff55
6372 F20101122_AAAIHO johnson_p_Page_147thm.jpg
34374b313fc119d0b6ddb099878c2fab
ba8bda2bc37214aad6211d05499bd15344cd9290
42046 F20101122_AAAHKW johnson_p_Page_055.pro
80dcfea8c891d6a21239d2420c31023e
56c33ee1370dbd18f4e8b8c87d986c1cead2ad94
25140 F20101122_AAAIHP johnson_p_Page_090.QC.jpg
1747792072ac9060bd563126c140b340
6bf65ec9f28a69c8516ad79c53d43ed23d51993d
54880 F20101122_AAAHKX johnson_p_Page_056.pro
0eed2e7322a1f4b36ad8fee976cacf0f
2076d9cdbd0329e20bb71eb3495d9d94f6efacd5
5442 F20101122_AAAIHQ johnson_p_Page_033thm.jpg
acf636a3c402dbab299a7228f36eed90
d4a68356996c5ff9147ebe9743f2b5c894a11a9a
30992 F20101122_AAAHKY johnson_p_Page_059.pro
6616b84cefd7329da753b5d745875a46
91fef23a986b3064819c5b5329684b73c9516287
20994 F20101122_AAAIHR johnson_p_Page_045.QC.jpg
ba5340bc54fad09fafbffac9b492885c
85c0189e612091d8005489e4a15ba8a1b86c31c6
10687 F20101122_AAAHKZ johnson_p_Page_060.pro
563118f664abeea07eb9f2f789b07a71
fba8ce57159dc6ac8d81184b44bfac00feb18ff9
4065 F20101122_AAAIHS johnson_p_Page_121thm.jpg
374b8c4920a2de2c61ec9d1629a1eaac
97324fa3df2dfb8f707480fbd9490704c470fcc6
2290 F20101122_AAAIHT johnson_p_Page_177thm.jpg
71cff256d5bc035e0117116744f2b561
a1f9cce2862251ae05aa05ba346c8ff1f9f1c562
84823 F20101122_AAAGTA johnson_p_Page_181.jpg
8f4740fa31ae4dca972312ba0938ff57
1279884ebaa94221f10ee60a9e614434fb266c26
30019 F20101122_AAAIHU johnson_p_Page_019.QC.jpg
aa47bee4b67c42b47843a1b237a47879
7b7a9adf2c1602e616dec9162b11881829cbaa6c
75048 F20101122_AAAGTB johnson_p_Page_183.jpg
85a08a060d48b57e1abd9e6d47fbd5e4
76905980eb20ff07bd030256808412bad27065bf
25605 F20101122_AAAIHV johnson_p_Page_152.QC.jpg
182bdb240493c2b45be73a972545061c
ca2e1306b7bbdeca5b1314c0e8e1ab9774e405a4
2267 F20101122_AAAGAY johnson_p_Page_017.txt
5d1448c62c8317484ea7152cef2ad006
432dd83ac57733d9e40b75d5f339975eb88a60bd
115583 F20101122_AAAGTC johnson_p_Page_187.jpg
3bc2abbc4efb03c7ed55f3c7a1a001ae
4b6fe8503fe256a04393fdc8bddfa3bee4b135e3
4744 F20101122_AAAIHW johnson_p_Page_218thm.jpg
38be29e454e633030fe935ff77467921
f99a92e9f5e3d217a916a744677ce8e9061b8c4c
495630 F20101122_AAAGAZ johnson_p_Page_092.jp2
fac76f0e68b425d93671bb17ec3aebe5
0a33c0bd73bb5b22ea0ab30ab44e00409de79fcb
73954 F20101122_AAAGTD johnson_p_Page_189.jpg
9d02af100bc804751480dbfb6ee8c2f5
822cdfc51ce51ba731e383d59af687fa6c8f0127
7903 F20101122_AAAIHX johnson_p_Page_049thm.jpg
8f85c43dcb4982eb0053bd8a1a8886dc
90707bcfd9da70ba52df877f780dd0c9ecfa2934
90205 F20101122_AAAGTE johnson_p_Page_190.jpg
b8c25085870588b8f4e7a1ff84bd30bf
c60e61387593baaccd6dd47960cf045c3f1e1191
15221 F20101122_AAAIHY johnson_p_Page_204.QC.jpg
c42141c3e0964d3eb8354a0353352bbe
7140b49780b3a3354289c7d1e3fc3b494bcfe525
87704 F20101122_AAAGTF johnson_p_Page_191.jpg
931548a49e8353b04aa9094540ea2dfa
ccb42eebd34b80104cff3cc3f31b10717cab36f2
26069 F20101122_AAAIHZ johnson_p_Page_167.QC.jpg
c30466a29349dd0d41bd710ba3b754f3
08adf73cb5b77984c3ce8f023726060767bbbf87
73595 F20101122_AAAGTG johnson_p_Page_192.jpg
cd8c35d4bfe5aea44e60bf0e61a8252a
0bd6b0c1afebf0c69e06b1f239aa90c63a5530c9
44812 F20101122_AAAGTH johnson_p_Page_193.jpg
f000817152cee9b3d16cc4c4e5eb457c
b06804567e74ae5015d09c684f4af9098daf7f43
38742 F20101122_AAAHQA johnson_p_Page_225.pro
a9925cadb6125dde2c33f017ffbe6e78
5ec1048f684373674de8bb36eb6bfc482e81ec95
42011 F20101122_AAAGTI johnson_p_Page_194.jpg
71f726835bdcba517c906bde210320fa
36598e0262979dabc97399a1b20d7dca6939bee1
38505 F20101122_AAAHQB johnson_p_Page_226.pro
70694755f02362c2f63e830950f13cd0
5468283b1aa0e4bb1982ccfef40cb649566ba78c
42673 F20101122_AAAGTJ johnson_p_Page_195.jpg
e14df392a97a0bab9be45ec5812c296a
a98ed63f5ffd3b6407854181c5b8500a010d0b43
38866 F20101122_AAAHQC johnson_p_Page_227.pro
83c98a79d7b7678ad77ba8dfaf4ab275
db72cf88556f728c7ac7685441f8ed7c5614aa65
42073 F20101122_AAAGTK johnson_p_Page_196.jpg
f140c9b2a3e3901502c9c94ad605e9d9
881aac97508c421d567286f4cb3988ebced71b8a
F20101122_AAAHQD johnson_p_Page_228.pro
d10da7742496a2cd8146ea7ba0c07fef
d1c0d732c3ddda6eab1e3e68ef5e8fb17cd0e046
42580 F20101122_AAAGTL johnson_p_Page_197.jpg
fdf31c95515a802f1a74c1585f3df2c1
73f6a4e5c53f280003f68f66346affe1cc4abc89
33356 F20101122_AAAHQE johnson_p_Page_229.pro
3f5a4ce87e335106a86c495307b53b7d
7740aabc4b0672cfcb1285372349d167cf8f2314
54215 F20101122_AAAGGA johnson_p_Page_085.jpg
951874cc14b9735dca6196f5f4ff6c99
523193498d71a8b6a34769096dfab5654cf3a558
42547 F20101122_AAAGTM johnson_p_Page_198.jpg
f94c83df39465f515211f4e70dff714a
b61edbb0e1fd81d861274c066682d5ec68fc2964
3752 F20101122_AAAHQF johnson_p_Page_230.pro
a44045b36b80c734bd60d15e12ef44cc
3f6f8d9f80d14a3fd1897fe8de395e508c65c372
1166 F20101122_AAAGGB johnson_p_Page_004.txt
0879799c0443101ace1e19f6eeb6431e
b1fa5c21925a1fc75c26645d370cacb4172ef6e3
34323 F20101122_AAAGTN johnson_p_Page_199.jpg
17f1b2e6dff744d4424c7704aa321fbf
201c5dbcb7ac2d48cb47878177e7431272ec116c
42959 F20101122_AAAHQG johnson_p_Page_231.pro
8873c35a1180effdaa51c0470475cc5d
bf9cd748accf87a097381112d05a83dd85b43551
1091 F20101122_AAAGGC johnson_p_Page_109.txt
75f4cef0fc26e385e0893bbbc86ce194
b7a347e568a7c647f934afee5c0c88e57fea1456
58506 F20101122_AAAGTO johnson_p_Page_200.jpg
c976133dd4d2f4a5790416423c51ffe3
2558d521b9750785b68f52eddf8e6c1b37bdcdc6
55894 F20101122_AAAHQH johnson_p_Page_232.pro
bbb21f34911a8bcc4b4aa93605d6c458
901b67cbecb837bc2fba2d9476dcec46544ef386
12252 F20101122_AAAGGD johnson_p_Page_116.QC.jpg
d56078b52d633e74f67fd7868d628a30
bdf71878a2aeed4168acb3a16559d736c36593f6
55680 F20101122_AAAGTP johnson_p_Page_202.jpg
e4d1a60d9e9f1bac61dc0a9c55c9ea58
0c86044e655a052200e2c65b1999672a54d6de1e
49675 F20101122_AAAHQI johnson_p_Page_233.pro
2a0655bd5798ca4aee91a64202339400
267056f078ede0a00c1fa75ff79a8969bd2146e2
54839 F20101122_AAAGTQ johnson_p_Page_203.jpg
d12a4c56b4ccd17bf159bb4676565603
d1d24ca86bc6b74c4170023c11628114dc559b37
60960 F20101122_AAAHQJ johnson_p_Page_234.pro
8264874ba0169adaf98fdd0b8888d731
b93beb6a35578ecd0f4740da7d6b0c703971fe82
5750 F20101122_AAAGGE johnson_p_Page_004thm.jpg
1c87ad8c525dcfa28778a5499be77f5d
6f54186e758f8b95d78bb624960013610e96e2f4
4674 F20101122_AAAHQK johnson_p_Page_235.pro
0ed4c25f9badbdb9d0e849a62d9d1504
c842d7b72180d8b2d1f62ccad8a5c8b8f7bbe6fc
3392 F20101122_AAAGGF johnson_p_Page_091thm.jpg
80f0a633b9c26d6a15c1008ebf0bc45a
6b0d12d1bea9d1009b7e5a6be889fa507762885b
55380 F20101122_AAAGTR johnson_p_Page_204.jpg
23c9a864ac75b6da267193779099ac1c
9eb835752df3411d11f7f6c2a15fc3702af19fa4
106 F20101122_AAAHQL johnson_p_Page_002.txt
c8349f68c5d02fe8c04372787f996eaa
ba2cca2918605129df82b187566a86628fd208ac
F20101122_AAAGGG johnson_p_Page_034.tif
303c3e2538bfeadbc9615bdb253abfe4
7eff6be0ea9fb5c197d2b1e63dce128db274ff30
F20101122_AAAHDA johnson_p_Page_038.tif
6a2f7bbcbab26877bec02dba217a4f0e
aa431430ca1fb8e031ddc108f46cc299cf2d65ac
62466 F20101122_AAAGTS johnson_p_Page_205.jpg
efe6238fa41ec92576190918058f3519
c1ec3ad08a1a98662b95896be5753227bfa3549b
920 F20101122_AAAHQM johnson_p_Page_003.txt
180c6bba195ca7d261c58e1464f8d67e
3eeb2444095d1ec783670e43b6877c50624a451c
1175 F20101122_AAAGGH johnson_p_Page_206.txt
961631587ac406eaf9267a640be26427
083cea50a29e03f68b8bf0f5aa2a7bca9a23e0b7
F20101122_AAAHDB johnson_p_Page_039.tif
9c2b8ce967f979125a73f4d8acb7d85e
f0f36db2f91b17178c7b5c62bef83c43fa26d582
23483 F20101122_AAAGTT johnson_p_Page_206.jpg
3ae52d31ca0356df5c04f63392d40e59
163755f6bc88be467ee4db0065bb6ca8a23a8f93
3233 F20101122_AAAGGI johnson_p_Page_084thm.jpg
14a3a48d79bba295fffec74b97b4c506
d35e7601ea43de80d18001239d535a70f0d0f64e
F20101122_AAAHDC johnson_p_Page_040.tif
df8321564cbebfced9c3a927642031ff
1e4d5bb8451c82b155d287098c1ed57cd2345d7b
59903 F20101122_AAAGTU johnson_p_Page_207.jpg
16eeecbab16fcbf34c1b5764e0e2f69f
8bdc35e89c10aabd78d2447c732c74d6d3343fe3
2873 F20101122_AAAHQN johnson_p_Page_005.txt
37aca460d2a26ce38d3dd1f597b9802b
392557dd0be7fda628515c5d94402bab0cdf4d3e
1003195 F20101122_AAAGGJ johnson_p_Page_155.jp2
0476a0568fd19fb0c5395711ac455cbb
e4e1c47ec884be2000a9116e883fc293599dd8e5
F20101122_AAAHDD johnson_p_Page_041.tif
f5dc6ecdeecceb3b09169f1232464718
0417ae609be4a4d096cffec8a558a057212d2571
58230 F20101122_AAAGTV johnson_p_Page_208.jpg
0b392582f760bfc9840c61615c2368f2
b7d20a8bb8ed1ad3302fb88276bbaa0d97ca2bbc
2751 F20101122_AAAHQO johnson_p_Page_006.txt
bf512846eea26961b761487df334a663
49a668739c03374662c47099eecd95b6988a52af
F20101122_AAAGGK johnson_p_Page_047.tif
2149630d4bd92c9c6facdf00175a5d72
18a7c7cf27bb4569ccca0e16b823e70b70f1618c
F20101122_AAAHDE johnson_p_Page_042.tif
bc2b310c928739586d0eb5d2c0946e47
1b6c424b0aca9d773dffa5e5fcd10d686e0db5c1
58446 F20101122_AAAGTW johnson_p_Page_210.jpg
42d8c6866d14384f642ad21653eed988
cb73e7cfe99149c38b5069622acfd5b2b84b4736
1836 F20101122_AAAHQP johnson_p_Page_007.txt
8daa8c6ea96b522d512721e40f43aa38
31c38919e704a66d9c6989293c12e8a8942fe8ad
676967 F20101122_AAAGGL johnson_p_Page_149.jp2
6fd0aa4ea912496ef480d586fcbd6f6b
f55aaa92b9b4647b77cc98c6276f8886fd679976
F20101122_AAAHDF johnson_p_Page_043.tif
a9b82220534f50f562678246ccb5ee6a
db49019f1c5b2b1d910d48010dd6af75aec1c3aa
35408 F20101122_AAAGTX johnson_p_Page_211.jpg
1260fc4a3f229339043782af838c5e70
df5780af003ec74e1ace7c17dbccd259ae983ad5
1510 F20101122_AAAHQQ johnson_p_Page_009.txt
7aa54ffe9aee21107b4838bfd7856728
5c18a1e6394910aa2f8db36478eb33692c225feb
4671 F20101122_AAAGGM johnson_p_Page_217thm.jpg
cd81c33b82d2c05d8a8d435f679b3a25
090fd7594a44ebc6c4ce74f542c2a4950bb7f1b7
F20101122_AAAHDG johnson_p_Page_044.tif
f51d24c51bb270db9a8c91d88454819a
7791a672b40d6c1ea4c367c798b7c7e6ccba54ca
59747 F20101122_AAAGTY johnson_p_Page_212.jpg
c16ffb7d885a319454ccb8c776de4405
c0cbc81b8e85973dd49536a53a2f701662d9368d
F20101122_AAAHQR johnson_p_Page_010.txt
ebadb1cb2e94834276b54081130a89f8
011ad73801a67865f99a858973b3a11f25fd2c3e
1411 F20101122_AAAGGN johnson_p_Page_008.txt
7faba9b2a17e21162fecebb9ebddb99e
55a5ef3dcad173e2a62829e63394f2d3b71f7d5b
F20101122_AAAHDH johnson_p_Page_045.tif
ae50ef52ad20c792f94ec72e8145dc75
57cb0aed2216244d573875abc9022d6ccbffc482
75833 F20101122_AAAGTZ johnson_p_Page_213.jpg
6616465511e08fe73d5df25fa62c8fcb
13621d47c119a1316aabe45c8a57b1f0a62d9353
F20101122_AAAIAA johnson_p_Page_113thm.jpg
5a03792d7fdc47fb33586781a5ea4cd2
172eb5cb890fac7becdac7b365b52d2ec3dc0c4f
1855 F20101122_AAAHQS johnson_p_Page_011.txt
9a786cba90fd268d5cb648fa77b165a4
be510fd3397879e626304f607b9b2e61d2c8d3a4
F20101122_AAAGGO johnson_p_Page_014.tif
049cd5593513633353b025814a7c7973
62e59d03f8fd50dcef4a431a073d684c47c7fe87
F20101122_AAAHDI johnson_p_Page_046.tif
a7e1088289f7ad6ac08b011a5d96db20
33ec59f8e67cfab5cfa28865852812acd0354e31
5039 F20101122_AAAIAB johnson_p_Page_163thm.jpg
3b370e34652fb16a76854480c266527d
d24b6fb0b26a3b5321dcc19b5dff55ca34d733a4
1750 F20101122_AAAHQT johnson_p_Page_012.txt
3c5b131f48361b444135a90e1f27eee7
1077872a0ef19915a94a6be346b6caf9a9a76eff
6279 F20101122_AAAGGP johnson_p_Page_176thm.jpg
afb5afb6af138f1bcb131ae87bb83f75
d6fe11793cc20ad4ce8e57a29c3cc21014d34f27
F20101122_AAAHDJ johnson_p_Page_048.tif
9b80ea03807b74b22617039181381bd8
f2b3c5cbcd414ec5d24912575d0a06440376a6a8
22334 F20101122_AAAIAC johnson_p_Page_089.QC.jpg
9a4f2a4ed757e7871b8f99a5243e3180
8d0506ec78c55e31f5c467b84018ac2edd9cf6a4
1116 F20101122_AAAHQU johnson_p_Page_013.txt
eabd36ca4ff1ec9c037b0287c4f505a1
51e2da5b26bf9526eb47658a30e61a14efff67bc
F20101122_AAAGGQ johnson_p_Page_166.tif
38d947e2cebd820af64f64538f068c36
618bf51578c329fe7e761197e6c86c5ba22b7871
F20101122_AAAHDK johnson_p_Page_049.tif
192de9142b1183ab3f6a648c5412432d
46df544556f9b6290dbb862e0b3dd3007aa0ada6
3882 F20101122_AAAIAD johnson_p_Page_204thm.jpg
79d0f2e5f1cf9a87546d9fe3b12510d0
b6e30b1f2be7fc962dbeee0b10fcc8ec1ddbaacd
1645 F20101122_AAAHQV johnson_p_Page_014.txt
4c284ae1be48c0780ede9f645879623d
4fa609c9732ba13815e31de5a38de558400c1ef5
29463 F20101122_AAAGGR johnson_p_Page_073.jpg
505c2e1fd67c0f1eeccc643a22736aa2
c9320af6392f06dfa50f71d56d2319481e3424d1
20990 F20101122_AAAIAE johnson_p_Page_225.QC.jpg
4838a26753823a16633384377876d8b7
35f5c050281d5741e46272ff1335e79551368174
1770 F20101122_AAAHQW johnson_p_Page_015.txt
c7da4d984a807bd051d69f7b97f01bc7
d92d599dd232e81265f1a59ee014b87265595f32
28874 F20101122_AAAGGS johnson_p_Page_154.QC.jpg
6176d2f7d2d5db2865a487695508045e
9f2000cde809fca18c97cc81fefcd0330d8a8cf5
F20101122_AAAHDL johnson_p_Page_050.tif
1f101b1c9f08c1584914d181ead480ff
aebdd10c26f25343b7e5e215111061d5b82a3d15
5314 F20101122_AAAIAF johnson_p_Page_118thm.jpg
1f77919d1e3202860b6acf8c75f3fbff
09a41ed875fff73a7dcf3bac6513fadc6c084b97
2132 F20101122_AAAHQX johnson_p_Page_016.txt
1a6657f827d54134632b4810d73c27fa
ef28e71ddf1aaf9df515d480563870704de7d746
7304 F20101122_AAAGGT johnson_p_Page_153thm.jpg
f5d390eb4d4da4a67cef203172b77c27
a0f5e05f48199d59e6a6518b966a92ec06862c9c
F20101122_AAAHDM johnson_p_Page_052.tif
ecc3e78a9be692712f5a38367cfd7bd2
e804ded601515d1710e95e485953ffa8e235bfdb
4921 F20101122_AAAIAG johnson_p_Page_124thm.jpg
b8731038ca8abc776fe06cb718d72538
e02157eab8b16de878e63e6edc3d555b9ca89ea7
270 F20101122_AAAHQY johnson_p_Page_018.txt
7e3df9e771ffd67f25b4f1dcd1d3fb32
38fe2f1441420d5d7eac6e1129441160e1e88274
36401 F20101122_AAAGGU johnson_p_Page_014.pro
79b95a019f51b7611c021b7dbbe5dd83
8f4b380d0d1017284805c2ee801d6fcdedea410f
F20101122_AAAHDN johnson_p_Page_053.tif
5f0cd5b74bb2d00ecfed0dcf89114fd0
dcdb75f6c2ca32759e6044751257727b942b5d6f
3974 F20101122_AAAIAH johnson_p_Page_219thm.jpg
f1ef6ef75612df028000376a516fda38
c8de5c397b4d589754e477bfeca56cbb3e5fe01a
1736 F20101122_AAAHQZ johnson_p_Page_019.txt
1082cfd4223bba38547dcc3594cc8e43
ec129981c5107b0485af4b91071db144f360d4e9
F20101122_AAAGGV johnson_p_Page_026.tif
bdc916d2d4bac02c41ef1811e4489797
01ca05fead84f32f24c800dcdfc95fcdf725d1df
F20101122_AAAHDO johnson_p_Page_054.tif
6aa715bd58da3a96526f119634df7d22
0da1e26bade3f30e2fbc5f4349ef675e33287a56
4535 F20101122_AAAIAI johnson_p_Page_093thm.jpg
93f85d52d5d00e61179658f2f4a97419
cd68a65aeeb0ba99462d4d63724fa75a6bda2892
393516 F20101122_AAAGGW johnson_p_Page_236.jp2
d65c6c02d68c92d160d90d18a86ea2ad
8bb10b6a9366f93c06f8ca3f4b395b5b71ada539
F20101122_AAAHDP johnson_p_Page_055.tif
6efb907e9f74ccc7ea866f2e230f8b88
aeb7596a701f1a55431a18379f301dba2a3b9f44
1431 F20101122_AAAIAJ johnson_p_Page_235thm.jpg
282a0354ce53fbb0d8eb670a2cbcb3c8
8c7d8469c2f80af7ccc166070c81b74cab75cf62
43761 F20101122_AAAGGX johnson_p_Page_184.jpg
7b9de31c2fc2f5661c39f2b8e94741a4
4ed8d07cfdcf4770bc08c3d286a3043011c023f1
606409 F20101122_AAAGZA johnson_p_Page_140.jp2
33c5066690143f11246164f5424a2acf
910e9566fab5c24f5b734efc3ccbbda50bc9a817
F20101122_AAAHDQ johnson_p_Page_056.tif
ef1febd0087d68ff070a8c3104edd57d
0589f202591160a0e519d7d037b6e664d819a9d9
14885 F20101122_AAAIAK johnson_p_Page_158.QC.jpg
83c6aa3df475c164ae5d0c87fb1252a1
87d3e0ac92c60e32bba2bc36ed9af40a5866140e
27874 F20101122_AAAGGY johnson_p_Page_040.pro
cac0b7d461666a0477cef929d85c8087
3b7d677cac283fd99ab343cb70296d75ec828b6f
1033157 F20101122_AAAGZB johnson_p_Page_141.jp2
e2eddf31e731cab9901751140d1fc400
7d47dca3718573603a22bd617875feb9e160e611
F20101122_AAAHDR johnson_p_Page_057.tif
a1d7ee85e9d4cfcfc62da752dc9fe7ea
58176872cbfa93e942927021359eb64e21597bba
21172 F20101122_AAAIAL johnson_p_Page_040.QC.jpg
60590515e797151d77244919a6af29d2
fb1d53cf73451ba99ce16968bc61dc47eacb692f
F20101122_AAAGGZ johnson_p_Page_110.tif
9958ff23ce5f53c422c18ffc2212bd24
30e5d0068c9cc1c9aa8e5d6586458eac82388153
848222 F20101122_AAAGZC johnson_p_Page_142.jp2
32f075577ba9eb4a25228fb8af16ff08
496eb78941ec0fae2e0f3aeed49c3214bbdb909c
F20101122_AAAHDS johnson_p_Page_058.tif
9a81d8f288ff94de223bd71c4d047776
3263c0b6a0fb544ae986e89709c96d94005c5056
6496 F20101122_AAAIAM johnson_p_Page_005thm.jpg
2f2908eff8519a6b9aa772a413e613a9
2faf82b0ca881dade9a3df68a1f9ed4531f779dc
973560 F20101122_AAAGZD johnson_p_Page_143.jp2
becd4345424438e09d2965161baf9c49
3e47ae14b63592bcf88b60cce297599882651a98
F20101122_AAAHDT johnson_p_Page_060.tif
5db12eb3e71302c2a8b59b1250f6ae80
1c89963c490fceea21bd855e18eed98e0e73d5fc
21844 F20101122_AAAIAN johnson_p_Page_113.QC.jpg
15171f32ad3a145b59077804fb974b63
89e78749ea8a77408dee4b0648c9d66f3bc2dbc9
772148 F20101122_AAAGZE johnson_p_Page_144.jp2
079cd70d6e4eed3a1d8dbbcae81f18a2
890709193732fdc497c17d646d62be911339700e
F20101122_AAAHDU johnson_p_Page_061.tif
a9af699ee5c69dc3a65c009c0e53265e
284034e979e679279bd668cb8797b66b48de7717
7031 F20101122_AAAIAO johnson_p_Page_143thm.jpg
59742b875ebb9b77fdaf110f43636a80
a77dbd399b4f034573bafc63ab73aaa5bb207c59
491183 F20101122_AAAGZF johnson_p_Page_146.jp2
e454275f01d9b849c9192de199fbd1b7
4b6dee6da06769cb7964c85a812219ad842bd319
F20101122_AAAHDV johnson_p_Page_062.tif
624f15a60d851e6ed59fc32d1a53f6f7
a20c5b8c3b8fad0a6dd360a6b2673cbbe8c941c4
6026 F20101122_AAAIAP johnson_p_Page_109thm.jpg
3fdd40bffd46fbdfcf32fac97435f350
6fe57a401cdf97ce86ff4d5a83a3b31892b625fc
756636 F20101122_AAAGZG johnson_p_Page_147.jp2
f7301bb3bd20761dd1d17ecb901a5943
e94da2936478ee613992a88822390c8bb4f0eb4c
F20101122_AAAHDW johnson_p_Page_063.tif
7e17e77345a0a6439ed4e89283e6e2d2
9725fcdada871da85a7c661bffdf2c4192985f8d
1715 F20101122_AAAHWA johnson_p_Page_176.txt
838b7b0c966ffd5c3b128712cc333d21
72c627fd475f880f0beac3a220d765c2522880a4
16605 F20101122_AAAIAQ johnson_p_Page_212.QC.jpg
37d5a85f74bd73731995cf8ddfa2631f
622c881ff6435a88f5ce1088e940fae88d4f9df9
839828 F20101122_AAAGZH johnson_p_Page_148.jp2
2f9f4a6bf2b72252a2db041a62c110a4
d570051b6c58b9053ca6d70806e8c54112c12658
F20101122_AAAHDX johnson_p_Page_064.tif
424633bed6ace29819ad4525a479975a
16ab8b26328753e22bc3ea303c02f940b6b064cc
1319 F20101122_AAAHWB johnson_p_Page_177.txt
0652410ec2aff98e58db3a796e9b8d56
61bce5b3ce17f2cd98950dd81951c3c465d5bc39
31668 F20101122_AAAIAR johnson_p_Page_029.QC.jpg
424e994249802bb18b8ee737f3e3cfa2
e3c56ab54e9a56e041ea5a4375495089f48db41c
861153 F20101122_AAAGZI johnson_p_Page_151.jp2
bf41e355bc150c7f68cf1a3b144fc2d7
2634f86e261c6520b9132d33c7992ef74a2688ad
F20101122_AAAHDY johnson_p_Page_065.tif
7c6aa0038afcf752fb01b5b8f2e15cda
15607a3f18d0915d18231a5e315a12c52044d61a
2077 F20101122_AAAHWC johnson_p_Page_179.txt
4fc7c8cd3a4057f4589b8caaa90c61fd
b2fa63e3e73a11f2bdedb8b13cb335809e500f0d
2296 F20101122_AAAIAS johnson_p_Page_178thm.jpg
39502e6ed0d81b360efa97502af31d6b
4ef3fb99b96118ce4b0446faf135d281a494832a
814458 F20101122_AAAGZJ johnson_p_Page_152.jp2
43baf53e679cd226da4aa41b80ea267a
89fb42c2b27f5bd76de81d7338f33f43e3094925
F20101122_AAAHDZ johnson_p_Page_066.tif
b8f9f0f619b9419266d3586e6e930bb9
6138cfb7fa33e2b9d177e58c6a02b36ea25f1368
619 F20101122_AAAHWD johnson_p_Page_180.txt
5b612f58fccce56b7b33d719731fac92
377989c954076cce8fd2562a16d5a31228dd3b72
6533 F20101122_AAAIAT johnson_p_Page_206.QC.jpg
061567751b87e81fac0a7d89f982ba51
8609b3763a6dfb5aac533e17b9277a480394bea9
1005929 F20101122_AAAGZK johnson_p_Page_154.jp2
4b0575e262defdadecc86b48f70c67de
8b88246e9f03da61f5fe196b0a54bc7a7eb4eb71
1703 F20101122_AAAHWE johnson_p_Page_181.txt
820327054b0a1f6d36c2c4f0744738dd
55c5b05f9d49a5c0a3417818971053337ac279ea
F20101122_AAAIAU johnson_p_Page_146.QC.jpg
5f1116cca31a147b884f236daad6b400
698a4e27a8bc12ead8791c8c9747c8b63059606f
684185 F20101122_AAAGMA johnson_p_Page_215.jp2
5040787f0b4d08d83bd81aeca6ca063f
decde607442f0fca31f91b5a986de2af9fe208ad
1009161 F20101122_AAAGZL johnson_p_Page_156.jp2
42d92db2bace16c4bda2e5a9318e2721
ccedca734fb89c482f5e3dd97a225bdf4870ac2e
2274 F20101122_AAAHWF johnson_p_Page_182.txt
156478af348a4e668525549a37833251
5d5a9b3f9fb62df1d1fd226d6c9fd485c93af48d
7322 F20101122_AAAIAV johnson_p_Page_082thm.jpg
f7f2cca66258af8df166d4b0fa4b5e7c
18fb69999e2c21d8cde38481b3690c0dbb305b19
4305 F20101122_AAAGMB johnson_p_Page_120thm.jpg
c7f052743beec5f0c2480be0baa3df8d
5c3da198fc231935a831ffe2b09ed6299020fcf8
1051981 F20101122_AAAGZM johnson_p_Page_157.jp2
081b37d3141cc1a21008ccd3682d8f77
0ee57410e762c3290cb965d82f67f8b6b092a589
1439 F20101122_AAAHWG johnson_p_Page_184.txt
07074ae9679eea39982d711e33ad7480
a7d9262c04b9900dfcde9d687708f98eb8406417
9040 F20101122_AAAIAW johnson_p_Page_178.QC.jpg
aa0f7698517eb47aa939c74e915d26cd
82b187d668dcf511f62596f23c5444f17e1b6187
F20101122_AAAGMC johnson_p_Page_095.tif
559d85368081729103e810c7ee78502c
9672544cfb84b6d43edc52ebcbe1780e73a49e69
472795 F20101122_AAAGZN johnson_p_Page_158.jp2
1d75185b8606d67ec4987ff02e31858b
8dd4949c04bbce76e59061c00dc9695b2bd494db
507 F20101122_AAAHWH johnson_p_Page_185.txt
768ba9ac2dac22338f97bd161f7cfd2f
e3e664d33df3388b48f7512b1b596453e7e5a8f9
7005 F20101122_AAAIAX johnson_p_Page_081thm.jpg
af309d47d8c1435a5fc68e9a0c7370b8
527bfb2c3294bb1806f488308e138e5a7e4fa834
560822 F20101122_AAAGMD johnson_p_Page_160.jp2
87d9f1298178f971e561bfb59a20b14a
9d61662431210c77c28c444e1d218efd6f2d2dab
927606 F20101122_AAAGZO johnson_p_Page_159.jp2
fdaa6c7a28ce5add7159eb574b7eb1ea
5c093a15720b8787349607975e1dc0b037ababca
607 F20101122_AAAHWI johnson_p_Page_186.txt
363881652ab9ff03166d37ecf5a57fb4
931e873dfc7eb69fa50021d4c06b1a34f94caf9f
11172 F20101122_AAAIAY johnson_p_Page_105.QC.jpg
952eb28391e3b9f563bf165b2c287439
e5254368fabb9bc2623c3bdb6264d0d3c847d5be
729318 F20101122_AAAGME johnson_p_Page_226.jp2
3e122bd89403aa3473c49eeae1d36146
caac89d6aa6905ecb88b7afe42931d39cf944c95
1051983 F20101122_AAAGZP johnson_p_Page_161.jp2
f395a14a25ddfecbaa6348817063906c
2dee0f5589b994abc921515832f5e71420f27ca1
3471 F20101122_AAAHWJ johnson_p_Page_188.txt
ffda4e05bfe3c63e3be82ac72692963d
7664270cebce913980d7c7db110bf0dcb6e544f7
32235 F20101122_AAAIAZ johnson_p_Page_063.QC.jpg
533f48bde0fd6f7a9f5978d9d0f3a42c
36ace15a8ec01e54e06b97a949a9e4158c3728fc
F20101122_AAAGMF johnson_p_Page_109.tif
11f8ecf6d96dd45bbfddc3e05041c075
25f3e7b42d612b4b4bfb69ec98ab6140d92c0a49
647317 F20101122_AAAGZQ johnson_p_Page_162.jp2
693b894f6154cc17c4c2dbe0ea5caec6
0ebe7712f0f42b9c28ec5480b550c03c3df696d9
2828 F20101122_AAAHWK johnson_p_Page_190.txt
a519f72bbae85ec148d1bc31f27fa81b
7325f893f30c423a61bfe0349ec8253886f54349
162043 F20101122_AAAGMG johnson_p_Page_034.jp2
0c1d48e84c8c2b00b654feb4775d27cf
dd93f8765e30af195767abb21860e2be75a306c3
532454 F20101122_AAAGZR johnson_p_Page_163.jp2
469622a54fa3d82fea1155f4c19725df
25ac6b574e0d82045dd5becbc04f6fa0a5be1d77
1872 F20101122_AAAHWL johnson_p_Page_192.txt
6bbacd4b9959267f13e9a73548b5fc8f
5af5fe60e20d98e33609bad446eb6acf8f44b87f
F20101122_AAAGMH johnson_p_Page_093.tif
0f529cb3b6df43a3920221a92f03538c
2af6cd864a349092c76750e94cb4d103e4a94459
F20101122_AAAHJA johnson_p_Page_232.tif
5798d97189e2d670783db647390e6b8d
9acdea22a3e43941af3de566117dc21bcdb090ec
349695 F20101122_AAAGZS johnson_p_Page_164.jp2
e4afe8cac82d7074255ae8090a32f2b8
3fc5d74c35fc71990e42de17c52cd3a2996c9400
1831 F20101122_AAAHWM johnson_p_Page_193.txt
5555b5112eff4a8abbb217bcd4274eb5
9567b911190a40e925388c1284b5315730344607
F20101122_AAAGMI johnson_p_Page_016.tif
d08f87a6be21240ced9c1b40aa24454f
60dae6a8fc4f61382227cd8e06c93be057981dad
F20101122_AAAHJB johnson_p_Page_233.tif
1fb71490bb129ec0a302525b7ab6161e
8276599ff1a73fae9dc5e5ddd0be1a65d6df4418
1036317 F20101122_AAAGZT johnson_p_Page_165.jp2
9f8213fe23225801348498a76a814040
64feb8fba438f9e2231c621c90cf0055b4714e5d
9966 F20101122_AAAHWN johnson_p_Page_195.txt
8aae9717b7f2da4d217b865197de8ee8
af4a13f3199888ba2ee1167cc0d67bffb175274e
F20101122_AAAGMJ johnson_p_Page_145.jp2
108ee4eab324408e11f23a7612693355
d2d451d86be0dead9b7ead6902493641e43a7a40
F20101122_AAAHJC johnson_p_Page_234.tif
97209f229d0d53103a8f13efc367d397
6d8762d69ee346acae78ce1883a2534bbe79a513
976937 F20101122_AAAGZU johnson_p_Page_166.jp2
ac22a84627d8fb2cda8d73f0d0360081
927b9a75a43ed43e827d4f5cd29f90cad9716a38
9965 F20101122_AAAHWO johnson_p_Page_196.txt
1e10f6b91a0ae2d76cc8e3d61e2b0fbb
7c764ddf8405801bc0f580f614188dfde6f974f7
F20101122_AAAHJD johnson_p_Page_235.tif
840cb26ab55fe4df122854fe2a78c97a
68cbadb25d4dab6d325a62f5171b58108d009c97
800000 F20101122_AAAGZV johnson_p_Page_167.jp2
af52b93b9a85487eeb0df82994b9f521
5b6586a92f1961054054854c2ec99b78d7d3621f
9860 F20101122_AAAHWP johnson_p_Page_197.txt
327338bd7c283cf87c06aaba0c77ad06
eb04393a372d3029c5c0cb9c7ac6427a542fc4f1
86614 F20101122_AAAGMK johnson_p_Page_077.jpg
2e2cb32823528e03d94ae7e7133480c3
a086714492940c14bacf42b0deb03120af34340d
F20101122_AAAHJE johnson_p_Page_236.tif
419f24bad3b9aa7e05d61214b1367db7
e57cc205cc06dd061cb9fcd7aea52762487b9b89
800025 F20101122_AAAGZW johnson_p_Page_169.jp2
e650005b3e32dc73ffbe5f9a6da3b66d
707cd9d7a51e021ff2e89491565087c86b35806d
9981 F20101122_AAAHWQ johnson_p_Page_198.txt
03576ed0eed57ebbd5cc048f83934934
3c1feb5dc25c904083616ec8a2cab4af1edc8477
F20101122_AAAGML johnson_p_Page_004.jp2
54464df13d32a022d88728c58083c5b4
07d4f5275369e1f601f917b9b9bbb063811d5c6e
7445 F20101122_AAAHJF johnson_p_Page_001.pro
c5a4eb998dcae9a47e0592ae69323e51
54aa14e6f54d8c866faf64d369116eb85ef6e600
6995 F20101122_AAAHWR johnson_p_Page_199.txt
d0c7db520c30f6a58d07a6f01893db53
8579afc525425cc0788e6d1881880c640de2be20
63371 F20101122_AAAGMM johnson_p_Page_020.jpg
7c5db01530640ac0271f6ae86923ab82
cd1c2a50568774012c165ff51cd8e452f647a052
1011111 F20101122_AAAGZX johnson_p_Page_170.jp2
289d476a93edbf3bf598a76cc1a724b4
e242528d2d1624fe7ebba844b989859c4312e6fd
4992 F20101122_AAAIGA johnson_p_Page_117thm.jpg
434e1296091ae3b49ae8172b231c86fa
ac13490bbcd066ebebaca7a3012034f48c934a0b
1417 F20101122_AAAHWS johnson_p_Page_200.txt
e524b3196e767daa03cbb492b61e7df5
ea34f94b4a59d2aa88161ef460de66720f696328
F20101122_AAAGMN johnson_p_Page_020.tif
2cdf98ebd9be967801d7cc9b6e7954af
ff5c2a0fe77d87eb742e23502da634a7b205ed0f
1176 F20101122_AAAHJG johnson_p_Page_002.pro
f084810a2a8d20ab81662c6ce9c2e4fe
9393979883d5d4120f7ae5557afa4ce13aa928aa
F20101122_AAAGZY johnson_p_Page_171.jp2
d2ed83596ab94c4622ce38ea83d4d72a
5a683aa51a2e5212bfe572224d31063671b49a76
4600 F20101122_AAAIGB johnson_p_Page_223thm.jpg
373061bb4a2f2d9e6c3f4d0563187409
3f1d8c8d094c789ac08f589cb03e3be281521a1a
F20101122_AAAGMO johnson_p_Page_199.tif
63cb3a439c2fcbbd83ddaae32d1ed9b6
7429ba49f8bb9107ffc8b13ab0d78f5cc44e94d5
25510 F20101122_AAAHJH johnson_p_Page_004.pro
4b240e7aa188fe3e7f53856269be739e
ea3617349e2c337c7a8f8ec526725fd7d3f7445f
F20101122_AAAGZZ johnson_p_Page_172.jp2
9b3fde8ccc9943a4935223381bbe7f63
7029bb4e2b6ecada9020a64dbaffd68d0083f2e3
6634 F20101122_AAAHWT johnson_p_Page_201.txt
30ad77e75e48db3bb477f5427d6d93b4
6ef9ebf059279af10022e77601691bd89b0d85a7
13879 F20101122_AAAGMP johnson_p_Page_185.jpg
ecdbc6d0ea28fd5d7de2d204330ea71c
8074cf6e880bdf44b5c9731b7ae04550574fea54
66785 F20101122_AAAHJI johnson_p_Page_006.pro
4c61ec269f524a8ebb5b5a998c7ac72a
ff4c4955936318eb5cd586b70caad337f7a8aa6c
32821 F20101122_AAAIGC johnson_p_Page_172.QC.jpg
e5ed39ed268b6c417993230d0d407a94
f800aa0934bee1822a9c0f577d3707c004a34795
F20101122_AAAHWU johnson_p_Page_204.txt
025b03fb96c729316a610a486db2490e
6a67bd8f1d8fc8e946528a584872ffd1ea7a306d
F20101122_AAAGMQ johnson_p_Page_129.pro
2b1364c2b22c7c401478fe57f30b93b6
c199613a867b205a958f69a5c8792105dcabdf35
43147 F20101122_AAAHJJ johnson_p_Page_007.pro
01d34d9f316ff86249a6c09752f09162
450e72e0326ef4bef52f0644280ca6e547e9fc5d
7027 F20101122_AAAIGD johnson_p_Page_080thm.jpg
782aa0b5287a5789f2866ddfebb99151
93f26dc2394ec36fc50af5fdba00fcfff4686c54
1863 F20101122_AAAHWV johnson_p_Page_205.txt
4396722ce2703ee0469370702958cd94
aedaf4ae8ec9913b4c35f3cc65715c3babee6247
82265 F20101122_AAAGMR johnson_p_Page_090.jpg
9c3285f716c6c28799b880bae724bbfa
5b57f45ea2917e0195a39fe682b4a93d961caae3
32210 F20101122_AAAHJK johnson_p_Page_008.pro
c3bde548b36502e1ec86d0e323f94738
e7ba8e2c5d1d8dd99bd5caa6ef3a1adfd18e1101
17555 F20101122_AAAIGE johnson_p_Page_061.QC.jpg
798086066abfbf199f7f920be5d8adcf
5d0df20d30e74ba841c1e53ecf282f79ec4766ad
F20101122_AAAHWW johnson_p_Page_208.txt
471dc8fd6ca18fe6de24af4c57596714
5586789f97c84b8cd64d3ccbc1fbb9dfd3abbbf6
16194 F20101122_AAAGMS johnson_p_Page_118.pro
06a7ba83b4113511ceab183318982927
dc36017cc96308d5693ae8bb69810eed0d96dee3
34393 F20101122_AAAHJL johnson_p_Page_009.pro
4a711bbbafe3bc1b44cdecf772c74aca
d459748f3cb945d51114b5802d5dfb88be2abc23
36430 F20101122_AAAIGF johnson_p_Page_175.QC.jpg
074f56cebfe34eca90b56798d0fc6fd9
2a529424003278765c8421b2dc310383e3459f61
F20101122_AAAHWX johnson_p_Page_209.txt
a1dbf56af3cb2f82517ce9a7f4571ee3
aca5182f4eb5c5d1e0872bb63c0357f2f23b6a92
567871 F20101122_AAAGMT johnson_p_Page_112.jp2
234da8f4039f50bba326d8fca1677af9
5b20223960c1db45b7be6c7a11cf253262e82c83
42082 F20101122_AAAHJM johnson_p_Page_011.pro
d72bdb598f23a8b11781df5b8e0c57d9
223c0969a2d76069f2248ed165b799e6a7a7458a
5401 F20101122_AAAIGG johnson_p_Page_111thm.jpg
fec4682360b1c39ceba54d0e13e6d274
3844febd6920f17492b7f5c7f3aac420bc5eb9b5
1060 F20101122_AAAHWY johnson_p_Page_211.txt
e89f00690aada0c732c04cf761e9fedc
29609b1fc2e6eabaea049dd27035732526e1b44a
609982 F20101122_AAAGMU johnson_p_Page_108.jp2
88f5ff3fd779e724fd09e31018fdd8b9
eb50ad8bd140412f0ea0e91c40a29ddb0980cee1
26819 F20101122_AAAHJN johnson_p_Page_013.pro
5de98c71548059cda3b95cb428f6a307
e4237a44593b3d855c07912f87682ec734c974bd
24891 F20101122_AAAIGH johnson_p_Page_142.QC.jpg
4b57a04aadb02eb1685389309e09a0cd
b01390266ccf56075b6a612a49d32afd314d57ce
1302 F20101122_AAAHWZ johnson_p_Page_212.txt
0f80a9a0d30abce0ab3948ae7fdffe2c
f9db6335fb4bf555330b7291d06dc60544267df5
1614 F20101122_AAAGMV johnson_p_Page_223.txt
c5053089e44363e9f513a41f96252272
ed0b97d00c76173f07701c69e3158891a132d1a4
F20101122_AAAHJO johnson_p_Page_015.pro
34e117713b7a58f7f9bbfc3ee1b4c5a5
b3a8324182550d2364e447094e6254bc7bbc8941
25478 F20101122_AAAIGI johnson_p_Page_168.QC.jpg
de8717be7c30426b5b474660fab6bc3e
909b446c432a7bf009130eb4516570d0635bc942
788 F20101122_AAAGMW johnson_p_Page_124.txt
37d7536d4ade6f6bb0e179cdd5528a71
2e4c456dd5b527f0d537e0afbece240272040cbf
53882 F20101122_AAAHJP johnson_p_Page_016.pro
bdea4460affd352e033349784879d108
c14145f78102c73c3dd7af8a6196796e3e08b89c
6035 F20101122_AAAIGJ johnson_p_Page_045thm.jpg
9e01b0e624b1adb547dc62b22373ebd4
eb235993a0fd1ad9d479560475cef2c6dd75546d
21702 F20101122_AAAGMX johnson_p_Page_021.QC.jpg
a43bb8910a984b729d787b69163425ce
704ca0cfaacd518367f67677cd947959a627e352
57220 F20101122_AAAHJQ johnson_p_Page_017.pro
2fbc2486ebb05fb2c9df2a610d18f3bb
abad7ec73cdbac8c68ea85aed948b1640c143fc7
11438 F20101122_AAAIGK johnson_p_Page_098.QC.jpg
7fc57835bc1ae74c2d43c5c9f3151cfc
0c9ae7a0bbdbbc5b520314008b4b7a204e18a38c
F20101122_AAAGMY johnson_p_Page_084.QC.jpg
adc06a399dd53bfa1830e957a642038a
178624d796717db95a509db63aa7286fb3dd8c0f
5804 F20101122_AAAHJR johnson_p_Page_018.pro
b49b7687d3963b4e83e102573c45b17d
3d1b847a39b58da16e5a21cb1b16ee858342ca7a
28125 F20101122_AAAIGL johnson_p_Page_143.QC.jpg
2bcee387e40bc4399d7b30fa4f8e34d5
7e01f1f2a718df753c46ac1d3af72d124cb19712
1016621 F20101122_AAAGMZ johnson_p_Page_035.jp2
d37b0ccfb6e37c6bf62e8796366f2b69
a8777a1f0138a042f5453651d00b20d52590a542
41188 F20101122_AAAHJS johnson_p_Page_019.pro
60faf1e8d6e38eea4a66f11c6d2e1872
c271b62a4e6612660127fa99a4c766833f57ac04
6084 F20101122_AAAIGM johnson_p_Page_007thm.jpg
0ad66792656e8c853961c4c0a86c8d34
f6f3f043c02512f69906655d2a45eba4067edcbb
22237 F20101122_AAAHJT johnson_p_Page_020.pro
9a0a3dd6fcd56b8a4bef90b3437b8f89
66c1fffa82c43e130546cf0d914e36ba6da14ac3
21692 F20101122_AAAHJU johnson_p_Page_021.pro
cf2b578716572ba6af51328fb109cdfc
2d1eb1274bc7abfa760474e174bc013163f427ae
7061 F20101122_AAAIGN johnson_p_Page_030thm.jpg
7e180b24af1580e07e6600f874bf37b0
36f9e05f6df9c464620e0e7403c4a77d10384c94
56579 F20101122_AAAHJV johnson_p_Page_022.pro
a476b16a7b655198dfd467c1f8e7de7d
fdc17093cdeef474b81e70794b3d0fe7a7dbdca0
5517 F20101122_AAAIGO johnson_p_Page_126thm.jpg
6f08dd59dad7b0c29f90ead5d178955a
83cb172d6ca101855022e9d1daf49831065e185a
46666 F20101122_AAAHJW johnson_p_Page_023.pro
90bc82f390b19ab291b717b62ba03c58
ddfa7ac26417979efdf9e4db7ba4c5e6ef975e44
12057 F20101122_AAAIGP johnson_p_Page_128.QC.jpg
aab0e773f00067e1bf0fcc86e003b090
4f2736fe9a32e195464d8753dde7a2cc6762d830
46486 F20101122_AAAHJX johnson_p_Page_025.pro
f8ab97d718f99d4350d01a50a1465cbf
89c71f83a9fddd65767bb382958ddbf478d9c354
19091 F20101122_AAAIGQ johnson_p_Page_111.QC.jpg
2f53f0e24b8fe7a65f41a8b6e5f48c53
91a48625500be2cba34c65fee3e46c67d9936846
44054 F20101122_AAAHJY johnson_p_Page_026.pro
07cb2fa1872c5f7a4d5d35234f76fa48
636ef24d29522279e03e1ed66fb5bde8b499b91b
23210 F20101122_AAAIGR johnson_p_Page_191.QC.jpg
e2dac5811a11e574586ee9ea1df65af4
9133ef920403dc990dfcf9f414daa0cddf9dfad9
33560 F20101122_AAAHJZ johnson_p_Page_027.pro
bbe9712a10c22ffee91f120280a83938
67fc33b8d6676148540eda69d0c95b0080303800
20056 F20101122_AAAIGS johnson_p_Page_162.QC.jpg
c5c17095e73255dc0573854bdd0ca4c8
de51c1d5eb611f8008a024589dc96618ebf20209
8596 F20101122_AAAIGT johnson_p_Page_064thm.jpg
2924032f964731a86320156f5add0d27
2bb5fffed9463400357401eebd5461d68450ac89
73643 F20101122_AAAGSA johnson_p_Page_147.jpg
95249d6e17e524eb355174ed081a58f4
026e13efc651dbb4cccd6f21128a968163a65156
27841 F20101122_AAAIGU johnson_p_Page_159.QC.jpg
fb4b7cfb51e1eff86b4d27f41eff66f8
f204420d6608135870b1d9af5cd77d7f04c4ee39
77426 F20101122_AAAGSB johnson_p_Page_148.jpg
0beec9c4a0ba16fdc3f4ceb1388a10c0
28a6781a2f22015ab746dbfbef89148c32a31db4
32169 F20101122_AAAIGV johnson_p_Page_036.QC.jpg
1031ac445dc660b561c5f6181331f2d3
2d35b37f033f8594cd4c7e454f838171d6e4769b
66115 F20101122_AAAGSC johnson_p_Page_149.jpg
ea90ef565146ac006771c6a78d244def
b5aa0efc19d80b8f935ef3ff9ad25951256cf9fe
5954 F20101122_AAAIGW johnson_p_Page_174thm.jpg
f4adea406d2eb4d3684640eb933db3fe
734ce982bc20d2e72da02409004936e9a8877474
86562 F20101122_AAAGSD johnson_p_Page_151.jpg
4cf72c673c5ad3f63066a61081340678
3ccdc5c04bade4254f26f543e9f9cfa50043ee49
7596 F20101122_AAAIGX johnson_p_Page_058thm.jpg
a335a33c364156a18647682c152fb752
adfb9de7fa6eaa22011ad468e155610e8afb169f
81622 F20101122_AAAGSE johnson_p_Page_152.jpg
4f56882acb0b1f274bb8e3dad31027e1
99b551936eb5f6cd472f4899475ff0075e5b9265
6741 F20101122_AAAIGY johnson_p_Page_151thm.jpg
c1a29f6f21cae8fee430c89b2608bd4f
9f452f7c5a7531890c82f33847c7009cdfbec147
93473 F20101122_AAAGSF johnson_p_Page_153.jpg
c390116872a3c0e0bc8ec6544cc5e786
871b130b633c978afc81ca445647e8d8986fc375
6077 F20101122_AAAIGZ johnson_p_Page_012thm.jpg
8c4929c2130607b4f007f9e26f9de24c
5e132eef7ad87c035711c37d1dc505dd773c6b2b
89757 F20101122_AAAGSG johnson_p_Page_154.jpg
ac4d8fc920ab5db0179f938edb381a6a
98e4cb74b9202889d7dd176fdfeeff74c153da30
90555 F20101122_AAAGSH johnson_p_Page_155.jpg
5b4fdee31c4116bb587f7d0f04d100d1
1cc4a9c0cacebc90b3f0be02c4932130330159c9
43432 F20101122_AAAHPA johnson_p_Page_192.pro
271534b9a6d6ae0adc3f6d228c11439c
30782cd31ef761d53ed741719b7f11c1bb9cf8e2
89929 F20101122_AAAGSI johnson_p_Page_156.jpg
350d8d20f0c850fb22b9707082b1109c
ae3d20f1cdf439c7d6a672cc22e97d9e63bb3480
35467 F20101122_AAAHPB johnson_p_Page_193.pro
677e1120e1c7bcbe8eef695e4cbe045b
b7b6a6b11bdac46c55091b8b5d0a38bba4e9f38f
91818 F20101122_AAAGSJ johnson_p_Page_157.jpg
393a57ac6072741089d6a07f31aa750d
0d3288999e777d4a23889dba18a37d4b1d88e9da
159907 F20101122_AAAHPC johnson_p_Page_194.pro
a8e33d3e82a88499330167f900ad60a6
cce374d0780a4675e3790e54a6f3205bde4183a6
46109 F20101122_AAAGSK johnson_p_Page_160.jpg
eb07016bfe097c9028b3a427a5398145
9f6b8e2893914456e18003dc0625349a3b13498c
173281 F20101122_AAAHPD johnson_p_Page_197.pro
5babdebb3d094a13382010d9d6dcf2c8
9958a883ba999fc5ab585aad592a801e974ab9fa
118378 F20101122_AAAGSL johnson_p_Page_161.jpg
f30f9ca0d8da301d71a029e5f7a9e85f
fdc951f5cf4889c3dc0491a64ba4f2541ff0a15c
172595 F20101122_AAAHPE johnson_p_Page_198.pro
d85fae154e13b4eff5829c1950394d29
f13cf8b72eaa14fa977092b7074cafe87d2ed2c9
127442 F20101122_AAAGFA johnson_p_Page_182.jpg
9b51966ef748e841b3bfbdf198c93fc4
f2eb3e4b46aff04453ca30de3a4baf2a3a60f00d
53485 F20101122_AAAGSM johnson_p_Page_163.jpg
62f6597133dad1c58616b7904da8e2c8
dd1930939afaa90844be8dcae9f7e90c12a3356b
127881 F20101122_AAAHPF johnson_p_Page_199.pro
c920b08ead1f408998d27f7aff62a55c
2eb626ddf89eaef55c9eaed302ef28db2511390d
1815 F20101122_AAAGFB johnson_p_Page_220.txt
f7cd43740516a30f50b6e2719cdecea2
1f397eadf0573ac60037d8c658a9528bd0aa2810
40810 F20101122_AAAGSN johnson_p_Page_164.jpg
f8842f50c48d20401f93eaf1b098d5ff
d47b91a2515b2a36909fbf1e99ed6eae0ac772f2
29384 F20101122_AAAHPG johnson_p_Page_200.pro
063d4c109e9d1b9fdc3f4d4e7b3d409f
c023e29b168c16961efebc366407a1fec8d17daf
2527 F20101122_AAAGFC johnson_p_Page_067thm.jpg
9a248be457e8d79e21e560edd2024ef3
5888d3d0d86e34c5f3300219a6fe1dd7a4d6e85f
81062 F20101122_AAAGSO johnson_p_Page_167.jpg
5aa062dd066bd7adfa791833efc9f610
f35687cf5c1b0ef9f0cc6b38610a8faac70a12b6
115662 F20101122_AAAHPH johnson_p_Page_201.pro
dc0c6ecb0012ea4e49c6fb0f4d2f466c
d8014a13e9adb58297daeaf72d8e3183d9b9024f
6397 F20101122_AAAIMA johnson_p_Page_167thm.jpg
c946c3341d34e0102efc6f6afc181fd7
78a5eab47a0ef142a7437d0225607958f6daaaff
77335 F20101122_AAAGSP johnson_p_Page_168.jpg
e9a5dc7e0f52f23883dba328830acfee
d36e89a1543a7b003fd4185792eb826cd4c496b4
24851 F20101122_AAAHPI johnson_p_Page_202.pro
98630ecffc5139c640000a13785ed793
550accf8998a7d9fbd6f3d398b72f2e6bfa48a85
27016 F20101122_AAAIMB johnson_p_Page_181.QC.jpg
a6ac189cfda008e246a07edfcc9ea951
2759cf65ff9da60b63b2d02e1bc801d8f6d58e23
323654 F20101122_AAAGFD johnson_p_Page_116.jp2
7e984b8dece2a6f659c8d68e475699fb
cfe8cfb0b8ee6d0e0c0260f818ed44255c15d838
57898 F20101122_AAAHPJ johnson_p_Page_203.pro
04ce8ff7dddbdab7c48f7cad0b873d21
6c1ece71ffe025df5fba01b959e9a248da443af4
24385 F20101122_AAAIMC johnson_p_Page_186.QC.jpg
b6771dfce54b71406b2d32f52c6592fa
f05b57aa6d2b4d52cdb964a0dc47a56e77d0ee84
45185 F20101122_AAAGFE johnson_p_Page_058.pro
a81dfff996b896f78c0618af4c37d1db
1968bc3f4c3f3b762b676d861fb6553c8b4db28a
65358 F20101122_AAAGSQ johnson_p_Page_169.jpg
d239429b5121348e84b20f7bcdf28204
00821c2346c95f36f511a5006efd270ead7ff657
30475 F20101122_AAAHPK johnson_p_Page_204.pro
0a39d93a5e9aa13cff6d4737d82a00b3
502c877381d4d07fff07f31609fa885bb85326b2
27126 F20101122_AAAIMD johnson_p_Page_187.QC.jpg
7a3c5fb95276b7743ffeaaacb03662aa
305ba0233d245e8e6fa74d1c03c12b732c73310b
44166 F20101122_AAAGFF johnson_p_Page_208.pro
835a36b9fef904a5dd0827cbb35b08e1
f3658369f322ea229ad560232aa9d87ac2695c46
98887 F20101122_AAAGSR johnson_p_Page_170.jpg
67789c4bb383576a0e5b4009b935b12f
4fe6a8a7887bb15b6de1cd559c4c12c9795ee6ca
39990 F20101122_AAAHPL johnson_p_Page_205.pro
ecda716b34e599122667f28460a20121
6ba33a14ff7a94d247560a8a8dcf6c625fa680ee
1758 F20101122_AAAIME johnson_p_Page_196thm.jpg
13624c5cbca7410c148c446268879fd3
526f9c820e4e00741748790e02b1fd9ead1dd0e7
54938 F20101122_AAAGFG johnson_p_Page_219.jpg
f0ef5d39d72d2adf10dc99b1412d6d61
304528a99844cfed5762ac89131bf86fe4c97616
F20101122_AAAHCA johnson_p_Page_001.tif
e9e6169d9b434f61f1c0aa015d533b59
709da14978a63a222213caf2508837b0b5395e63
70894 F20101122_AAAGSS johnson_p_Page_171.jpg
dd3eebe3c8aa57eb68f279efdb58f9be
b0e4cac2ee3c597d0b9895e83674c2b75af671b4
1587 F20101122_AAAIMF johnson_p_Page_199thm.jpg
c8dd11940830a719878823f9f9b40269
7a3e23a2472a90b60235e742633bcb4474b856f8
1444 F20101122_AAAGFH johnson_p_Page_183.txt
aad68de29c5e35826bb98527b2dee469
7ee8416cb38ad1cd0f37c864864bef30a55913ce
F20101122_AAAHCB johnson_p_Page_002.tif
f039e48d7649505bf7f6520fa8f586fb
ca61fcfc3344982f6a6e5a0f25111acdd840a151
76076 F20101122_AAAGST johnson_p_Page_173.jpg
7cec15bb5fbb83f81a6470b8c89e11a3
6c676c8b7f20c1dd886b5a67b2b72b43950496c4
39920 F20101122_AAAHPM johnson_p_Page_207.pro
ccc4213e0569469f60a08e96e7457e31
1fe655fe68e1919206f7cbe253f3d77814117e0a
15355 F20101122_AAAIMG johnson_p_Page_202.QC.jpg
6fdd82fba06c0532c5b76076556c16ed
7b2f7491ee3a42969a75570ae25c23f5dcb43536
680994 F20101122_AAAGFI johnson_p_Page_200.jp2
227c2d5dc505c801d6d23e565ef89610
283afca093c100a4a273e43cce4eda376828e610
F20101122_AAAHCC johnson_p_Page_003.tif
a33ec537b38bbe9d5d25f05a2d322c5d
a12fc65cb09ce4918d3c4c4e533d19ad486ea18e
64437 F20101122_AAAGSU johnson_p_Page_174.jpg
9ccfbe0ff01ef6687807121cb437afab
d37ab3c4fb9d9100d2119d017941c89e6bcf5c75
36019 F20101122_AAAHPN johnson_p_Page_209.pro
61ccd5c3589030a642e5f402b9566b88
cec19d7756712f78d15fedf2e9dd8aedccc6a908
F20101122_AAAIMH johnson_p_Page_206thm.jpg
448d961f839a54394000edf0be38e734
67c9673dfd27ed6aeb5bc96e2ad87bd3a903bd23
805105 F20101122_AAAGFJ johnson_p_Page_131.jp2
8bc294917f160bc66b19d89799b936db
f28aad078c12a50c00db012561b1fdc5db50bcee
F20101122_AAAHCD johnson_p_Page_004.tif
0b19def1a8ae7200915f93dd7ab9b443
cf8793b9ebbbc37da59211a22ef6c51a95f5cd4c
120235 F20101122_AAAGSV johnson_p_Page_175.jpg
b75c63d817c0ab32538f5cb50dcf367c
6aa80a054dd55134f719c3396428e9a9a9f95956
45293 F20101122_AAAHPO johnson_p_Page_210.pro
92971e1cc8ddd53ea284a7e434d08661
a0b4106a4eec2d928c9af6e6e441d16f3fe20507
F20101122_AAAGFK johnson_p_Page_080.tif
78cb016bc4d79902dfeb281ccee384c2
ce6bdc94d5ddce415abc717269a4922e7d701850
F20101122_AAAHCE johnson_p_Page_005.tif
8385288693bf19b4c2d93090b72b0676
dd394829a6a4d3eaf3eee75b1b4e2b0a3cc9fd90
82159 F20101122_AAAGSW johnson_p_Page_176.jpg
3695ef46678f5b66862b35c203593206
dc0d63d07de973517a928889332dce7de75fd0d5
24530 F20101122_AAAHPP johnson_p_Page_211.pro
f0038ef2f1740ff0460e14c41e45d3a6
6763a962ab6e2a1c77e19150ca066203ccdaf095
4546 F20101122_AAAIMI johnson_p_Page_207thm.jpg
28c575a2b49761f4d42cdd35b8e80e70
e8a5f2da5dff8553b4b357e59567bb5e6c4715de
1263 F20101122_AAAGFL johnson_p_Page_202.txt
6d717506e8affc81d2a1716143e2d5cf
fe34db54dbac572f1cb1813caf68efd2912177d1
F20101122_AAAHCF johnson_p_Page_006.tif
3ad55af497569c885f2213526a60f24a
1012b014a0c2a83ba36a19835e74028efaa4953a
31828 F20101122_AAAGSX johnson_p_Page_177.jpg
b58889afa488690e0d109d6d013776bc
9cc65c34877f55b1805bbbabb13a099bc8101901
29983 F20101122_AAAHPQ johnson_p_Page_212.pro
59954012bf59624cec2990f22faa83c1
5a388a866a9a1feaaf2b2a5f7d37b1837091fb38
3423 F20101122_AAAIMJ johnson_p_Page_210thm.jpg
311b95bc7b0290b88b36265b373fea84
07e54a01d0bb9a58ebcef34d86eb05f5d6d7c88b
752930 F20101122_AAAGFM johnson_p_Page_192.jp2
a4a814e9e04012e26f3d816858f11ef7
fb83ec0f9381cc84ac5f931c12d0520ed15deded
F20101122_AAAHCG johnson_p_Page_007.tif
37c3ba61fd2a61816922665743517dd8
4b8770daddbd20ae733f509d95a7784132ddbe66
31581 F20101122_AAAGSY johnson_p_Page_178.jpg
c9f90eb1713541a4d39800e3ab56527d
a03fd618a7632540f5e1f0298b08776d13a70236
39318 F20101122_AAAHPR johnson_p_Page_213.pro
5f2fccc98f4dd5a418e97a531c096757
fa57296cd3439e154504a269f7350153745bca96
4461 F20101122_AAAIMK johnson_p_Page_214thm.jpg
8d8fb5e6f10226349f36f696552b9bd8
f29f363756a4f588639708a92be77c1b71b2d056
84082 F20101122_AAAGFN johnson_p_Page_078.jpg
8cf2c9e8170221c11912d33ba1de46b7
10733c2384189f9faa9c08b4978cec3542a70fe3
F20101122_AAAHCH johnson_p_Page_008.tif
73ad499ba417dd12ade8f48ec957858e
9074e686c83428f77a50518e917a86f8c46d4a8d
101836 F20101122_AAAGSZ johnson_p_Page_179.jpg
d8c807f2033bfd2f7b9c1eac379356f3
a854f1d8ee6d4a1344c69c91affac2b43cfbf879
32706 F20101122_AAAHPS johnson_p_Page_214.pro
b0223449360e155d97e60af0ae763e02
0d7231965cd5d338f9406f77da3b923ba0d27038
18878 F20101122_AAAIML johnson_p_Page_214.QC.jpg
4d63556c5b82d2b3594e04415c721394
50f09b7b4b6632f1b5c715bd59df2d83e48e255b
406704 F20101122_AAAGFO johnson_p_Page_070.jp2
62ff44fcc80e7b7a1ff51e1aa3131f52
3e39fcdc30d49a3e8f2200b490f80a781e3b0f6a
F20101122_AAAHCI johnson_p_Page_010.tif
a505e7ad979767324104f5dafb5724e9
ee48c0062367babd14f42948473c0992e6686678
47129 F20101122_AAAHPT johnson_p_Page_216.pro
2b8e029ecfbd872ad415b59ce0a0b956
cffdbf65bb86a6bc4e6fe80340851ca6821cf61c
5120 F20101122_AAAIMM johnson_p_Page_216thm.jpg
bd194b277f58f40688e91c983b3beaaa
6b0f9e823c49f71a8b44fcea67d60413bf1b2124
673158 F20101122_AAAGFP johnson_p_Page_202.jp2
2d7bc147e46ba5b941fd6617baed9d5d
57a685250fdd2c29adf672da5a4f6482e6700455
F20101122_AAAHCJ johnson_p_Page_013.tif
3a7433793ed5cd0f3644e8996d7ada2a
1c1d2cc16f4dcadc2be75e5370444e510feb307f
34391 F20101122_AAAHPU johnson_p_Page_218.pro
286a8884b498652872ebca43fbc3f46f
658e27248635779a879140fb7b530e0be8cc5645
19992 F20101122_AAAIMN johnson_p_Page_217.QC.jpg
65ccee10d238cef4d131f50c488d464f
d62b34663063290b0344af10350560da46a36b7e
F20101122_AAAGFQ johnson_p_Page_084.tif
806b7e9b5654f0f2c339cb892025563a
a3675afc4449daf64299b2f8569a89cc46c32b97
F20101122_AAAHCK johnson_p_Page_015.tif
6c8e0caa71e1d3887b861d170205c627
2e4bb455ed410cbd4fb12eb08a417c24835b5fcf
28126 F20101122_AAAHPV johnson_p_Page_219.pro
f76868f4b8ee7b8e09317ba2751b4682
8578c5cf0711961949a8e2fa375500e6f7c51038
20261 F20101122_AAAIMO johnson_p_Page_218.QC.jpg
b1272ff0ec62a49d583551cd8af12ee4
12efd9046b4b72c34c2220d4d3607d2d003e963b
F20101122_AAAGFR johnson_p_Page_092.tif
c028e68b7609b5d6d65413e91da9b9bf
ed782f73c80e8eee69b467508e06acd4f7f84a91
F20101122_AAAHCL johnson_p_Page_017.tif
afec41b660d2249a54657adb9fba81a3
537efd698720e4243d96a0f7aa0fd0b50e995848
43527 F20101122_AAAHPW johnson_p_Page_220.pro
683c01febbf5357879e1c0be63a985f6
d77c3666f046c771d25194de167af9b7199ea021
21835 F20101122_AAAIMP johnson_p_Page_228.QC.jpg
cb13c4f004629e05cdc969e9329a6c33
7e3aa4bb9061045a0d8465b40ec470b98c1b8404
15041 F20101122_AAAGFS johnson_p_Page_160.QC.jpg
69df311de0ea0fc97ec7bb76cbe04744
21ca343abbd225cb72a5d88ba99fac4ca674cfb4
F20101122_AAAHCM johnson_p_Page_018.tif
148d086d7c6742e4ee691bb0a71de93f
1da43666feb8980100ee6d853bb73051c76b0452
31969 F20101122_AAAHPX johnson_p_Page_222.pro
6cb155c5945928ff65206c3aff113fcd
8563210abf99488bcf3d858ee82594e0526ec7f3
2698 F20101122_AAAIMQ johnson_p_Page_230.QC.jpg
bf4898cf474596b8945d1be90ee5ad5f
bced0c368468f2bf77d3703a13d31f47a9087b70
32805 F20101122_AAAGFT johnson_p_Page_179.QC.jpg
e260f94d3a1008f7d726dbfd1336035f
35e2a38e6b3d7a712fee2da950e2d1d3452762a3
F20101122_AAAHCN johnson_p_Page_019.tif
465e7d6f48c4bc5c6628f47e8c1c994f
3aa05cfa18f018f6633f6704a0e8dfadb1f8dc1e
34932 F20101122_AAAHPY johnson_p_Page_223.pro
ab4b73df34c93d3e26b15f4eb95950d8
ed8697a9ee02bd6c9b972c62fa7a30fcc23df01f
2249 F20101122_AAAGFU johnson_p_Page_022.txt
1b4190f9d866d1c60cdbb04f138470e7
a28c9face291e58872385321575313d5225dc4f9
F20101122_AAAHCO johnson_p_Page_021.tif
e203d50231317a9d38eccb01ed00f902
19ff6566fb4e48aaaf15122d5a145e35ac8e2bd1
30138 F20101122_AAAHPZ johnson_p_Page_224.pro
7e9e4c82b2aeaa272aba769c13217373
e31feb5626e1cb7170562a14973a60323adcf98f
60039 F20101122_AAAGFV johnson_p_Page_162.jpg
65bf0434a65e2f52cbd6b34c677d44e9
afb612208cc3202a4e3f8615fdbfb44a54d2091a
F20101122_AAAHCP johnson_p_Page_022.tif
949f863648b2b17b39330264836fe595
c39b3e57d75210301a264b571794a4899942e40f
1024386 F20101122_AAAGFW johnson_p_Page_058.jp2
b0bc45dcc66b50e59b5c2c9363675be2
322c8e3047c157ac514b17788ffb3b4b6e7310dc
699752 F20101122_AAAGYA johnson_p_Page_110.jp2
ab24bf86845d8b90ba73555916946c57
a9988bf1e102e360bfb7aa94f1e3ff4000d0fc73
F20101122_AAAHCQ johnson_p_Page_024.tif
5c8b15cbee0c96f344035897473f99b0
7391aa30510230daa5088839e75ffdec894e8be4
92235 F20101122_AAAGFX johnson_p_Page_159.jpg
66081ca63aaeb01087efc70a30908c61
4fa8bfeb2eb16d30522cb39f9ccdf67f86404b0c
751513 F20101122_AAAGYB johnson_p_Page_111.jp2
decde667c4ca0c4917c89c29272e9755
d27db1cfbc1a612e8c09ab95378ffec544697d25
F20101122_AAAHCR johnson_p_Page_025.tif
15d8cb2717a8139d26146dd46a128ee7
971f3aff75ce4e0c5848e4c1b3cb433a83d8a24f
22503 F20101122_AAAGFY johnson_p_Page_171.QC.jpg
bcd47a5bffb298b996402c109db6d657
0aa7a544c33337091c0ea66d8852d61b93710a20
726656 F20101122_AAAGYC johnson_p_Page_113.jp2
c2f9b994a3a3e2d76004ef5913f87c2d
3bae997a07ec41a6dad3b79f349f4e55624aaf3d
F20101122_AAAHCS johnson_p_Page_027.tif
bb00fd6ac7cff89fd43cc87fa50dbc46
bc0385b97a0444603cd844d791edf1fda1f12ee9
F20101122_AAAGFZ johnson_p_Page_222.tif
c50c7c866e865b0dcb3b39c1619412a5
e520f0aaa70ceacd2dcf882a506748e2c5db9e74
409824 F20101122_AAAGYD johnson_p_Page_114.jp2
3c38b8969b3f28e9bf68bc820f1c5644
de02d923a4b1ebb7a36f5d98b61f2e120040422b
F20101122_AAAHCT johnson_p_Page_028.tif
5bde24d2456ea4d5c0c8dc57908f92f9
80808102b283c35a5586176c62089271b2247b1f
515700 F20101122_AAAGYE johnson_p_Page_115.jp2
fe55eaa016014c0eeb6338223f841fd8
dbc8269e142f95e78a17c29f50ed26058ad876a2
F20101122_AAAHCU johnson_p_Page_029.tif
e31121ad06e4965a8884c3ade40b6339
485037f00c8b181f0c8270c7292e7a716dfdbe43
F20101122_AAAHCV johnson_p_Page_031.tif
cf20937bd6a479f23c2a294f60bad0c3
9f360d8901cf743d00fb22439e046776dba18907
919155 F20101122_AAAGYF johnson_p_Page_117.jp2
ca9015bc19f00dc9bd6e3f8a93408fc0
b86f9050c10db2ca9c0c29a4b336c022871b625d
F20101122_AAAHCW johnson_p_Page_033.tif
5e67ff12362573b097ad8df4522b9e9f
b53e648d182d101ae98affe9706a7d77957cc027
1035358 F20101122_AAAGYG johnson_p_Page_118.jp2
0645cc004f23947f311e4a60f107316b
cba716b1a168cd83f959d5980c18a1ef11da2135
F20101122_AAAHVA johnson_p_Page_142.txt
20a27eb81a8d61d8b389bf73d399fea3
0d00802d83a1cde78f0240afb30060ec55ffc74a
403138 F20101122_AAAGYH johnson_p_Page_119.jp2
b280c91008b17e12e05eba49c8120a9a
a532f09cc83b41dc3631a3d4f6e39c47fb2b3500
F20101122_AAAHCX johnson_p_Page_035.tif
f341920819d5da6d119a25da69bd5ac1

PARALLEL COMPUTATIONAL MECHANICS WITH A CLUSTER OF WORKSTATIONS

By

PAUL C. JOHNSON

A THESIS PRESENTED TO THE GRADUATE SCHOOL OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT OF THE REQUIREMENTS OF THE DEGREE OF MASTER OF SCIENCE

UNIVERSITY OF FLORIDA

2005


Copyright 2005 by Paul C. Johnson


ACKNOWLEDGEMENTS

I wish to express my sincere gratitude to Professor Loc Vu-Quoc for his support and guidance throughout my master's study. His steady and thorough approach to teaching has inspired me to accept any challenge with determination. I also would like to express my gratitude to the supervisory committee members: Professors Alan D. George and Ashok V. Kumar. Many thanks go to my friends who have always been there for me whenever I needed help or friendship. Finally, I would like to thank my parents Charles and Helen Johnson, grandparents Wendell and Giselle Cernansky, and the Collins family for all of the support that they have given me over the years. I sincerely appreciate all that they have sacrificed and cannot imagine how I would have proceeded without their love, support, and encouragement.


TABLE OF CONTENTS

ACKNOWLEDGEMENTS
ABSTRACT
1 PARALLEL COMPUTING
  1.1 Types of Parallel Processing
    1.1.1 Clusters
    1.1.2 Beowulf Cluster
    1.1.3 Network of Workstations
2 NETWORK SETUP
  2.1 Network and Computer Hardware
  2.2 Network Configuration
    2.2.1 Configuration Files
    2.2.2 Internet Protocol Forwarding and Masquerading
  2.3 MPI–Message Passing Interface
    2.3.1 Goals
    2.3.2 MPICH
    2.3.3 Installation
    2.3.4 Enable SSH
    2.3.5 Edit Machines.LINUX
    2.3.6 Test Examples
    2.3.7 Conclusions
3 BENCHMARKING
  3.1 Performance Metrics
  3.2 Network Analysis
    3.2.1 NetPIPE
    3.2.2 Test Setup
    3.2.3 Results
  3.3 High Performance Linpack–Single Node
    3.3.1 Installation
    3.3.2 ATLAS Routines
    3.3.3 Goto BLAS Libraries


    3.3.4 Using either Library
    3.3.5 Benchmarking
    3.3.6 Main Algorithm
    3.3.7 HPL.dat Options
    3.3.8 Test Setup
    3.3.9 Results
    3.3.10 Goto's BLAS Routines
  3.4 HPL–Multiple Node Tests
    3.4.1 Two Processor Tests
    3.4.2 Process Grid
    3.4.3 Three Processor Tests
    3.4.4 Four Processor Tests
    3.4.5 Conclusions
4 CALCULIX
  4.1 Installation of CalculiX GraphiX
  4.2 Installation of CalculiX CrunchiX
    4.2.1 ARPACK Installation
    4.2.2 SPOOLES Installation
    4.2.3 Compile CalculiX CrunchiX
  4.3 Geometric Capabilities
  4.4 Pre-processing
    4.4.1 Points
    4.4.2 Lines
    4.4.3 Surfaces
    4.4.4 Bodies
  4.5 Finite-Element Mesh Creation
5 CREATING GEOMETRY WITH CALCULIX
  5.1 CalculiX Geometry Generation
    5.1.1 Creating Points
    5.1.2 Creating Lines
    5.1.3 Creating Surfaces
    5.1.4 Creating Bodies
    5.1.5 Creating the Cylinder
    5.1.6 Creating the Parallelepiped
    5.1.7 Creating Horse-shoe Section
    5.1.8 Creating the Slanted Section
  5.2 Creating a Solid Mesh
    5.2.1 Changing Element Divisions
    5.2.2 Delete and Merge Nodes


    5.2.3 Apply Boundary Conditions
    5.2.4 Run Analysis
6 OPEN SOURCE SOLVERS
  6.1 SPOOLES
    6.1.1 Objects in SPOOLES
    6.1.2 Steps to Solve Equations
    6.1.3 Communicate
    6.1.4 Reorder
    6.1.5 Factor
    6.1.6 Solve
  6.2 Code to Solve Equations
  6.3 Serial Code
    6.3.1 Communicate
    6.3.2 Reorder
    6.3.3 Factor
    6.3.4 Communicate B
    6.3.5 Solve
  6.4 Parallel Code
    6.4.1 Communicate
    6.4.2 Reorder
    6.4.3 Factor
    6.4.4 Solve
7 MATRIX ORDERINGS
  7.1 Ordering Optimization
  7.2 Minimum Degree Ordering
  7.3 Nested Dissection
  7.4 Multi-section
8 OPTIMIZING SPOOLES FOR A COW
  8.1 Installation
  8.2 Optimization
    8.2.1 Multi-Processing Environment MPE
    8.2.2 Reduce Ordering Time
    8.2.3 Optimizing the Front Tree
    8.2.4 Maxdomainsize
    8.2.5 Maxzeros and Maxsize
    8.2.6 Final Tests with Optimized Solver
  8.3 Conclusions
    8.3.1 Recommendations


A CPI SOURCE CODE
B BENCHMARKING RESULTS
  B.1 NetPIPE Results
  B.2 NetPIPE TCP Results
  B.3 High Performance Linpack
    B.3.1 HPL Makefiles
    B.3.2 HPL.dat File
    B.3.3 First Test Results with ATLAS
    B.3.4 HPL.dat for Second Test with ATLAS Libraries
    B.3.5 Second Test Results with ATLAS
    B.3.6 Final Test with ATLAS Libraries
    B.3.7 HPL.dat File for Multi-processor Test
    B.3.8 Goto's Multi-processor Tests
    B.3.9 HPL.dat File for Testing Broadcast Algorithms
    B.3.10 Final Test with Goto's Libraries
C CALCULIX INSTALLATION
  C.1 ARPACK Makefile
  C.2 CalculiX CrunchiX Makefile
D CALCULIX CRUNCHIX INPUT FILE
E SERIAL AND PARALLEL SOLVER SOURCE CODE
  E.1 Serial Code
  E.2 Optimized Parallel Code
    E.2.1 P solver Makefile
    E.2.2 P solver Source Code
REFERENCES
BIOGRAPHICAL SKETCH


LIST OF TABLES

3.1 Network hardware comparison
3.2 ATLAS BLAS routine results
3.3 Goto's BLAS routine results
3.4 Goto's BLAS routine results–2 processors
3.5 Goto's BLAS routine results–3 processors
3.6 Goto's BLAS routine results–4 processors
6.1 Comparison of solvers
6.2 Utility objects
6.3 Ordering objects
6.4 Numeric objects
8.1 maxdomainsize effect on solve time (seconds)
8.2 Processor solve time (seconds)–700 maxdomainsize
8.3 Processor solve time (seconds)–900 maxdomainsize
8.4 Results with optimized values
8.5 Results for large test


LIST OF FIGURES

1.1 Beowulf layout
2.1 Network hardware configuration
2.2 Original network configuration
2.3 cpi results
3.1 Message size vs. throughput
3.2 MPI vs. TCP throughput comparison
3.3 MPI vs. TCP saturation comparison
3.4 Decrease in effective throughput with MPI
3.5 Throughput vs. time
3.6 Block size effect on performance for 1 node
3.7 2D block-cyclic layout
3.8 Block size effect on performance for 2 nodes
3.9 Block size effect on performance for 3 nodes
3.10 Block size effect on performance for 4 nodes
3.11 Decrease in maximum performance
4.1 Opening screen
4.2 p1 with label


4.3 Spline
4.4 Surface
4.5 Body created by sweeping
5.1 Final part
5.2 Creating points
5.3 Selection box
5.4 Creating lines
5.5 Creating lines
5.6 Creating surfaces
5.7 Creating surface A001
5.8 Creating surface A002
5.9 Creating bodies
5.10 Plotting bodies
5.11 Creating the handle
5.12 Creating the cylinder points
5.13 Creating the cylinder lines
5.14 Creating the cylinder surfaces
5.15 Cylinder surfaces
5.16 Cylinder surfaces
5.17 Creating points for parallelepiped
5.18 Creating lines for parallelepiped


5.19 Creating lines for horse-shoe section
5.20 Surfaces
5.21 Creating body for horse-shoe section
5.22 Creating lines for the slanted section
5.23 Final part
5.24 Unaligned meshes
5.25 Changing line divisions
5.26 Pick multiple division numbers
5.27 Change all numbers to 9
5.28 Select line away from label
5.29 Change cylinder divisions
5.30 Change parallelepiped divisions
5.31 Change horse-shoe section divisions
5.32 Change horse-shoe section divisions
5.33 Improved element spacing
5.34 First nodal set
5.35 Selected nodes
5.36 Select more nodes
5.37 Selected wrong nodes
5.38 Correct set of nodes
5.39 Selected extra nodes


5.40 Select nodes to delete
5.41 Final node set
5.42 Select nodes
5.43 Plot nodes
5.44 Select nodes
5.45 Select nodes
5.46 Final node set
5.47 Select nodes from the side
5.48 Good node set
5.49 Final node set
5.50 Determine node distance
5.51 Create selection box
5.52 Final node set
5.53 Side view of handle with nodes plotted
5.54 Select nodes on handle inner surface
5.55 Add nodes to set load
5.56 von Mises stress for the part
6.1 Three steps to numeric factorization
6.2 Arrowhead matrix
7.1 Original A
7.2 Lower matrix


7.3 Upper matrix
7.4 Steps to elimination graph
7.5 Minimum degree algorithm
7.6 Modified minimum degree algorithm
8.1 von Mises stress of cantilever
8.2 p solver MPI communication–2 processors
8.3 p solver MPI communication zoomed–2 processors
8.4 p solver MPI communication–4 processors
8.5 MPI communication for cpi
8.6 First optimization
8.7 Final optimization results for two processors
8.8 Final optimization results for four processors
8.9 Diskless cluster


Abstract of Thesis Presented to the Graduate School of the University of Florida in Partial Fulfillment of the Requirements for the Degree of Master of Science

PARALLEL COMPUTATIONAL MECHANICS WITH A CLUSTER OF WORKSTATIONS

By

Paul Johnson

December 2005

Chairman: Loc Vu-Quoc
Major Department: Mechanical and Aerospace Engineering

Presented are the steps to creating, benchmarking, and adapting an optimized parallel system-of-equations solver, provided by SPOOLES, to a Cluster of Workstations (CoW) constructed from Commodity Off The Shelf (COTS) components. The parallel solver is used in conjunction with the pre- and post-processing capabilities of CalculiX, a freely available three-dimensional structural finite-element program. In the first part, parallel computing is introduced and the different architectures are explained and compared. Chapter 2 explains the process of building a Cluster of Workstations: the setup of computer and network hardware and the underlying software that allows interprocessor communication. Next, a thorough benchmarking of the cluster is presented, using several applications that report network latency, bandwidth, and overall system performance. In the last chapter, the parallel solver is optimized for our Cluster of Workstations, with recommendations to further improve performance.


CHAPTER 1
PARALLEL COMPUTING

Software has traditionally been written for serial computation, performed by a single Central Processing Unit (CPU). With computational requirements always increasing along with the growing complexity of software, harnessing more computational power is always in demand. One way is to increase the computational power of a single computer, but this method can become very expensive and has its limits; alternatively, a supercomputer with vector processors can be used, but that can also be very expensive. Parallel computing is another method, which utilizes the computational resources of multiple processors simultaneously by dividing the problem among the processors.

Parallel computing has a wide range of uses that may not be widely known but that affect a large number of people. Some uses include predicting weather patterns, determining airplane schedules, unraveling DNA, and making automobiles safer. By using parallel computing, larger problems can be solved and the time to solve these problems decreases.

1.1 Types of Parallel Processing

There are several types of parallel architectures: Symmetric MultiProcessing (SMP), Massively Parallel Processing (MPP), and clusters. Symmetric multiprocessing systems contain processors that share the same memory and memory bus. These systems are limited in their number of CPUs because as the number of CPUs increases, so does the requirement of having a very high speed bus to efficiently handle the data. Massively parallel processing systems overcome this limitation by using a message passing system.


The message passing scheme can connect thousands of processors, each with their own memory, by using a high speed, low latency network. Often the message passing systems are proprietary, but the MPI [1] standard can also be used.

1.1.1 Clusters

Clusters are distributed memory systems built from Commodity Off The Shelf (COTS) components connected by a high speed network. Unlike MPP systems, however, clusters largely do not use a proprietary message passing system. They often use one of the many MPI [1] standard implementations such as MPICH [2] and LAM/MPI [3]. Clusters offer high availability, scalability, and the benefit of building a system with supercomputer power at a fraction of the cost [4]. By using commodity computer systems and network equipment along with the free Linux operating system, clusters can be built by large corporations or by an enthusiast in their basement. They can be built from practically any computer, from an Intel 486 based system to a high end Itanium workstation. Another benefit of using a cluster is that the user is not tied to a specific vendor or its offerings. The cluster builder can customize the cluster to their specific problem using the hardware and software that presents the most benefit or that they are most familiar with.

1.1.2 Beowulf Cluster

A Beowulf cluster is a cluster of computers that is dedicated, along with its network, only to parallel computing and nothing else [5]. The Beowulf concept began in 1993 with Donald Becker and Thomas Sterling outlining a commodity component based cluster that would be cost effective and an alternative to expensive supercomputers. In 1994, while working at the Center of Excellence in Space Data and Information Sciences (CESDIS), the Beowulf Project was started. The first Beowulf cluster was composed of sixteen Intel DX4 processors connected by channel bonded Ethernet [5]. The project was an instant success and led to further research into the possibilities of creating a high performance system based on commodity products.


A Beowulf cluster has compute nodes and a master node which presides over the compute nodes. The compute nodes of a Beowulf cluster may not even have a monitor, keyboard, mouse, or video card. The compute nodes are all COTS computers, generally identical, that run open source software and a variant of the Linux or Unix operating system [6]. Linux is a robust, multitasking derivative of the Unix operating system that allows users to view the underlying source code, modify it to their needs if necessary, and also escape some of the vendor lock-in issues of some proprietary operating systems. Some benefits of using Linux are that it is very customizable, runs on multiple platforms, and can be obtained from numerous websites for free. For Beowulf clusters there is a master node that often has a monitor and keyboard, a network connection to the outside world, and another network card for connecting to the cluster. The master node performs such activities as data backup, data and workload distribution, gathering statistics on the nodes' performance or state, and allowing users to submit a problem to the cluster. Figure 1.1 is a sample configuration of a Beowulf cluster.

1.1.3 Network of Workstations

A Network of Workstations (NoW) is another cluster configuration that strives to harness the power of underutilized workstations. This type of cluster is also similar to a Cluster of Workstations (CoW) and Pile of PCs (PoPs) [7]. The workstations can be located throughout a building or office and are connected by a high speed switched network. This type of cluster is not a Beowulf cluster because the compute nodes are also used for other activities, not just computation. A NoW cluster has the advantage of using an existing high-speed LAN, and with workstations always being upgraded, the technology deployed in a NoW will stay current and not suffer the technology lag time often seen with traditional MPP machines [7]. The cluster that we use for our research is considered a Cluster of Workstations. This type of cluster can be described as being in between a Beowulf cluster and a Network of Workstations.


Figure 1.1. Beowulf layout

Workstations are used for computation and other activities, as with NoWs, but are also more isolated from the campus network, as with a Beowulf cluster.
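All of the cluster configurations described above rely on explicit message passing between processes, typically through an MPI implementation such as MPICH or LAM/MPI. As a rough illustration only (this sketch is not part of the thesis code; the work size N and the block partitioning are assumptions made for the example), a minimal MPI program in C that divides a problem among the available processes might look like this:

    /* Minimal MPI sketch: each process reports which share of the work
     * it would handle.  Hypothetical example; N and the partitioning
     * scheme are assumed, not taken from the thesis. */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int rank, size, chunk, start, end;
        const int N = 1000;                   /* total work items (assumed) */

        MPI_Init(&argc, &argv);               /* start the MPI runtime      */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* rank (id) of this process  */
        MPI_Comm_size(MPI_COMM_WORLD, &size); /* number of processes        */

        /* Give each process a contiguous block of the N work items;
         * the last process picks up any remainder. */
        chunk = N / size;
        start = rank * chunk;
        end   = (rank == size - 1) ? N : start + chunk;

        printf("process %d of %d handles items %d to %d\n",
               rank, size, start, end - 1);

        MPI_Finalize();                       /* shut down the MPI runtime  */
        return 0;
    }

A program of this form is typically compiled with mpicc and launched with a command such as mpirun -np 4 ./a.out, so that one process runs on each workstation; the MPICH setup used for our cluster is described in Section 2.3.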


CHAPTER 2
NETWORK SETUP

In this chapter, different aspects of setting up a high performance computational network will be discussed: the steps taken to install the software so that the computers can communicate with each other, how the hardware is configured, how the network is secured, and also how the internal computers can still access the World Wide Web.

2.1 Network and Computer Hardware

The cluster that was built in our lab is considered a Cluster of Workstations, or CoW. Other similar clusters are Network of Workstations (NoW) and Pile of PCs (PoPs). The cluster consists of Commodity Off The Shelf (COTS) components linked together by switched Ethernet. The cluster consists of four nodes, apollo, euclid, hydra3, and hydra4, with apollo being the master node. They are arranged as shown in Figure 2.1. Hydra3 and hydra4 each have one 2.0 GHz Pentium 4 processor with 512 KB L2 cache, Streaming SIMD Extensions 2 (SSE2), and operate on a 400 MHz system bus. Both hydra3 and hydra4 have 40 GB Seagate Barracuda hard drives operating at 7200 rpm with 2 MB cache. Apollo and euclid each have one 2.4 GHz Pentium 4 processor with 512 KB L2 cache, SSE2, and also operate on a 400 MHz system bus. Apollo and euclid each have a 30 GB Seagate Barracuda drive operating at 7200 rpm with a 2 MB cache. Each computer in the cluster has 1 GB of PC2100 DDRAM. The computers are connected by a Netgear FS605 5 port 10/100 switch. As you can probably tell from the above specs, our budget is a little on the low side.

Figure 2.1. Network hardware configuration

When deciding to build a high performance computational network, one aspect to consider is whether there are sufficient funds to invest in network equipment. A bottleneck for most clusters is the network. Without a high-throughput, low-latency network, a cluster is almost useless for certain applications. Even though the price of networking equipment is always falling, the network even for a small cluster can be expensive if Gigabit, Myrinet, or other high performance hardware is used.

2.2 Network Configuration

This section explains the steps taken to get the computers in the cluster communicating with each other. One of the most important parts of a cluster is the communication backbone on which data is transferred.

By properly configuring the network, the performance of a cluster is maximized and its construction is more justifiable. Each node in the cluster is running Red Hat's Fedora Core One [8] with a 2.4.22-1.2115.nptl kernel [9].

2.2.1 Configuration Files

Internet Protocol (IP) is a data-oriented method used for communication over a network by source and destination hosts. Each host on the end of an IP communication has an IP address that uniquely identifies it from all other computers. IP sends data between hosts split into packets, and the Transmission Control Protocol (TCP) puts the packets back together. Initially the computers in the lab were set up as shown in Figure 2.2.

Figure 2.2. Original network configuration

The computers could be set up as a cluster in this configuration, but this presents the problem of having the traffic between the four computers first go out to the campus servers and then back into our lab. Obviously, the data traveling from the lab computers to the campus servers adds a time and bandwidth penalty. All our parallel jobs would be greatly influenced by the traffic on the University's network. To solve this problem, an internal network was set up as illustrated in Figure 2.1. The internal network is set up so that traffic for euclid, hydra3, and hydra4 goes through apollo to reach the Internet. Apollo has two network cards, one connected to the outside world and another that connects to the internal network. Since euclid, hydra3, and hydra4 are all workstations used by students in the lab, they need to be allowed access to the outside world. This is accomplished through IP forwarding and masquerading rules within the Linux firewall, iptables. IP masquerading allows computers whose IP addresses are not known outside their own network to communicate with computers that have known IP addresses. IP forwarding allows incoming packets to be sent on to another host. Masquerading gives euclid, hydra4, and hydra3 access to the World Wide Web, with their packets passing through apollo and appearing to come from apollo, and forwarding allows packets destined for one of these computers to be routed through apollo to their correct destination. Excellent tutorials on configuring iptables, IP forwarding, and masquerading can be found in the Red Hat online documentation [10] and at The Linux Documentation Project [11].

To enable forwarding and masquerading there are several files that need to be edited. They are /etc/hosts, /etc/hosts.allow, and /etc/sysconfig/iptables. The hosts file maps IPs to hostnames before DNS is called, /etc/hosts.allow specifies which hosts are allowed to connect and also what services they can run, and iptables enables packet filtering, Network Address Translation (NAT), and other packet mangling [12]. The format for /etc/hosts is the IP address, followed by the host's name with domain information, and an alias for the host.

For our cluster, each computer has all the other computers in the cluster listed and also several computers on the University's network. For example, a partial hosts file for apollo is:

192.168.0.3    euclid.xxx.ufl.edu    euclid
192.168.0.5    hydra4.xxx.ufl.edu    hydra4
192.168.0.6    hydra3.xxx.ufl.edu    hydra3

The file /etc/hosts.allow specifies which hosts are allowed to connect and what services they are allowed to use, e.g. sshd and sendmail. This is the first checkpoint for all incoming network traffic. If a computer that is trying to connect is not listed in hosts.allow, it will be rejected. Each entry has the format of the name of the daemon to which access will be granted, followed by the host that is allowed access to that daemon, and then ALLOW. For example, a partial hosts.allow file for apollo is:

ALL: 192.168.0.3: ALLOW
ALL: 192.168.0.5: ALLOW
ALL: 192.168.0.6: ALLOW

This will allow euclid, hydra4, and hydra3 access to all services on apollo.

2.2.2 Internet Protocol Forwarding and Masquerading

When information is sent over a network, it travels from its origin to its destination in packets. The beginning of the packet, the header, specifies its destination, where it came from, and other administrative details [12]. Using this information and a set of specified rules, iptables can filter the traffic, dropping or accepting packets according to those rules, and redirect traffic to other computers. Rules are grouped into chains, and chains are grouped into tables. By iptables I am also referring to the underlying netfilter framework. The netfilter framework is a set of hooks within the kernel that inspects packets, while iptables configures the netfilter rules.
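As a small illustration of this rule/chain/table structure (a hypothetical standalone example, not a line from our actual firewall script), the first command below appends to the INPUT chain of the default filter table a rule that accepts SSH traffic from the internal network, and the second lists the rules currently in that chain:

root@apollo> iptables -t filter -A INPUT -p tcp -s 192.168.0.0/24 --dport 22 -j ACCEPT
root@apollo> iptables -t filter -L INPUT -n

Every rule follows this pattern: a table is selected with -t, a chain is chosen to append to, match criteria are given, and -j names the action to take on matching packets.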

Because all the workstations in the lab require access to the Internet, iptables will be used to forward packets to specified hosts and also to allow the computers to have a private IP address that is masqueraded to look like a public IP address. The goal of this section is not to explain in detail all the rules specified in our iptables file, but simply to explain how forwarding and masquerading are set up. For our network we have an iptables script, named iptables_script, that sets the rules for iptables. The script is located in /etc/sysconfig/. To run the script, simply type as root:

root@apollo> ./iptables_script

This will make active the rules defined in the script. To ensure that these rules are loaded each time the system is rebooted, create the file iptables in the directory /etc/sysconfig with the following command:

root@apollo> /sbin/iptables-save -c > iptables

To set up IP forwarding and masquerading, first open the file /etc/sysconfig/networking/devices/ifcfg-eth1. There are two network cards in apollo: eth0, which is connected to the external network, and eth1, which is connected to the internal network. Add the following lines to ifcfg-eth1:

IPADDR=192.168.0.4
NETWORK=192.168.0.0
NETMASK=255.255.255.0
BROADCAST=192.168.0.255

This will set the IP address of eth1 to 192.168.0.4. Next, open the file iptables_script and add the following lines to the beginning of the file:

# Disable forwarding
echo 0 > /proc/sys/net/ipv4/ip_forward
# load some modules (if needed)

# Flush
iptables -t nat -F POSTROUTING
iptables -t nat -F PREROUTING
iptables -t nat -F OUTPUT
iptables -F

# Set some parameters
LAN_IP_NET='192.168.0.1/24'
LAN_NIC='eth1'
FORWARD_IP='192.168.0.4'

# Set default policies
iptables -P INPUT DROP
iptables -P FORWARD DROP
iptables -P OUTPUT ACCEPT

# Enable masquerading and forwarding
iptables -t nat -A POSTROUTING -s $LAN_IP_NET -j MASQUERADE
iptables -A FORWARD -j ACCEPT -i $LAN_NIC -s $LAN_IP_NET
iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT

# Open SSH of apollo to LAN
iptables -A FORWARD -j ACCEPT -p tcp --dport 22
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 22 -j DNAT \
    --to 192.168.0.4:22

# Enable forwarding
echo 1 > /proc/sys/net/ipv4/ip_forward

Following the above lines are the original iptables_script rules. First, IP forwarding is disabled and the currently loaded iptables rules are flushed. Next, some aliases are set that simply make the rules easier to read and write. After that, the default policies are set: all incoming and forwarded traffic will be dropped and all outgoing packets will be allowed. With just these rules in place, no incoming traffic would be allowed; the rules that follow are appended to these policies so that certain connections are permitted. Setting INPUT and FORWARD to ACCEPT would allow unrestricted access to the network, NOT a good idea! The next three lines enable masquerading of the internal network via NAT (Network Address Translation), so that all traffic appears to come from a single IP address (apollo's), and forwarding of IP packets to the internal network. Next, SSH is allowed between the computers on the internal network and apollo. Finally, IP forwarding is enabled in the kernel.
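After the script has been run, a quick sanity check (a convenience sketch; the exact output will vary) is to confirm that forwarding is enabled in the kernel and that the NAT rules are in place:

root@apollo> cat /proc/sys/net/ipv4/ip_forward      # should print 1 after the script runs
root@apollo> iptables -t nat -L POSTROUTING -n -v   # should list the MASQUERADE rule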

To allow the computers on the internal network access through apollo, the following lines need to be added to the file /etc/sysconfig/networking/devices/ifcfg-eth0 on euclid, hydra3, and hydra4. The IP address below is for hydra3. The IP addresses for the internal computers are taken from the private address range for internal networks set up by RFC 1918 [13].

BROADCAST=192.168.0.255
IPADDR=192.168.0.6
NETWORK=192.168.0.0
GATEWAY=192.168.0.4

It is important to set GATEWAY to the IP address of apollo's internal network card so that the cluster computers' traffic is routed through apollo.

2.3 MPI–Message Passing Interface

A common framework for many parallel machines is that they utilize message passing so that processes can communicate. The standardization of a message passing system began in 1992 at the Workshop on Standards for Message Passing in a Distributed Memory Environment, sponsored by the Center for Research on Parallel Computing [14]. When the Message Passing Interface, or MPI [1], was conceived, it incorporated the attractive features of several other message passing systems, and its development involved about 60 people and 40 organizations from universities, government laboratories, and industry [14].

2.3.1 Goals

By creating a message passing standard, portability between computer architectures and ease-of-use are achieved. With a common base of routines, vendors can efficiently implement those routines, and it is also easier to provide support for hardware. To achieve the aforementioned benefits, goals were set by the Forum. These goals are [14]:

- Design an API (Application Programming Interface) that defines how software components communicate with one another.
- Allow efficient communication by avoiding memory-to-memory copying and allowing overlap of computation and communication.
- Allow the software using MPI to be used in a heterogeneous environment.
- Allow convenient C and Fortran 77 bindings for the interface.
- Make the communication interface reliable.
- Define an interface that is not too different from other libraries and provide extensions for greater flexibility.
- Define an interface that can be run on many different hardware platforms, such as distributed memory multiprocessors and networks of workstations.
- Semantics of the interface should be language independent.
- The interface should be designed to allow for thread safety.

The above goals offer great benefit to programmers of applications of all sizes. By keeping the logical structure of MPI language independent, programmers new to MPI will more readily grasp the concepts, while programmers of large applications will benefit from the similarity to other libraries and also from the C and F77 bindings. There are some aspects that are not included in the standard. These include [14]:

- Explicit shared-memory operations.
- Program construction tools.
- Debugging facilities.
- Support for task management.

2.3.2 MPICH

There are many implementations of MPI: MPI/Pro, Chimp MPI, implementations by hardware vendors (IBM, HP, SUN, SGI, Digital, and others), with MPICH and LAM/MPI being the two main ones. MPICH began in 1992 as an implementation that would track the MPI standard as it evolved and point out any problems that developers might incur; it was developed at Argonne National Laboratory and Mississippi State University [15].

2.3.3 Installation

MPICH can be downloaded from the MPICH website at http://www-unix.mcs.anl.gov/mpi/mpich/. The version that is run in our lab is 1.2.5.2. The installation of MPICH is straightforward. Download the file mpich.tar.gz and uncompress it. The directory in which MPICH is installed on our system is /home/apollo/hda8.

redboots@apollo> gunzip mpich.tar.gz
redboots@apollo> tar -xvf mpich.tar

This creates the directory mpich-1.2.5.2. The majority of the code for MPICH is device independent and is implemented on top of an Abstract Device Interface, or ADI. This allows MPICH to be more easily ported to new hardware architectures by hiding most hardware-specific details [16]. The ADI used for networks of workstations is the ch_p4 device, where ch stands for "Chameleon", a symbol of adaptability and portability, and p4 stands for "portable programs for parallel processors" [15].

2.3.4 Enable SSH

The default process startup mechanism for the ch_p4 device on networks is remote shell, or rsh. Rsh allows the execution of commands on remote hosts [17]. Rsh works only if you are allowed to log into a remote machine without a password. Rsh relies on the connection coming from a known IP address on a privileged port. This creates a huge security risk because of the ease with which attackers can spoof the connection.

A more secure alternative to rsh is the Secure Shell, or SSH, protocol, which encrypts the connection and uses digital signatures to positively identify the host at the other end of the connection [17]. If we were to create a computational network that was not connected to the Internet, rsh would be fine. Since all our computers in the lab are connected to the Internet, using insecure communication could possibly result in the compromise of our system.

To set up SSH to work properly with MPICH, several steps need to be done. First, make sure SSH is installed on the computers on the network. Most standard installations of Linux come with SSH installed. If it is not, SSH can be downloaded from http://www.openssh.com. Next, an authentication key needs to be created. Go to the .ssh folder located in your home directory and type ssh-keygen -f identity -t rsa. When asked for a passphrase, just press Enter twice.

redboots@apollo> ssh-keygen -f identity -t rsa
Generating public/private rsa key pair.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in identity.
Your public key has been saved in identity.pub.
The key fingerprint is:
43:68:68:30:79:73:e2:03:d9:50:2b:f1:c1:5d:e7:60 redboots@apollo.xxx.ufl.edu

This creates two files, identity and identity.pub. Now place the identity.pub key in the file $HOME/.ssh/authorized_keys, where $HOME is the user's home directory. If the user's home directory is not a shared file system, authorized_keys should be copied into $HOME/.ssh/authorized_keys on each computer.
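One way to do this (a sketch that assumes the same user name and an existing .ssh directory on each node) is to copy the public key over SSH and append it to the remote authorized_keys file, repeating for each compute node:

redboots@apollo> scp $HOME/.ssh/identity.pub hydra3:identity.pub
redboots@apollo> ssh hydra3 "cat identity.pub >> .ssh/authorized_keys"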

Also, if the file authorized_keys does not exist, create it.

redboots@apollo> touch authorized_keys

Finally, while in $HOME/.ssh, type:

redboots@apollo> ssh-agent $SHELL
redboots@apollo> ssh-add

The above commands allow the user to avoid typing in the passphrase each time SSH is invoked [18]. Now enter the main MPICH directory and type:

redboots@apollo> ./configure -rsh=ssh

This will configure MPICH to use SSH instead of rsh. The above steps of installing MPICH need to be performed on all the computers that are to be in the cluster.

2.3.5 Edit Machines.LINUX

In order for the master node to know which computers are available for the cluster, the file machines.LINUX needs to be edited. After MPICH is installed on all the computers, open the file /home/apollo/hda8/mpich-1.2.5.2/util/machines/machines.LINUX on the master node, apollo in our case, and edit it so that each node of the cluster is listed. In order to run an MPI program, the number of processors to use needs to be specified:

redboots@apollo> mpirun -np 4 program

In the above example, 4 is the number of processors that are used to run program. When execution begins, mpirun reads the file machines.LINUX to see what machines are available in the cluster. If the number of processors specified by the -np flag is more than what is listed in machines.LINUX, the difference is made up by some processors doing more work. To achieve the best performance, it is recommended that the number of processors listed in machines.LINUX be equal to or greater than the value given to -np.

The format for machines.LINUX is very straightforward: hostname:number_of_CPUs. On each line, a hostname is listed and, if a machine has more than one processor, a colon followed by the number of processors. For example, if there were two machines in the cluster, with machine1 having one processor and machine2 having four processors, machines.LINUX would be as follows:

machine1
machine2:4

The machines.LINUX file that apollo uses is:

apollo.xxx.ufl.edu
euclid.xxx.ufl.edu
hydra4.xxx.ufl.edu
hydra3.xxx.ufl.edu

Because all our compute nodes have single processors, a colon followed by a number is not necessary.

2.3.6 Test Examples

MPICH provides several examples to test whether the network and software are set up correctly. One example computes Pi and is located in /home/apollo/hda8/mpich-1.2.5.2/examples/basic. The file cpi.c contains the source code. To calculate Pi, cpi evaluates the Gregory-Leibniz series over a user-specified number of intervals, n. MPI programs can be really small, using just six functions, or they can be very large, using over one hundred functions. The four necessary functions that cpi uses are MPI_Init, MPI_Finalize, MPI_Comm_size, and MPI_Comm_rank, with MPI_Bcast and MPI_Reduce used to send the input data and reduce the returned results to a single number, respectively. The code for cpi.c is shown in Appendix A.
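The listing below is a minimal sketch of how these six calls fit together; it is not the actual cpi.c (which appears in Appendix A), and the "work" assigned to each process here is just a placeholder:

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size, n = 1000;   /* n would normally be read in on the root process */
    double partial, total;

    MPI_Init(&argc, &argv);                    /* start MPI */
    MPI_Comm_size(MPI_COMM_WORLD, &size);      /* total number of processes */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);      /* this process's id */

    /* the root process sends n to every process */
    MPI_Bcast(&n, 1, MPI_INT, 0, MPI_COMM_WORLD);

    /* each process computes its share of the work (placeholder computation) */
    partial = (double) n / size;

    /* sum the partial results onto the root process */
    MPI_Reduce(&partial, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("total = %f\n", total);

    MPI_Finalize();                            /* shut down MPI */
    return 0;
}

Such a program is compiled with mpicc and launched with mpirun in the same way as cpi below.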

After the desired number of intervals n is defined in cpi.c, simply compile cpi by typing at a command prompt:

redboots@apollo> make cpi
/home/apollo/hda8/mpich-1.2.5.2/bin/mpicc -c cpi.c
/home/apollo/hda8/mpich-1.2.5.2/bin/mpicc -o cpi cpi.o \
    -lm

This will create the executable cpi. To run cpi, while at a command prompt in /home/apollo/hda8/mpich-1.2.5.2/examples/basic, enter the following:

redboots@apollo> mpirun -np 4 cpi
Process 0 of 4 on apollo.xxx.ufl.edu
pi is approximately 3.1415926535899033, Error is 0.00000000000011
wall clock time = 0.021473
Process 3 of 4 on hydra3.xxx.ufl.edu
Process 2 of 4 on hydra4.xxx.ufl.edu
Process 1 of 4 on euclid.xxx.ufl.edu

Several tests were run while varying the number of intervals and processors. The results are summarized in Figure 2.3.

Figure 2.3. cpi results

2.3.7 Conclusions

Depending on the complexity of your application, MPI can be relatively simple to integrate. If the problem is easy to split and the work divides evenly among the processors, like the example that computes Pi, as few as six functions may be used. For all problems, the user needs to decide how to partition the work, how to send it to the other processors, whether the processors have to communicate with one another, and what to do with the solution that each node computes; all of this can be done with a few functions if the problem is not too complicated.

When deciding whether to parallelize a program, several things should be considered and performed.

First, really understand how the serial code works. Study it and all its intricacies so that you have a good picture of how the data is moved around and operated upon. Without doing this step, you will have no idea where to even begin the parallelization process. Also, clean up the code, remove any unnecessary functions, and simplify it as much as possible. Next, determine whether it is even possible to parallelize the program. If there are no sufficiently sized groups of data that can be solved independently on a single processor, parallelizing the code may be impossible or not worth the effort. Then determine whether parallelizing the code is going to give a speedup that justifies the effort. For example, with cpi, solving problems of fewer than one hundred million intervals simply is not worth the effort of parallelizing the code. Even though it was relatively easy to parallelize cpi, imagine trying to parallelize a code with several hundred thousand lines, spending many hours on the effort with the end result of an insignificant speedup. As the problem size increases, with more effort being exerted by the processors than by the network, parallelization becomes more practical.

With small problems, fewer than one hundred million intervals for the cpi example, as illustrated by the results in Figure 2.3, the latency penalty of the network simply does not justify parallelization.

CHAPTER 3
BENCHMARKING

In this chapter, cluster benchmarking will be discussed. There are several reasons why benchmarking a cluster is important. One is determining the sensitivity of the cluster to network parameters such as bandwidth and latency. Another reason to benchmark is to determine how scalable the cluster is. Will performance scale with the addition of more compute nodes enough that the price/performance ratio is acceptable? Testing scalability helps determine the best hardware and software configuration, so that the practicality of using some or all of the compute nodes can be judged.

3.1 Performance Metrics

An early measure of performance used to benchmark machines was MIPS, or Million Instructions Per Second. This benchmark refers to the number of low-level machine code instructions that a processor can execute in one second. It does not, however, take into account that chips differ in the way they handle instructions. For example, a 2.0 GHz 32-bit processor will have a 2000 MIPS rating and a 2.0 GHz 64-bit processor will also have a 2000 MIPS rating. This is an obviously flawed rating because software written specifically for the 64-bit processor will solve a comparable problem much faster than a 32-bit processor running software written specifically for it.

The widely accepted metric of processing power used today is FLOPS, or Floating Point Operations Per Second. This benchmark unit measures the number of calculations that a computer can perform per second on floating point numbers, that is, numbers with a certain precision. A problem with this measurement is that it does not take into account the conditions in which the benchmark is being conducted.

For example, if a machine is being benchmarked while also being subjected to an intense computation, the reported FLOPS will be lower. Despite its shortcomings, FLOPS is widely used to measure cluster performance. The actual answer to all benchmark questions is found when the applications for the cluster are installed and tested. When the actual applications, and not just a benchmark suite, are tested, a much more accurate assessment of the cluster's performance is obtained.

3.2 Network Analysis

In this section, NetPIPE (Network Protocol Independent Performance Evaluator) will be discussed [19]. The first step in benchmarking a cluster is to determine whether the network you are using is operating efficiently and to get an estimate of its performance. From the NetPIPE website, "NetPIPE is a protocol independent tool that visually represents network performance under a variety of conditions" [19]. NetPIPE was originally developed at the Scalable Computing Laboratory by Quinn Snell, Armin Mikler, John Gustafson, and Guy Helmer. It is currently being developed and maintained by Dave Turner.

A major bottleneck in high performance parallel computing is the network over which it communicates [20]. By identifying which factors affect interprocessor communication the most and reducing their effect, application performance can be greatly improved. The two major factors that affect overall application performance on a cluster are network latency, the delay between when a piece of data is sent and when it is received, and the maximum sustainable bandwidth, the amount of data that can be sent over the network continuously [20]. Some other factors that affect application performance are CPU speed, the CPU bus, the CPU cache size, and the I/O performance of the nodes' hard drives.

Fine tuning the performance of a network can be a very time-consuming and expensive process, and it requires a lot of knowledge of network hardware to fully utilize the hardware's potential. For this section I will not go into too many details about network hardware. The minimum network hardware that should be used for a Beowulf today is based on one-hundred Megabit per second (100 Mbps) Ethernet technology. With the increasing popularity and decreasing cost of one-thousand Megabit per second (1000 Mbps) hardware, a much better choice would be Gigabit Ethernet. If very low latency and high, sustainable bandwidth are required for your application, and cost isn't too important, Myrinet [21] or other proprietary network hardware is often used. From the comparison chart, Table 3.1, both Fast Ethernet and Gigabit Ethernet technologies have a much higher latency than Myrinet hardware. The drawback of Myrinet technology, even for a small four-node cluster, is its price. The data for the table were estimated using the computer hardware price search provided by http://www.pricewatch.com.

Table 3.1. Network hardware comparison

Network            Latency (microsecs)   Max. Bandwidth (Mbps)   Cost ($/4 nodes)
Fast Ethernet      70                    100                     110
Gigabit Ethernet   80                    1000                    200
Myrinet            7                     2000                    6000

3.2.1 NetPIPE

NetPIPE essentially gives an estimate of the performance of the network in a cluster. NetPIPE analyzes network performance by performing a simple ping-pong test, bouncing messages of increasing size between two processors. A ping-pong test, as the name implies, simply sends data to another processor, which in turn sends it back. Using the total time for the packet to travel between the processors and knowing the message size, the bandwidth can be calculated. Bandwidth is the amount of data that can be transferred through a network in a given amount of time; typical units of bandwidth are Megabits per second (Mbps) and Gigabits per second (Gbps).
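The heart of such a ping-pong test can be sketched with a handful of MPI calls. The following is a simplified illustration, not NetPIPE's actual source; the fixed message size and repetition count are arbitrary choices here, whereas NetPIPE sweeps over many message sizes:

#include <stdio.h>
#include <string.h>
#include <mpi.h>

#define MSG_SIZE 1024          /* bytes per message; NetPIPE varies this */
#define REPS     1000          /* number of round trips to average over */

int main(int argc, char *argv[])
{
    char buf[MSG_SIZE];
    int rank, i;
    double t0, t1;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    memset(buf, 0, MSG_SIZE);

    t0 = MPI_Wtime();
    for (i = 0; i < REPS; i++) {
        if (rank == 0) {        /* send the message, then wait for the echo */
            MPI_Send(buf, MSG_SIZE, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, MSG_SIZE, MPI_CHAR, 1, 0, MPI_COMM_WORLD, &status);
        } else if (rank == 1) { /* echo the message back */
            MPI_Recv(buf, MSG_SIZE, MPI_CHAR, 0, 0, MPI_COMM_WORLD, &status);
            MPI_Send(buf, MSG_SIZE, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    t1 = MPI_Wtime();

    /* throughput in Mbps: one-way bytes * 8 bits, divided by one-way time */
    if (rank == 0)
        printf("%.2f Mbps\n",
               (MSG_SIZE * 8.0 * REPS) / ((t1 - t0) / 2.0) / 1.0e6);

    MPI_Finalize();
    return 0;
}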

To provide a complete and accurate test, NetPIPE uses message sizes at regular intervals, and at each data point many ping-pong tests are carried out. This test gives an overview of network performance with unloaded CPUs. Applications may not reach the reported maximum bandwidth, because NetPIPE only measures the network performance of unloaded CPUs; measuring the network performance with loaded CPUs is not yet possible.

3.2.2 Test Setup

NetPIPE can be obtained from its website at http://www.scl.ameslab.gov/netpipe/. Download the latest version and unpack it. The install directory for NetPIPE on our system is /home/apollo/hda8.

redboots@apollo> tar -xvzf NetPIPE_3.6.2.tar.gz

To install, enter the directory NetPIPE_3.6.2 that was created after unpacking the above file. Edit the makefile with your favorite text editor so that it points to the correct compiler libraries, include files, and directories. The makefile did not need any changes for our setup. To make the MPI interface, make sure the compiler is set to mpicc. Next, in the directory NetPIPE_3.6.2, type make mpi:

redboots@apollo> make mpi
mpicc -O -DMPI ./src/netpipe.c ./src/mpi.c -o NPmpi -I./src

This will create the executable NPmpi. To run NPmpi, simply type at a command prompt:

mpirun -np 2 NPmpi -o np.out.mpi

This will run NetPIPE on the first two machines listed in /home/apollo/hda8/mpich-1.2.5.2/util/machines/machines.LINUX. NetPIPE will by default print the results to the command prompt and also to the file np.out.mpi specified after the -o option flag. Below is an example output between apollo and hydra4 printed to the command prompt. The format of the data printed to the command prompt is as follows: the first column is the run number, the second column is the message size, the third column is the number of times it was sent between the two nodes, the fourth column is the throughput, and the fifth column is the round-trip time divided by two. In Appendix B.1 the file np.out.mpi for apollo and hydra4 is shown; its first column lists the test run, the second column the message size in bytes, the third column how many messages were sent, the fourth column the throughput, and the last column the round-trip time of the messages divided by two. Below is a partial output from a test run.

redboots@apollo> mpirun -np 2 NPmpi -o np.out.mpi
0: apollo
1: hydra4
Now starting the main loop
  0:       1 bytes   1628 times -->   0.13 Mbps in     60.98 usec
  1:       2 bytes   1639 times -->   0.25 Mbps in     60.87 usec
  2:       3 bytes   1642 times -->   0.37 Mbps in     61.07 usec
  3:       4 bytes   1091 times -->   0.50 Mbps in     61.46 usec
  4:       6 bytes   1220 times -->   0.74 Mbps in     61.48 usec
  5:       8 bytes    813 times -->   0.99 Mbps in     61.86 usec
  6:      12 bytes   1010 times -->   1.46 Mbps in     62.53 usec
  7:      13 bytes    666 times -->   1.58 Mbps in     62.66 usec
  8:      16 bytes    736 times -->   1.93 Mbps in     63.15 usec
...
116: 4194304 bytes      3 times -->  87.16 Mbps in 367126.34 usec
117: 4194307 bytes      3 times -->  87.30 Mbps in 366560.66 usec
118: 6291453 bytes      3 times -->  87.24 Mbps in 550221.68 usec
119: 6291456 bytes      3 times -->  87.21 Mbps in 550399.18 usec
120: 6291459 bytes      3 times -->  87.35 Mbps in 549535.67 usec
121: 8388605 bytes      3 times -->  87.32 Mbps in 732942.65 usec
122: 8388608 bytes      3 times -->  87.29 Mbps in 733149.68 usec
123: 8388611 bytes      3 times -->  87.37 Mbps in 732529.83 usec

3.2.3 Results

NetPIPE was run on apollo and hydra4, while both CPUs were idle, with the following command:

mpirun -np 2 NPmpi -o np.out.mpi

The results are found in Appendix B.1. The first set of data that is plotted compares throughput against transfer block size. This is shown in Figure 3.1.

Figure 3.1. Message size vs. throughput

This graph allows easy visualization of the maximum throughput of a network. For the network used in our cluster, a maximum throughput of around 87 Mbps was recorded. This is an acceptable rate for a 100 Mbps network. If the throughput suddenly dropped or wasn't at an acceptable rate, there would obviously be a problem with the network. It should be noted that a 100 Mbps network will never reach the full 100 Mbps. This can be attributed to the overhead introduced by the different network layers: e.g., the Ethernet card driver, the TCP layer, and the MPI routines [22].
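For reference, the two columns needed for a plot like Figure 3.1 can be pulled out of np.out.mpi with a one-line shell command. This is a convenience sketch that assumes the column layout described above, with message size in the second column and throughput in the fourth; the output file name is arbitrary:

redboots@apollo> awk '{print $2, $4}' np.out.mpi > size_vs_throughput.dat

The resulting two-column file can then be loaded into any plotting package.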

NetPIPE also allows testing of TCP bandwidth without MPI-induced overhead. To run this test, first create the NPtcp executable. Installing NPtcp on our cluster required no changes to the makefile in the NetPIPE_3.6.2 directory. To create the NPtcp executable, simply type at the command prompt while in the NetPIPE_3.6.2 directory:

redboots@apollo> make tcp
cc -O ./src/netpipe.c ./src/tcp.c -DTCP -o NPtcp -I./src

This will create the executable NPtcp. The TCP benchmark requires both a sender and a receiver node. For example, in our TCP benchmarking test, hydra4 was designated the receiver node and apollo the sender. This test requires you to open a terminal and install NPtcp on both machines, unlike NPmpi, which doesn't require you to open a terminal on the other tested machine, in this case hydra4.

First, log into hydra4 and install NPtcp following the above example. On hydra4 the executable NPtcp is located in /home/hydra4/hda13/NetPIPE_3.6.2. While in this directory, at a command prompt type ./NPtcp:

redboots@hydra4> ./NPtcp
Send and receive buffers are 16384 and 87380 bytes
(A bug in Linux doubles the requested buffer sizes)

The above command allows hydra4 to act as the receiver. For each separate run, the above command needs to be retyped. Next, log into apollo and enter the directory in which NPtcp is installed. On apollo this is /home/apollo/hda8/NetPIPE_3.6.2. While in this directory, at a command prompt start NPtcp, specifying hydra4 as the receiver:

redboots@apollo> ./NPtcp -h hydra4 -o np.out.tcp
Send and receive buffers are 16384 and 87380 bytes
(A bug in Linux doubles the requested buffer sizes)

Now starting the main loop
  0:       1 bytes   2454 times -->   0.19 Mbps in     40.45 usec
  1:       2 bytes   2472 times -->   0.38 Mbps in     40.02 usec
  2:       3 bytes   2499 times -->   0.57 Mbps in     40.50 usec
  3:       4 bytes   1645 times -->   0.75 Mbps in     40.51 usec
  4:       6 bytes   1851 times -->   1.12 Mbps in     41.03 usec
  5:       8 bytes   1218 times -->   1.47 Mbps in     41.64 usec
  6:      12 bytes   1500 times -->   2.18 Mbps in     42.05 usec
  7:      13 bytes    990 times -->   2.33 Mbps in     42.54 usec
...
116: 4194304 bytes      3 times -->  89.74 Mbps in 356588.32 usec
117: 4194307 bytes      3 times -->  89.74 Mbps in 356588.50 usec
118: 6291453 bytes      3 times -->  89.75 Mbps in 534800.34 usec
119: 6291456 bytes      3 times -->  89.75 Mbps in 534797.50 usec
120: 6291459 bytes      3 times -->  89.75 Mbps in 534798.65 usec
121: 8388605 bytes      3 times -->  89.76 Mbps in 712997.33 usec
122: 8388608 bytes      3 times -->  89.76 Mbps in 713000.15 usec
123: 8388611 bytes      3 times -->  89.76 Mbps in 713000.17 usec

Running NPtcp creates the file np.out.tcp. The -h hydra4 option specifies the hostname of the receiver, in this case hydra4. You can use either the IP address or the hostname if you have the receiver's hostname and corresponding IP address listed in /etc/hosts. The -o np.out.tcp option specifies that the output file be named np.out.tcp. The format of this file is the same as that of the np.out.mpi file created by NPmpi. The file np.out.tcp is found in Appendix B.2.

To compare the overhead cost of MPI on maximum throughput, the throughput of both the TCP and MPI runs was plotted against message size. In Figure 3.2 the comparison of maximum throughput can be seen. From Figure 3.2, the TCP test, free of MPI overhead, consistently recorded a higher throughput throughout the message size range, as expected.

Another interesting plot is message size versus the time the packets travel. This is seen in Figure 3.3. This plot shows the saturation point both for sending data through plain TCP and for MPI routines on top of TCP. The saturation point is the position on the graph after which an increase in block size results in an almost linear increase in transfer time [23]. This point is most easily located at the "knee" of the curve.

Figure 3.2. MPI vs. TCP throughput comparison

Figure 3.3. MPI vs. TCP saturation comparison

For both the TCP and MPI runs, the saturation point occurred around 130 bytes. After that, both rose linearly together, with no distinction between the two after a message size of one kilobyte. It can be concluded that the overhead induced by MPI routines does not greatly affect latency performance for message sizes above one hundred thirty bytes. The greatest decrease in throughput also occurs for small message sizes. From Figure 3.4, there is a fairly consistent percentage decrease in throughput for message sizes down to one kilobyte; below that, there is as much as a 35 percent decrease in throughput when MPI routines are added on top of TCP.

Figure 3.4. Decrease in effective throughput with MPI

The next plot, Figure 3.5, is considered the network signature graph. It plots the transfer speed versus the elapsed time for the data to travel and is also considered an "acceleration" graph [23]. To construct this plot, elapsed time was plotted on the horizontal axis using a logarithmic scale and throughput on the vertical axis. In Figure 3.5, the latency corresponds to the first point on the graph. For our network this occurs at around 61 microseconds. Since we are using Fast Ethernet, this is an acceptable latency [22].

Figure 3.5. Throughput vs. time

Figure 3.5 also allows easy reading of the maximum throughput, around 87 Mbps for our network.

3.3 High Performance Linpack–Single Node

High Performance Linpack (HPL) is a portable implementation of the Linpack benchmark for distributed memory computers. It is widely used to benchmark clusters and supercomputers and is used to rank the top five hundred computers in the world at http://www.top500.org. HPL was developed at the Innovative Computing Laboratory at the University of Tennessee Computer Science Department. The goal of this benchmark is to provide a "testing and timing program to quantify the accuracy of the obtained solution as well as the time it took to compute it" [24].

3.3.1 Installation

To install HPL, first go to the project webpage: http://www.netlib.org/benchmark/hpl/index.html. Near the bottom of the page there is a hyperlink for hpl.tgz.

Download the package to the directory of your choice. Unpack hpl.tgz with the command:

tar -xvzf hpl.tgz

This will create the folder hpl. Next, enter the directory hpl, copy the file Make.Linux_PII_CBLAS from the setup directory to the main hpl directory, and rename it Make.Linux_P4:

redboots@apollo> cp setup/Make.Linux_PII_CBLAS \
    Make.Linux_P4

There are several other Makefiles located in the setup folder for different architectures. We are using Pentium 4's, so the Make.Linux_PII_CBLAS Makefile was chosen and edited so that it points to the correct libraries on our system. The Makefile that was used for the compilation is shown in Appendix B.3.1. First, open the file in your favorite text editor and edit it so that it points to your MPI directory and MPI libraries. Also, edit the file so that it points to the correct BLAS (Basic Linear Algebra Subprograms) library, as described below. BLAS are routines for performing basic vector and matrix operations; the website for BLAS is http://www.netlib.org/blas/. Note that the libraries used for the benchmarks were either those provided by ATLAS or those by Kazushige Goto, which will be discussed shortly. After the Makefile is configured for your particular setup, HPL can be compiled. To do this, simply type at the command prompt:

make arch=Linux_P4

The HPL binary xhpl will be located in $hpl/bin/Linux_P4. Also created is the file HPL.dat, which provides a way of editing parameters that affect the benchmarking results.
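For reference, the Makefile variables that the above edits touch look roughly like the following. This is only a sketch with our paths filled in; the exact variable names and defaults should be checked against the comments in Make.Linux_P4 itself:

ARCH    = Linux_P4
TOPdir  = /home/apollo/hda8/hpl
MPdir   = /home/apollo/hda8/mpich-1.2.5.2            # MPI installation
MPinc   = -I$(MPdir)/include
MPlib   = $(MPdir)/lib/libmpich.a
LAdir   = /home/apollo/hda8/Linux_P4SSE2/lib         # BLAS location (ATLAS here)
LAlib   = $(LAdir)/libcblas.a $(LAdir)/libatlas.a
CC      = /usr/bin/gcc
LINKER  = /usr/bin/g77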

3.3.2 ATLAS Routines

For HPL, the most critical part of the software is the matrix-matrix multiplication routine, DGEMM, which is part of the BLAS. A widely used optimized set of BLAS routines is ATLAS, or Automatically Tuned Linear Algebra Software [25]. The website for ATLAS is http://math-atlas.sourceforge.net. The ATLAS routines strive to provide optimized software for different processor architectures. To install precompiled ATLAS routines for a particular processor, first go to http://www.netlib.org/atlas/archives. On this page are links for AIX, SunOS, Windows, OS-X, IRIX, HP-UX, and Linux. Our cluster is using the Linux operating system, so the Linux link was followed. The next page lists precompiled routines for several processors, including the Pentium 4 with Streaming SIMD Extensions 2 (SSE2), the AMD Hammer, PowerPC, Athlon, Itanium, and Pentium III. The processors that we are using in the cluster are Pentium 4's, so the file atlas3.6.0_Linux_P4SSE2.tgz was downloaded. The file was downloaded to /home/apollo/hda8 and unpacked.

redboots@apollo> tar -xvzf atlas3.6.0_Linux_P4SSE2.tgz

This creates the folder Linux_P4SSE2. Within this directory are the Makefile that was used to compile the libraries, the folder lib containing the precompiled libraries, and, in the include directory, the C header files for the C interface to BLAS and LAPACK. To link to the precompiled ATLAS routines in HPL, simply point to the routines:

LAdir = /home/apollo/hda8/Linux_P4SSE2/lib
LAlib = $(LAdir)/libcblas.a $(LAdir)/libatlas.a

Also, for the ATLAS libraries, uncomment the line that reads:

HPL_OPTS = -DHPL_CALL_CBLAS

Finally, compile the executable xhpl as shown above. Enter the main HPL directory and type make arch=Linux_P4:

redboots@apollo> make arch=Linux_P4

This creates the executable xhpl and the configuration file HPL.dat.

3.3.3 Goto BLAS Libraries

Initially the ATLAS routines were used in HPL to benchmark the cluster. The results of the benchmark using the ATLAS routines were then compared to the results using another optimized set of BLAS routines, developed by Kazushige Goto [26]. The libraries developed by Kazushige Goto are located at http://www.cs.utexas.edu/users/kgoto/signup_first.html. The libraries located at this website are optimized BLAS routines for a number of processors, including the Pentium III, Pentium IV, AMD Opteron, Itanium2, Alpha, and PPC. A more in-depth explanation of why this library performs better than ATLAS is located at http://www.cs.utexas.edu/users/flame/goto/. To use these libraries on our cluster, the routines optimized for Pentium 4's with 512 KB L2 cache were used, libgoto_p4_512-r0.96.so.gz. Also, the file xerbla.f needs to be downloaded, which is located at http://www.cs.utexas.edu/users/kgoto/libraries/xerbla.f. This file is simply an error handler for the routines. To use these routines, first download the appropriate file for your architecture and download xerbla.f. For our cluster, libgoto_p4_512-r0.96.so.gz was downloaded to /home/apollo/hda8/goto_blas. Unpack the file libgoto_p4_512-r0.96.so.gz:

redboots@apollo> gunzip libgoto_p4_512-r0.96.so.gz

This creates the file libgoto_p4_512-r0.96.so. Next, download the file xerbla.f from the website listed above to /home/apollo/hda8/goto_blas and create the binary object file for xerbla.f:

redboots@apollo> g77 -c xerbla.f

This will create the file xerbla.o. These two files, libgoto_p4_512-r0.96.so and xerbla.o, need to be pointed to in the HPL Makefile:

LAdir = /home/apollo/hda8/goto_blas
LAlib = $(LAdir)/libgoto_p4_512-r0.96.so $(LAdir)/xerbla.o

Also, the following line needs to be commented out. Placing a pound symbol in front of a line tells the compiler to ignore that line and treat it as text.

#HPL_OPTS = -DHPL_CALL_CBLAS

3.3.4 Using Either Library

Two sets of tests were carried out with HPL: one using the ATLAS routines and the other using Kazushige Goto's routines. These tests were carried out for several reasons. One is to illustrate the importance of well written and well compiled software to a cluster's performance. Without well written software that is optimized for a particular hardware architecture or network topography, the performance of a cluster suffers greatly. Another reason two tests with different BLAS routines were conducted is to get a more accurate assessment of our cluster's performance. By using a benchmark, we have an estimate of how our applications should perform. If the parallel applications that we use perform at a much lower level than the benchmark, we can conclude that our software isn't tuned properly for our particular hardware or that the software contains inefficient coding. Below, the process of switching between ATLAS and Kazushige Goto's BLAS routines is discussed.

The first tests that were conducted used the ATLAS routines. Compile the HPL executable, xhpl, as described above for the ATLAS routines using the Makefile in Appendix B.3.1. After the tests are completed using the ATLAS routines, simply change the links to Goto's BLAS routines and comment out the line that selects the CBLAS interface (so that the BLAS Fortran 77 interface is used instead). For example, the section of our Makefile that determines which library to use is shown below; in it, Goto's BLAS routines are selected.

# Below the user has a choice of using either the ATLAS or Goto
# BLAS routines. To use the ATLAS routines, uncomment the
# following 2 lines and comment the 3rd and 4th. To use Goto's BLAS
# routines, comment the first 2 lines and uncomment the 3rd and
# 4th.
# BEGIN BLAS specification
LAdir = /home/apollo/hda8/hpl/libraries
LAlib = $(LAdir)/libgoto_p4_512-r0.96.so $(LAdir)/xerbla.o
#LAdir = /home/apollo/hda8/Linux_P4SSE2/lib
#LAlib = $(LAdir)/libcblas.a $(LAdir)/libatlas.a
# END BLAS specification

If Goto's routines are to be used, just uncomment the two lines that specify those routines and comment the two lines for the ATLAS routines. The line that selects the CBLAS interface is also commented out when using Goto's BLAS routines:

#HPL_OPTS = -DHPL_CALL_CBLAS

If the ATLAS routines are to be used, the above line would be uncommented. After that, xhpl is recompiled using the method described above.

redboots@apollo> make arch=Linux_P4

3.3.5 Benchmarking

To determine a FLOPS value for the machine(s) to be benchmarked, HPL solves a random dense linear system of equations in double precision. HPL solves the random dense system by first computing the LU factorization with row-partial pivoting and then solving the upper triangular system. HPL is very scalable; it is the benchmark used on the supercomputers with thousands of processors found on the Top 500 List of Supercomputing Sites, and it can be used on a wide variety of computer architectures.

3.3.6 Main Algorithm

HPL solves a linear system of equations, Eq. 3.1, for x using LU factorization. It first decomposes the matrix A into the product of a lower and an upper triangular matrix.

Ax = b    (3.1)

Next, y is found by forward substitution in Eq. 3.2.

Ly = b    (3.2)

Finally, x is found by back substitution in Eq. 3.3.

Ux = y    (3.3)

To distribute the data and provide an acceptable level of load balancing, a two-dimensional P-by-Q process grid is utilized. The n-by-n+1 coefficient matrix is partitioned into nb-by-nb blocks that are cyclically distributed onto the P-by-Q process grid. In each iteration, a panel of nb columns is factorized, and the trailing submatrix is updated [24]. After the factorization is complete and a value for x is obtained, HPL regenerates the input matrix and vector and substitutes the computed value of x to obtain a residual. If the residual is less than a threshold value of the order of 1.0, the solution x is considered "numerically correct" [24]. A further explanation of the algorithm is found at the project's website [24].

3.3.7 HPL.dat Options

When HPL is compiled, a file HPL.dat is created which holds all the options that direct HPL to run in a particular manner. Here, the format and main options of this file will be discussed. Below is a sample HPL.dat file used during a benchmarking process.

HPLinpack benchmark input file
Innovative Computing Laboratory, University of Tennessee
HPL.out          output file name (if any)
1                device out (6=stdout,7=stderr,file)
3                # of problems sizes (N)
1000 4800 8000   Ns
4                # of NBs
60 80 120        NBs
1                PMAP process mapping (0=Row,1=Column)
2                # of process grids (P x Q)
1 2              Ps
4 2              Qs
16.0             threshold
3                # of panel fact
0 1 2            PFACTs (0=left, 1=Crout, 2=Right)
3                # of recursive stopping criterium
2 4 8            NBMINs (>= 1)
1                # of panels in recursion
2                NDIVs
3                # of recursive panel fact.
0 1 2            RFACTs (0=left, 1=Crout, 2=Right)
6                # of broadcast
0 1 2 3 4 5      BCASTs (0=1rg,1=1rM,2=2rg,3=2rM,4=Lng,5=LnM)
1                # of lookahead depth
1                DEPTHs (>=0)
2                SWAP (0=bin-exch,1=long,2=mix)
64               swapping threshold
0                L1 in (0=transposed,1=no-transposed) form
0                U in (0=transposed,1=no-transposed) form
1                Equilibration (0=no,1=yes)
8                memory alignment in double (> 0)

The first two lines of the file are not used. The third line lets the user choose the name of the file the results will be saved to, if desired, in this case HPL.out. The fourth line directs the output either to the command terminal or to a file whose name is assigned in the third line. The program will print to a file if the value in line four is anything other than 6 or 7. For the above example, output will be written to the file HPL.out because line four is specified as 1.

The fifth line allows the user to specify how many linear systems of equations will be solved. Line six specifies the sizes, Ns, of the matrices. Each generated dense matrix will therefore have the dimension Ns x Ns. The limiting factor in choosing a matrix size is the amount of physical memory on the computers to be benchmarked. A benchmark will return much better results if only the physical memory, Random Access Memory (RAM), is used and not the virtual memory. Virtual memory is a method of simulating RAM by using the hard drive for data storage. To calculate the maximum matrix size that should be used, first add up the amount of RAM on the computers in the cluster. For example, our cluster has four nodes with one Gigabyte of RAM each, for a total of four Gigabytes of physical memory. Now multiply the total physical memory by 1 element per 8 bytes, Eq. 3.4; each double precision entry in the matrix occupies eight bytes.

number_elements = 4,000,000,000 bytes x (1 element / 8 bytes)    (3.4)

The above result gives the total number of entries allowed in the matrix, in this example 500,000,000. Taking the square root, Eq. 3.5, gives the matrix size.

matrix_size = sqrt(number_elements)    (3.5)

For this example the maximum matrix size is around 22,000 x 22,000. Leaving enough memory for the operating system and other system processes reduces this to a maximum allowable matrix dimension of around 20,000 x 20,000.
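This arithmetic is easy to check from the command line; for example, with bc (a convenience sketch, entering the 4 GB total by hand):

redboots@apollo> echo 'scale=0; sqrt(4000000000/8)' | bc
22360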

Lines seven and eight, respectively, specify the number of block sizes to be tested in different runs and the sizes of those blocks. The block size, used during the data distribution, helps determine the computational granularity of the problem. If the block size is too small, much more communication between the nodes is necessary in order to transfer the data. If the block size is too large, the compute node may not be able to handle the computation efficiently. Block sizes typically range from 60 to 180, depending on the network and compute node architecture [24].

Line nine specifies how the MPI processes should be mapped onto the compute nodes. Mapping is a way of specifying which processors execute which threads. The two possible mappings are row major and column major. Lines ten through twelve allow the user to specify the number of grid configurations to be run and the layout of the grids. For the above example, two grid configurations will be run: the first has a 1 by 4 layout and the second a 2 by 2 layout. If the user wanted to test HPL on a single computer, the number of process grids would be 1, and likewise the values of P and Q.

1                # of process grids (P x Q)
1                Ps
1                Qs

Line thirteen specifies the threshold to which the residuals should be compared. The default value is sixteen and is the recommended value, which will cover most cases [24]. If the residuals are larger than the threshold value, the run will be marked as a failure, even though the results can be considered correct. For our benchmarking, the default value of 16 will be used for large problem sizes and -16 for small problems.

A negative threshold causes xhpl to skip checking of the results. The user may wish to skip the checking of results if a quick benchmark is desired without having to re-solve the system of equations.

Lines fourteen through twenty-one allow the user to choose the panel factorization, PFACTs, and recursive panel factorization, RFACTs. The panel factorization is matrix-matrix operation based and recursive, dividing the panel into NDIVs subpanels at each step [24]. The recursion continues until there are no more than NBMINs columns left. For the panel and recursive panel factorization, the user is allowed to test left-looking, right-looking, and Crout LU factorization algorithms.

3                # of panel fact
0 1 2            PFACTs (0=left, 1=Crout, 2=Right)
2                # of recursive stopping criterium
2 4              NBMINs (>= 1)
1                # of panels in recursion
2                NDIVs
3                # of recursive panel fact.
0 1 2            RFACTs (0=left, 1=Crout, 2=Right)

The above example tests the three LU factorization algorithms for both recursive panel factorization and panel factorization, tests two cases of stopping the recursion at two and four columns in the current panel, and tests one case of dividing the panel into two subpanels. Lines twenty-two and twenty-three specify how the panels are to be broadcast to the other processors after factorization. The six available broadcast algorithms are increasing-ring, modified increasing-ring, increasing-2-ring, modified increasing-2-ring, long (bandwidth reducing), and modified long (bandwidth reducing) [24].

The remaining lines specify options that further tune the benchmark tests. For our runs the recommended values will be used. A further explanation of the remaining options can be found at the HPL website [24].

3.3.8 Test Setup

Many tests were conducted with different parameters specified in the file HPL.dat. Initially a small problem size, Ns, was tested while varying the other options. From there it was determined which options allowed xhpl to perform the best. A small problem size was used primarily because it would take a lot of time to conduct all the tests if a large size were used. If 5 block sizes, NBs, all 6 panel broadcast methods, BCASTs, and 2 process grids were tested, that would be 60 tests. Even though with a small problem size the reported Gflops will not be the highest attainable, we are still able to determine which parameters affect relative performance. After a test with a small problem size was conducted, the options that performed the worst were eliminated and tests with the remaining well performing options were carried out. This process of eliminating the worst performing options and rerunning the tests with the best options was continued until a set of options that performed the best was obtained.

The first test conducted involved a problem size, Ns, of 1000 on apollo. Initially only apollo will be tested, to get a base measurement of a single node's performance. After apollo is fully tested, two, then three, then all of the nodes will be tested so that the scalability of the cluster can be determined. The HPL.dat file that was used for the initial testing of apollo is seen in Appendix B.3.2. It tests six block sizes; left-looking, right-looking, and Crout's method for factorization; three values for the number of subpanels to create; and four values for the number of columns at which recursive panel factorization stops. It should be noted that the threshold used for these tests will be -16. By using a negative threshold, checking of the results will not be performed. For larger tests and tests using the network, the results will be checked to determine whether the network is correctly transferring data. Also, for a single-CPU test, testing the panel broadcast algorithms is not necessary because the network is not being used.

3.3.9 Results

The results of the initial tests on apollo are shown in Appendix B.3.3. From the data and Figure 3.6, there is a very noticeable correlation between block size, NBs, and performance. The best performance was achieved at a block size of 160.

Figure 3.6. Block size effect on performance for 1 node

The next test conducted used a single block size, 160, and left the other options as they were for the previous test. The HPL.dat file used for this test is shown in Appendix B.3.4. The test was rerun using the command:

redboots@apollo> mpirun -np 1 xhpl

The results of the second test, using 160 as the block size, are shown in Appendix B.3.5. From the results, the most noticeable parameter affecting performance is NDIVs, the number of subpanels created during the recursive factorization.

For NDIVs equal to 2, the average is 1.891 Gflops; for NDIVs equal to 3, 1.861 Gflops; and for NDIVs equal to 4, 1.864 Gflops.

The next test involved setting NDIVs to 2 and rerunning with the other parameters unchanged. From the data, the parameter that affects performance the most for this run is the value of NBMINs. An NBMINs value of 4 returns the best results, with an average of 1.899 Gflops, compared to averages of 1.879 Gflops for NBMINs equal to 1, 1.891 Gflops for NBMINs equal to 2, and 1.892 Gflops for NBMINs equal to 8.

The remaining parameters that can be changed are the algorithms used for the panel factorization and recursive panel factorization. For this test, NBMINs was set to 4 and the test rerun as before.

redboots@apollo> mpirun -np 1 xhpl

From the results, all of the algorithms for panel factorization and recursive panel factorization produced very similar results. Since the three factorization algorithms produced similar results, for the final test using a large problem size, Crout's algorithm will be used for both factorizations, mainly because the factorization algorithm implemented in SPOOLES is Crout's. The final test involving one processor will use the optimum parameters determined above and also the maximum problem size allowed by system memory. Checking of the solution will also be enabled by changing the threshold value, line 13, to a positive 16. To calculate the maximum problem size, Eqs. 3.4 and 3.5 are used.

number_elements = 1,000,000,000 bytes x (1 element / 8 bytes) = 125,000,000
matrix_size = sqrt(125,000,000) = 11,180

A matrix of size 11180 x 11180 is therefore the largest that can fit into system memory. To ensure that slow virtual memory is not used, a matrix of size 10000 x 10000 is used for the final test. The HPL.dat file used for this test is found in Appendix B.3.6. Using the ATLAS BLAS routines, apollo achieves a maximum of 2.840 Gflops.

The theoretical peak performance of a 2.4 GHz Pentium 4 is calculated as follows. The processors we are using support SSE2 instructions, which allow 2 floating-point operations per CPU cycle. The theoretical peak is therefore the product of 2 floating-point operations per cycle and the processor frequency of 2.4 GHz, Eq. 3.6:

Theoretical Peak = 2 FP ops/cycle × 2.4 GHz = 4.8 Gflops     (3.6)

The 2.840 Gflops reported by xhpl with the ATLAS BLAS routines is approximately 59 percent of the processor's theoretical peak of 4.8 Gflops. The results of the tests using the ATLAS BLAS routines are shown in Table 3.2, which lists the problem size, block size, NDIVs, NBMINs, PFACTs, RFACTs, the average Gflops over all tested options during each run, and that average as a percentage of the theoretical maximum of 4.8 Gflops.

Table 3.2. ATLAS BLAS routine results

Ns     Block Size             NDIVs   NBMINs    PFACTs   RFACTs   Gflops   % T Max
1000   32 64 96 128 160 192   2 3 4   1 2 4 8   L C R    L C R    1.626    33.88
1000   160                    2 3 4   1 2 4 8   L C R    L C R    1.872    39.00
1000   160                    2       1 2 4 8   L C R    L C R    1.890    39.38
1000   160                    2       4         L C R    L C R    1.893    39.44
10000  160                    2       4         C        C        2.840    59.17

3.3.10 Goto's BLAS Routines

Next, Goto's BLAS routines will be tested. First, compile xhpl against Goto's BLAS routines as shown above. Edit the file Make.Linux_P4 so that the following lines are uncommented:

LAdir = /home/apollo/hda8/goto_blas
LAlib = $(LAdir)/libgoto_p4_512-r0.96.so $(LAdir)/xerbla.o

and the following line is commented out:

#HPL_OPTS = -DHPL_CALL_CBLAS

The tests are carried out in the same manner as for the ATLAS routines: first all options are tested, then the options with the most noticeable influence on performance are selected and the tests rerun until a final set of optimized parameters is obtained. The results of the tests using Goto's BLAS routines are shown in Table 3.3. They clearly show that Goto's BLAS routines perform much better than the ATLAS routines, an increase of around 29.6%.

Table 3.3. Goto's BLAS routine results

Ns     Block Size             NDIVs   NBMINs    PFACTs   RFACTs   Gflops   % T Max
1000   32 64 96 128 160 192   2 3 4   1 2 4 8   L C R    L C R    2.141    44.60
1000   128                    2 3 4   1 2 4 8   L C R    L C R    2.265    47.19
1000   128                    2       1 2 4 8   L C R    L C R    2.284    47.58
1000   128                    2       8         L C R    L C R    2.287    47.65
10000  128                    2       8         L        C        3.681    76.69

3.4 HPL–Multiple Node Tests

For this section, multiple nodes will be used. Knowing apollo's performance, running xhpl on more than one node lets us see the scalability of our cluster. Ideally a cluster's performance should scale linearly with the number of nodes present. For example, apollo achieved a maximum of 3.681 Gflops using Goto's BLAS routines; however, because of network-induced overhead, adding another identical node to the cluster will not raise performance to 7.362 Gflops. By determining how scalable a cluster is, it is possible to select an appropriate cluster configuration for a particular problem size.

3.4.1 Two Processor Tests

For tests using more than one node, the initial matrix size is increased from that used in the single-node runs. The reasoning is illustrated by the cpi tests run after MPICH was installed: for small problem sizes a single processor performed better than a multi-node run. With a small problem spread over multiple nodes, the network adds so much overhead that it becomes difficult to measure the cluster's performance accurately; the full potential of each processor is not used, and time is wasted on interprocessor communication instead.

For multiple nodes, two options are added to those tested on a single node: the process grid and the panel broadcast algorithm. Adding these to the single-node sweep would bring the total number of initial tests to almost eight thousand. Instead of running everything at once, the process grid layout and the panel broadcast algorithm are tested separately. Once the process grid layout and panel broadcast settings that perform best for a general problem have been determined, regardless of the other options, the same testing methodology used for a single node can be applied to multiple nodes.
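One convenient way to quantify the scalability discussed above is the parallel efficiency (a standard definition, not something specific to this cluster): if R_1 is the measured Gflops on one node and R_p the measured Gflops on p nodes, then

E(p) = R_p / (p × R_1)

With R_1 = 3.681 Gflops, perfect two-node scaling would give 2 × 3.681 = 7.362 Gflops; the best two-node result reported below, 5.364 Gflops in Table 3.4, therefore corresponds to an efficiency of about 5.364 / 7.362 ≈ 0.73.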

3.4.2 Process Grid

The first multi-node test determines which process grid layout achieves the best performance. A grid is simply a splitting of the work of the matrix computation into blocks, which are then distributed among the processors; a block matrix is a submatrix of the original matrix. This is illustrated by Eq. 3.7:

    [ A11  A12 ]   [ L11   0  ]   [ U11  U12 ]
    [ A21  A22 ] = [ L21  L22 ]   [  0   U22 ]     (3.7)

A is divided into four submatrices A11, A12, A21, and A22 of block size NBs, i.e., A11 is of size NBs x NBs. After LU factorization, A is represented by the matrices on the right-hand side of Eq. 3.7. By using blocked matrices, Level 3 BLAS can be employed, which is much more efficient than the Level 1 and Level 2 routines [ 27 ]. This efficiency comes from blocked submatrices fitting into a processor's high-speed cache; Level 3 BLAS makes efficient use of today's hierarchical memory systems of shared memory, cache, and registers [ 28 ]. Level 1 BLAS operates on only one or two columns, or vectors, of a matrix at a time, Level 2 BLAS performs matrix-vector operations, and Level 3 BLAS handles matrix-matrix operations [ 27 ][ 29 ].

There are several block partitioning schemes: one-, two-, and three-dimensional. HPL employs a two-dimensional block-cyclic P-by-Q grid of processes so that good load balance and scalability are achieved [ 24 ]. The layout is most easily visualized from Figure 3.7. For argument's sake, let Figure 3.7 illustrate a 36x36 matrix: 1, 2, 3, and 4 are the processor ids arranged in a 2x2 grid, which would equate to a block size, NBs, of six. This block-cyclic layout has been shown to possess good scalability properties [ 28 ].

The first multi-computer benchmark test determines which grid layout achieves the best results. A problem size, Ns, of 3500 and two process grid layouts, 1x2 and 2x1, were tested first. The remaining options do not matter for this test, because it is only being used to determine which grid layout gives the best relative result.

Figure 3.7. 2D block-cyclic layout

Goto's BLAS library is used for the remaining tests because it performs best on our architecture. The HPL.dat file used for the first multi-computer test is shown in Appendix B.3.7. To run a multi-computer test, an additional node needs to be specified in the arguments to mpirun. To run on two nodes, the following command is entered:

redboots@apollo> mpirun -np 2 xhpl

The 2 after -np specifies the number of nodes the test will run on. The nodes are listed in the file machines.Linux, located in /home/apollo/hda8/mpich-1.2.5.2/util/machines/; the two nodes used are the first two listed in this file, apollo and euclid.

From the results in Appendix B.3.8 it is clear that a "flat" 1x2 grid performs much better than the 2x1 layout, so a 1x2 process grid is used for the remaining tests. The next test selects the panel broadcast algorithm. The HPL.dat file used for this test is shown in Appendix B.3.9; all 6 panel broadcast algorithms are tested. The Increasing-2-ring algorithm performed the best in this test.
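The machines.Linux file mentioned above is simply a list of hostnames, one per line, in the order mpirun should use them. A sketch of ours would look like the following (only apollo and euclid are named in this chapter, so the remaining nodes are indicated by a comment rather than guessed hostnames):

apollo
euclid
# remaining compute nodes, one hostname per line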

The next test uses the Increasing-2-ring algorithm. The option with the strongest effect on this run was the block size; as seen in Figure 3.8, a block size of 128 clearly performed better than the others.

Figure 3.8. Block size effect on performance for 2 nodes

The next run uses a block size of 128 and tests the remaining options. From the results, an NDIVs of 2 returned the best performance. Next, NDIVs is set to 2 and the test rerun with the command:

redboots@apollo> mpirun -np 2 xhpl

In this run NBMINs affected performance the most, though only slightly: an NBMINs of 1 averaged 3.0304 Gflops, 2 averaged 3.0308 Gflops, 4 averaged 3.0312 Gflops, and 8 averaged 3.0306 Gflops.

The final test determines the maximum performance of the two-computer cluster by using the largest matrix that will fit into memory. Using Eqs. 3.4 and 3.5 with a total system memory of 2 GB, the maximum matrix size is around 15800x15800.
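As a quick check of that figure using Eqs. 3.4 and 3.5: 2,000,000,000 bytes / 8 bytes per element = 250,000,000 elements, and sqrt(250,000,000) ≈ 15,811, which rounds down to the 15800 quoted above.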

To leave memory for the operating system, a matrix of size 14500x14500 is used. The HPL.dat file for this test is in Appendix B.3.10. This run achieved 5.085 Gflops. Out of curiosity, other matrix sizes were also tested: 13500x13500 achieved 5.245 Gflops, 12500x12500 achieved 5.364 Gflops, and 11500x11500 achieved 5.222 Gflops. The results are summarized in Table 3.4.

Table 3.4. Goto's BLAS routine results–2 processors

Ns     Grid layout   Bcast algorithm   Block Size             NDIVs   NBMINs    PFACTs   RFACTs   Gflops   % T Max
3500   2x1 1x2       1                 128                    2       8         C        C        2.585    26.93
3500   1x2           0 1 2 3 4 5       128                    2       8         C        C        2.891    30.12
3500   1x2           2                 32 64 96 128 160 192   2 3 4   1 2 4 8   L C R    L C R    2.927    30.49
3500   1x2           2                 128                    2 3 4   1 2 4 8   L C R    L C R    3.025    31.51
3500   1x2           2                 128                    2       1 2 4 8   L C R    L C R    3.027    31.53
3500   1x2           2                 128                    2       4         L C R    L C R    3.029    31.55
11500  1x2           2                 128                    2       4         C        C        5.222    54.40
12500  1x2           2                 128                    2       4         C        C        5.364    55.88
13500  1x2           2                 128                    2       4         C        C        5.245    54.64
14500  1x2           2                 128                    2       4         C        C        5.085    52.97

3.4.3 Three Processor Tests

This section goes through the steps of testing three processors and discusses the results. The procedure is the same as for two processors: first the best grid layout is determined, then the best broadcast algorithm, and finally the remaining variables that return the best results are found.

The first test determined the grid layout. As with the two-processor test, a "flat" grid, here 1x3, performed the best. For broadcasting the panels to the other nodes, the modified increasing ring algorithm performed the best. Next, all the options were tested. As in previous tests, the block size affected performance the most; for the three-processor test the block size with the best overall performance was 96. The results are shown in Figure 3.9.

Figure 3.9. Block size effect on performance for 3 nodes

The next test used a block size of 96 and tested the remaining variables. The results were inconclusive as to which parameter affects performance the most. The test was run several times with similar results; the highest recorded performance was 2.853 Gflops, which occurred four times.

Since no conclusion could be drawn from the results, the remaining variables were chosen as Crout's algorithm for both the panel factorization and the recursive panel factorization, an NBMINs of 4, and an NDIVs of 2.

The final test determines the maximum performance of the three-node cluster. To find the maximum matrix size, Eqs. 3.4 and 3.5 are used again. For three gigabytes of total system memory, and accounting for the operating system's memory requirements, a maximum matrix size of 17800x17800 is used. As with the two-processor tests, several different matrix sizes are tested to see which achieves the maximum performance.
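The same arithmetic as in Eqs. 3.4 and 3.5 supports that choice: 3,000,000,000 bytes / 8 bytes per element = 375,000,000 elements, and sqrt(375,000,000) ≈ 19,365, so a 17800x17800 matrix leaves a comfortable margin for the operating system.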

Table 3.5. Goto's BLAS routine results–3 processors

Ns     Grid layout   Bcast algorithm   Block Size             NDIVs   NBMINs    PFACTs   RFACTs   Gflops   % T Max
3500   3x1 1x3       1                 128                    2       8         C        C        2.203    16.20
3500   1x3           0 1 2 3 4 5       128                    2       8         C        C        2.489    18.30
3500   1x3           1                 32 64 96 128 160 192   2 3 4   1 2 4 8   L C R    L C R    2.778    20.43
3500   1x3           1                 96                     2 3 4   1 2 4 8   L C R    L C R    2.847    20.93
17800  1x3           1                 96                     2       4         C        C        6.929    50.95
16800  1x3           1                 96                     2       4         C        C        6.798    49.99
15800  1x3           1                 96                     2       4         C        C        6.666    49.02
19000  1x3           1                 96                     2       4         C        C        6.778    49.84
18500  1x3           1                 96                     2       4         C        C        7.013    51.57
18250  1x3           1                 96                     2       4         C        C        7.004    51.50
18750  1x3           1                 96                     2       4         C        C        6.975    51.29

From Table 3.5, the three-node cluster achieved a maximum of 7.013 Gflops. By Eq. 3.6 the three-node cluster has a theoretical peak of 14.4 Gflops, so it reached only around fifty percent of peak.

3.4.4 Four Processor Tests

The method of testing four processors is the same as in the previous tests. The first test determines which grid layout delivers the best results. Besides the obvious layouts of 1x4 and 4x1, a four-processor test can also use a 2x2 grid. From the results, a 2x2 grid clearly outperformed the other layouts. From Figure 3.10, a block size of 96 performed much better overall than the others; several tests using a block size of 64 came close to 96, but their average Gflops was much lower.

Figure 3.10. Block size effect on performance for 4 nodes

To determine the maximum matrix size to be tested, Eqs. 3.4 and 3.5 are used. The maximum matrix size that can fit into RAM is around 21500x21500. Results for the four-processor tests are summarized in Table 3.6.

Table 3.6. Goto's BLAS routine results–4 processors

Ns     Grid layout   Bcast algorithm   Block Size             NDIVs   NBMINs    PFACTs   RFACTs   Gflops   % T Max
4000   4x1 1x4 2x2   1                 128                    2       8         C        C        2.546    14.47
4000   2x2           0 1 2 3 4 5       128                    2       8         C        C        3.083    17.52
4000   2x2           2                 32 64 96 128 160 192   2 3 4   1 2 4 8   L C R    L C R    3.172    18.02
4000   2x2           2                 96                     2 3 4   1 2 4 8   L C R    L C R    3.272    18.59
21500  2x2           2                 96                     2       4         C        C        8.717    49.53
21000  2x2           2                 96                     2       4         C        C        8.579    48.74
21700  2x2           1                 96                     2       4         C        C        6.939    39.43
21250  2x2           1                 96                     2       4         C        C        8.642    49.10

3.4.5 Conclusions

Several conclusions can be drawn from the benchmarking tests. First, the HPL results show a strong correlation between software being optimized for a particular processor architecture and its performance. The BLAS routines provided by Kazushige Goto, with their cache optimizations, outperform the already optimized ATLAS routines by as much as 29%. While no changes will be made to the algorithm SPOOLES uses to solve linear equations, there is always room for improvement in computational software; the ATLAS routines were long considered the best BLAS library until Goto's routines improved on them.

Figure 3.11 illustrates another important point: depending on the problem type, the fraction of theoretical peak performance that a cluster actually reaches generally decreases as the number of processors increases [ 6 ], because the time each compute node spends communicating and sharing data grows with the node count. If a cluster reaches only a small percentage of its theoretical peak, there may be an underlying problem with the network or the compute node setup. If the cluster is used to solve communication-intensive problems, the network may need to be Gigabit Ethernet or a proprietary network designed for low latency and high bandwidth; if the problems are computation-intensive, increasing the RAM or processor speed are two possible remedies.

Although the performance ratio decreases as nodes are added, the advantage of more nodes is that the maximum problem size that can be solved increases. A problem of size 12000x12000 run on one processor took around two and a half hours, with virtual memory being used extensively; on four processors the same problem was solved in less than three minutes with no virtual memory use. Even though the solve efficiency of multiple processors is lower than that of a single processor, spreading the work among compute nodes allows large problems to be solved in a much more reasonable time frame.

Figure 3.11. Decrease in maximum performance

The block size, NB, is used by HPL to control the data distribution and the computational granularity. If the block size is too small, the number of messages passed between compute nodes increases; if it is too large, messages are not passed efficiently. When apollo alone was benchmarked with HPL using Goto's routines, the best results were obtained with a block size of 128, but as the number of nodes grows, so does the importance of moving data between nodes efficiently. With multiple nodes, a block size of 96 let the blocked matrix-multiply routines in HPL return the best results.

Depending on the problem type and the network hardware, parallel programs perform strikingly differently. For example, when running cpi on the largest problem tested, one node took 53 seconds to calculate a solution, two compute nodes took around 27 seconds, and four nodes only decreased the run time to around 20 seconds. Running two nodes nearly doubled the performance, but four nodes did not achieve a proportional gain.

When running a test problem under HPL with a problem size of 10000x10000, one compute node completed the run in three minutes, two compute nodes completed the same run in two minutes and fifteen seconds, and four nodes took one minute and forty-nine seconds. For computationally intensive problems such as solving linear systems of equations, a fast network is critical to achieving good scalability. For problems like cpi, which simply divide up the work and send it to the compute nodes without much interprocessor communication, a fast network is not as important to achieving acceptable speedup.
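Expressed as speedups (simple arithmetic on the timings quoted above), the contrast is clear: for cpi, 53 s / 27 s ≈ 2.0 on two nodes and 53 s / 20 s ≈ 2.7 on four; for the 10000x10000 HPL run, 180 s / 135 s ≈ 1.3 on two nodes and 180 s / 109 s ≈ 1.7 on four.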

CHAPTER 4
CALCULIX

CalculiX [ 30 ] is an Open Source software package that provides the tools to create two-dimensional and three-dimensional geometry, mesh that geometry, apply boundary conditions and loads, and then solve the problem. CalculiX is free software and can be redistributed and/or modified under the terms of the GNU General Public License [ 31 ]. CalculiX was developed by a team at MTU AeroEngines in their spare time, and they were granted permission to publish their work.

4.1 Installation of CalculiX GraphiX

Two separate programs make up CalculiX: cgx (CalculiX GraphiX) and ccx (CalculiX CrunchiX). cgx is the graphical pre-processor that creates the geometry and finite-element mesh, and the post-processor that views the results; ccx is the solver that calculates the displacements or temperatures of the nodes.

First go to http://www.dhondt.de/ and scroll to near the bottom of the page. Under Available downloads for the graphical interface (CalculiX GraphiX: cgx), select the link that reads a statically linked Linux binary. Save the file to a folder, in our case /home/apollo/hda8, and unzip it by typing:

redboots@apollo> gunzip cgx_1.1.exe.tar.gz
redboots@apollo> tar -xvf cgx_1.1.exe.tar

This will create the file cgx_1.1.exe. Rename cgx_1.1.exe to cgx, become root, move the file to /usr/local/bin, and make it executable.

redboots@apollo> mv cgx_1.1.exe cgx
redboots@apollo> su

Password:
root@apollo> mv cgx /usr/local/bin
root@apollo> chmod a+rx /usr/local/bin/cgx

To view the options that cgx accepts, type cgx at a command prompt:

redboots@apollo> cgx
-------------------------------------------------------------------
CALCULIX GRAPHICAL INTERFACE Version 1.200000

A 3-dimensional pre- and post-processor for finite elements
Copyright (C) 1996, 2002 Klaus Wittig

This program is free software; you can redistribute it and/or
modify it under the terms of the GNU General Public License as
published by the Free Software Foundation; version 2 of the License.

This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.

You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software
Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
-------------------------------------------------------------------

usage: cgx [-b|-g|-c|-duns2d|-duns3d] filename

 -b       build-mode, geometry file must be provided
 -c       read a solver input file (ccx)
 -duns2d  read duns result files (2D)
 -duns3d  read duns result files (3D)
 -g       use element-group-numbers from the result-file

4.2 Installation of CalculiX CrunchiX

First go to http://www.dhondt.de/ and scroll to near the bottom of the page. Under Available downloads for the solver (CalculiX CrunchiX: ccx), select the link that reads the source code. Save this file to a folder, in our case /home/apollo/hda8, and unzip it.

redboots@apollo> gunzip ccx_1.1.src.tar.gz
redboots@apollo> tar -xvf ccx_1.1.src.tar

This creates the folder CalculiX, with the source code inside CalculiX/ccx_1.1/src. In order to build the serial solver for CalculiX, SPOOLES and ARPACK need to be installed. ARPACK is a collection of Fortran 77 subroutines designed to solve large-scale eigenvalue problems [ 32 ], while SPOOLES provides the solver for sparse linear systems of equations [ 33 ].

4.2.1 ARPACK Installation

To download and install the ARnoldi PACKage (ARPACK), first go to the homepage at http://www.caam.rice.edu/software/ARPACK/ and click the link for Download Software. Download the zipped file arpack96.tar.gz to a directory, in our case /home/apollo/hda8/, and unpack it.

redboots@apollo> gunzip arpack96.tar.gz
redboots@apollo> tar -xvf arpack96.tar

This will create the folder ARPACK, which contains the source code in SRC, examples, documentation, and several Makefiles for different architectures in ARMAKES. Copy the file $ARPACK/ARMAKES/ARmake.SUN4 to the main ARPACK directory and rename it to ARmake.inc.

redboots@apollo> cp ARMAKES/ARmake.SUN4 ARmake.inc

Edit ARmake.inc so that it points to the main ARPACK directory, uses the correct Fortran compiler, and specifies Linux as the platform; the ARmake.inc file used for our system is given in Appendix C.1. Finally, while in the main ARPACK directory, make the library:

redboots@apollo> make lib

This will create the ARPACK library file libarpack_Linux.a in the current directory.
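The ARmake.inc edits mentioned above amount to changing only a few variables. On a system like ours they would look roughly as follows (a sketch using the variable names from the stock ARmake.inc; the compiler name g77 is an assumption, and the exact file we used is the one reproduced in Appendix C.1):

home   = /home/apollo/hda8/ARPACK
PLAT   = Linux
FC     = g77
FFLAGS = -O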

4.2.2 SPOOLES Installation

To install the SParse Object Oriented Linear Equations Solver (SPOOLES), first go to the project's website at http://www.netlib.org/linalg/spooles/spooles.2.2.html. Near the middle of the page there are links for documentation and software downloads. Download the file spooles.2.2.tgz, place it in the folder SPOOLES.2.2, and unpack it. The main directory for SPOOLES is located in /home/apollo/hda8 for our installation.

redboots@apollo> cd /home/apollo/hda8
redboots@apollo> mkdir SPOOLES.2.2
redboots@apollo> mv spooles.2.2.tgz SPOOLES.2.2
redboots@apollo> cd SPOOLES.2.2
redboots@apollo> tar -xvzf spooles.2.2.tgz

This unpacks spooles.2.2.tgz in the folder SPOOLES.2.2. First, edit the file Make.inc so that it points to the correct compiler, the MPICH install location, the MPICH libraries, and the include directory. The MPICH options are only needed for building the parallel solver, which is discussed later. To create the library, while in the directory /home/apollo/hda8/SPOOLES.2.2, type:

redboots@apollo> make lib

This will create the library spooles.a in $SPOOLES.2.2/.
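For orientation, the Make.inc settings referred to above are ordinary make variables. On a setup like ours they would look roughly like the following (a rough sketch from memory of SPOOLES 2.2, so the variable names and paths should be checked against the Make.inc shipped with the library; only the MPI-related entries matter for the parallel build discussed later):

CC = gcc
OPTLEVEL = -O3
MPI_INSTALL_DIR = /home/apollo/hda8/mpich-1.2.5.2
MPI_INCLUDE_DIR = -I$(MPI_INSTALL_DIR)/include
MPI_LIB_PATH = -L$(MPI_INSTALL_DIR)/lib
MPI_LIBS = $(MPI_LIB_PATH) -lmpich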

4.2.3 Compile CalculiX CrunchiX

After ARPACK and SPOOLES are installed, enter the CalculiX CrunchiX source directory:

redboots@apollo> cd /home/apollo/hda8/CalculiX/ccx_1.1/src

Edit the Makefile so that it points to the ARPACK and SPOOLES main directories and their compiled libraries, and to the correct C and Fortran 77 compilers. The Makefile used for our setup is shown in Appendix C.2. After the Makefile has been edited, type:

redboots@apollo> make

This compiles the solver for CalculiX. Now copy the solver ccx_1.1 to /usr/local/bin and make it executable and readable by users.

redboots@apollo> su
Password:
root@apollo> cp ccx_1.1 /usr/local/bin
root@apollo> chmod a+rx /usr/local/bin/ccx_1.1

4.3 Geometric Capabilities

This section briefly describes the steps from creating a model to viewing the results of a finite-element analysis. The first step is to create the model. For two-dimensional geometries, points, lines, and surfaces are defined; for three-dimensional geometries, points, lines, surfaces, and bodies are defined.

The easiest way to create a point in CalculiX is to give its location in three-dimensional space. Each point can be assigned a name, or you can use the wild-card character and let CalculiX name the point for you. After points, the next geometric entities to be created are lines. Lines can be straight, arcs, or splines. To create a straight line, two points are selected; for an arc, a beginning and an end point are selected along with the arc's center point; for a spline, multiple points are defined and then each point is selected to create the spline. Surfaces are defined by selecting 3 to 5 lines; using the command qsur, the mouse can be used to select lines and generate surfaces.

Surfaces may be flat or curved, depending on the lines that define their boundaries. Bodies are defined by 5 to 7 surfaces. After a surface is created, it can be copied and translated, or swept along a trajectory, to create a body; alternatively, all the necessary surfaces can be created first and the command qbod used to select them and create the desired body.

4.4 Pre-processing

CalculiX allows fairly complex models to be created. Points, lines, surfaces, and bodies are created, in that order, either by typing commands or by selecting entities with the mouse through the interface. Another option, since the file format is straightforward, is simply to write the geometry file directly in your favorite text editor. If a point is modified, any line partially defined by that point is also modified, along with all related surfaces and bodies; the same holds when modifying lines, surfaces, and bodies and their associated geometric entities.

To begin creating geometry with CalculiX, issue the command:

redboots@apollo> cgx -b all.fbd

where all.fbd is the geometry file being created. This brings up the main screen seen in Figure 4.1. Geometric entities can be created in a multitude of ways, as described in the following sections.

4.4.1 Points

Points are created by entering their locations in three-dimensional space or by splitting lines. To create a point, simply type the following while the mouse cursor is within the CalculiX window:

Figure 4.1. Opening screen

pnt p1 0.5 0.2 10

This creates a point named p1 at 0.5 in the x-, 0.2 in the y-, and 10 in the z-direction. To have CalculiX assign names to points automatically, the wild-card character is used in place of p1. Instead of using the CalculiX display, the following line can be entered in the file all.fbd to create the same point:

PNT p1 0.50 0.20 10.0

In Figure 4.2, p1 is plotted with its name.

4.4.2 Lines

Lines are created by first creating points and then selecting them, or by entering a command. To create a line by selection, at least 2 points must exist. First, enter the command:

qlin

A little selection box appears. Press b (for begin) when one of the points is within the selection box, and l (for line) when the selection box is over the second point; b is always the first key pressed when creating a line and l is the last.

Figure 4.2. p1 with label

If an arc is desired, press b over the first point, c (for center) over the second, and l over the third to create the arc. If a spline is desired, press s over all the defining points between pressing b and l. Figure 4.3 shows a spline passing through 4 points.

Figure 4.3. Spline

4.4.3 Surfaces

Surfaces can be created with the command gsur or with the mouse, and are limited to 3 to 5 sides. To create a surface with the mouse, the user enters the command qsur, places a selection box over a line and presses 1 for the first line, 2 for the second, and so on until up to 5 sides are selected. Figure 4.4 shows a surface created from 3 straight lines and the spline.

Figure 4.4. Surface

4.4.4 Bodies

The final geometric entities that can be created with CalculiX are bodies. Bodies are defined by selecting five to seven surfaces, either with the mouse or by command-line input. Another way to create a body is to sweep a surface along an axis or rotate it about a center line. Figure 4.5 shows a body created by sweeping the surface of Figure 4.4 along the vector (2, 10, 3).

Figure 4.5. Body created by sweeping

4.5 Finite-Element Mesh Creation

This section describes the finite-element mesh creation capabilities of CalculiX, which can create and handle several two-dimensional and three-dimensional element types. The two-dimensional elements that CalculiX can create are:

2 node beam element
3 node beam element
3 node triangular element
6 node triangular element
4 node shell element
8 node shell element

The three-dimensional elements that CalculiX can create are:

8 node brick element

20 node brick element

Although CalculiX cannot create other element types at this time, the finite-element solver is able to handle them, and CalculiX can then post-process the results. The three-dimensional elements that can be solved but not created are:

4 node tetrahedral element
10 node tetrahedral element
6 node wedge element
15 node wedge element

After the geometry is created, CalculiX allows the user to create a finite-element mesh. The user specifies the number of elements along each edge and then selects the element type; for 8-node brick elements the command is:

elty all he8

Twenty-node brick elements are selected by simply replacing he8 with he20. Next, to create the mesh, type:

mesh all
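Putting these commands together, a minimal meshing session in cgx looks something like the following (a sketch only: elty and mesh are used exactly as above, and send all abq, which exports the mesh in the solver's input format, is demonstrated in the next chapter):

elty all he8
mesh all
send all abq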

CHAPTER 5
CREATING GEOMETRY WITH CALCULIX

In this chapter a tutorial is given on creating a three-dimensional part, applying a finite-element mesh and boundary conditions, and solving the problem. The various methods of creating and modifying geometric entities are covered, along with CalculiX's usefulness as a finite-element pre- and post-processor.

5.1 CalculiX Geometry Generation

The part that we will be creating is shown in Figure 5.1.

Figure 5.1. Final part

We will use many different CalculiX commands to create points, lines, surfaces, and bodies and to modify these entities. It is a relatively simple part, but it illustrates the flexibility CalculiX gives the user when creating and editing parts, as well as its finite-element capabilities.

5.1.1 Creating Points

The first feature to be created is the handle section. To begin, create the points that define the boundary of the cross section:

pnt p1 -0.93181 -0.185 0.37
pnt p2 -0.93181 -0.185 0.87
pnt p3 -0.93181 -0.185 1.37
pnt p4 -1.4 -0.185 0.37
pnt p5 -1.4 -0.185 0.62
pnt p6 -1.4 -0.185 0.87
pnt p7 -1.4 -0.185 1.12
pnt p8 -1.4 -0.185 1.37
pnt p9 -1.9 -0.185 0.87
pnt p10 -1.65 -0.185 0.87
pnt p11 -1.15 -0.185 0.87
pnt p12 -1.22322 -0.185 1.04678
pnt p13 -1.22322 -0.185 0.69322
pnt p14 0 0 0.37
pnt p15 0 0 0.87
pnt p16 0 0 1.37

Now type:

plot pa all

where p stands for point, a plots the name of each point, and all plots all the points. If the points are hard to see, you may move the view by pressing and holding the right mouse button and moving the mouse; to rotate, press and hold the left mouse button and move the mouse; to zoom in or out, press and hold the middle mouse button and move the mouse.

If a mistake was made when typing the coordinates, you may delete points using the qdel command:

qdel

After you enter the qdel command, a tiny square appears around your mouse pointer; this is the selection box. To make it bigger, move your mouse halfway between the upper-left corner and the center of the CalculiX screen and press r.

Now move your mouse to about halfway between the lower-right corner and the center and press r again; this makes the selection box bigger. To change the size and shape of the selection box at any time, press r in one position, move the mouse to another position, and press r again. Anything that falls within the box can be selected.

If you want to select multiple points, press:

a

This puts the selection box into mode:a. To select a point, enclose it in the selection box and press:

p

where p stands for point. Other entities that may be selected are lines (l), surfaces (s), and bodies (b). To select multiple points, enclose several points within the selection box and press p. To go back to selecting only one entity at a time, press:

i

Now that all the points are created, your screen should look similar to Figure 5.2. The next step is to copy these points and translate them 0.37 in the positive y-direction. To copy points, add them to a set, then use the copy command to copy and translate that set. To create a set, type:

qadd set1

which adds the set set1. Go to the left side of the CalculiX screen and press the left mouse button; select Orientation and then +y view. This orients the screen so that you are looking in the positive y-direction. Notice that a selection box appears at the tip of the mouse pointer.

Figure 5.2. Creating points

Figure 5.3. Selection box

Now make the selection box bigger by using r to resize it, until it is about the size shown in Figure 5.3. Now press:

a

to enter multiple selection mode. Then press:

p

to add all points within the box to set1. Now press:

q

to exit the selection mode. Make sure points fourteen through sixteen are not selected. Now that the points are in the set, we can copy and translate them:

copy set1 set2 tra 0 0.37 0

5.1.2 Creating Lines

Now create lines connecting these points with the qlin command. qlin works either by selecting the beginning and end points to create a straight line, or by selecting the two end points and the center to create an arc. First, type:

qlin

A selection box appears at the tip of the mouse pointer. Make this box a little bigger, but small enough that it can still select only one point. Place the box over p1 and press:

b

for begin. Place the selection box over p4 and press:

l

for line. Press:

q

to quit. Now plot the line that was created; it should appear as shown in Figure 5.4.

Figure 5.4. Creating lines

To create an arc, type qlin and make the selection box a little bigger. Select p1 by putting the selection box over the point and pressing:

b

Put the selection box over p14 and press:

c

for center. Finally, place the selection box over p001 and press:

l

to create an arc as shown in Figure 5.5. Now create the remaining lines so that your screen looks like Figure 5.6. If an incorrect line was created, you may delete it with the qdel command as described above.

Figure 5.5. Creating lines

5.1.3 Creating Surfaces

The next step is to create surfaces, each made up of four lines. To create a surface, the command qsur and a selection box are used to select the lines that belong to the surface: place the selection box over a line and press 1, place it over another line and press 2, and repeat until all the lines that make up the surface are selected. First plot all lines with their labels:

plot la all

Now type:

qsur

A selection box appears at the tip of the mouse pointer. Make the selection box a little bigger and place it over line L003 as shown in Figure 5.6. Press:

1

Figure 5.6. Creating surfaces

and the line will turn red. Put the selection box over line L005 and press:

2

Now put the selection box over line L00K and press:

3

Finally, put the selection box over line L004 and press:

4

Now press:

g

to generate the surface. Press:

q

to end the qsur command. If a line is hard to select, rotate or zoom the screen so that the line is easier to pick. Now plot the surface that was just created with:

plus sa all

The screen should look similar to Figure 5.7, with the surface A001 plotted.

Figure 5.7. Creating surface A001

Now create the surface bounded by lines L00X, L00G, L001, and L010 in the same manner as the first surface. The screen should look like Figure 5.8 after the commands:

plot l all
plus sa all

5.1.4 Creating Bodies

The next step is to create a body out of surfaces using the qbod command. qbod requires either 5 to 7 surfaces to define a body, or exactly 2. For this example we use only 2 surfaces, connected by single lines, to create the body. If the other method were used, we would have to create 6 surfaces by selecting the 4 bounding lines of each, then select those six surfaces to create the body, which would be a longer, more tedious approach.

Figure 5.8. Creating surface A002

To create the body, first type:

qbod

Make the selection box a little bigger so that selecting a surface is easier. Place the selection box over surface A001 as shown in Figure 5.9 and press:

s

to select the surface. Now place the selection box over surface A002 and press:

s

Now press:

g

to generate the body and the remaining surfaces. Your screen should look similar to Figure 5.10 after typing the following commands:

Figure 5.9. Creating bodies

plot p all
plus l all
plus sa all
plus ba all

Figure 5.10. Plotting bodies

Now create the remaining surfaces and bodies so that your screen looks similar to Figure 5.11.

Figure 5.11. Creating the handle

5.1.5 Creating the Cylinder

The next feature to be created is the cylinder. First, create points on the cylinder boundary:

pnt 0 0 0
pnt 0 -0.95 0
pnt 0 0.95 0
pnt 0 -0.625 0
pnt 0 0.625 0
pnt -0.93181 0.185 0
pnt -0.93181 -0.185 0
pnt -0.59699 -0.185 0
pnt -0.59699 0.185 0
pnt 0.71106 -0.63 0
pnt 0.71106 0.63 0
pnt 0.4678 0.41447 0
pnt 0.4678 -0.41447 0

Notice that instead of giving each point a name, we used the wild-card character so that CalculiX generates a point name automatically. Your screen should look similar to Figure 5.12 after the commands:

plot s all
plus pa all

Figure 5.12. Creating the cylinder points

Now create the lines that define the boundary of the cylinder using the qlin command as shown above. First, type the command:

plus l all

The lines should be created from the points as shown in Figure 5.13. The next step is to create the surfaces bounded by these lines. Use the qsur command, but instead of placing the selection box over the name of each line, place it over any part of the line; as long as the selection box covers exactly one line, that line will be selected.

Figure 5.13. Creating the cylinder lines

Make the selection box a little bigger and press the numbers one through four for each line the selection box is over; after the four lines are selected, press g to generate the surface. After all the surfaces have been generated, your screen should look similar to Figure 5.14 after the command:

plus sa all

The next step is to create the bodies that define the cylinder. First, add all surfaces belonging to the cylinder to a set called cylinder1. The surface names that appear in Figure 5.15 may differ from the names on your screen; this does not matter, just select the same surfaces. Type the command:

qadd cylinder1

A little square appears at the tip of the mouse pointer. To increase the size of the selection box, with the mouse pointer in the main screen press r, move the pointer a little lower and to the right, and press r again. Move your mouse so that the name of the surface you wish to select is within the selection box.

Figure 5.14. Creating the cylinder surfaces

Figure 5.15. Cylinder surfaces

Now press s to select the surface; it should be highlighted as shown in Figure 5.16. If you accidentally select the wrong surface, press q to quit qadd and then type the command del se setname, where setname is the name of the set you added the wrong surface to.

Figure 5.16. Cylinder surfaces

Once the surfaces are added to the set, you can create a body using the swep command:

swep cylinder1 cylinder2 tra 0 0 1.75

Here the set cylinder1 is swept into the set cylinder2 along the z-axis by a length of 1.75; the set name cylinder2 is arbitrary. To view the bodies, type the command:

plot b all

or

plot ba all

The first command plots the bodies, while the second plots the bodies with their labels.

5.1.6 Creating the Parallelepiped

The next feature to be created is the parallelepiped, with a width of 1.26 and a thickness of 0.25. First create the 8 points that define its boundary. The following points lie on the cylinder:

pnt 0.711056 0.63 0.7
pnt 0.711056 -0.63 0.7
pnt 0.95 0 0.7
pnt 0 0 0.7

The following points define the circular cutout and the center of the circle:

pnt 3.5 0.63 0.7
pnt 3.5 -0.63 0.7
pnt 3.5 0 0.7
pnt 2.87 0 0.7

Now plot the points along with the surfaces using the following commands:

plot s all
plus p all

The screen should look similar to Figure 5.17. Now create the lines that form the base of the parallelepiped. We create the feature in two sections so that the half circle can be formed. Use the command qlin to connect the points into something similar to Figure 5.18, and plot all lines with:

plus l all

Figure 5.17. Creating points for parallelepiped

Figure 5.18. Creating lines for parallelepiped

Now create the bottom two surfaces. Use the qsur command, select the four lines that belong to one section, and generate that surface; repeat the procedure for the remaining section.

Plot the surfaces so that their labels appear, and add the two surfaces to the set para1:

plot sa all
qadd para1

Now use the swep command to create a body from the sets para1 and para2:

swep para1 para2 tra 0 0 0.25

5.1.7 Creating the Horse-shoe Section

The feature at the end of the part is created next. Use the pnt command to enter the following coordinates:

pnt 3.5 0 0.5
pnt 3.5 0.63 0.5
pnt 3.5 -0.63 0.5
pnt 2.87 0 0.5
pnt 4 0.63 0.5
pnt 4 -0.63 0.5
pnt 4 -0.25 0.5
pnt 4 0.25 0.5
pnt 3.5 0.25 0.5
pnt 3.5 -0.25 0.5
pnt 3.25 0 0.5

Again the wild-card character was used to let CalculiX assign a name to each point. Next use the qlin command to connect these points so that the lines look similar to those in Figure 5.19. The next step is to create the bottom surface of the end feature: use the qsur command and pick the four lines that make up each surface. It is best to make the line labels visible, which helps in picking the lines; to view them, type the command:

plus la all

Figure 5.19. Creating lines for horse-shoe section

Now plot all the surfaces; the screen should look similar to Figure 5.20.

Figure 5.20. Surfaces

Now create the body by extruding the surface 0.75 units in the positive z-direction. First add the surfaces to a set called Send1, and then use the swep command to create the body:

swep Send1 Send2 tra 0 0 0.75

The part should now look like that in Figure 5.21.

Figure 5.21. Creating body for horse-shoe section

5.1.8 Creating the Slanted Section

The next feature to be created is the slanted section. First, create the following points:

pnt 0.94174 0.125 1.75
pnt 0.94174 -0.125 1.75
pnt 0.94174 -0.125 0.95
pnt 0.94174 0.125 0.95
pnt 2.88253 0.125 0.95
pnt 2.88253 -0.125 0.95
pnt 2.88253 -0.125 1.25
pnt 2.88253 0.125 1.25

Now create lines connecting these points to form the outline shown in Figure 5.22. Make sure you connect all eight points, creating twelve lines. Use the previously created center points to create the arcs that connect the points at the base of both circular sections.

Figure 5.22. Creating lines for the slanted section

It may be difficult to select some lines; to get around this, move, zoom, or rotate the part so that the line is easier to pick. The next step is to create the surfaces bounded by these lines. Use the qsur command and select the four lines for each surface, then plot the surfaces with the command:

plot s all

The part should look something like the one in Figure 5.23.

5.2 Creating a Solid Mesh

Now that the entire part is modeled, the next step of this tutorial is to create a finite-element mesh, apply boundary constraints and loads, and solve for the stresses and displacements. The solver part of CalculiX, CalculiX CrunchiX (ccx), reads its input stack from a file with the extension .inp; the format of the input stack is similar to that of ABAQUS [ 30 ]. To create the input stack, files are exported from CalculiX that contain the nodal and element data, the loading data, and the boundary conditions.

Figure 5.23. Final part

These text files are then combined, material data is added, and the result is read by the solver.

The first step is to create the mesh. If you type the command:

plot ld all

you will notice numbers on each line. These numbers are divisions that determine how many elements lie along each line, and they may be changed to increase or decrease the number of divisions. One reason to increase the divisions is to refine the mesh around a critical area; another reason to adjust them is to make the meshes of several bodies line up with each other as closely as possible. One disadvantage of CalculiX is that it is not an auto-mesher: it does not merge bodies or meshes into a single body. If a part consists of several bodies, it is up to the user to merge the nodes of the different bodies.

5.2.1 Changing Element Divisions

The next step is to create the mesh. First specify the element type to use; for this example, eight-node brick elements. The command:

elty all he8

specifies that the entire mesh will consist of eight-node brick elements (another option is he20, twenty-node brick elements). Type the command:

mesh all

to mesh the part. Next type:

plot m all

Then move the pointer into the menu area, left-click, select the Viewing menu, and then select Lines. The surface coloring disappears and the elements are shown as green lines. If you zoom in on the horse-shoe section of the part and view it in the positive y-direction, you will see that the meshes do not line up, as seen in Figure 5.24.

Figure 5.24. Unaligned meshes

Type the command plus n all and notice that the nodes are not close enough together for us to merge them. To correct this, the line divisions must be changed. First, delete the mesh:

del me all

Next plot the line divisions:

plot ld all

Now enter the command:

qdiv

This command lets you change a line's division by selecting the line with the mouse and typing in a division number. Zoom in on and rotate the handle section of the part so that it looks like Figure 5.25, and change all line divisions to those displayed in the figure.

Figure 5.25. Changing line divisions

Notice that if you try to change the division of the lines near the cylinder, the wrong line changes.

To get around this, use the following method. First, type the command qdiv, then type a to enter the mode where the selection box can pick multiple items at once. Then create a selection box that includes all the numbers that are difficult to change, along with the numbers that change by mistake, as shown in Figure 5.26.

Figure 5.26. Pick multiple division numbers

Next type the number 9 while the selection box is in the position shown in Figure 5.26. This changes the numbers within the box, and possibly some surrounding numbers, to 9, as shown in Figure 5.27. Next change the two lines on the cylinder to 32; one way to do this is to create a selection box and pick a part of the line away from all other lines, as shown in Figure 5.28. After the handle divisions are changed, change the divisions of the cylinder to those shown in Figure 5.29. Next, change the divisions of the parallelepiped and the feature attached above it to those shown in Figure 5.30.

Figure 5.27. Change all numbers to 9

Figure 5.28. Select line away from label

The final section to change is the horse-shoe feature. First, go to the Orientation menu and choose the -z orientation. Zoom in on the horse-shoe section and change the line divisions to those shown in Figure 5.31. Finally, choose the positive y orientation and change the divisions to those shown in Figure 5.32.

Figure 5.29. Change cylinder divisions

Figure 5.30. Change parallelepiped divisions

Now that all the line divisions are changed, mesh the entire part again and view the elements and nodes:

mesh all
plot m all

Figure 5.31. Change horse-shoe section divisions

Figure 5.32. Change horse-shoe section divisions

plus n all

Select the Viewing menu and select Lines. Now zoom in on the handle section in the positive y-direction; the screen should look similar to Figure 5.33.

Figure 5.33. Improved element spacing

Now the two meshes line up much more closely than before, which makes it much easier to merge nodes.

5.2.2 Delete and Merge Nodes

The next step in creating the finite-element model is merging the nodes of the different features. The first nodes to be merged are those of the handle and the cylinder. First, zoom in on the cylinder and handle in the +y direction as seen in Figure 5.33. Now create a set called hcset1 for the handle-cylinder region:

qadd hcset1

Enter multiple selection mode:

a

Now create a box that encompasses the nodes at the handle-cylinder intersection, as seen in Figure 5.34, and enter the letter:

n

Figure 5.34. First nodal set

to select all nodes within the box. The terminal should list all the nodes that were selected. Now press:

q

to exit the selection routine. Now plot the selected nodes in blue:

plot n hcset1 b

View the nodes in the negative z-direction; your screen should look similar to Figure 5.35, though you may have more or fewer nodes than are displayed there. Now add another set that includes only the nodes we want to merge:

qadd hcset2

Create a smaller box and enter the multiple selection routine with the command:

a

Figure 5.35. Selected nodes

Figure 5.36. Select more nodes

You should have something similar to Figure 5.36. Now select the nodes that lie on the boundary between the handle and the cylinder. In Figure 5.37 you can see that I accidentally picked the wrong nodes near the bottom of the screen; to correct this, delete set hcset2, re-plot set hcset1, and reselect the correct nodes.

del se hcset2
plot n hcset1 b

Figure 5.37. Selected wrong nodes

Notice that the nodes in hcset1 appear in blue. Now repeat the steps above and select the correct nodes; once they are selected, your screen should look similar to Figure 5.38. Now view the part in the positive y-direction. There is an extra row of nodes near the bottom of the part, as shown in Figure 5.39. To correct this, enter the command:

qrem hcset2

Enter multiple selection mode:

a

and create a box around the nodes that need to be removed from the set, as shown in Figure 5.40.

Figure 5.38. Correct set of nodes

Figure 5.39. Selected extra nodes

Press n to remove the selected nodes; they should disappear. Repeat these steps if you need to remove additional nodes from the set. Now plot the nodes in hcset2:

plot n hcset2 b

Your screen should look similar to Figure 5.41.

Figure 5.40. Select nodes to delete

Figure 5.41. Final node set

The next step is to merge the nodes that are close together, which effectively joins the two features into one body. To merge the nodes we use the merg command, which has the form:

merg n set-name gtol

where n specifies that nodes are being merged, set-name is the set that contains the entities to merge, and gtol is the maximum distance by which two entities can be separated and still be considered equal.

Other entity options are p for points, l for lines, and s for surfaces. To determine gtol, view the part in the positive y-direction; you should notice that the nodes are furthest apart near the bottom of the handle. To measure the distance between nodes, use the command:

qdis

When you enter qdis a selection box appears; make it large enough to select one node. Put the box over one node and press the n key, then go to the next node and press n again. The command screen lists the distances in the x, y, and z directions. Any two nodes selected at the bottom of the handle should be about 0.012812 apart in z, so we use a gtol of 0.013:

merg n hcset2 0.013

Notice that the shape of the handle deforms a little. This deformation introduces error, and the analysis results around it will be inaccurate.

The next step is to merge the slanted section and the parallelepiped to the cylinder. First, view the part in the positive y-direction with the nodes plotted. Add a set called cspset1 and select the nodes within the box shown in Figure 5.42, making sure to enter multiple selection mode with the a command. After the nodes are selected, enter the command:

plot n cspset1 b

to plot the nodes of set cspset1 in blue. The screen should look similar to Figure 5.43. Now view the part from the +z direction. Add another set, cspset2, and select the nodes along the boundary between the cylinder, the slanted section, and the parallelepiped, as shown in Figure 5.44. After the nodes along the boundaries are selected, plot them in blue with the command:

plot n cspset2 b

Figure 5.42. Select nodes

Figure 5.43. Plot nodes

Notice that there are extra nodes contained in this set, as shown in Figure 5.45. As before, use the qrem command to remove the extra nodes from the set so that the screen looks similar to Figure 5.46. Now determine the maximum distance up to which nodes should be merged; notice that the nodes are furthest apart on the top surface of the parallelepiped where it meets the cylinder.

Figure 5.44. Select nodes

Figure 5.45. Select nodes

Determine this maximum distance with the qdis command as before; it is around 0.020312 units. Check other nodes to confirm that this is the largest. Now merge all nodes that are at most 0.0205 apart:

merg n cspset2 0.0205

Figure 5.46. Final node set

The next nodes to merge are those along the horizontal interface between the slanted section and the parallelepiped. First, view the part in the positive y-direction and use the qadd command to add a set called spset1:

qadd spset1

Enter multiple selection mode:

a

and create a box similar to that in Figure 5.47. Select the nodes by pressing the n key. Now view the part from the negative z-direction and remove all unnecessary nodes from the set with the qrem command:

qrem spset1

Select the extra nodes so that your screen looks similar to Figure 5.48. You can also remove additional nodes by viewing the part in the positive y-direction and removing nodes until your screen looks similar to Figure 5.49.

Figure 5.47. Select nodes from the side

Figure 5.48. Good node set

The final step before merging is to determine the maximum distance between nodes. Use the qdis command and zoom in on the middle section of the parallelepiped as shown in Figure 5.50. In that section of the part the maximum distance between any two nodes is around 0.041807 units, so the nodes in spset1 will be merged with a maximum distance of 0.04181.

PAGE 128

Figure 5.49. Final node set
Figure 5.50. Determine node distance

maximum distance is 0.04181. It is recommended that you save your work up to this point in the frd file format:

send all frd

This will send the finite element data that has been created so far. The nodal and element data will be sent to the file all.frd. If you make a mistake in merging the nodes, you can exit without saving your work and reopen the file all.frd. Now merge the nodes:

merg n spset1 0.04181

The final part of the finite-element setup is merging the nodes between the slanted section, the parallelepiped, and the horse-shoe shaped feature. First, view the part in the positive y-direction, create a set called sphset1, and create a selection box as shown in Figure 5.51.

Figure 5.51. Create selection box

qadd sphset1

Enter multiple entity selection mode:

a

Once the nodes are selected, plot the chosen nodes in blue with:

plot n sphset1 b

Notice that there are many extra nodes in the set. Remove them using the qrem command so that your screen looks like Figure 5.52.

Figure 5.52. Final node set

Now determine the maximum distance between any two nodes using the qdis command. The maximum distance between any two nodes occurs at the interface between the horse-shoe and slanted sections. The distance between these nodes is around 0.0279. Now merge these nodes up to a distance of 0.02791:

merg n sphset1 0.02791

The merging of nodes is now complete.

5.2.3 Apply Boundary Conditions

This section describes the steps for adding boundary conditions to the part and saving the data. The boundary conditions to be applied are the following: the inside of the handle will be fixed, and the horse-shoe end face will have a 130 lbf load applied. The material for the part will be AISI 1018 steel with a modulus of elasticity of 30E+6 psi, a Poisson's ratio of 0.33, and a yield strength of 60,000 psi.

First, plot all nodes and view the part near the handle in the positive y-direction as shown in Figure 5.53.

Figure 5.53. Side view of handle with nodes plotted

Next, add a set called fix and enter the multiple selection mode:

qadd fix
a

Make the selection box a little bigger and place it over what looks like one node, as shown in Figure 5.54. Press n to select that node and all nodes behind it in the selection box. This will add all nodes within that box to the set fix. Do this for all the nodes around the inside surface of the handle. After all the nodes are added to the set fix, send the data to a file with the command:

send fix spc 123

This will send the set fix with single point constraints (spc) that fix each node in the x (1), y (2), and z (3) directions. If the nodes were only to be fixed in the x and z directions, the command would be:

Figure 5.54. Select nodes on handle inner surface

send fix spc 13

Next, create the set load and add all the nodes on the surface of the horse-shoe end, Figure 5.55.

Figure 5.55. Add nodes to set load

A 130 lbf load will be applied to the end of the horse-shoe surface. Since there are 130 nodes on this surface, simply apply a -1 lbf load to each node in the z-direction:

send load force 0 0 -1

Finally, send all the element and nodal data to a .msh file. This will contain the element and nodal data in a format readable by CalculiX CrunchiX:

send all abq

Now that all the files that define the part have been created, the input deck can be written. The input deck contains the element and nodal data, the boundary conditions, loading conditions, material definitions, the type of analysis, and what data to print out for elements and nodes. The input deck for this run is seen in Appendix D.

5.2.4 Run Analysis

After the input deck is created, the analysis can be performed. Simply type at the command line:

redboots@apollo> ccx_1.1 -i all.inp

where all.inp is the name of the input deck file. After the analysis has completed, several files will be created. The file ending in the extension frd contains the results that are available. For this test, node displacement and element stress are computed. To view the results, type:

redboots@apollo> cgx all.frd

On the left-hand side of the CalculiX GraphiX interface, left-click the mouse, select Datasets, then Stress. Next, left-click on the left side of the CalculiX interface and select Datasets, Entity, and then Mises. This will plot the von Mises stress for the part, Figure 5.56.

Figure 5.56. von Mises stress for the part
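The complete input deck for this run is listed in Appendix D. For orientation only, a minimal CalculiX deck for a linear static analysis of this kind follows the Abaqus-style card format sketched below; the node numbers, the element set name Eall, and the material name are illustrative assumptions, not the values generated by the steps above.

*INCLUDE, INPUT=all.msh
** nodes and elements written by cgx
*MATERIAL, NAME=STEEL
*ELASTIC
30.E6, .33
*SOLID SECTION, ELSET=Eall, MATERIAL=STEEL
*BOUNDARY
** node (or node set), first DOF, last DOF -- the fixed handle nodes
1001, 1, 3
*STEP
*STATIC
*CLOAD
** one line per loaded node: node, DOF, magnitude (130 nodes at -1 lbf each)
2001, 3, -1.
*NODE FILE
U
*EL FILE
S
*END STEP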


CHAPTER 6
OPEN SOURCE SOLVERS

Solving large linear systems is one of the most common tasks in the engineering and scientific fields. There are many packages that solve linear systems; some of the commonly known open-source packages include LAPACK, PETSc, ScaLAPACK, SPOOLES, SuperLU, and TAUCS. All of these packages can solve a problem on a single computer, while some also allow multi-threaded solving and computation over a cluster. In Table 6.1 a comparison is made between these solvers. A more comprehensive table can be found at http://www.netlib.org/utk/people/JackDongarra/la-sw.html

Table 6.1. Comparison of solvers. The table compares LAPACK, TAUCS, SuperLU, SPOOLES, ScaLAPACK, and PETSc with respect to complex and real arithmetic support, implementation language (C and/or F77), serial and parallel operation, the message passing system used (MPI, PVM), direct versus iterative solution, and sparse matrix storage.

For the Parallel column, a distinction is made as to which distributed-memory message passing library is used, and for the Direct and Iterative columns a note is made as to which packages use a sparse matrix format to store the data.

6.1 SPOOLES

SPOOLES (SParse Object-Oriented Linear Equations Solver) is used to solve sparse real or complex linear systems, Ax = b, for x. The matrices can be symmetric, Hermitian, square non-symmetric, or overdetermined [34]. SPOOLES can use either QR or LU factorization in serial, multi-threaded, or parallel environments. SPOOLES could be considered a medium-size program, with over 120,000 lines of code. Being Object Oriented (OO), it has thirty-eight objects and over 1300 different functions, with each object knowing and operating on itself and knowing very little about other objects [35]. SPOOLES was supported in part by DARPA, with contributions by Cleve Ashcraft, Daniel Pierce, David K. Wah, and Jason Wu.

6.1.1 Objects in SPOOLES

SPOOLES is Object Oriented software written in the C language. Object Oriented programming uses objects that interact with each other. Objects provide modularity and structure to the program, and they contain both data and functions [36]. The objects that make up SPOOLES are divided into three categories: utility, ordering, and numeric. The utility objects have several uses, such as storing and operating on dense matrices, creating storage for real and complex vectors, generating random numbers, and storing permutation vectors. The ordering objects can represent a partition of a graph during nested dissection, model an elimination tree for factorization, represent the graph of a matrix, and provide an ordering based on minimum degree, multi-section, or nested dissection algorithms. Numeric objects can assemble a sparse matrix, store and operate on a front during factorization, hold and operate on submatrix data, and solve linear systems [37]. A complete list of the available utility, ordering, and numeric objects is given in Tables 6.2, 6.3, and 6.4, respectively.

6.1.2 Steps to Solve Equations

There are four main steps to solving equations in SPOOLES: communicate, reorder, factor, and solve [34]. In later sections these steps will be explained in more detail for solving in serial and parallel environments. The multi-threaded capabilities of SPOOLES will not be looked at because the goal is to compare the methods that we are able to explore in our lab. The computers that we use in our cluster are all single-processor Intel Pentium 4s with no HyperThreading technology.

Table 6.2. Utility objects
A2          dense two-dimensional array
Coords      object to hold coordinates in any number of dimensions
DV          double precision vector
Drand       random number generator
I2Ohash     hash table for the factor submatrices
IIheap      simple heap object
IV          integer vector
IVL         integer list object
Ideq        simple dequeue object
Lock        abstract mutual exclusion lock
Perm        permutation vector object
Utilities   various vector and linked-list utility methods
ZV          double precision complex vector

Table 6.3. Ordering objects
BKL         Block Kernighan-Lin algorithm object
BPG         bipartite graph object
DSTree      domain/separator tree object
EGraph      element graph object
ETree       front tree object
GPart       graph partitioning algorithm object
Graph       graph object
MSMD        multi-stage minimum degree algorithm object
Network     network object for solving max flow problems
SolveMap    map of submatrices to processes for solves
Tree        tree object

6.1.3 Communicate

The first step, communicate, reads in the data for the matrices, either from stored values in memory or from a file. The sparse matrix A is read in the following manner: from the first line, the number of rows, columns, and entries are read. Next, each row and column number is read together with the value of the nonzero entry in the sparse matrix. SPOOLES only reads in the upper triangular matrix for a symmetric A.

Table 6.4. Numeric objects
Chv             block chevron object for fronts
ChvList         object to hold lists of Chv objects
ChvManager      object to manage instances of Chv objects
DenseMtx        dense matrix object
FrontMtx        front matrix object
ILUMtx          simple preconditioner matrix object
InpMtx          sparse matrix object
Iter            Krylov methods for iterative solves
PatchAndGoInfo  modified factors in the presence of zero or small pivots
Pencil          object to contain a matrix pencil, A + sigma*B
SemiImplMtx     semi-implicit factorization matrix object
SubMtx          object for dense or sparse submatrices
SubMtxList      object to hold lists of SubMtx objects
SubMtxManager   object to manage instances of SubMtx objects
SymbFac         algorithm object to compute a symbolic factorization

For complex matrices, there are values for both the real and imaginary parts of each entry. Below is an example of an input file for a real A:

4 4 7
0 0 1
0 3 4
1 1 3
1 3 2
2 2 9
2 3 2
3 3 4

For the above example, A is a 4x4 real symmetric matrix with seven entries in the upper triangle. It should be noted that the labeling of the entries begins with zero, not one.
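Written out in full, with the seven upper-triangular values mirrored across the diagonal, the matrix described by this input file is:

\[
A =
\begin{bmatrix}
1 & 0 & 0 & 4\\
0 & 3 & 0 & 2\\
0 & 0 & 9 & 2\\
4 & 2 & 2 & 4
\end{bmatrix}
\]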


For the right hand side, b, the first line that is read gives the number of rows of the matrix and the number of columns of data in the matrix file. Next, each row id is read along with the corresponding value of the entry. If the matrix is real, one value is read; if the matrix is complex, two values are read, the real and the imaginary part. Below is an example of an input file for a real b:

4 1
6
2
11
14

For the above example, there are four rows in b, which has one column of entries. If b were complex with four rows, the first line would be:

4 2

6.1.4 Reorder

The next step in solving the system of equations is reordering the linear system. There are three options for the ordering: multiple minimum degree, generalized nested dissection, and multi-section [35]. When solving a system of equations, entries in the matrix that were once zero become non-zero due to the elimination process. Because of this "fill-in", the matrix becomes less sparse and takes longer to solve. By reordering the matrix, the factorization produces less fill and thus reduces solve time. SPOOLES first finds the permutation matrix P and then permutes Ax = b into [34]:

(P A P^T)(P x) = P b     (6.1)

6.1.5 Factor

The third step in solving the linear equations involves factoring the matrix A of Ax = b using LU factorization in the form:
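Equation 6.1 is simply Ax = b with the permutation inserted; since P^T P = I for a permutation matrix, the step can be written out as:

\[
Ax = b \;\Longrightarrow\; PAx = Pb \;\Longrightarrow\; PA(P^{T}P)x = Pb \;\Longrightarrow\; (PAP^{T})(Px) = Pb .
\]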


A = L D U     (6.2)

or

A = U^T D U     (6.3)

since A is symmetric. SPOOLES uses Crout's algorithm for performing the LU decomposition, where D is a diagonal matrix and U is an upper triangular matrix with 1's along the diagonal. Crout's algorithm is explained further in [38]. During the solve, the user has the choice of using pivoting to ensure numerical stability. Pivoting is a process of sorting rows and/or columns of a matrix so that the pivot is the largest entry in the column (for partial pivoting) or the largest entry in the matrix (for complete pivoting). For SPOOLES, the magnitudes of the entries in both L and U are bounded by a user-specified value; the recommended values are 100 or 1000 [35]. The pivoting method used in SPOOLES is explained in more detail in [39].

6.1.6 Solve

The last step is solving the linear equations using forward solving and backsolving. Substitute Eq. 6.3 into Eq. 6.1 and solve the following for y:

U^T y = b     (6.4)

Then backsolve Eq. 6.5 to get x:

U x = y     (6.5)
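Equations 6.4 and 6.5 fold the diagonal factor D into the triangular solves. Written with D kept explicit, the standard sequence after factoring A = U^T D U is:

\[
U^{T} z = b, \qquad D y = z, \qquad U x = y .
\]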


6.2 Code to Solve Equations

There are three environments in which SPOOLES can solve equations: in serial, using multiple threads, and in parallel. When the serial drivers are used, only one processor is used, solving the problem sequentially. This environment is ideal for relatively small problems where the computational time is within a user-defined acceptable period. Multi-threaded SPOOLES allows the application to handle more than one operation at a time by separating tasks into individual threads. A multi-threaded application may offer some benefits on a single-processor machine, such as an Intel Pentium 4 with HyperThreading, but a multiple-processor computer is much better suited to taking full advantage of multiple threads. The parallel routines of SPOOLES were originally designed to operate on MPP machines with a fast network interconnect, but with the increasing bandwidth and decreasing latency of current networking equipment, it is viable for SPOOLES to be run on multiple computers arranged in a cluster. By using multiple computers connected by a fast network and coding applications with a parallel interface such as MPI, certain problems may be solved in less time than with a single computer.

6.3 Serial Code

Initially, the driver that CalculiX uses to solve equations connects to the serial routines supplied with SPOOLES. The code used by CalculiX is seen in Appendix E.1. The steps used by CalculiX fall within the four steps that SPOOLES divides the work into.

6.3.1 Communicate

The first step, communicate, reads and stores the entries of the matrix A of Ax = b. This is carried out by the following code segment:

mtxA = InpMtx_new() ;
InpMtx_init(mtxA, INPMTX_BY_ROWS, type, nent, neqns) ;

for(row=0;row<nrow;row++){
   /* sketch of the remainder of this loop: each nonzero entry
      (row, col, value) of A is passed to SPOOLES with */
   InpMtx_inputRealEntry(mtxA, row, col, value) ;
}

The second part of communicate, reading and storing the entries of b, will be performed later.

6.3.2 Reorder

The second step of solving a linear system of equations with CalculiX is to find a low-fill ordering to reduce the storage and computational requirements of the sparse matrix factorization. This falls into the reorder step. The code that performs this operation is seen below:

graph = Graph_new() ;
adjIVL = InpMtx_fullAdjacency(mtxA) ;
nedges = IVL_tsize(adjIVL) ;
Graph_init2(graph, 0, neqns, 0, nedges, neqns, nedges, adjIVL,
            NULL, NULL) ;
maxdomainsize=800; maxzeros=1000; maxsize=64;
frontETree=orderViaBestOfNDandMS(graph,maxdomainsize,maxzeros,
            maxsize,seed,msglvl,msgFile);

The above code initializes the Graph object, which is used to represent the graph of a matrix. This step also uses the IVL object to represent adjacency lists, one edge list for each vertex in the graph [37]; a small example is given below. The first line, Graph_new(), allocates storage for the Graph structure. The second line creates and returns an object that holds the full adjacency structure of A + A^T. This structure is used because the LU factorization of SPOOLES works with symmetric matrices. The ordering that serial CalculiX uses is determined by finding the better of two methods, nested dissection and multi-section. Much of the work used to find either a nested dissection or a multi-section ordering is identical, so little additional time is needed [40]. orderViaBestOfNDandMS() splits a subgraph if it has more vertices than the maxdomainsize argument and also transforms the front tree using the maxzeros and maxsize parameters [40]. These will be explained later.
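For instance, for the 4x4 matrix used as the input-file example in Section 6.1.3, the nonzero pattern of A + A^T, which is essentially what the adjacency lists encode, is, vertex by vertex:

vertex 0: { 0, 3 }
vertex 1: { 1, 3 }
vertex 2: { 2, 3 }
vertex 3: { 0, 1, 2, 3 }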


6.3.3 Factor

The next four steps of the CalculiX code make up the factor step and involve computing the numeric factorization and a post-factorization process. The first three steps are those outlined in [41]. The steps are illustrated in Figure 6.1.

Figure 6.1. Three steps to numeric factorization

The first step involves determining the permutation, permuting the matrix, and getting the symbolic factorization. The code that performs these operations is seen below:

oldToNewIV = ETree_oldToNewVtxPerm(frontETree) ;
oldToNew = IV_entries(oldToNewIV) ;
newToOldIV = ETree_newToOldVtxPerm(frontETree) ;
newToOld = IV_entries(newToOldIV) ;
ETree_permuteVertices(frontETree, oldToNewIV) ;
InpMtx_permute(mtxA, oldToNew, oldToNew) ;
InpMtx_mapToUpperTriangle(mtxA) ;
InpMtx_changeCoordType(mtxA,INPMTX_BY_CHEVRONS);
InpMtx_changeStorageMode(mtxA,INPMTX_BY_VECTORS);
symbfacIVL = SymbFac_initFromInpMtx(frontETree, mtxA) ;

The above code models the elimination tree for the sparse factorization. The permutation vectors are extracted from the ETree object so that we get orderings for the forward solve and backsolve.

Next, the front matrix object is initialized. This is performed by the following code:

frontmtx = FrontMtx_new() ;
mtxmanager = SubMtxManager_new() ;
SubMtxManager_init(mtxmanager, NO_LOCK, 0) ;
FrontMtx_init(frontmtx, frontETree, symbfacIVL, type, symmetryflag,
              FRONTMTX_DENSE_FRONTS, pivotingflag, NO_LOCK, 0, NULL,
              mtxmanager, msglvl, msgFile) ;

In a sequential frontal solver for finite-element codes, a "front" sweeps through the mesh, one element at a time, assembling the element stiffness matrices into a so-called "frontal" matrix [42]. FrontMtx_new() allocates storage for the FrontMtx structure, SubMtxManager_new() allocates storage for the SubMtxManager structure, and the front matrix is initialized by FrontMtx_init(). SubMtxManager handles multiple instances of the SubMtx object. The SubMtx object holds and operates on double precision or complex submatrices of a sparse matrix. FrontMtx_init() initializes the object, allocating and initializing the internal objects as necessary [37]. The numeric factorization is then calculated with the following code:

chvmanager = ChvManager_new() ;
ChvManager_init(chvmanager, NO_LOCK, 1) ;
DVfill(10, cpus, 0.0) ;
IVfill(20, stats, 0) ;
rootchv = FrontMtx_factorInpMtx(frontmtx, mtxA, tau, 0.0, chvmanager,
              &error, cpus, stats, msglvl, msgFile) ;
ChvManager_free(chvmanager) ;

The Chv object is the block chevron object that is used to store and operate on a front during a sparse factorization. The word "chevron" was chosen by the authors to describe the front because, if you rotate Figure 6.2 forty-five degrees clockwise, the entries resemble the chevron insignia of enlisted personnel in the armed forces; "block" emphasizes that the diagonal may have multiple entries [37]. ChvManager_new() allocates memory for the ChvManager structure, while ChvManager_init() initializes the object. FrontMtx_factorInpMtx() performs a serial factorization of mtxA with the following parameters:

Figure 6.2. Arrowhead matrix

frontmtx     storage space for the front matrix created by FrontMtx_new()
mtxA         the matrix being factored
tau          used when the matrix is non-symmetric and pivoting is enabled; entries in L and U are not allowed a magnitude greater than this value
0.0          the drop tolerance; when the fronts are stored in sparse format, only entries whose magnitude is greater than this value are stored
chvmanager   storage space for the structure that operates on the front matrix during factorization
error        returns an error code
cpus         timing information for the factorization
stats        statistics on the factorization
msglvl       how much information about the factorization and its results is to be saved to a file
msgFile      the file to which the output specified by msglvl is written

The last part of the factorization process involves permuting the row and column adjacency objects, permuting the lower and upper matrices, and updating the block adjacency objects [37]. This is carried out by the following code:

FrontMtx_postProcess(frontmtx, msglvl, msgFile) ;

6.3.4 Communicate B

Next, the second part of communicate, reading the right hand side b of Ax = b, is performed. This is accomplished by the following code:

mtxB = DenseMtx_new() ;
DenseMtx_init(mtxB, type, 0, 0, neqns, nrhs, 1, neqns) ;
DenseMtx_zero(mtxB) ;
for ( jrow = 0 ; jrow < nrow ; jrow++ ) {
   for ( jrhs = 0 ; jrhs < nrhs ; jrhs++ ) {
      DenseMtx_setRealEntry(mtxB, jrow, jrhs, b[jrow]) ;
   }
}

First, DenseMtx_new() is called, which allocates storage for the DenseMtx object. The DenseMtx object contains a dense matrix, not stored in sparse format, along with row and column indices. Next, DenseMtx_init() initializes the DenseMtx object and calculates the bytes required for the workspace. The following parameters are specified during DenseMtx_init() [37]:

mtxB    space created for the dense matrix B
type    specifies whether the matrix is real or complex
3rd and 4th arguments    the row and column ids of the matrix

neqns    the number of equations, which also equals the number of rows, nrow, in the matrix
nrhs     the number of columns in the matrix
7th and 8th arguments    the dense matrix is stored in column-major format, so the row stride is 1 and the column stride is the number of equations (rows)

Lastly, the entry in row jrow and column jrhs is given the value b[jrow] by the call to DenseMtx_setRealEntry(). The next step involves permuting the right hand side into the new ordering. When a low-fill ordering is found for the sparse matrix A, the new ordering is also applied to the dense matrix b. This is performed by the following code:

DenseMtx_permuteRows(mtxB, oldToNewIV) ;

6.3.5 Solve

The last step, solve, is to solve the linear system for x. This is performed by the following code:

mtxX = DenseMtx_new() ;
DenseMtx_init(mtxX, type, 0, 0, neqns, nrhs, 1, neqns) ;
DenseMtx_zero(mtxX) ;
FrontMtx_solve(frontmtx,mtxX,mtxB,mtxmanager,cpus,msglvl,msgFile) ;

The first line allocates storage for the DenseMtx structure and assigns the storage space to mtxX. DenseMtx_init() initializes the DenseMtx object and calculates the number of bytes required for the workspace. Next, DenseMtx_zero() zeros the entries in the matrix mtxX. Finally, the system of equations is solved by FrontMtx_solve(). The parameters are as follows:

frontmtx    storage space for the front matrix
mtxX        the entries of x in Ax = b are written to mtxX

mtxB        the entries of B are read from mtxB
mtxmanager  manages the working storage used during the solve
cpus        returns timing information on the solve process
msglvl      specifies how much information about the solve process is written to output
msgFile     the output specified by msglvl is written to this file

The last step of the solve process is permuting the rows of the dense matrix mtxX from the new ordering back to the old ordering, using the permutation vector newToOldIV and DenseMtx_permuteRows(). This puts the solution into the original order of the matrices before the reordering steps occurred. Without permuting the solution back into the original ordering, the stresses and displacements would be incorrect for the finite-element solution.

DenseMtx_permuteRows(mtxX, newToOldIV) ;

Finally, the values of x are returned to CalculiX so that stress, strain, and other data can be calculated. This is performed by:

for ( jrow = 0 ; jrow < nrow ; jrow++ ) {
   b[jrow]=DenseMtx_entries(mtxX)[jrow];
}

DenseMtx_entries() simply returns the entries of x for each row. The values are then stored in b[ ] and returned to CalculiX.

6.4 Parallel Code

Solving a linear system of equations using the parallel capabilities of SPOOLES is very similar to the serial version. The major steps in solving the problem are the same. Ownership, distribution, and redistribution of fronts are introduced and are handled by a message passing interface that allows the data to be local to each processor. This message passing interface is MPI [1]. The optimized code is seen in Appendix E.2.2. The first step, before any calls to the MPI routines occur, is to initialize the MPI environment. This is handled by a call to MPI_Init():

MPI_Init(&argc, &argv);
MPI_Comm_rank(MPI_COMM_WORLD, &myid) ;
MPI_Comm_size(MPI_COMM_WORLD, &nproc) ;
MPI_Get_processor_name(processor_name,&namelen);
if(myid==0){
   printf("Solving the system of equations using SpoolesMPI\n\n");
}
fprintf(stdout,"Process %d of %d on %s\n",myid, nproc, processor_name);

MPI_Comm_rank() indicates the rank of the process that calls it, in the range 0..nproc-1, where nproc is the number of processors available. nproc is determined by MPI_Comm_size(). MPI_Get_processor_name() simply returns the name of the processor on which it is called. MPI_Init() is the only necessary function in the above block of code; the other three are used simply to print out which process is running on which node.

6.4.1 Communicate

Similar to the serial code, the next step is to read in the matrices:

sprintf(buffer, "matrix.%d.input", myid) ;
inputFile = fopen(buffer, "r") ;
fscanf(inputFile, "%d %d %d", &neqns, &ncol, &nent) ;
nrow = neqns;
MPI_Barrier(MPI_COMM_WORLD) ;
mtxA = InpMtx_new() ;

InpMtx_init(mtxA, INPMTX_BY_ROWS, type, nent, 0) ;
for ( ient = 0 ; ient < nent ; ient++ ) {
   fscanf(inputFile, "%d %d %le", &iroow, &jcol, &value) ;
   InpMtx_inputRealEntry(mtxA, iroow, jcol, value) ;
}
fclose(inputFile) ;

In the serial code, the data was read from memory. For the parallel code, however, all data is written to a file and is then read in and stored. The main reason the communicate step of the parallel version is handled in this manner is that it gives us a solver independent of CalculiX. This provides more flexibility and lets us solve any linear system of equations, possibly written by another program, that is in the format accepted by the parallel solver. For CalculiX operating in serial, the SPOOLES linear solver was compiled into the main executable, ccx_1.1. The parallel solver, p_solver, operates as a separate program and is called by ccx_1.1. The second part of communicate is to read in the right hand side, b, and create the DenseMtx object for b. This is performed by the following code:

sprintf(buffer, "rhs.%d.input", myid);
inputFile = fopen(buffer, "r") ;
fscanf(inputFile, "%d %d", &nrow, &nrhs) ;
mtxB = DenseMtx_new() ;
DenseMtx_init(mtxB, type, 0, 0, nrow, nrhs, 1, nrow) ;
DenseMtx_rowIndices(mtxB, &nrow, &rowind);
for ( iroow = 0 ; iroow < nrow ; iroow++ ) {
   fscanf(inputFile, "%d", rowind + iroow) ;
   for ( jcol = 0 ; jcol < nrhs ; jcol++ ) {

      fscanf(inputFile, "%le", &value) ;
      DenseMtx_setRealEntry(mtxB, iroow, jcol, value) ;
   }
}
fclose(inputFile) ;

Again, the only difference between the serial and parallel codes is that the data is read from a file for the parallel code.

6.4.2 Reorder

The next step for the parallel code, reorder, is to find a low-fill ordering, which is very similar to the serial code:

graph = Graph_new() ;
adjIVL = InpMtx_MPI_fullAdjacency(mtxA, stats, msglvl, msgFile,
                                  MPI_COMM_WORLD) ;
nedges = IVL_tsize(adjIVL) ;
Graph_init2(graph, 0, neqns, 0, nedges, neqns, nedges, adjIVL,
            NULL, NULL) ;

To construct the IVL object, InpMtx_MPI_fullAdjacency() is used. This contains the full adjacency structure of the graph of the matrix that is distributed among the processes [37]. Next, each processor computes its own ordering, based on the better of a generalized nested dissection and a multi-section ordering, and creates a front tree object. This is performed by the following code:

frontETree = orderViaBestOfNDandMS(graph, maxdomainsize, maxzeros,
             maxsize, seed, msglvl, msgFile) ;

Because each processor uses a different random number seed when computing the ordering, the orderings will be different. Only one ordering can be used for the factorization, so the master node determines which ordering is the best and then distributes that ordering to all of the processors. This is performed by the code below:

opcounts = DVinit(nproc, 0.0) ;
opcounts[myid] = ETree_nFactorOps(frontETree, type, symmetryflag) ;
MPI_Allgather((void *) &opcounts[myid], 1, MPI_DOUBLE,
              (void *) opcounts, 1, MPI_DOUBLE, MPI_COMM_WORLD) ;
minops = DVmin(nproc, opcounts, &root) ;
DVfree(opcounts) ;
frontETree = ETree_MPI_Bcast(frontETree, root, msglvl, msgFile,
                             MPI_COMM_WORLD) ;

ETree_nFactorOps() returns the number of factor operations required by the ETree object, and the call to MPI_Allgather() takes the result in opcounts[myid] and sends it out to all processes. Next, DVmin() finds the minimum entry of opcounts[] and puts its position in the variable root. DVfree() frees the storage taken by opcounts. Finally, ETree_MPI_Bcast() is the broadcast method for an ETree object: the root processor broadcasts the ETree object that requires the fewest operations to the other nodes, and each process returns a pointer to its ETree object [37]. The next step, getting the permutations, is very similar to the serial code except that the local A and b matrices are permuted instead of global ones. The following code performs this operation:

oldToNewIV = ETree_oldToNewVtxPerm(frontETree) ;
newToOldIV = ETree_newToOldVtxPerm(frontETree) ;
ETree_permuteVertices(frontETree, oldToNewIV) ;
InpMtx_permute(mtxA, IV_entries(oldToNewIV), IV_entries(oldToNewIV)) ;
InpMtx_mapToUpperTriangle(mtxA) ;
InpMtx_changeCoordType(mtxA, INPMTX_BY_CHEVRONS) ;
InpMtx_changeStorageMode(mtxA, INPMTX_BY_VECTORS) ;
DenseMtx_permuteRows(mtxB, oldToNewIV) ;

The next step is to generate the owners map from vertices to processors using the IV object, which allows the distribution of A and b. This is performed by the following code:

cutoff = 1./(2*nproc) ;
cumopsDV = DV_new() ;
DV_init(cumopsDV, nproc, NULL) ;
ownersIV = ETree_ddMap(frontETree, type, symmetryflag, cumopsDV, cutoff) ;
DV_free(cumopsDV) ;

vtxmapIV = IV_new() ;
IV_init(vtxmapIV, neqns, NULL) ;
IVgather(neqns, IV_entries(vtxmapIV), IV_entries(ownersIV),
         ETree_vtxToFront(frontETree)) ;

First, cutoff is defined. If the weight of a subtree is more than cutoff times the number of factor operations, then the vertex is placed in the multi-sector [37]. Next, a double precision vector is created by the call to DV_new(); DV_init() initializes the object and allocates a vector of size nproc. Then a map from the fronts to the processors is created by ETree_ddMap(). This domain decomposition map method involves mapping the domains to threads and then mapping the fronts in the Schur complement to threads, both using independent balance maps [37]. The next step is to redistribute the matrix and the right hand side:

firsttag = 0 ;
newA = InpMtx_MPI_split(mtxA, vtxmapIV, stats, msglvl, msgFile,
                        firsttag, MPI_COMM_WORLD) ;
firsttag++ ;
InpMtx_free(mtxA) ;
mtxA = newA ;
InpMtx_changeStorageMode(mtxA, INPMTX_BY_VECTORS) ;
newB = DenseMtx_MPI_splitByRows(mtxB, vtxmapIV, stats, msglvl, msgFile,
                                firsttag, MPI_COMM_WORLD) ;
DenseMtx_free(mtxB) ;
mtxB = newB ;
firsttag += nproc ;

First, InpMtx_MPI_split() splits and redistributes the InpMtx object based on the mapIV object that maps the InpMtx object's vectors to processes [37]. Next, the InpMtx data needed by each processor is placed in mtxA after its old storage is cleared by InpMtx_free(). Then a new DenseMtx object is created by DenseMtx_MPI_splitByRows() and assigned to mtxB after its original entries are cleared by DenseMtx_free().

6.4.3 Factor

The third step, factor, begins with the symbolic factorization, which can be computed in a distributed manner. At the end of the symbolic factorization each process will

own a portion of the IVL object, just enough for its factorization [37]. This is performed by the following code:

symbfacIVL = SymbFac_MPI_initFromInpMtx(frontETree, ownersIV, mtxA,
             stats, msglvl, msgFile, firsttag, MPI_COMM_WORLD) ;
firsttag += frontETree->nfront ;

The next step is to initialize the front matrix. This operation is very similar to the serial code, the only difference being that the FrontMtx object initializes only the part of the factor matrices owned by its respective processor:

mtxmanager = SubMtxManager_new() ;
SubMtxManager_init(mtxmanager, NO_LOCK, 0) ;
frontmtx = FrontMtx_new() ;
FrontMtx_init(frontmtx, frontETree, symbfacIVL, type, symmetryflag,
              FRONTMTX_DENSE_FRONTS, pivotingflag, NO_LOCK, myid,
              ownersIV, mtxmanager, msglvl, msgFile) ;

Notice that the ninth and tenth arguments are myid and ownersIV, not 0 and NULL as they are in the serial code. This specifies which part of the factor matrices is initialized on each processor. Next, the numeric factorization is calculated. This step is also very similar to the serial code:

chvmanager = ChvManager_new() ;
ChvManager_init(chvmanager, NO_LOCK, 0) ;
rootchv = FrontMtx_MPI_factorInpMtx(frontmtx, mtxA, tau, droptol,
          chvmanager, ownersIV, lookahead, &error, cpus, stats, msglvl,
          msgFile, firsttag, MPI_COMM_WORLD) ;
ChvManager_free(chvmanager) ;
firsttag += 3*frontETree->nfront + 2 ;

FrontMtx_MPI_factorInpMtx() computes the numeric factorization, with added code to send and receive the Chv messages identified by the parameter firsttag. After the numeric factorization is computed, it is post-processed and the factor matrices are split into submatrices. This is very similar to the serial code, with FrontMtx_postProcess() being replaced by FrontMtx_MPI_postProcess():

FrontMtx_MPI_postProcess(frontmtx, ownersIV, stats, msglvl, msgFile,
                         firsttag, MPI_COMM_WORLD) ;
firsttag += 5*nproc ;

Next, the SolveMap object is created. This operation specifies which threads own which submatrices by using a domain decomposition map. The domain decomposition map allows submatrix operations to be assigned to processors in the forward and backsolve steps [37]:

solvemap = SolveMap_new() ;
SolveMap_ddMap(solvemap, frontmtx->symmetryflag,
               FrontMtx_upperBlockIVL(frontmtx),
               FrontMtx_lowerBlockIVL(frontmtx), nproc, ownersIV,
               FrontMtx_frontTree(frontmtx), seed, msglvl, msgFile);

SolveMap_ddMap() maps the off-diagonal submatrices to processors in a domain decomposition fashion [37]. The next step is to redistribute the submatrices of the factors:

FrontMtx_MPI_split(frontmtx, solvemap, stats, msglvl, msgFile,
                   firsttag, MPI_COMM_WORLD) ;

FrontMtx_MPI_split() splits and redistributes the FrontMtx based on the solvemap object that maps submatrices to processes [37]. After this code is executed, the submatrices that a processor owns are local to it. The next step is for each processor to create a local DenseMtx object to hold the rows of B that it owns [34]. This is accomplished by the following code:

ownedColumnsIV = FrontMtx_ownedColumnsIV(frontmtx, myid, ownersIV,
                                         msglvl, msgFile) ;
nmycol = IV_size(ownedColumnsIV) ;
mtxX = DenseMtx_new() ;
if ( nmycol > 0 ) {
   DenseMtx_init(mtxX, type, 0, 0, nmycol, nrhs, 1, nmycol) ;
   DenseMtx_rowIndices(mtxX, &nrow, &rowind) ;
   IVcopy(nmycol, rowind, IV_entries(ownedColumnsIV)) ;
}

FrontMtx_ownedColumnsIV() constructs and returns an IV object that contains the ids of the columns that belong to the fronts owned by processor myid [37]. IV_size() simply

returns the size of the vector ownedColumnsIV, which corresponds to the part of b owned by processor myid. Next, the DenseMtx object is initialized by DenseMtx_init(), and the nmycol entries of IV_entries(ownedColumnsIV) are copied into rowind.

6.4.4 Solve

Next, the linear system is solved. This is performed in a manner very similar to the serial code:

solvemanager = SubMtxManager_new() ;
SubMtxManager_init(solvemanager, NO_LOCK, 0) ;
FrontMtx_MPI_solve(frontmtx, mtxX, mtxB, solvemanager, solvemap, cpus,
                   stats, msglvl, msgFile, firsttag, MPI_COMM_WORLD) ;
SubMtxManager_free(solvemanager) ;

SubMtxManager_new() initializes the SubMtx object manager. The SubMtx object manager is similar to the ChvManager object: the SubMtxManager handles a number of instances of SubMtx double precision matrix objects, while the ChvManager handles a number of instances of Chv objects. There are two kinds of action the SubMtx manager can take: creating new objects and freeing them, or recycling objects. SubMtxManager_new() allocates storage for the SubMtxManager, while SubMtxManager_free() frees that storage. For recycling, the manager keeps a list of free objects ordered by descending size; when an object is needed, the list is searched and an object of the required size is returned. FrontMtx_MPI_solve() performs the forward and backsolves. The last step is permuting the local matrix on each processor back into the original ordering and assembling the final solution onto processor 0. This is performed by the following code:

DenseMtx_permuteRows(mtxX, newToOldIV) ;
IV_fill(vtxmapIV, 0) ;
firsttag++ ;
mtxX = DenseMtx_MPI_splitByRows(mtxX, vtxmapIV, stats, msglvl,
                                msgFile, firsttag, MPI_COMM_WORLD) ;

if ( myid == 0 ) {
   printf("%d\n", nrow);
   sprintf(buffer, "/home/apollo/hda8/CalculiX/ccx_1.1/src/ \
p_solver/x.result");
   inputFile=fopen(buffer, "w");
   for ( jrow = 0 ; jrow < ncol ; jrow++ ) {
      fprintf(inputFile, "%1.5e\n", DenseMtx_entries(mtxX)[jrow]);
   }
   fclose(inputFile);
}
MPI_Finalize() ;

First, the rows are permuted back into the original ordering by using the new-to-old permutation vector in the call to DenseMtx_permuteRows(). Next, the solution, mtxX, is assembled by a call to DenseMtx_MPI_splitByRows(). Then the solution is written to a file by calling DenseMtx_entries() to return the entries. Finally, any program that uses MPI needs a call to MPI_Finalize() so that the MPI environment is terminated [14]. This does not terminate the task, but any call to an MPI function after MPI_Finalize() returns an error.
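The values written to x.result, one per line, are what the calling program needs in order to recover the solution vector. A minimal sketch of a reader for this file is shown below; the function name, path handling, and the caller-supplied length neq are assumptions for illustration, not the actual CalculiX code.

#include <stdio.h>

/* read one solution value per line, as written by p_solver */
int read_solution(const char *path, double *b, int neq)
{
   int i ;
   FILE *f = fopen(path, "r") ;
   if ( f == NULL ) return -1 ;
   for ( i = 0 ; i < neq ; i++ ) {
      if ( fscanf(f, "%le", &b[i]) != 1 ) { fclose(f) ; return -1 ; }
   }
   fclose(f) ;
   return 0 ;
}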


CHAPTER 7
MATRIX ORDERINGS

In this chapter three different matrix ordering algorithms, minimum degree, nested dissection, and multi-section, will be discussed. Their advantages, disadvantages, and a brief history of their development will also be reviewed. By optimizing the ordering of the sparse matrix, a substantial decrease in solve time can be achieved [40].

7.1 Ordering Optimization

The ordering, or graph partitioning, of the linear system of equations is crucial for the performance of the parallel code. During Gaussian elimination, an entry in A that was zero may become non-zero, termed fill-in. In general, fill-in is considered undesirable because it increases the amount of memory required to store the linear system and the processing power required to solve the problem [41]. Often, the size of the problem that can be solved is limited by the computer's main memory, which for many workstations is between one Gigabyte and the maximum addressable by most 32-bit workstations, four Gigabytes. By reordering the original matrix A, its LU factors often become more sparse, that is, have less fill-in. Consider the matrix A in Figure 7.1. Performing an LU decomposition on A results in the L and U shown in Figures 7.2 and 7.3. As seen in the figures, both L and U have additional non-zero terms in spots where A does not. SPOOLES offers four ordering schemes, the fourth being the better of the nested dissection and multi-section orderings:

minimum degree
generalized nested dissection
multi-section

better ordering of nested dissection and multi-section

Figure 7.1. Original A
Figure 7.2. Lower matrix
Figure 7.3. Upper matrix
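The effect the figures illustrate can be reproduced on a small generic example (this is not the matrix of Figure 7.1, just an illustration): if the first row and column of a symmetric matrix are dense, eliminating that variable first fills in the entire remaining block, whereas eliminating it last produces no fill at all.

\[
A=\begin{bmatrix}
\times & \times & \times & \times\\
\times & \times &        &       \\
\times &        & \times &       \\
\times &        &        & \times
\end{bmatrix}
\qquad\Longrightarrow\qquad
L+U=\begin{bmatrix}
\times & \times & \times & \times\\
\times & \times & +      & +     \\
\times & +      & \times & +     \\
\times & +      & +      & \times
\end{bmatrix}
\]

Here + marks the fill-in created when the dense variable is eliminated first; if that same variable is ordered last, L + U has exactly the nonzero pattern of A.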


7.2 Minimum Degree Ordering

The minimum degree ordering algorithm implemented in SPOOLES is described in more detail in Liu [43]. One of the most widely used sparse matrix ordering heuristics before graph-based ordering schemes appeared, the minimum degree algorithm is loosely similar to the algorithm proposed by Markowitz in 1957 and has proved very effective in reducing computation time and fill-in [44]. When ordering sparse matrices, minimum degree selects the vertex with the smallest degree and removes it from the graph by pivoting on it, in essence making the neighborhood of the vertex a clique, and then moves on to the next vertex with the smallest degree in the graph [43]. By choosing a vertex with minimum degree, the fill-in resulting from adding edges to the remaining vertices is also at a minimum compared to a vertex with a higher degree [45]. These steps of selection and pivoting continue until all vertices have been eliminated. The new elimination graph is formed by deleting the vertex of smallest degree and its incident edges from the graph and then adding new edges so that the adjacent nodes of the vertex are now pairwise adjacent [43]. These steps to forming the elimination graph are illustrated in Figure 7.4 [41]. The minimum degree algorithm, however, is not without its drawbacks. One area of trouble is that after a vertex has been eliminated and a new elimination graph is formed, the degree of every vertex that has not been eliminated must be updated. This successive creation of new elimination graphs and updating of the remaining vertex degrees is the most time-consuming part of the ordering process [43]. The goal of multiple external minimum degree, introduced by Liu, is to reduce the number of degree updates. Multiple elimination, as this modification is called, involves the elimination of multiple vertices with the same current minimum degree before a complete update is performed [43]. The vertices with the same current minimum degree form a set of independent nodes. The original minimum degree algorithm has basically three steps in a main loop, Figure 7.5:

Figure 7.4. Steps to elimination graph

From Figure 7.5, the first step of the loop is to select a vertex of minimum degree and eliminate it. The second step transforms the elimination graph once the vertex has been eliminated. The third step updates the degrees of the remaining vertices. What multiple elimination does to the original algorithm is introduce a loop over the selection-and-elimination step and the elimination graph transformation step; only after these two steps have executed does the degree update take place. This modification of the minimum degree algorithm is seen in Figure 7.6. By applying multiple elimination, the ordering produced differs from that of the original minimum degree algorithm.

Figure 7.5. Minimum degree algorithm

This difference is attributed to all vertices with the same current minimum degree being eliminated before the degrees are updated and a new minimum degree is calculated [43]. The other modification of the minimum degree algorithm used in SPOOLES is modification by external degree [43]. In the original minimum degree ordering scheme, choosing a vertex with minimum degree (its true degree) and eliminating it results in forming a clique of the smallest size. When multiple elimination is used, the size of the resulting clique is often different from the true degree of the vertex [43].

Figure 7.6. Modified minimum degree algorithm

By using external degree, the size of the clique formed by eliminating a vertex of a given degree is the same as its external degree. External degree is defined as the number of neighbors of a vertex that are not indistinguishable from that vertex [43].

The function call used for a multiple minimum degree ordering is the following:

orderViaMMD(graph, seed, msglvl, msgFile);

7.3 Nested Dissection

Nested dissection [46] is considered a top-down approach to matrix ordering. The algorithm uses global information about the structure of the matrix; that is, it examines the graph as a whole before ordering it. In comparison, minimum degree determines which vertex should be eliminated first, while nested dissection decides which vertices should be eliminated last. The nested dissection algorithm begins by selecting a separator, a set of vertices that partitions the graph into roughly equal parts. The vertices in the separator are labeled the highest and ordered last in the elimination sequence. The nested dissection algorithm then continues the loop of finding a separator and ordering until the entire graph has been ordered. By ordering the vertices in the separator last, the matrix is permuted into a bordered block-diagonal structure. The zero off-diagonal blocks suffer no fill-in during the factorization, and by using a small separator the size of these zero off-diagonal blocks is maximized [45]. Another attribute of a small separator is that typically the separator is a clique. The properties of a clique limit the elimination of its vertices to a sequential operation, not a parallel one, because within the clique the vertices do not form independent sets. Because they are dependent sets within a clique, the vertices cannot be split up between more than one processor. The function call to use nested dissection for an ordering is:

orderViaND(graph, maxdomainsize, seed, msglvl, msgFile);

Maxdomainsize specifies the largest number of vertices allowed in a subgraph before it is split.

7.4 Multi-section

The multi-section ordering algorithm uses a multi-sector, a subset of vertices that divides the graph into two or more subgraphs [47]. Ordering by multi-section can be compared to an incomplete nested dissection ordering. The graph is split into subgraphs, the subgraphs become graphs and are themselves split into more subgraphs, and this continues until the subgraphs reach some defined size. This user-defined subgraph size is critical for the performance of SPOOLES; the variable will be discussed in more detail in later sections. The use of multi-section is closely related to domain decomposition. The domain decomposition step of multi-section occurs in three steps:

Find an initial domain decomposition of the graph.
Find an initial separator formed of multi-section vertices.
Improve the bisector.

All vertices in the domains are ordered before the vertices in the multi-sector. Domains are independent of each other and can therefore be ordered independently. As in the nested dissection ordering algorithm, after the graph is subdivided, the minimum degree algorithm is used to order the subgraphs [47]. The results presented in [47] on the testing of multi-section versus nested dissection and multiple minimum degree show that multi-section is very competitive with the well-known partitioning tool METIS [48]. Several Harwell-Boeing [49] matrices were used, and the results show that multi-section performs as well as or better than nested dissection and multiple minimum degree for many of the cases [47]. The function call to use multi-section is:

orderViaMS(graph, maxdomainsize, seed, msglvl, msgFile);

CHAPTER 8
OPTIMIZING SPOOLES FOR A COW

This chapter will discuss the steps taken to optimize SPOOLES for our Cluster of Workstations. The parallel functions of SPOOLES were originally coded for Massively Parallel Processing machines that have a high-speed, low-latency, proprietary interconnect. By optimizing certain parameters, SPOOLES will be tailored to our network, which is non-proprietary 100 Mbps Ethernet.

8.1 Installation

To use the parallel functions of SPOOLES, the parallel MPI library spoolesMPI.a needs to be built. First, download SPOOLES from the website at [33]. Download the file spooles.2.2.tgz, place it in the folder SPOOLES.2.2, and unpack it:

redboots@apollo> cd /home/apollo/hda8
redboots@apollo> mkdir SPOOLES.2.2
redboots@apollo> mv spooles.2.2.tgz SPOOLES.2.2
redboots@apollo> cd SPOOLES.2.2
redboots@apollo> tar -xvzf spooles.2.2.tgz

This will unpack spooles.2.2.tgz in the folder SPOOLES.2.2. Next, edit the file Make.inc so that it points to the correct compiler, MPICH install location, MPICH libraries, and include directory:

CC = mpicc
MPI_INSTALL_DIR = /home/apollo/hda8/mpich-1.2.5.2
MPI_LIB_PATH = -L$(MPI_INSTALL_DIR)/lib
MPI_INCLUDE_DIR = -I$(MPI_INSTALL_DIR)/include

To create the parallel library, enter the directory /home/apollo/hda8/SPOOLES.2.2/MPI and type:

make lib

This will create the library spoolesMPI.a in /home/apollo/hda8/SPOOLES.2.2/MPI/src. When p_solver is compiled, the Makefile links against the parallel library spoolesMPI.a. p_solver is the parallel solver that utilizes the parallel routines of SPOOLES. The source code and the binary executable p_solver are located in /home/apollo/hda8/CalculiX/ccx_1.1/src/p_solver. The source code and Makefile used to compile p_solver are shown in Appendix E.2.1 and E.2.2, respectively. To compile p_solver, type:

redboots@apollo> make
mpicc -O -I ../../../../SPOOLES.2.2 -DARCH="Linux" -c -o p_solver.o p_solver.c
mpicc p_solver.o -o p_solver /home/apollo/hda8/SPOOLES.2.2/MPI/src/spoolesMPI.a /home/apollo/hda8/SPOOLES.2.2/spooles.a -lm -lpthread

p_solver is a separate program from ccx. For the tests performed during the optimization process, p_solver is run by invoking a call to mpirun. For a two-processor test, the following command is used:

redboots@apollo> mpirun -np 2 p_solver
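To run across specific cluster nodes rather than only the local machine, MPICH's mpirun can also be pointed at a machine file listing one host per line. The file name and host names below are illustrative assumptions, not the actual machine list of this cluster:

redboots@apollo> cat machines.txt
apollo
node1
node2
node3
redboots@apollo> mpirun -np 4 -machinefile machines.txt p_solver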


8.2 Optimization

The test case that will be used for the optimization is a cantilever beam, ten inches long with a 1x1 inch cross-section and a 200 lb load applied to the end, Figure 8.1. The material is AISI 1018 steel.

Figure 8.1. von Mises stress of cantilever

The resulting mesh for the cantilever beam has 136125 equations and 4924476 non-zero entries. First, a baseline test of the initial parallel solver was performed and compared to the serial solver results. The baseline parallel solver uses the recommended values for maxdomainsize, maxzeros, and maxsize (800, 1000, and 64, respectively) with the ordering orderViaBestOfNDandMS. For this first test, the serial solver required 48 seconds, while the parallel solver completed the run in 56 seconds on two nodes, 61 seconds on three nodes, and 63 seconds on four nodes. These results were a little surprising because of the longer parallel solve times. Network latency and bandwidth were assumed to be among the culprits behind the poor results.

8.2.1 Multi-Processing Environment (MPE)

MPICH comes with tools that help visualize which functions are called, and when, by a running program. The Multi-Processing Environment (MPE) allows this logging and is located in

/home/apollo/hda8/mpich-1.2.5.2/mpe

MPE provides performance analysis tools for MPI programs through a post-processing approach. Provided are the necessary libraries to link against and a log viewer, upshot. Upshot reads the log files and presents a graphical view of the communication between nodes, which functions are called, and the time frame in which the functions are called. These tools will help determine what communication occurs between nodes and will help with the optimization process. To enable logging, several options need to be added to the Makefile for p_solver. These are:

MPE_LIBDIR = /home/apollo/hda8/mpich-1.2.5.2/mpe/lib
LOG_LIBS = -L$(MPE_LIBDIR) -llmpe -lmpe
p_solver: p_solver.o
	${CC} p_solver.o -o $@ ${LIBS} ${LOG_LIBS}

When p_solver is now run, it will create a log file named p_solver.clog. Convert the .clog file to the format for upshot, .alog, and then run upshot. The example below is for a small test problem with 5400 equations and 173196 non-zero entries.

redboots@apollo> /home/apollo/hda8/mpich-1.2.5.2/mpe/ \
bin/clog2alog p_solver
redboots@apollo> /home/apollo/hda8/mpich-1.2.5.2/mpe/ \
viewers/upshot/bin/upshot p_solver.alog

First, p_solver was run using just two processors. The resulting log for the small test problem is shown in Figure 8.2. At the top of the screen the MPI routines are color-coded. The black arrows represent communication between nodes. On the right-hand side it can be seen that there is a lot of communication occurring between the two nodes. Each communication has inherent latency induced by the network. Viewing a zoomed portion of the log file shows that

Figure 8.2. p_solver MPI communication (2 processors)

during a very short period of time, around two milliseconds, there are numerous messages being passed between the nodes, Figure 8.3.

Figure 8.3. p_solver MPI communication, zoomed (2 processors)

A log was also generated for a four-processor test, Figure 8.4. As expected, the communication between four nodes was also significant, and greater than in the two-node test.

Figure 8.4. p_solver MPI communication (4 processors)

As a comparison, cpi was also compiled with the option of creating a log of the MPI communication. From Figure 8.5, interprocessor communication for cpi is at a minimum. These tests illustrate an important point: depending on the application, the network can have little effect or it can hugely affect the computational performance. As illustrated in Figure 2.3, cpi achieves almost perfect scalability for two nodes. This can be attributed to how cpi simply divides the problem into parts, sends the parts to each compute node, and the compute nodes return the result with no need for interprocessor communication during the solve process. For p_solver, on the other hand, there is constant communication between nodes: compute nodes each calculating a low-fill ordering, distribution and redistribution of submatrices, and mapping of fronts to processors. From Figure 8.4 it is easy to see that increasing the number of compute nodes also increases the need for a fast network with very low latency and high bandwidth. For p_solver, as the number of nodes increases, each node has to communicate with all the other nodes to share the matrix data.

Figure 8.5. MPI communication for cpi

Although more processing power is available with more nodes, decreased solve time is not guaranteed, the network being the limiting factor.

8.2.2 Reduce Ordering Time

The first change to the code for p_solver, p_solver.c, is to reduce the time spent finding a low-fill ordering. Finding a low-fill ordering for A is performed by all the compute nodes in the parallel solve and by just one node in the serial solve. In the parallel solver, initially all nodes computed a low-fill ordering using a different random seed, collectively decided which ordering was the best, and then used the best ordering for the factorization. Instead of having all nodes compute an ordering, only the master node will create an ordering and then broadcast it to the other nodes. The function calls for the ordering are very similar to those of the serial code: instead of using InpMtx_MPI_fullAdjacency(), the parallel code now calls InpMtx_fullAdjacency(mtxA). Also, the following code is removed:

opcounts = DVinit(nproc, 0.0) ;

opcounts[myid] = ETree_nFactorOps(frontETree, type, symmetryflag) ;
MPI_Allgather((void *) &opcounts[myid], 1, MPI_DOUBLE,
              (void *) opcounts, 1, MPI_DOUBLE, MPI_COMM_WORLD) ;
minops = DVmin(nproc, opcounts, &root) ;
DVfree(opcounts) ;

After the code was changed so that only one ordering is computed, two nodes required 51 seconds, three nodes 52 seconds, and four nodes 61 seconds. The reduced calculation time is illustrated by comparing Figures 8.2 and 8.6; in Figure 8.6 there is a reduction in time near the beginning of the solve process.

Figure 8.6. First optimization
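A sketch of the modified ordering step is given below. It simply assembles the calls named above: only the master process builds the graph and computes the ordering, and ETree_MPI_Bcast() then distributes that front tree from root 0 to every process. The empty ETree created on the other processes is an assumption about how the broadcast expects its argument; this is a sketch of the change, not the full p_solver.c listing.

if ( myid == 0 ) {
   /* master node computes the single low-fill ordering */
   graph  = Graph_new() ;
   adjIVL = InpMtx_fullAdjacency(mtxA) ;
   nedges = IVL_tsize(adjIVL) ;
   Graph_init2(graph, 0, neqns, 0, nedges, neqns, nedges, adjIVL,
               NULL, NULL) ;
   frontETree = orderViaBestOfNDandMS(graph, maxdomainsize, maxzeros,
                                      maxsize, seed, msglvl, msgFile) ;
} else {
   /* placeholder object on the other nodes (assumed) */
   frontETree = ETree_new() ;
}
/* every process receives the master's front tree */
root = 0 ;
frontETree = ETree_MPI_Bcast(frontETree, root, msglvl, msgFile,
                             MPI_COMM_WORLD) ;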


8.2.3 Optimizing the Front Tree

Next, the method of creating the front tree that defines the blocking of the factor matrices will be optimized. Optimizing the creation of the frontETree affects the data structures and the computations performed during the factor and solve steps. The four methods of creating the frontETree object are minimum degree, generalized nested dissection, multi-section, and the better ordering of nested dissection and multi-section. These are described in more detail in the Matrix Orderings chapter, Chapter 7. The next tests performed determined which ordering achieved the best results. A minimum degree ordering required 186 seconds for two nodes, versus around 51 seconds for the better of the nested dissection and multi-section orderings. Using a top-down approach for the ordering returned the best results. For equations arising from partial differential equations with several degrees of freedom at a node, and also for three-dimensional problems, multi-section and nested dissection are recommended [40]. The next part of the ordering process to be optimized involves the variables maxdomainsize, maxzeros, and maxsize. The maxdomainsize is used for the nested dissection and multi-section orderings; any subgraph that is larger than maxdomainsize is split [40]. As a matrix gets larger, so can the number of zero entries allowed in a front. The maxzeros and maxsize variables affect the efficiency of the factor and solve the most [40]: maxzeros specifies the number of zeros that can be in a front, while maxsize, similar to the block size of HPL discussed in Chapter 3, influences the granularity of the Level 3 BLAS computations in the factorization and solve.

8.2.4 Maxdomainsize

First, an optimum value for the maxdomainsize will be determined for our cluster. The original value used by the serial solver in CalculiX is 800. Values higher and lower were tested. For the first set of tests, a maxdomainsize of 700 completed the test in 47 seconds while a maxdomainsize of 900 took 49 seconds, for a two-processor test.
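These three knobs enter p_solver through the same ordering call shown earlier. For example, the combination that Table 8.4 later identifies as best for a four-processor run on this cluster would be set as follows (the surrounding variables are those already used in p_solver.c):

maxdomainsize = 900 ;
maxzeros      = 1000 ;
maxsize       = 80 ;
frontETree = orderViaBestOfNDandMS(graph, maxdomainsize, maxzeros,
                                   maxsize, seed, msglvl, msgFile) ;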


8.2.4 Maxdomainsize

First, an optimum value for maxdomainsize will be determined for our cluster. The original value used by the serial solver in CalculiX is 800, so values higher and lower than this will be tested. For the first set of tests, a maxdomainsize of 700 completed the test in 47 seconds while a maxdomainsize of 900 took 49 seconds for a two processor test. The next test used maxdomainsize values of 650 and 750; both values completed the test in 51 seconds for a two processor run. The results from these tests, Table 8.1, indicate that a maxdomainsize value of 700 achieved the best results for two processor tests and a maxdomainsize of 900 returned the best results for three and four processors.

Table 8.1. maxdomainsize effect on solve time (seconds)

maxdomainsize   2 processors   3 processors   4 processors
650             51             57             63
700             47             54             61
750             51             54             62
800             51             52             61
850             50             51             52
900             49             51             51

8.2.5 Maxzeros and Maxsize

The next variables tested were maxzeros and maxsize, which are used to transform the "front tree". The structure of the factor matrices, as well as the structure of the computations, is controlled by the front tree. Optimizing these parameters is essential to getting the best performance from the parallel libraries [40]. For the following tests, all three variables (maxdomainsize, maxzeros, and maxsize) were varied. As in the maxdomainsize tests, a maxdomainsize of 700 returned the best results for two processor tests, while a maxdomainsize of 900 performed much better for three and four processor tests. The results from these tests are shown in Tables 8.2 and 8.3; a sketch of one way to expose these three parameters for such a sweep is given below.
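One convenient way to run such a sweep without recompiling between entries is to read the three parameters from the environment. The helper below is hypothetical and is not part of the original p_solver.c; the environment variable names and fallback values (the CalculiX serial default of 800 for maxdomainsize, and mid-range values for the other two) are illustrative only.

#include <stdlib.h>

/* Hypothetical helper: read an integer tuning parameter from the environment, */
/* falling back to a default when the variable is not set.                     */
static int tuning_param(const char *name, int fallback)
{
   const char *value = getenv(name) ;
   return (value != NULL) ? atoi(value) : fallback ;
}

/* ... during solver setup, before the ordering is computed ... */
maxdomainsize = tuning_param("PSOLVER_MAXDOMAINSIZE", 800) ;
maxzeros      = tuning_param("PSOLVER_MAXZEROS", 1000) ;
maxsize       = tuning_param("PSOLVER_MAXSIZE", 64) ;

Each row of Tables 8.2 and 8.3 then corresponds to one setting of these variables; note that, depending on the MPI launcher, the environment may need to be forwarded to the remote nodes.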


Table 8.2. Processor solve time (seconds) – 700 maxdomainsize

maxdomainsize   maxzeros   maxsize   Two processors   Three processors   Four processors
700             100        32        48.1             58.5               64.7
700             100        48        46.6             57.3               64.6
700             100        64        46.3             56.7               63.0
700             100        80        45.9             56.3               61.5
700             100        96        45.8             57.8               61.0
700             1000       32        47.7             55.6               64.1
700             1000       48        46.4             54.2               61.3
700             1000       64        47.1             54.2               61.2
700             1000       80        45.6             53.7               59.9
700             1000       96        45.5             53.8               58.9
700             2000       32        47.7             55.4               62.6
700             2000       48        46.4             54.2               62.2
700             2000       64        46.2             54.4               61.5
700             2000       80        46.0             53.6               59.6
700             2000       96        45.6             54.1               58.7


Table 8.3. Processor solve time (seconds) – 900 maxdomainsize

maxdomainsize   maxzeros   maxsize   Two processors   Three processors   Four processors
900             100        32        52.7             54.7               54.7
900             100        48        50.6             53.0               52.4
900             100        64        50.0             53.0               52.1
900             100        80        49.8             52.6               52.6
900             100        96        51.4             53.8               52.1
900             1000       32        51.8             52.4               54.3
900             1000       48        50.1             50.9               52.1
900             1000       64        49.6             50.7               51.4
900             1000       80        51.7             51.2               51.0
900             1000       96        51.2             51.7               52.1
900             2000       32        52.1             54.2               56.3
900             2000       48        50.0             51.1               54.3
900             2000       64        49.8             50.5               53.4
900             2000       80        50.0             51.1               52.8
900             2000       96        51.0             52.2               54.4


8.2.6 Final Tests with the Optimized Solver

The final tests used the optimized solver. From the results in Tables 8.2 and 8.3, the values of maxdomainsize, maxzeros, and maxsize that returned the best results will be used for the final tests. A two processor test will use a maxdomainsize, maxzeros, and maxsize of 700, 1000, and 96, respectively; a three processor test will use 900, 1000, and 64; a four processor test will use 900, 1000, and 80. The optimized parameters and solve times are shown in Table 8.4.

Table 8.4. Results with optimized values

# of processors   maxdomainsize   maxzeros   maxsize   Solve time (seconds)
2                 700             1000       96        45.5
3                 900             1000       64        50.7
4                 900             1000       80        51.0
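If the solver is to pick these tuned values automatically, the selection in Table 8.4 can be keyed off the number of MPI processes. The fragment below is a hypothetical illustration rather than code from p_solver.c; nproc is assumed to come from MPI_Comm_size().

/* Hypothetical run-time selection of the tuned parameters in Table 8.4. */
int nproc ;
MPI_Comm_size(MPI_COMM_WORLD, &nproc) ;

if ( nproc <= 2 ) {
   maxdomainsize = 700 ;  maxzeros = 1000 ;  maxsize = 96 ;
} else if ( nproc == 3 ) {
   maxdomainsize = 900 ;  maxzeros = 1000 ;  maxsize = 64 ;
} else {                               /* four (or more) processors */
   maxdomainsize = 900 ;  maxzeros = 1000 ;  maxsize = 80 ;
}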


The next set of tests retests the cantilever beam with 5400 equations and 173,196 non-zero entries, with MPE logging enabled. This problem was tested in Section 8.2.1 and provided a visualization of the MPI communication. The purpose of retesting this small problem is to compare the solve processes before and after optimization. The first test used two processors and the results are shown in Figure 8.7. From Figure 8.7, the optimized solver requires less communication between nodes and also has around a ten percent shorter solve time for the small test problem. The next test used four processors. From Figure 8.8, the solve time for the optimized solver and also the communication between nodes decreased substantially. The decrease in the number of messages passed, represented by black arrows, can easily be seen in Figure 8.8. The optimization results in around a thirty-five percent decrease in solve time.

Figure 8.7. Final optimization results for two processors

Figure 8.8. Final optimization results for four processors

The final test used a matrix with 436,590 equations and 16,761,861 nonzero entries. The problem was solved with one, two, three, and four processors. The results are summarized in Table 8.5 and clearly show the benefit of using multiple nodes. When using just a single node, p_solver crashes and is unable to handle the large problem. An error is returned for a one processor test that says the following:


ALLOCATE failure : bytes 1083464, line 517, file DV.c

Table 8.5. Results for the large test

# of processors   Solve time (seconds)
1                 ERROR
2                 3056
3                 2489
4                 1001

For the large test problem, the RAM and virtual memory were both filled and the serial solver could not allocate more memory for the one processor test. By using multiple processors, the problem is split and the workload and memory requirements are divided among the nodes. From the results in Table 8.5, using four processors for large tests returns more than a three-fold improvement over using two processors. Even though there is much more communication when using four processors, the communication penalty is alleviated by the additional processing power, and the additional RAM reduces the need to use slow virtual memory.

8.3 Conclusions

Several conclusions can be drawn from the optimization process. First, a network tailored for the particular type of problem that will be solved is very important. Problems that split the work evenly, send it to the nodes, and just return the results, without any need for interprocessor communication, perform better than problems that require a lot of interprocessor communication. The test problem cpi did not require much communication between nodes and scaled very well compared to p_solver, which did require a lot of communication between nodes. The communication comparison between these two types of tests is illustrated in the logs created by MPE, Figures 8.2


and 8.5. A 100 Mbps Ethernet network is sufficient for a problem such as cpi, but for problems that require heavy communication, high-bandwidth, low-latency network hardware such as Myrinet [21] is essential for the performance to scale well and to achieve acceptable solve times.

The performance of a cluster is also very dependent on well-written software that is optimized for the available network and computer hardware. This is well illustrated by the optimization process for p_solver. For the small test problem, around a ten percent performance gain for a two processor system and around a thirty-five percent performance increase for four nodes were achieved by optimizing p_solver for our network and computer architecture. By tuning the granularity of the Level 3 BLAS computations in the factorization and solve process through the maxsize parameter, the performance of p_solver is greatly increased. Although a maxsize of 64 is the recommended value for most systems, primarily MPPs, changing the maxsize value to 96 for two processors and to 80 for four processors tailors p_solver to our system. Although settings and program coding may be optimized for one architecture, this does not guarantee that the program will be optimized and perform well on another system.

8.3.1 Recommendations

In order to achieve better performance from our cluster, more RAM and a faster network should be purchased. Unlike an iterative system-of-equations solver, a direct solver stores the entire problem in RAM. When the RAM is filled, slow virtual memory is then used. By increasing the RAM, more of the problem can be stored in RAM, decreasing the dependence on the hard disk. From Table 8.5, the increased available RAM from using four processors disproportionately outperforms using just half of the processors. With more RAM for each node, increased performance should be expected. The network on which our cluster communicates should also be upgraded. For problems that do not require a lot of communication, such as cpi, a 100 Mbps Ethernet is sufficient. Because of the heavy communication in solving systems of equations, a high


performance network is almost essential for acceptable performance. With the prices of network equipment always decreasing, the return on investment becomes more attractive. If a high performance network were installed, a diskless cluster could also be set up, Figure 8.9. A diskless cluster offers the benefits of easier system administration, a lower cost of ownership, decreased noise and heat production, and less power consumption [50]. Because all nodes of a diskless cluster share common storage, updates can be applied to a single filesystem, backups can also be made from a single filesystem, and, with the cost per megabyte lower for large drives, using large-capacity drives for shared storage becomes cheaper than using multiple small drives for each individual node. A disadvantage of a diskless cluster is that it requires more interprocessor communication. With a high performance network, however, interprocessor communication should become less of a limiting factor.

Figure 8.9. Diskless cluster


APPENDIX A
CPI SOURCE CODE

Below is the source code for the test example cpi used in Section 2.3.6.

#include "mpi.h"
#include <stdio.h>
#include <math.h>

double f(double);

double f(double a)
{
    return (4.0 / (1.0 + a*a));
}

int main(int argc, char *argv[])
{
    int done = 0, n, myid, numprocs, i;
    double PI25DT = 3.141592653589793238462643;
    double mypi, pi, h, sum, x;
    double startwtime = 0.0, endwtime;
    int namelen;
    char processor_name[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &numprocs);
    MPI_Comm_rank(MPI_COMM_WORLD, &myid);
    MPI_Get_processor_name(processor_name, &namelen);
    fprintf(stdout, "Process %d of %d on %s\n",
            myid, numprocs, processor_name);

    n = 0;
    while (!done) {
        if (myid == 0) {
            printf("Enter the number of intervals: (0 quits) ");
            scanf("%d", &n);
            startwtime = MPI_Wtime();
        }
        MPI_Bcast(&n, 1, MPI_INT, 0, MPI_COMM_WORLD);
        if (n == 0)
            done = 1;
        else {
            h   = 1.0 / (double) n;
            sum = 0.0;
            for (i = myid + 1; i <= n; i += numprocs) {
                x = h * ((double)i - 0.5);
                sum += f(x);
            }
            mypi = h * sum;
            MPI_Reduce(&mypi, &pi, 1, MPI_DOUBLE, MPI_SUM, 0,
                       MPI_COMM_WORLD);
            if (myid == 0) {
                printf("pi is approximately %.16f, Error is %.16f\n",
                       pi, fabs(pi - PI25DT));
                endwtime = MPI_Wtime();
                printf("wall clock time = %f\n", endwtime - startwtime);
                fflush(stdout);
            }
        }
    }
    MPI_Finalize();
    return 0;
}
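For clarity, cpi approximates pi with the midpoint rule applied to the integral encoded by f():

    pi = \int_0^1 4 / (1 + x^2) dx  ~  h * \sum_{i=1}^{n} f((i - 1/2) h),   with h = 1/n.

Each process accumulates only every numprocs-th midpoint (i = myid + 1, myid + 1 + numprocs, ...), and MPI_Reduce() then adds the partial sums mypi into pi on process 0; this follows directly from the loop bounds in the listing above.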


APPENDIX B BENCHMARKING RESUL TS The belo w sections will list the results from the benchmarking tests performed in Chapter 3 B.1 NetPIPE Results The rst set of NetPIPE results are from testing the netw ork with MPI o v er head. The rst column lists the test run, second column is the message size, third column lists ho w man y messages were sent, the fourth lists the throughput, and the last column is the round-trip time of the messages di vided by tw o. redboots@apollo> mpirun -np 2 NPmpi -o np.out.mpi 0: apollo 1: hydra4 Now starting the main loop 0: 1 bytes 1628 times--> 0.13 Mbps in 60.98usec 1: 2 bytes 1639 times--> 0.25 Mbps in 60.87usec 2: 3 bytes 1642 times--> 0.37 Mbps in 61.07usec 3: 4 bytes 1091 times--> 0.50 Mbps in 61.46usec 4: 6 bytes 1220 times--> 0.74 Mbps in 61.48usec 5: 8 bytes 813 times--> 0.99 Mbps in 61.86usec 6: 12 bytes 1010 times--> 1.46 Mbps in 62.53usec 7: 13 bytes 666 times--> 1.58 Mbps in 62.66usec 8: 16 bytes 736 times--> 1.93 Mbps in 63.15usec 9: 19 bytes 890 times--> 2.27 Mbps in 63.73usec 10: 21 bytes 991 times--> 2.50 Mbps in 64.09usec 11: 24 bytes 1040 times--> 2.84 Mbps in 64.52usec 12: 27 bytes 1097 times--> 3.16 Mbps in 65.11usec 13: 29 bytes 682 times--> 3.38 Mbps in 65.50usec 14: 32 bytes 737 times--> 3.70 Mbps in 65.94usec 15: 35 bytes 805 times--> 4.01 Mbps in 66.56usec 16: 45 bytes 858 times--> 5.02 Mbps in 68.43usec 17: 48 bytes 974 times--> 5.33 Mbps in 68.72usec 18: 51 bytes 1000 times--> 5.60 Mbps in 69.43usec 19: 61 bytes 564 times--> 6.52 Mbps in 71.40usec 20: 64 bytes 688 times--> 6.76 Mbps in 72.28usec 21: 67 bytes 713 times--> 7.03 Mbps in 72.72usec 22: 93 bytes 738 times--> 9.16 Mbps in 77.46usec 23: 96 bytes 860 times--> 9.42 Mbps in 77.73usec 24: 99 bytes 871 times--> 9.62 Mbps in 78.48usec 25: 125 bytes 463 times--> 11.52 Mbps in 82.80usec 26: 128 bytes 599 times--> 11.69 Mbps in 83.51usec 27: 131 bytes 608 times--> 11.86 Mbps in 84.28usec 28: 189 bytes 615 times--> 15.27 Mbps in 94.46usec 29: 192 bytes 705 times--> 15.52 Mbps in 94.39usec 30: 195 bytes 711 times--> 15.58 Mbps in 95.50usec 31: 253 bytes 365 times--> 18.25 Mbps in 105.75usec 32: 256 bytes 470 times--> 18.38 Mbps in 106.28usec 33: 259 bytes 474 times--> 18.49 Mbps in 106.88usec 34: 381 bytes 476 times--> 22.61 Mbps in 128.59usec 35: 384 bytes 518 times--> 22.76 Mbps in 128.75usec 36: 387 bytes 519 times--> 22.73 Mbps in 129.90usec 37: 509 bytes 262 times--> 25.57 Mbps in 151.85usec 38: 512 bytes 328 times--> 25.71 Mbps in 151.94usec 39: 515 bytes 330 times--> 25.68 Mbps in 153.02usec 40: 765 bytes 329 times--> 29.51 Mbps in 197.76usec 41: 768 bytes 337 times--> 29.69 Mbps in 197.34usec 42: 771 bytes 338 times--> 29.58 Mbps in 198.83usec 43: 1021 bytes 169 times--> 32.00 Mbps in 243.39usec 44: 1024 bytes 205 times--> 32.19 Mbps in 242.69usec 45: 1027 bytes 206 times--> 32.09 Mbps in 244.15usec 46: 1533 bytes 205 times--> 36.05 Mbps in 324.42usec 47: 1536 bytes 205 times--> 36.33 Mbps in 322.52usec 48: 1539 bytes 206 times--> 36.17 Mbps in 324.62usec 49: 2045 bytes 103 times--> 42.39 Mbps in 368.03usec 50: 2048 bytes 135 times--> 42.81 Mbps in 365.00usec 51: 2051 bytes 137 times--> 42.46 Mbps in 368.53usec 52: 3069 bytes 136 times--> 51.31 Mbps in 456.36usec 53: 3072 bytes 146 times--> 51.91 Mbps in 451.47usec 54: 3075 bytes 147 times--> 51.42 Mbps in 456.29usec 55: 4093 bytes 73 times--> 57.76 Mbps in 540.60usec 56: 4096 bytes 92 times--> 58.49 Mbps in 534.29usec 57: 4099 bytes 93 times--> 57.81 Mbps in 540.98usec 58: 6141 bytes 92 times--> 65.86 Mbps in 711.39usec 59: 
6144 bytes 93 times--> 66.78 Mbps in 701.92usec 60: 6147 bytes 95 times--> 65.86 Mbps in 712.08usec 61: 8189 bytes 46 times--> 68.73 Mbps in 909.03usec 62: 8192 bytes 54 times--> 69.74 Mbps in 896.18usec 63: 8195 bytes 55 times--> 68.77 Mbps in 909.13usec 64: 12285 bytes 55 times--> 74.86 Mbps in 1251.99usec 65: 12288 bytes 53 times--> 76.00 Mbps in 1233.56usec 66: 12291 bytes 54 times--> 74.87 Mbps in 1252.53usec 67: 16381 bytes 26 times--> 77.13 Mbps in 1620.44usec 68: 16384 bytes 30 times--> 79.39 Mbps in 1574.45usec 69: 16387 bytes 31 times--> 78.86 Mbps in 1585.31usec 70: 24573 bytes 31 times--> 81.03 Mbps in 2313.56usec 71: 24576 bytes 28 times--> 81.61 Mbps in 2297.59usec 72: 24579 bytes 29 times--> 81.05 Mbps in 2313.69usec 73: 32765 bytes 14 times--> 83.01 Mbps in 3011.39usec 74: 32768 bytes 16 times--> 83.61 Mbps in 2990.00usec 75: 32771 bytes 16 times--> 83.00 Mbps in 3012.31usec 76: 49149 bytes 16 times--> 85.03 Mbps in 4410.13usec 77: 49152 bytes 15 times--> 85.60 Mbps in 4380.67usec 78: 49155 bytes 15 times--> 85.00 Mbps in 4411.87usec 79: 65533 bytes 7 times--> 86.08 Mbps in 5808.57usec 80: 65536 bytes 8 times--> 86.70 Mbps in 5767.06usec 81: 65539 bytes 8 times--> 86.02 Mbps in 5812.68usec 82: 98301 bytes 8 times--> 86.82 Mbps in 8638.06usec 83: 98304 bytes 7 times--> 87.40 Mbps in 8581.43usec 84: 98307 bytes 7 times--> 86.81 Mbps in 8639.50usec 85: 131069 bytes 3 times--> 86.30 Mbps in 11586.99usec 86: 131072 bytes 4 times--> 86.85 Mbps in 11513.74usec 87: 131075 bytes 4 times--> 86.39 Mbps in 11575.89usec 88: 196605 bytes 4 times--> 86.63 Mbps in 17314.26usec 89: 196608 bytes 3 times--> 86.88 Mbps in 17264.84usec 90: 196611 bytes 3 times--> 86.68 Mbps in 17305.98usec 91: 262141 bytes 3 times--> 86.30 Mbps in 23174.34usec 92: 262144 bytes 3 times--> 86.34 Mbps in 23164.51usec 93: 262147 bytes 3 times--> 86.38 Mbps in 23154.16usec 94: 393213 bytes 3 times--> 86.34 Mbps in 34744.65usec 95: 393216 bytes 3 times--> 86.34 Mbps in 34747.33usec 172


173 96: 393219 bytes 3 times--> 86.37 Mbps in 34735.02usec 97: 524285 bytes 3 times--> 86.61 Mbps in 46183.49usec 98: 524288 bytes 3 times--> 86.58 Mbps in 46199.16usec 99: 524291 bytes 3 times--> 86.61 Mbps in 46182.65usec 100: 786429 bytes 3 times--> 86.88 Mbps in 69064.00usec 101: 786432 bytes 3 times--> 86.84 Mbps in 69089.16usec 102: 786435 bytes 3 times--> 86.92 Mbps in 69027.50usec 103: 1048573 bytes 3 times--> 87.04 Mbps in 91907.51usec 104: 1048576 bytes 3 times--> 87.00 Mbps in 91949.85usec 105: 1048579 bytes 3 times--> 86.96 Mbps in 91994.35usec 106: 1572861 bytes 3 times--> 87.01 Mbps in 137916.65usec 107: 1572864 bytes 3 times--> 87.00 Mbps in 137927.82usec 108: 1572867 bytes 3 times--> 87.11 Mbps in 137763.49usec 109: 2097149 bytes 3 times--> 87.09 Mbps in 183727.48usec 110: 2097152 bytes 3 times--> 87.04 Mbps in 183826.49usec 111: 2097155 bytes 3 times--> 87.19 Mbps in 183506.19usec 112: 3145725 bytes 3 times--> 87.14 Mbps in 275412.17usec 113: 3145728 bytes 3 times--> 87.14 Mbps in 275419.01usec 114: 3145731 bytes 3 times--> 87.26 Mbps in 275046.49usec 115: 4194301 bytes 3 times--> 87.18 Mbps in 367065.66usec 116: 4194304 bytes 3 times--> 87.16 Mbps in 367126.34usec 117: 4194307 bytes 3 times--> 87.30 Mbps in 366560.66usec 118: 6291453 bytes 3 times--> 87.24 Mbps in 550221.68usec 119: 6291456 bytes 3 times--> 87.21 Mbps in 550399.18usec 120: 6291459 bytes 3 times--> 87.35 Mbps in 549535.67usec 121: 8388605 bytes 3 times--> 87.32 Mbps in 732942.65usec 122: 8388608 bytes 3 times--> 87.29 Mbps in 733149.68usec 123: 8388611 bytes 3 times--> 87.37 Mbps in 732529.83usec B.2 NetPIPE TCP Results The belo w results are for testing the netw ork with just TCP o v erhead; the MPI o v erhead has been remo v ed. The steps to performing this test is found in Section 3.2.3 Send and receive buffers are 16384 and 87380 bytes (A bug in Linux doubles the requested buffer sizes) Now starting the main loop 0: 1 bytes 2454 times--> 0.19 Mbps in 40.45usec 1: 2 bytes 2472 times--> 0.38 Mbps in 40.02usec 2: 3 bytes 2499 times--> 0.57 Mbps in 40.50usec 3: 4 bytes 1645 times--> 0.75 Mbps in 40.51usec 4: 6 bytes 1851 times--> 1.12 Mbps in 41.03usec 5: 8 bytes 1218 times--> 1.47 Mbps in 41.64usec 6: 12 bytes 1500 times--> 2.18 Mbps in 42.05usec 7: 13 bytes 990 times--> 2.33 Mbps in 42.54usec 8: 16 bytes 1084 times--> 2.86 Mbps in 42.64usec 9: 19 bytes 1319 times--> 3.34 Mbps in 43.36usec 10: 21 bytes 1456 times--> 3.67 Mbps in 43.67usec 11: 24 bytes 1526 times--> 4.15 Mbps in 44.12usec 12: 27 bytes 1605 times--> 4.61 Mbps in 44.65usec 13: 29 bytes 995 times--> 4.90 Mbps in 45.14usec 14: 32 bytes 1069 times--> 5.35 Mbps in 45.60usec 15: 35 bytes 1165 times--> 5.77 Mbps in 46.28usec 16: 45 bytes 1234 times--> 7.11 Mbps in 48.30usec 17: 48 bytes 1380 times--> 7.56 Mbps in 48.43usec 18: 51 bytes 1419 times--> 7.93 Mbps in 49.05usec 19: 61 bytes 799 times--> 9.11 Mbps in 51.07usec 20: 64 bytes 963 times--> 9.47 Mbps in 51.54usec 21: 67 bytes 1000 times--> 9.79 Mbps in 52.21usec 22: 93 bytes 1029 times--> 12.44 Mbps in 57.02usec 23: 96 bytes 1169 times--> 12.79 Mbps in 57.28usec 24: 99 bytes 1182 times--> 13.04 Mbps in 57.92usec 25: 125 bytes 627 times--> 15.22 Mbps in 62.64usec 26: 128 bytes 791 times--> 15.41 Mbps in 63.36usec 27: 131 bytes 801 times--> 15.60 Mbps in 64.06usec 28: 189 bytes 810 times--> 19.22 Mbps in 75.01usec 29: 192 bytes 888 times--> 19.38 Mbps in 75.58usec 30: 195 bytes 889 times--> 19.38 Mbps in 76.77usec 31: 253 bytes 454 times--> 22.33 Mbps in 86.46usec 32: 256 bytes 576 times--> 
22.52 Mbps in 86.73usec 33: 259 bytes 581 times--> 22.60 Mbps in 87.42usec 34: 381 bytes 582 times--> 26.75 Mbps in 108.65usec 35: 384 bytes 613 times--> 26.76 Mbps in 109.46usec 36: 387 bytes 611 times--> 26.77 Mbps in 110.31usec 37: 509 bytes 309 times--> 29.52 Mbps in 131.53usec 38: 512 bytes 379 times--> 29.58 Mbps in 132.05usec 39: 515 bytes 380 times--> 29.65 Mbps in 132.50usec 40: 765 bytes 381 times--> 33.19 Mbps in 175.83usec 41: 768 bytes 379 times--> 33.21 Mbps in 176.43usec 42: 771 bytes 378 times--> 33.27 Mbps in 176.82usec 43: 1021 bytes 190 times--> 35.24 Mbps in 221.04usec 44: 1024 bytes 225 times--> 35.25 Mbps in 221.62usec 45: 1027 bytes 226 times--> 35.14 Mbps in 222.98usec 46: 1533 bytes 225 times--> 38.33 Mbps in 305.15usec 47: 1536 bytes 218 times--> 38.42 Mbps in 304.99usec 48: 1539 bytes 218 times--> 38.45 Mbps in 305.38usec 49: 2045 bytes 109 times--> 45.56 Mbps in 342.44usec 50: 2048 bytes 145 times--> 45.59 Mbps in 342.70usec 51: 2051 bytes 146 times--> 45.57 Mbps in 343.41usec 52: 3069 bytes 145 times--> 54.56 Mbps in 429.13usec 53: 3072 bytes 155 times--> 54.70 Mbps in 428.45usec 54: 3075 bytes 155 times--> 54.69 Mbps in 428.94usec 55: 4093 bytes 77 times--> 60.32 Mbps in 517.71usec 56: 4096 bytes 96 times--> 60.36 Mbps in 517.74usec 57: 4099 bytes 96 times--> 60.34 Mbps in 518.28usec 58: 6141 bytes 96 times--> 68.00 Mbps in 689.04usec 59: 6144 bytes 96 times--> 68.04 Mbps in 688.98usec 60: 6147 bytes 96 times--> 68.04 Mbps in 689.27usec 61: 8189 bytes 48 times--> 71.93 Mbps in 868.57usec 62: 8192 bytes 57 times--> 71.94 Mbps in 868.83usec 63: 8195 bytes 57 times--> 71.95 Mbps in 868.97usec 64: 12285 bytes 57 times--> 77.49 Mbps in 1209.46usec 65: 12288 bytes 55 times--> 77.50 Mbps in 1209.66usec 66: 12291 bytes 55 times--> 77.49 Mbps in 1210.15usec 67: 16381 bytes 27 times--> 80.05 Mbps in 1561.30usec 68: 16384 bytes 32 times--> 80.09 Mbps in 1560.75usec 69: 16387 bytes 32 times--> 80.09 Mbps in 1561.09usec 70: 24573 bytes 32 times--> 82.95 Mbps in 2260.00usec 71: 24576 bytes 29 times--> 82.95 Mbps in 2260.29usec 72: 24579 bytes 29 times--> 82.93 Mbps in 2261.17usec 73: 32765 bytes 14 times--> 84.68 Mbps in 2951.85usec 74: 32768 bytes 16 times--> 84.69 Mbps in 2952.09usec 75: 32771 bytes 16 times--> 84.69 Mbps in 2952.38usec 76: 49149 bytes 16 times--> 86.13 Mbps in 4353.53usec 77: 49152 bytes 15 times--> 86.13 Mbps in 4354.13usec 78: 49155 bytes 15 times--> 86.14 Mbps in 4353.77usec 79: 65533 bytes 7 times--> 87.16 Mbps in 5736.14usec 80: 65536 bytes 8 times--> 87.17 Mbps in 5735.75usec 81: 65539 bytes 8 times--> 87.16 Mbps in 5736.81usec 82: 98301 bytes 8 times--> 87.92 Mbps in 8530.38usec 83: 98304 bytes 7 times--> 87.92 Mbps in 8530.44usec 84: 98307 bytes 7 times--> 87.92 Mbps in 8530.51usec 85: 131069 bytes 3 times--> 88.43 Mbps in 11308.34usec 86: 131072 bytes 4 times--> 88.44 Mbps in 11306.75usec 87: 131075 bytes 4 times--> 88.43 Mbps in 11308.88usec 88: 196605 bytes 4 times--> 88.83 Mbps in 16886.00usec 89: 196608 bytes 3 times--> 88.85 Mbps in 16882.83usec 90: 196611 bytes 3 times--> 88.85 Mbps in 16883.49usec 91: 262141 bytes 3 times--> 89.03 Mbps in 22463.82usec 92: 262144 bytes 3 times--> 89.03 Mbps in 22463.32usec 93: 262147 bytes 3 times--> 89.03 Mbps in 22464.17usec 94: 393213 bytes 3 times--> 89.29 Mbps in 33599.65usec 95: 393216 bytes 3 times--> 89.29 Mbps in 33600.17usec 96: 393219 bytes 3 times--> 89.29 Mbps in 33598.82usec 97: 524285 bytes 3 times--> 89.40 Mbps in 44740.16usec 98: 524288 bytes 3 times--> 89.40 Mbps in 44741.81usec 99: 
524291 bytes 3 times--> 89.41 Mbps in 44740.48usec 100: 786429 bytes 3 times--> 89.53 Mbps in 67012.98usec 101: 786432 bytes 3 times--> 89.54 Mbps in 67012.85usec 102: 786435 bytes 3 times--> 89.54 Mbps in 67012.31usec 103: 1048573 bytes 3 times--> 89.60 Mbps in 89289.52usec 104: 1048576 bytes 3 times--> 89.60 Mbps in 89290.17usec


174 105: 1048579 bytes 3 times--> 89.59 Mbps in 89290.98usec 106: 1572861 bytes 3 times--> 89.66 Mbps in 133836.16usec 107: 1572864 bytes 3 times--> 89.66 Mbps in 133835.48usec 108: 1572867 bytes 3 times--> 89.66 Mbps in 133835.67usec 109: 2097149 bytes 3 times--> 89.69 Mbps in 178382.67usec 110: 2097152 bytes 3 times--> 89.70 Mbps in 178379.85usec 111: 2097155 bytes 3 times--> 89.70 Mbps in 178381.51usec 112: 3145725 bytes 3 times--> 89.72 Mbps in 267488.52usec 113: 3145728 bytes 3 times--> 89.72 Mbps in 267485.17usec 114: 3145731 bytes 3 times--> 89.72 Mbps in 267486.83usec 115: 4194301 bytes 3 times--> 89.74 Mbps in 356587.84usec 116: 4194304 bytes 3 times--> 89.74 Mbps in 356588.32usec 117: 4194307 bytes 3 times--> 89.74 Mbps in 356588.50usec 118: 6291453 bytes 3 times--> 89.75 Mbps in 534800.34usec 119: 6291456 bytes 3 times--> 89.75 Mbps in 534797.50usec 120: 6291459 bytes 3 times--> 89.75 Mbps in 534798.65usec 121: 8388605 bytes 3 times--> 89.76 Mbps in 712997.33usec 122: 8388608 bytes 3 times--> 89.76 Mbps in 713000.15usec 123: 8388611 bytes 3 times--> 89.76 Mbps in 713000.17usec B.3 High Performance Linpack The follo wing section will list the important les for compiling and running HPL and also list some results. The steps to installing HPL are described in Section 3.3.1 B.3.1 HPL Mak eles Belo w is the Mak ele that w as used to compile HPL. The follo wing le is the rst input that is called when the user types mak e The name for this le is Mak ele and is located in /home/apollo/hda8/hpl/ ## -High Performance Computing Linpack Benchmark (HPL) # HPL 1.0a January 20, 2004 # Antoine P. Petitet # University of Tennessee, Knoxville # Innovative Computing Laboratories # (C) Copyright 2000-2004 All Rights Reserved ## -Copyright notice and Licensing terms: ## Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: ## 1. Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. ## 2. Redistributions in binary form must reproduce the above copyright # notice, this list of conditions, and the following disclaimer in the # documentation and/or other materials provided with the distribution. ## 3. All advertising materials mentioning features or use of this # software must display the following acknowledgement: # This product includes software developed at the University of # Tennessee, Knoxville, Innovative Computing Laboratories. ## 4. The name of the University, the name of the Laboratory, or the # names of its contributors may not be used to endorse or promote # products derived from this software without specific written # permission. ## -Disclaimer: ## THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR


175 # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE UNIVERSITY # OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # ################################################################### ### ##SHELL = /bin/sh #arch = Linux_P4 ### Targets ############################################################# #all : install ## ################################################################### ### #install : startup refresh build #startup : $(MAKE) -f Make.top startup_dir arch=$(arch) $(MAKE) -f Make.top startup_src arch=$(arch) $(MAKE) -f Make.top startup_tst arch=$(arch) $(MAKE) -f Make.top refresh_src arch=$(arch) $(MAKE) -f Make.top refresh_tst arch=$(arch) #refresh : $(MAKE) -f Make.top refresh_src arch=$(arch) $(MAKE) -f Make.top refresh_tst arch=$(arch) #build : $(MAKE) -f Make.top build_src arch=$(arch) $(MAKE) -f Make.top build_tst arch=$(arch) #clean : $(MAKE) -f Make.top clean_src arch=$(arch) $(MAKE) -f Make.top clean_tst arch=$(arch) #clean_arch : $(MAKE) -f Make.top clean_arch_src arch=$(arch) $(MAKE) -f Make.top clean_arch_tst arch=$(arch) #clean_arch_all : $(MAKE) -f Make.top clean_arch_all arch=$(arch) #clean_guard : $(MAKE) -f Make.top clean_guard_src arch=$(arch) $(MAKE) -f Make.top clean_guard_tst arch=$(arch) ## ################################################################### ### The ne xt le species what BLAS routines to use, the location of the MPI install directory and what compiler to use. The name for this le is Mak e .Linux P4 and is located in /home/apollo/hda8/hpl/ # -High Performance Computing Linpack Benchmark (HPL)


176 # HPL 1.0a January 20, 2004 # Antoine P. Petitet # University of Tennessee, Knoxville # Innovative Computing Laboratories # (C) Copyright 2000-2004 All Rights Reserved ## -Copyright notice and Licensing terms: ## Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: ## 1. Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. ## 2. Redistributions in binary form must reproduce the above copyright # notice, this list of conditions, and the following disclaimer in the # documentation and/or other materials provided with the distribution. ## 3. All advertising materials mentioning features or use of this # software must display the following acknowledgement: # This product includes software developed at the University of # Tennessee, Knoxville, Innovative Computing Laboratories. ## 4. The name of the University, the name of the Laboratory, or the # names of its contributors may not be used to endorse or promote # products derived from this software without specific written # permission. ## -Disclaimer: ## THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE UNIVERSITY # OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # ################################################################### ### ## --------------------------------------------------------------------# shell -------------------------------------------------------------# --------------------------------------------------------------------#SHELL = /bin/tcsh #CD = cd CP = cp LN_S = ln -s MKDIR = mkdir RM = /bin/rm -f TOUCH = touch ## --------------------------------------------------------------------# Platform identifier -----------------------------------------------# --------------------------------------------------------------------#ARCH = Linux_P4 ## --------------------------------------------------------------------# HPL Directory Structure / HPL library ------------------------------


177 # --------------------------------------------------------------------#HOME = /home/apollo/hda8 TOPdir = $(HOME)/hpl INCdir = $(TOPdir)/include BINdir = $(TOPdir)/bin/$(ARCH) LIBdir = $(TOPdir)/lib/$(ARCH) #HPLlib = $(LIBdir)/libhpl.a ## --------------------------------------------------------------------# Message Passing library (MPI) -------------------------------------# --------------------------------------------------------------------# MPinc tells the C compiler where to find the Message Passing library # header files, MPlib is defined to be the name of the library to be # used. The variable MPdir is only used for defining MPinc and MPlib. #MPdir = /home/apollo/hda8/mpich-1.2.5.2 MPinc = -I$(MPdir)/include MPlib = $(MPdir)/lib/libmpich.a ## --------------------------------------------------------------------# Linear Algebra library (BLAS or VSIPL) ----------------------------# --------------------------------------------------------------------# LAinc tells the C compiler where to find the Linear Algebra library # header files, LAlib is defined to be the name of the library to be # used. The variable LAdir is only used for defining LAinc and LAlib. ##Wed, 19 Jan 2005, 22:29:33 EST # Below the user has a choice of using either the ATLAS or Goto # BLAS routines. To use the ATLAS routines, uncomment the # following 2 lines and comment the 3rd and 4th. To use Goto's BLAS # routines, comment the first 2 lines and uncomment line 3rd and 4th. # BEGIN BLAS specification LAdir = /home/apollo/hda8/goto_blas LAlib = $(LAdir)/libgoto_p4_512-r0.96.so $(LAdir)/xerbla.o #LAdir = /home/apollo/hda8/Linux_P4SSE2/lib #LAlib = $(LAdir)/libcblas.a $(LAdir)/libatlas.a # END BLAS specification ## --------------------------------------------------------------------# F77 / C interface -------------------------------------------------# --------------------------------------------------------------------# You can skip this section if and only if you are not planning to use # a BLAS library featuring a Fortran 77 interface. Otherwise, it is # necessary to fill out the F2CDEFS variable with the appropriate # options. **One and only one** option should be chosen in **each** of # the 3 following categories: ## 1) name space (How C calls a Fortran 77 routine) ## -DAdd_ : all lower case and a suffixed underscore (Suns, # Intel, ...), [default] # -DNoChange : all lower case (IBM RS6000), # -DUpCase : all upper case (Cray), # -DAdd__ : the FORTRAN compiler in use is f2c. ## 2) C and Fortran 77 integer mapping ## -DF77_INTEGER=int : Fortran 77 INTEGER is a C int, [default] # -DF77_INTEGER=long : Fortran 77 INTEGER is a C long, # -DF77_INTEGER=short : Fortran 77 INTEGER is a C short. #


178 # 3) Fortran 77 string handling ## -DStringSunStyle : The string address is passed at the string loca# tion on the stack, and the string length is then # passed as an F77_INTEGER after all explicit # stack arguments, [default] # -DStringStructPtr : The address of a structure is passed by a # Fortran 77 string, and the structure is of the # form: struct {char *cp; F77_INTEGER len;}, # -DStringStructVal : A structure is passed by value for each Fortran # 77 string, and the structure is of the form: # struct {char *cp; F77_INTEGER len;}, # -DStringCrayStyle : Special option for Cray machines, which uses # Cray fcd (fortran character descriptor) for # interoperation. #F2CDEFS = ## --------------------------------------------------------------------# HPL includes / libraries / specifics ------------------------------# --------------------------------------------------------------------#HPL_INCLUDES = -I$(INCdir) -I$(INCdir)/$(ARCH) $(LAinc) $(MPinc) HPL_LIBS = $(HPLlib) $(LAlib) $(MPlib) ## Compile time options ----------------------------------------------## -DHPL_COPY_L force the copy of the panel L before bcast; # -DHPL_CALL_CBLAS call the cblas interface; # -DHPL_CALL_VSIPL call the vsip library; # -DHPL_DETAILED_TIMING enable detailed timers; ## By default HPL will: # *) not copy L before broadcast, # *) call the BLAS Fortran 77 interface, # *) not display detailed timing information. ##HPL_OPTS = -DHPL_CALL_CBLAS ## --------------------------------------------------------------------#HPL_DEFS = $(F2CDEFS) $(HPL_OPTS) $(HPL_INCLUDES) ## --------------------------------------------------------------------# Compilers / linkers Optimization flags --------------------------# --------------------------------------------------------------------#CC = /usr/bin/mpicc CCNOOPT = $(HPL_DEFS) CCFLAGS = $(HPL_DEFS) -fomit-frame-pointer -O -funroll-loops #$(HPL_DEFS)## On some platforms, it is necessary to use the Fortran linker to find # the Fortran internals used in the BLAS library. #LINKER = /usr/bin/mpif77 LINKFLAGS = $(CCFLAGS) -lm #ARCHIVER = ar ARFLAGS = r RANLIB = echo ## ---------------------------------------------------------------------


B.3.2 HPL.dat File

The section below lists an example HPL.dat file that was used for the benchmarking tests.

HPLinpack benchmark input file
Innovative Computing Laboratory, University of Tennessee
HPL.out          output file name (if any)
1                device out (6=stdout,7=stderr,file)
1                # of problems sizes (N)
1000             Ns
6                # of NBs
32 64 96 128 160 192   NBs
1                PMAP process mapping (0=Row-,1=Column-major)
1                # of process grids (P x Q)
1                Ps
1                Qs
-16.0            threshold
3                # of panel fact
0 1 2            PFACTs (0=left, 1=Crout, 2=Right)
4                # of recursive stopping criterium
1 2 4 8          NBMINs (>= 1)
3                # of panels in recursion
2 3 4            NDIVs
3                # of recursive panel fact.
0 1 2            RFACTs (0=left, 1=Crout, 2=Right)
1                # of broadcast
0                BCASTs (0=1rg,1=1rM,2=2rg,3=2rM,4=Lng,5=LnM)
1                # of lookahead depth
1                DEPTHs (>=0)
2                SWAP (0=bin-exch,1=long,2=mix)
64               swapping threshold
0                L1 in (0=transposed,1=no-transposed) form
0                U  in (0=transposed,1=no-transposed) form
1                Equilibration (0=no,1=yes)
8                memory alignment in double (> 0)

B.3.3 First Test Results with ATLAS

Below, the results from the first test using the ATLAS libraries are shown.
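As a point of reference when choosing the problem size N, the HPL tuning notes suggest sizing the coefficient matrix to occupy roughly 80 percent of the cluster's total memory; with 8-byte double precision entries this gives N of about sqrt(0.80 * total_memory_in_bytes / 8). For example, for a hypothetical two-node configuration with 512 MB per node (an assumed figure, used here only for illustration), N would be about sqrt(0.80 * 2 * 512 * 2^20 / 8), roughly 10,000, far larger than the N = 1000 used in the file above, which mainly exercises the block size (NB) settings.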


180===================================================== == == == == == == == == == == == = HPLinpack 1.0a -High-Performance Linpack benchmark -January 20, 2004 Written by A. Petitet and R. Clint Whaley, Innovative Computing Labs., UTK ===================================================== == == == == == == == == == == == = An explanation of the input/output parameters follows: T/V : Wall time / encoded variant. N : The order of the coefficient matrix A. NB : The partitioning blocking factor. P : The number of process rows. Q : The number of process columns. Time : Time in seconds to solve the linear system. Gflops : Rate of execution for solving the linear system. The following parameter values will be used: N : 1000 NB : 32 64 96 128 160 192 PMAP : Column-major process mapping P : 1 Q : 1 PFACT : Left Crout Right NBMIN : 1 2 4 8 NDIV : 2 3 4 RFACT : Left Crout Right BCAST : 1ring DEPTH : 1 SWAP : Mix (threshold = 64) L1 : transposed form U : transposed form EQUIL : yes ALIGN : 8 double precision words ===================================================== == == == == == == == == == == == = T/V N NB P Q Time Gflops ---------------------------------------------------------------WC10L2L1 1000 32 1 1 0.53 1.258e+00 WC10L3L1 1000 32 1 1 0.53 1.256e+00 WC10L4L1 1000 32 1 1 0.53 1.256e+00 WC10L2L2 1000 32 1 1 0.53 1.263e+00 WC10L3L2 1000 32 1 1 0.53 1.260e+00 WC10L4L2 1000 32 1 1 0.53 1.262e+00 WC10L2L4 1000 32 1 1 0.53 1.265e+00 WC10L3L4 1000 32 1 1 0.53 1.264e+00 WC10L4L4 1000 32 1 1 0.53 1.264e+00 WC10L2L8 1000 32 1 1 0.53 1.265e+00 WC10L3L8 1000 32 1 1 0.53 1.264e+00 WC10L4L8 1000 32 1 1 0.53 1.264e+00 WC10L2C1 1000 32 1 1 0.53 1.258e+00 WC10L3C1 1000 32 1 1 0.53 1.256e+00 WC10L4C1 1000 32 1 1 0.53 1.257e+00 WC10L2C2 1000 32 1 1 0.53 1.264e+00 WC10L3C2 1000 32 1 1 0.53 1.258e+00 WC10L4C2 1000 32 1 1 0.53 1.263e+00 WC10L2C4 1000 32 1 1 0.53 1.266e+00 WC10L3C4 1000 32 1 1 0.53 1.262e+00 WC10L4C4 1000 32 1 1 0.53 1.264e+00 WC10L2C8 1000 32 1 1 0.53 1.266e+00 WC10L3C8 1000 32 1 1 0.53 1.266e+00 WC10L4C8 1000 32 1 1 0.53 1.265e+00 WC10L2R1 1000 32 1 1 0.53 1.260e+00 WC10L3R1 1000 32 1 1 0.53 1.256e+00 WC10L4R1 1000 32 1 1 0.53 1.258e+00 WC10L2R2 1000 32 1 1 0.53 1.266e+00 WC10L3R2 1000 32 1 1 0.53 1.261e+00 WC10L4R2 1000 32 1 1 0.53 1.263e+00 WC10L2R4 1000 32 1 1 0.53 1.262e+00 WC10L3R4 1000 32 1 1 0.53 1.265e+00 WC10L4R4 1000 32 1 1 0.53 1.265e+00 WC10L2R8 1000 32 1 1 0.53 1.263e+00 WC10L3R8 1000 32 1 1 0.53 1.262e+00 WC10L4R8 1000 32 1 1 0.53 1.262e+00 WC10C2L1 1000 32 1 1 0.54 1.245e+00 WC10C3L1 1000 32 1 1 0.53 1.256e+00 WC10C4L1 1000 32 1 1 0.53 1.257e+00 WC10C2L2 1000 32 1 1 0.53 1.264e+00 WC10C3L2 1000 32 1 1 0.53 1.260e+00 WC10C4L2 1000 32 1 1 0.53 1.262e+00 WC10C2L4 1000 32 1 1 0.53 1.267e+00 WC10C3L4 1000 32 1 1 0.53 1.264e+00 WC10C4L4 1000 32 1 1 0.53 1.265e+00 WC10C2L8 1000 32 1 1 0.53 1.266e+00 WC10C3L8 1000 32 1 1 0.53 1.265e+00 WC10C4L8 1000 32 1 1 0.53 1.265e+00 WC10C2C1 1000 32 1 1 0.53 1.261e+00 WC10C3C1 1000 32 1 1 0.53 1.256e+00 WC10C4C1 1000 32 1 1 0.53 1.257e+00 WC10C2C2 1000 32 1 1 0.53 1.266e+00 WC10C3C2 1000 32 1 1 0.53 1.261e+00 WC10C4C2 1000 32 1 1 0.53 1.262e+00 WC10C2C4 1000 32 1 1 0.53 1.264e+00 WC10C3C4 1000 32 1 1 0.53 1.264e+00 WC10C4C4 1000 32 1 1 0.53 1.265e+00 WC10C2C8 1000 32 1 1 0.53 1.267e+00 WC10C3C8 1000 32 1 1 0.53 1.266e+00 WC10C4C8 1000 32 1 1 0.53 1.265e+00 WC10C2R1 1000 32 1 1 0.53 1.261e+00 WC10C3R1 1000 32 1 1 0.53 1.254e+00 WC10C4R1 1000 32 1 1 0.53 1.256e+00 WC10C2R2 1000 32 1 1 0.53 1.265e+00 WC10C3R2 1000 32 1 1 0.53 1.263e+00 WC10C4R2 1000 32 1 
1 0.53 1.263e+00 WC10C2R4 1000 32 1 1 0.53 1.267e+00 WC10C3R4 1000 32 1 1 0.53 1.265e+00 WC10C4R4 1000 32 1 1 0.53 1.265e+00 WC10C2R8 1000 32 1 1 0.53 1.264e+00 WC10C3R8 1000 32 1 1 0.53 1.262e+00 WC10C4R8 1000 32 1 1 0.53 1.259e+00 WC10R2L1 1000 32 1 1 0.53 1.260e+00 WC10R3L1 1000 32 1 1 0.53 1.260e+00 WC10R4L1 1000 32 1 1 0.53 1.254e+00 WC10R2L2 1000 32 1 1 0.53 1.266e+00 WC10R3L2 1000 32 1 1 0.53 1.263e+00 WC10R4L2 1000 32 1 1 0.53 1.258e+00 WC10R2L4 1000 32 1 1 0.53 1.267e+00 WC10R3L4 1000 32 1 1 0.53 1.267e+00 WC10R4L4 1000 32 1 1 0.53 1.263e+00 WC10R2L8 1000 32 1 1 0.53 1.262e+00 WC10R3L8 1000 32 1 1 0.53 1.266e+00 WC10R4L8 1000 32 1 1 0.53 1.269e+00


181WC10R2C1 1000 32 1 1 0.53 1.261e+00 WC10R3C1 1000 32 1 1 0.53 1.259e+00 WC10R4C1 1000 32 1 1 0.53 1.254e+00 WC10R2C2 1000 32 1 1 0.53 1.265e+00 WC10R3C2 1000 32 1 1 0.53 1.250e+00 WC10R4C2 1000 32 1 1 0.53 1.260e+00 WC10R2C4 1000 32 1 1 0.53 1.267e+00 WC10R3C4 1000 32 1 1 0.53 1.267e+00 WC10R4C4 1000 32 1 1 0.53 1.268e+00 WC10R2C8 1000 32 1 1 0.53 1.268e+00 WC10R3C8 1000 32 1 1 0.53 1.267e+00 WC10R4C8 1000 32 1 1 0.53 1.269e+00 WC10R2R1 1000 32 1 1 0.53 1.260e+00 WC10R3R1 1000 32 1 1 0.53 1.257e+00 WC10R4R1 1000 32 1 1 0.53 1.254e+00 WC10R2R2 1000 32 1 1 0.53 1.267e+00 WC10R3R2 1000 32 1 1 0.53 1.264e+00 WC10R4R2 1000 32 1 1 0.53 1.260e+00 WC10R2R4 1000 32 1 1 0.53 1.267e+00 WC10R3R4 1000 32 1 1 0.53 1.267e+00 WC10R4R4 1000 32 1 1 0.53 1.268e+00 WC10R2R8 1000 32 1 1 0.53 1.263e+00 WC10R3R8 1000 32 1 1 0.53 1.257e+00 WC10R4R8 1000 32 1 1 0.53 1.266e+00 WC10L2L1 1000 64 1 1 0.49 1.371e+00 WC10L3L1 1000 64 1 1 0.49 1.368e+00 WC10L4L1 1000 64 1 1 0.49 1.362e+00 WC10L2L2 1000 64 1 1 0.49 1.375e+00 WC10L3L2 1000 64 1 1 0.49 1.371e+00 WC10L4L2 1000 64 1 1 0.49 1.365e+00 WC10L2L4 1000 64 1 1 0.48 1.378e+00 WC10L3L4 1000 64 1 1 0.49 1.373e+00 WC10L4L4 1000 64 1 1 0.49 1.369e+00 WC10L2L8 1000 64 1 1 0.48 1.380e+00 WC10L3L8 1000 64 1 1 0.49 1.377e+00 WC10L4L8 1000 64 1 1 0.49 1.372e+00 WC10L2C1 1000 64 1 1 0.49 1.373e+00 WC10L3C1 1000 64 1 1 0.49 1.369e+00 WC10L4C1 1000 64 1 1 0.49 1.363e+00 WC10L2C2 1000 64 1 1 0.49 1.377e+00 WC10L3C2 1000 64 1 1 0.49 1.370e+00 WC10L4C2 1000 64 1 1 0.49 1.366e+00 WC10L2C4 1000 64 1 1 0.48 1.380e+00 WC10L3C4 1000 64 1 1 0.49 1.375e+00 WC10L4C4 1000 64 1 1 0.49 1.369e+00 WC10L2C8 1000 64 1 1 0.48 1.380e+00 WC10L3C8 1000 64 1 1 0.49 1.376e+00 WC10L4C8 1000 64 1 1 0.49 1.373e+00 WC10L2R1 1000 64 1 1 0.49 1.372e+00 WC10L3R1 1000 64 1 1 0.49 1.366e+00 WC10L4R1 1000 64 1 1 0.49 1.357e+00 WC10L2R2 1000 64 1 1 0.48 1.378e+00 WC10L3R2 1000 64 1 1 0.49 1.374e+00 WC10L4R2 1000 64 1 1 0.49 1.367e+00 WC10L2R4 1000 64 1 1 0.48 1.381e+00 WC10L3R4 1000 64 1 1 0.49 1.377e+00 WC10L4R4 1000 64 1 1 0.49 1.371e+00 WC10L2R8 1000 64 1 1 0.48 1.378e+00 WC10L3R8 1000 64 1 1 0.49 1.372e+00 WC10L4R8 1000 64 1 1 0.49 1.353e+00 WC10C2L1 1000 64 1 1 0.49 1.372e+00 WC10C3L1 1000 64 1 1 0.49 1.368e+00 WC10C4L1 1000 64 1 1 0.49 1.365e+00 WC10C2L2 1000 64 1 1 0.49 1.376e+00 WC10C3L2 1000 64 1 1 0.49 1.371e+00 WC10C4L2 1000 64 1 1 0.49 1.366e+00 WC10C2L4 1000 64 1 1 0.48 1.379e+00 WC10C3L4 1000 64 1 1 0.49 1.375e+00 WC10C4L4 1000 64 1 1 0.49 1.367e+00 WC10C2L8 1000 64 1 1 0.48 1.381e+00 WC10C3L8 1000 64 1 1 0.49 1.375e+00 WC10C4L8 1000 64 1 1 0.49 1.373e+00 WC10C2C1 1000 64 1 1 0.49 1.372e+00 WC10C3C1 1000 64 1 1 0.49 1.368e+00 WC10C4C1 1000 64 1 1 0.49 1.362e+00 WC10C2C2 1000 64 1 1 0.49 1.376e+00 WC10C3C2 1000 64 1 1 0.49 1.375e+00 WC10C4C2 1000 64 1 1 0.49 1.364e+00 WC10C2C4 1000 64 1 1 0.48 1.380e+00 WC10C3C4 1000 64 1 1 0.49 1.375e+00 WC10C4C4 1000 64 1 1 0.49 1.368e+00 WC10C2C8 1000 64 1 1 0.48 1.381e+00 WC10C3C8 1000 64 1 1 0.49 1.374e+00 WC10C4C8 1000 64 1 1 0.49 1.372e+00 WC10C2R1 1000 64 1 1 0.49 1.371e+00 WC10C3R1 1000 64 1 1 0.49 1.366e+00 WC10C4R1 1000 64 1 1 0.49 1.361e+00 WC10C2R2 1000 64 1 1 0.49 1.376e+00 WC10C3R2 1000 64 1 1 0.49 1.374e+00 WC10C4R2 1000 64 1 1 0.49 1.368e+00 WC10C2R4 1000 64 1 1 0.48 1.383e+00 WC10C3R4 1000 64 1 1 0.49 1.376e+00 WC10C4R4 1000 64 1 1 0.49 1.371e+00 WC10C2R8 1000 64 1 1 0.48 1.378e+00 WC10C3R8 1000 64 1 1 0.49 1.371e+00 WC10C4R8 1000 64 1 1 0.49 1.371e+00 WC10R2L1 1000 64 1 1 0.49 1.370e+00 WC10R3L1 1000 64 1 1 0.49 1.372e+00 WC10R4L1 1000 64 1 1 
0.49 1.370e+00 WC10R2L2 1000 64 1 1 0.49 1.377e+00 WC10R3L2 1000 64 1 1 0.48 1.378e+00 WC10R4L2 1000 64 1 1 0.49 1.377e+00 WC10R2L4 1000 64 1 1 0.48 1.380e+00 WC10R3L4 1000 64 1 1 0.48 1.379e+00 WC10R4L4 1000 64 1 1 0.48 1.383e+00 WC10R2L8 1000 64 1 1 0.49 1.372e+00 WC10R3L8 1000 64 1 1 0.48 1.380e+00 WC10R4L8 1000 64 1 1 0.48 1.383e+00 WC10R2C1 1000 64 1 1 0.49 1.373e+00 WC10R3C1 1000 64 1 1 0.49 1.372e+00 WC10R4C1 1000 64 1 1 0.49 1.371e+00 WC10R2C2 1000 64 1 1 0.49 1.377e+00 WC10R3C2 1000 64 1 1 0.48 1.379e+00 WC10R4C2 1000 64 1 1 0.49 1.377e+00 WC10R2C4 1000 64 1 1 0.48 1.380e+00 WC10R3C4 1000 64 1 1 0.49 1.367e+00 WC10R4C4 1000 64 1 1 0.48 1.381e+00 WC10R2C8 1000 64 1 1 0.48 1.380e+00 WC10R3C8 1000 64 1 1 0.48 1.379e+00 WC10R4C8 1000 64 1 1 0.48 1.383e+00


182WC10R2R1 1000 64 1 1 0.49 1.372e+00 WC10R3R1 1000 64 1 1 0.49 1.371e+00 WC10R4R1 1000 64 1 1 0.49 1.370e+00 WC10R2R2 1000 64 1 1 0.49 1.378e+00 WC10R3R2 1000 64 1 1 0.49 1.377e+00 WC10R4R2 1000 64 1 1 0.48 1.378e+00 WC10R2R4 1000 64 1 1 0.48 1.380e+00 WC10R3R4 1000 64 1 1 0.48 1.380e+00 WC10R4R4 1000 64 1 1 0.48 1.382e+00 WC10R2R8 1000 64 1 1 0.49 1.376e+00 WC10R3R8 1000 64 1 1 0.49 1.377e+00 WC10R4R8 1000 64 1 1 0.48 1.378e+00 WC10L2L1 1000 96 1 1 0.38 1.764e+00 WC10L3L1 1000 96 1 1 0.38 1.748e+00 WC10L4L1 1000 96 1 1 0.38 1.768e+00 WC10L2L2 1000 96 1 1 0.38 1.773e+00 WC10L3L2 1000 96 1 1 0.38 1.759e+00 WC10L4L2 1000 96 1 1 0.37 1.784e+00 WC10L2L4 1000 96 1 1 0.37 1.783e+00 WC10L3L4 1000 96 1 1 0.38 1.766e+00 WC10L4L4 1000 96 1 1 0.37 1.786e+00 WC10L2L8 1000 96 1 1 0.38 1.779e+00 WC10L3L8 1000 96 1 1 0.38 1.768e+00 WC10L4L8 1000 96 1 1 0.37 1.787e+00 WC10L2C1 1000 96 1 1 0.38 1.770e+00 WC10L3C1 1000 96 1 1 0.38 1.751e+00 WC10L4C1 1000 96 1 1 0.38 1.770e+00 WC10L2C2 1000 96 1 1 0.38 1.775e+00 WC10L3C2 1000 96 1 1 0.38 1.762e+00 WC10L4C2 1000 96 1 1 0.38 1.781e+00 WC10L2C4 1000 96 1 1 0.38 1.777e+00 WC10L3C4 1000 96 1 1 0.38 1.773e+00 WC10L4C4 1000 96 1 1 0.37 1.792e+00 WC10L2C8 1000 96 1 1 0.38 1.778e+00 WC10L3C8 1000 96 1 1 0.38 1.770e+00 WC10L4C8 1000 96 1 1 0.37 1.787e+00 WC10L2R1 1000 96 1 1 0.38 1.765e+00 WC10L3R1 1000 96 1 1 0.38 1.745e+00 WC10L4R1 1000 96 1 1 0.38 1.777e+00 WC10L2R2 1000 96 1 1 0.37 1.782e+00 WC10L3R2 1000 96 1 1 0.38 1.766e+00 WC10L4R2 1000 96 1 1 0.37 1.783e+00 WC10L2R4 1000 96 1 1 0.38 1.781e+00 WC10L3R4 1000 96 1 1 0.38 1.769e+00 WC10L4R4 1000 96 1 1 0.37 1.789e+00 WC10L2R8 1000 96 1 1 0.38 1.780e+00 WC10L3R8 1000 96 1 1 0.38 1.773e+00 WC10L4R8 1000 96 1 1 0.38 1.774e+00 WC10C2L1 1000 96 1 1 0.38 1.765e+00 WC10C3L1 1000 96 1 1 0.38 1.746e+00 WC10C4L1 1000 96 1 1 0.38 1.773e+00 WC10C2L2 1000 96 1 1 0.38 1.773e+00 WC10C3L2 1000 96 1 1 0.38 1.763e+00 WC10C4L2 1000 96 1 1 0.37 1.785e+00 WC10C2L4 1000 96 1 1 0.38 1.779e+00 WC10C3L4 1000 96 1 1 0.38 1.767e+00 WC10C4L4 1000 96 1 1 0.37 1.785e+00 WC10C2L8 1000 96 1 1 0.38 1.780e+00 WC10C3L8 1000 96 1 1 0.38 1.772e+00 WC10C4L8 1000 96 1 1 0.37 1.792e+00 WC10C2C1 1000 96 1 1 0.38 1.766e+00 WC10C3C1 1000 96 1 1 0.39 1.715e+00 WC10C4C1 1000 96 1 1 0.38 1.773e+00 WC10C2C2 1000 96 1 1 0.38 1.776e+00 WC10C3C2 1000 96 1 1 0.38 1.760e+00 WC10C4C2 1000 96 1 1 0.37 1.787e+00 WC10C2C4 1000 96 1 1 0.37 1.782e+00 WC10C3C4 1000 96 1 1 0.38 1.766e+00 WC10C4C4 1000 96 1 1 0.37 1.782e+00 WC10C2C8 1000 96 1 1 0.38 1.779e+00 WC10C3C8 1000 96 1 1 0.38 1.770e+00 WC10C4C8 1000 96 1 1 0.37 1.785e+00 WC10C2R1 1000 96 1 1 0.38 1.769e+00 WC10C3R1 1000 96 1 1 0.38 1.751e+00 WC10C4R1 1000 96 1 1 0.38 1.771e+00 WC10C2R2 1000 96 1 1 0.38 1.777e+00 WC10C3R2 1000 96 1 1 0.38 1.762e+00 WC10C4R2 1000 96 1 1 0.37 1.784e+00 WC10C2R4 1000 96 1 1 0.38 1.779e+00 WC10C3R4 1000 96 1 1 0.38 1.773e+00 WC10C4R4 1000 96 1 1 0.37 1.791e+00 WC10C2R8 1000 96 1 1 0.38 1.776e+00 WC10C3R8 1000 96 1 1 0.38 1.768e+00 WC10C4R8 1000 96 1 1 0.37 1.784e+00 WC10R2L1 1000 96 1 1 0.38 1.760e+00 WC10R3L1 1000 96 1 1 0.38 1.758e+00 WC10R4L1 1000 96 1 1 0.38 1.769e+00 WC10R2L2 1000 96 1 1 0.38 1.779e+00 WC10R3L2 1000 96 1 1 0.38 1.767e+00 WC10R4L2 1000 96 1 1 0.38 1.770e+00 WC10R2L4 1000 96 1 1 0.38 1.775e+00 WC10R3L4 1000 96 1 1 0.38 1.774e+00 WC10R4L4 1000 96 1 1 0.38 1.779e+00 WC10R2L8 1000 96 1 1 0.37 1.785e+00 WC10R3L8 1000 96 1 1 0.38 1.781e+00 WC10R4L8 1000 96 1 1 0.37 1.782e+00 WC10R2C1 1000 96 1 1 0.38 1.762e+00 WC10R3C1 1000 96 1 1 0.38 1.760e+00 WC10R4C1 1000 96 1 1 
0.38 1.762e+00 WC10R2C2 1000 96 1 1 0.38 1.775e+00 WC10R3C2 1000 96 1 1 0.38 1.775e+00 WC10R4C2 1000 96 1 1 0.38 1.774e+00 WC10R2C4 1000 96 1 1 0.38 1.777e+00 WC10R3C4 1000 96 1 1 0.38 1.775e+00 WC10R4C4 1000 96 1 1 0.38 1.780e+00 WC10R2C8 1000 96 1 1 0.38 1.778e+00 WC10R3C8 1000 96 1 1 0.38 1.776e+00 WC10R4C8 1000 96 1 1 0.37 1.784e+00 WC10R2R1 1000 96 1 1 0.38 1.771e+00 WC10R3R1 1000 96 1 1 0.38 1.760e+00 WC10R4R1 1000 96 1 1 0.38 1.764e+00 WC10R2R2 1000 96 1 1 0.38 1.779e+00 WC10R3R2 1000 96 1 1 0.38 1.774e+00 WC10R4R2 1000 96 1 1 0.38 1.773e+00 WC10R2R4 1000 96 1 1 0.38 1.782e+00 WC10R3R4 1000 96 1 1 0.37 1.784e+00 WC10R4R4 1000 96 1 1 0.38 1.770e+00 WC10R2R8 1000 96 1 1 0.38 1.775e+00 WC10R3R8 1000 96 1 1 0.38 1.773e+00 WC10R4R8 1000 96 1 1 0.38 1.773e+00


183WC10L2L1 1000 128 1 1 0.40 1.686e+00 WC10L3L1 1000 128 1 1 0.39 1.703e+00 WC10L4L1 1000 128 1 1 0.39 1.697e+00 WC10L2L2 1000 128 1 1 0.39 1.693e+00 WC10L3L2 1000 128 1 1 0.39 1.705e+00 WC10L4L2 1000 128 1 1 0.39 1.695e+00 WC10L2L4 1000 128 1 1 0.39 1.698e+00 WC10L3L4 1000 128 1 1 0.39 1.713e+00 WC10L4L4 1000 128 1 1 0.39 1.705e+00 WC10L2L8 1000 128 1 1 0.39 1.698e+00 WC10L3L8 1000 128 1 1 0.39 1.714e+00 WC10L4L8 1000 128 1 1 0.39 1.701e+00 WC10L2C1 1000 128 1 1 0.40 1.686e+00 WC10L3C1 1000 128 1 1 0.39 1.699e+00 WC10L4C1 1000 128 1 1 0.39 1.692e+00 WC10L2C2 1000 128 1 1 0.40 1.691e+00 WC10L3C2 1000 128 1 1 0.39 1.702e+00 WC10L4C2 1000 128 1 1 0.39 1.698e+00 WC10L2C4 1000 128 1 1 0.39 1.702e+00 WC10L3C4 1000 128 1 1 0.39 1.715e+00 WC10L4C4 1000 128 1 1 0.39 1.702e+00 WC10L2C8 1000 128 1 1 0.39 1.696e+00 WC10L3C8 1000 128 1 1 0.39 1.705e+00 WC10L4C8 1000 128 1 1 0.39 1.701e+00 WC10L2R1 1000 128 1 1 0.40 1.685e+00 WC10L3R1 1000 128 1 1 0.39 1.697e+00 WC10L4R1 1000 128 1 1 0.39 1.692e+00 WC10L2R2 1000 128 1 1 0.39 1.697e+00 WC10L3R2 1000 128 1 1 0.39 1.712e+00 WC10L4R2 1000 128 1 1 0.39 1.701e+00 WC10L2R4 1000 128 1 1 0.39 1.699e+00 WC10L3R4 1000 128 1 1 0.39 1.713e+00 WC10L4R4 1000 128 1 1 0.39 1.706e+00 WC10L2R8 1000 128 1 1 0.39 1.693e+00 WC10L3R8 1000 128 1 1 0.39 1.706e+00 WC10L4R8 1000 128 1 1 0.39 1.703e+00 WC10C2L1 1000 128 1 1 0.40 1.687e+00 WC10C3L1 1000 128 1 1 0.39 1.702e+00 WC10C4L1 1000 128 1 1 0.40 1.687e+00 WC10C2L2 1000 128 1 1 0.40 1.690e+00 WC10C3L2 1000 128 1 1 0.39 1.703e+00 WC10C4L2 1000 128 1 1 0.39 1.694e+00 WC10C2L4 1000 128 1 1 0.39 1.697e+00 WC10C3L4 1000 128 1 1 0.39 1.712e+00 WC10C4L4 1000 128 1 1 0.39 1.700e+00 WC10C2L8 1000 128 1 1 0.39 1.703e+00 WC10C3L8 1000 128 1 1 0.39 1.714e+00 WC10C4L8 1000 128 1 1 0.39 1.704e+00 WC10C2C1 1000 128 1 1 0.40 1.685e+00 WC10C3C1 1000 128 1 1 0.39 1.699e+00 WC10C4C1 1000 128 1 1 0.40 1.690e+00 WC10C2C2 1000 128 1 1 0.39 1.693e+00 WC10C3C2 1000 128 1 1 0.39 1.706e+00 WC10C4C2 1000 128 1 1 0.39 1.698e+00 WC10C2C4 1000 128 1 1 0.39 1.701e+00 WC10C3C4 1000 128 1 1 0.39 1.717e+00 WC10C4C4 1000 128 1 1 0.39 1.699e+00 WC10C2C8 1000 128 1 1 0.39 1.699e+00 WC10C3C8 1000 128 1 1 0.39 1.709e+00 WC10C4C8 1000 128 1 1 0.39 1.703e+00 WC10C2R1 1000 128 1 1 0.40 1.684e+00 WC10C3R1 1000 128 1 1 0.39 1.699e+00 WC10C4R1 1000 128 1 1 0.40 1.679e+00 WC10C2R2 1000 128 1 1 0.39 1.697e+00 WC10C3R2 1000 128 1 1 0.39 1.713e+00 WC10C4R2 1000 128 1 1 0.39 1.700e+00 WC10C2R4 1000 128 1 1 0.39 1.700e+00 WC10C3R4 1000 128 1 1 0.39 1.711e+00 WC10C4R4 1000 128 1 1 0.39 1.705e+00 WC10C2R8 1000 128 1 1 0.39 1.698e+00 WC10C3R8 1000 128 1 1 0.39 1.707e+00 WC10C4R8 1000 128 1 1 0.39 1.701e+00 WC10R2L1 1000 128 1 1 0.40 1.688e+00 WC10R3L1 1000 128 1 1 0.39 1.693e+00 WC10R4L1 1000 128 1 1 0.40 1.681e+00 WC10R2L2 1000 128 1 1 0.40 1.690e+00 WC10R3L2 1000 128 1 1 0.39 1.694e+00 WC10R4L2 1000 128 1 1 0.40 1.687e+00 WC10R2L4 1000 128 1 1 0.40 1.663e+00 WC10R3L4 1000 128 1 1 0.39 1.704e+00 WC10R4L4 1000 128 1 1 0.39 1.708e+00 WC10R2L8 1000 128 1 1 0.39 1.703e+00 WC10R3L8 1000 128 1 1 0.39 1.701e+00 WC10R4L8 1000 128 1 1 0.39 1.705e+00 WC10R2C1 1000 128 1 1 0.40 1.686e+00 WC10R3C1 1000 128 1 1 0.40 1.686e+00 WC10R4C1 1000 128 1 1 0.40 1.681e+00 WC10R2C2 1000 128 1 1 0.40 1.691e+00 WC10R3C2 1000 128 1 1 0.39 1.693e+00 WC10R4C2 1000 128 1 1 0.39 1.692e+00 WC10R2C4 1000 128 1 1 0.39 1.701e+00 WC10R3C4 1000 128 1 1 0.39 1.702e+00 WC10R4C4 1000 128 1 1 0.39 1.703e+00 WC10R2C8 1000 128 1 1 0.39 1.696e+00 WC10R3C8 1000 128 1 1 0.39 1.701e+00 WC10R4C8 1000 128 1 1 0.39 1.704e+00 
WC10R2R1 1000 128 1 1 0.40 1.683e+00 WC10R3R1 1000 128 1 1 0.40 1.688e+00 WC10R4R1 1000 128 1 1 0.40 1.686e+00 WC10R2R2 1000 128 1 1 0.39 1.699e+00 WC10R3R2 1000 128 1 1 0.39 1.698e+00 WC10R4R2 1000 128 1 1 0.40 1.686e+00 WC10R2R4 1000 128 1 1 0.39 1.699e+00 WC10R3R4 1000 128 1 1 0.39 1.705e+00 WC10R4R4 1000 128 1 1 0.39 1.705e+00 WC10R2R8 1000 128 1 1 0.39 1.694e+00 WC10R3R8 1000 128 1 1 0.39 1.701e+00 WC10R4R8 1000 128 1 1 0.39 1.704e+00 WC10L2L1 1000 160 1 1 0.35 1.884e+00 WC10L3L1 1000 160 1 1 0.36 1.859e+00 WC10L4L1 1000 160 1 1 0.36 1.856e+00 WC10L2L2 1000 160 1 1 0.35 1.884e+00 WC10L3L2 1000 160 1 1 0.36 1.862e+00 WC10L4L2 1000 160 1 1 0.36 1.867e+00 WC10L2L4 1000 160 1 1 0.35 1.899e+00 WC10L3L4 1000 160 1 1 0.36 1.864e+00 WC10L4L4 1000 160 1 1 0.36 1.865e+00 WC10L2L8 1000 160 1 1 0.35 1.894e+00 WC10L3L8 1000 160 1 1 0.36 1.876e+00 WC10L4L8 1000 160 1 1 0.35 1.884e+00


184WC10L2C1 1000 160 1 1 0.36 1.880e+00 WC10L3C1 1000 160 1 1 0.36 1.860e+00 WC10L4C1 1000 160 1 1 0.36 1.856e+00 WC10L2C2 1000 160 1 1 0.35 1.886e+00 WC10L3C2 1000 160 1 1 0.36 1.874e+00 WC10L4C2 1000 160 1 1 0.36 1.860e+00 WC10L2C4 1000 160 1 1 0.35 1.894e+00 WC10L3C4 1000 160 1 1 0.36 1.865e+00 WC10L4C4 1000 160 1 1 0.36 1.863e+00 WC10L2C8 1000 160 1 1 0.35 1.897e+00 WC10L3C8 1000 160 1 1 0.36 1.861e+00 WC10L4C8 1000 160 1 1 0.36 1.877e+00 WC10L2R1 1000 160 1 1 0.36 1.877e+00 WC10L3R1 1000 160 1 1 0.36 1.859e+00 WC10L4R1 1000 160 1 1 0.36 1.856e+00 WC10L2R2 1000 160 1 1 0.35 1.894e+00 WC10L3R2 1000 160 1 1 0.36 1.872e+00 WC10L4R2 1000 160 1 1 0.36 1.866e+00 WC10L2R4 1000 160 1 1 0.35 1.899e+00 WC10L3R4 1000 160 1 1 0.36 1.871e+00 WC10L4R4 1000 160 1 1 0.36 1.876e+00 WC10L2R8 1000 160 1 1 0.35 1.899e+00 WC10L3R8 1000 160 1 1 0.36 1.869e+00 WC10L4R8 1000 160 1 1 0.36 1.877e+00 WC10C2L1 1000 160 1 1 0.36 1.879e+00 WC10C3L1 1000 160 1 1 0.36 1.865e+00 WC10C4L1 1000 160 1 1 0.36 1.859e+00 WC10C2L2 1000 160 1 1 0.35 1.886e+00 WC10C3L2 1000 160 1 1 0.36 1.867e+00 WC10C4L2 1000 160 1 1 0.37 1.827e+00 WC10C2L4 1000 160 1 1 0.35 1.894e+00 WC10C3L4 1000 160 1 1 0.36 1.871e+00 WC10C4L4 1000 160 1 1 0.36 1.866e+00 WC10C2L8 1000 160 1 1 0.35 1.896e+00 WC10C3L8 1000 160 1 1 0.36 1.871e+00 WC10C4L8 1000 160 1 1 0.36 1.878e+00 WC10C2C1 1000 160 1 1 0.35 1.884e+00 WC10C3C1 1000 160 1 1 0.36 1.863e+00 WC10C4C1 1000 160 1 1 0.36 1.854e+00 WC10C2C2 1000 160 1 1 0.35 1.888e+00 WC10C3C2 1000 160 1 1 0.36 1.867e+00 WC10C4C2 1000 160 1 1 0.36 1.869e+00 WC10C2C4 1000 160 1 1 0.35 1.898e+00 WC10C3C4 1000 160 1 1 0.36 1.864e+00 WC10C4C4 1000 160 1 1 0.36 1.865e+00 WC10C2C8 1000 160 1 1 0.35 1.894e+00 WC10C3C8 1000 160 1 1 0.36 1.867e+00 WC10C4C8 1000 160 1 1 0.35 1.883e+00 WC10C2R1 1000 160 1 1 0.36 1.878e+00 WC10C3R1 1000 160 1 1 0.36 1.858e+00 WC10C4R1 1000 160 1 1 0.36 1.851e+00 WC10C2R2 1000 160 1 1 0.35 1.891e+00 WC10C3R2 1000 160 1 1 0.36 1.876e+00 WC10C4R2 1000 160 1 1 0.36 1.874e+00 WC10C2R4 1000 160 1 1 0.35 1.892e+00 WC10C3R4 1000 160 1 1 0.36 1.868e+00 WC10C4R4 1000 160 1 1 0.36 1.869e+00 WC10C2R8 1000 160 1 1 0.35 1.893e+00 WC10C3R8 1000 160 1 1 0.36 1.873e+00 WC10C4R8 1000 160 1 1 0.36 1.877e+00 WC10R2L1 1000 160 1 1 0.36 1.876e+00 WC10R3L1 1000 160 1 1 0.36 1.837e+00 WC10R4L1 1000 160 1 1 0.36 1.834e+00 WC10R2L2 1000 160 1 1 0.35 1.891e+00 WC10R3L2 1000 160 1 1 0.36 1.849e+00 WC10R4L2 1000 160 1 1 0.36 1.855e+00 WC10R2L4 1000 160 1 1 0.35 1.889e+00 WC10R3L4 1000 160 1 1 0.36 1.861e+00 WC10R4L4 1000 160 1 1 0.36 1.858e+00 WC10R2L8 1000 160 1 1 0.35 1.901e+00 WC10R3L8 1000 160 1 1 0.36 1.864e+00 WC10R4L8 1000 160 1 1 0.36 1.866e+00 WC10R2C1 1000 160 1 1 0.36 1.877e+00 WC10R3C1 1000 160 1 1 0.36 1.835e+00 WC10R4C1 1000 160 1 1 0.36 1.837e+00 WC10R2C2 1000 160 1 1 0.35 1.891e+00 WC10R3C2 1000 160 1 1 0.36 1.846e+00 WC10R4C2 1000 160 1 1 0.36 1.855e+00 WC10R2C4 1000 160 1 1 0.35 1.890e+00 WC10R3C4 1000 160 1 1 0.36 1.861e+00 WC10R4C4 1000 160 1 1 0.36 1.861e+00 WC10R2C8 1000 160 1 1 0.35 1.893e+00 WC10R3C8 1000 160 1 1 0.36 1.849e+00 WC10R4C8 1000 160 1 1 0.36 1.864e+00 WC10R2R1 1000 160 1 1 0.36 1.876e+00 WC10R3R1 1000 160 1 1 0.36 1.842e+00 WC10R4R1 1000 160 1 1 0.36 1.839e+00 WC10R2R2 1000 160 1 1 0.35 1.889e+00 WC10R3R2 1000 160 1 1 0.36 1.850e+00 WC10R4R2 1000 160 1 1 0.36 1.859e+00 WC10R2R4 1000 160 1 1 0.35 1.892e+00 WC10R3R4 1000 160 1 1 0.36 1.868e+00 WC10R4R4 1000 160 1 1 0.36 1.861e+00 WC10R2R8 1000 160 1 1 0.35 1.892e+00 WC10R3R8 1000 160 1 1 0.36 1.861e+00 WC10R4R8 1000 160 1 1 0.36 1.863e+00 
WC10L2L1 1000 192 1 1 0.37 1.792e+00 WC10L3L1 1000 192 1 1 0.38 1.753e+00 WC10L4L1 1000 192 1 1 0.37 1.788e+00 WC10L2L2 1000 192 1 1 0.37 1.795e+00 WC10L3L2 1000 192 1 1 0.38 1.753e+00 WC10L4L2 1000 192 1 1 0.37 1.793e+00 WC10L2L4 1000 192 1 1 0.38 1.757e+00 WC10L3L4 1000 192 1 1 0.38 1.767e+00 WC10L4L4 1000 192 1 1 0.37 1.805e+00 WC10L2L8 1000 192 1 1 0.37 1.804e+00 WC10L3L8 1000 192 1 1 0.38 1.768e+00 WC10L4L8 1000 192 1 1 0.37 1.803e+00 WC10L2C1 1000 192 1 1 0.37 1.788e+00 WC10L3C1 1000 192 1 1 0.38 1.753e+00 WC10L4C1 1000 192 1 1 0.37 1.793e+00 WC10L2C2 1000 192 1 1 0.37 1.796e+00 WC10L3C2 1000 192 1 1 0.38 1.755e+00 WC10L4C2 1000 192 1 1 0.37 1.796e+00 WC10L2C4 1000 192 1 1 0.37 1.798e+00 WC10L3C4 1000 192 1 1 0.38 1.763e+00 WC10L4C4 1000 192 1 1 0.37 1.807e+00 WC10L2C8 1000 192 1 1 0.37 1.807e+00 WC10L3C8 1000 192 1 1 0.38 1.766e+00 WC10L4C8 1000 192 1 1 0.37 1.803e+00


185WC10L2R1 1000 192 1 1 0.37 1.787e+00 WC10L3R1 1000 192 1 1 0.38 1.749e+00 WC10L4R1 1000 192 1 1 0.37 1.788e+00 WC10L2R2 1000 192 1 1 0.37 1.803e+00 WC10L3R2 1000 192 1 1 0.38 1.764e+00 WC10L4R2 1000 192 1 1 0.37 1.801e+00 WC10L2R4 1000 192 1 1 0.37 1.799e+00 WC10L3R4 1000 192 1 1 0.38 1.765e+00 WC10L4R4 1000 192 1 1 0.37 1.807e+00 WC10L2R8 1000 192 1 1 0.37 1.805e+00 WC10L3R8 1000 192 1 1 0.38 1.770e+00 WC10L4R8 1000 192 1 1 0.37 1.803e+00 WC10C2L1 1000 192 1 1 0.37 1.786e+00 WC10C3L1 1000 192 1 1 0.38 1.747e+00 WC10C4L1 1000 192 1 1 0.37 1.784e+00 WC10C2L2 1000 192 1 1 0.37 1.794e+00 WC10C3L2 1000 192 1 1 0.38 1.755e+00 WC10C4L2 1000 192 1 1 0.37 1.796e+00 WC10C2L4 1000 192 1 1 0.37 1.799e+00 WC10C3L4 1000 192 1 1 0.38 1.762e+00 WC10C4L4 1000 192 1 1 0.37 1.799e+00 WC10C2L8 1000 192 1 1 0.37 1.800e+00 WC10C3L8 1000 192 1 1 0.38 1.766e+00 WC10C4L8 1000 192 1 1 0.37 1.807e+00 WC10C2C1 1000 192 1 1 0.37 1.792e+00 WC10C3C1 1000 192 1 1 0.38 1.746e+00 WC10C4C1 1000 192 1 1 0.37 1.785e+00 WC10C2C2 1000 192 1 1 0.37 1.796e+00 WC10C3C2 1000 192 1 1 0.38 1.753e+00 WC10C4C2 1000 192 1 1 0.37 1.790e+00 WC10C2C4 1000 192 1 1 0.37 1.805e+00 WC10C3C4 1000 192 1 1 0.38 1.762e+00 WC10C4C4 1000 192 1 1 0.37 1.788e+00 WC10C2C8 1000 192 1 1 0.37 1.800e+00 WC10C3C8 1000 192 1 1 0.38 1.762e+00 WC10C4C8 1000 192 1 1 0.37 1.800e+00 WC10C2R1 1000 192 1 1 0.37 1.793e+00 WC10C3R1 1000 192 1 1 0.38 1.750e+00 WC10C4R1 1000 192 1 1 0.37 1.785e+00 WC10C2R2 1000 192 1 1 0.37 1.800e+00 WC10C3R2 1000 192 1 1 0.38 1.757e+00 WC10C4R2 1000 192 1 1 0.37 1.795e+00 WC10C2R4 1000 192 1 1 0.37 1.803e+00 WC10C3R4 1000 192 1 1 0.38 1.769e+00 WC10C4R4 1000 192 1 1 0.37 1.808e+00 WC10C2R8 1000 192 1 1 0.37 1.800e+00 WC10C3R8 1000 192 1 1 0.38 1.762e+00 WC10C4R8 1000 192 1 1 0.37 1.801e+00 WC10R2L1 1000 192 1 1 0.37 1.786e+00 WC10R3L1 1000 192 1 1 0.38 1.742e+00 WC10R4L1 1000 192 1 1 0.38 1.741e+00 WC10R2L2 1000 192 1 1 0.37 1.796e+00 WC10R3L2 1000 192 1 1 0.38 1.748e+00 WC10R4L2 1000 192 1 1 0.39 1.725e+00 WC10R2L4 1000 192 1 1 0.37 1.798e+00 WC10R3L4 1000 192 1 1 0.38 1.758e+00 WC10R4L4 1000 192 1 1 0.38 1.774e+00 WC10R2L8 1000 192 1 1 0.37 1.804e+00 WC10R3L8 1000 192 1 1 0.38 1.760e+00 WC10R4L8 1000 192 1 1 0.38 1.771e+00 WC10R2C1 1000 192 1 1 0.37 1.785e+00 WC10R3C1 1000 192 1 1 0.38 1.741e+00 WC10R4C1 1000 192 1 1 0.38 1.737e+00 WC10R2C2 1000 192 1 1 0.37 1.799e+00 WC10R3C2 1000 192 1 1 0.38 1.755e+00 WC10R4C2 1000 192 1 1 0.38 1.758e+00 WC10R2C4 1000 192 1 1 0.37 1.799e+00 WC10R3C4 1000 192 1 1 0.38 1.757e+00 WC10R4C4 1000 192 1 1 0.38 1.769e+00 WC10R2C8 1000 192 1 1 0.37 1.797e+00 WC10R3C8 1000 192 1 1 0.38 1.763e+00 WC10R4C8 1000 192 1 1 0.38 1.775e+00 WC10R2R1 1000 192 1 1 0.37 1.785e+00 WC10R3R1 1000 192 1 1 0.38 1.742e+00 WC10R4R1 1000 192 1 1 0.38 1.736e+00 WC10R2R2 1000 192 1 1 0.37 1.797e+00 WC10R3R2 1000 192 1 1 0.38 1.748e+00 WC10R4R2 1000 192 1 1 0.38 1.770e+00 WC10R2R4 1000 192 1 1 0.37 1.807e+00 WC10R3R4 1000 192 1 1 0.38 1.760e+00 WC10R4R4 1000 192 1 1 0.38 1.772e+00 WC10R2R8 1000 192 1 1 0.37 1.798e+00 WC10R3R8 1000 192 1 1 0.38 1.760e+00 WC10R4R8 1000 192 1 1 0.38 1.770e+00 ======================================================= == == == == == == == == == == = Finished 648 tests with the following results: 648 tests completed without checking, 0 tests skipped because of illegal input values. ----------------------------------------------------------------End of Tests. ======================================================= == == == == == == == == == == =


B.3.4 HPL.dat for Second Test with ATLAS Libraries

Below is the HPL.dat file used for the second test.

HPLinpack benchmark input file Innovative Computing Laboratory, University of Tennessee HPL.out output file name (if any) 1 device out (6=stdout,7=stderr,file) 1 # of problems sizes (N) 1000 Ns 1 # of NBs 160 NBs 1 PMAP process mapping (0=Row-,1=Column-major) 1 # of process grids (P x Q) 1 Ps 1 Qs -16.0 threshold 3 # of panel fact 0 1 2 PFACTs (0=left, 1=Crout, 2=Right) 4 # of recursive stopping criterium 1 2 4 8 NBMINs (>= 1) 3 # of panels in recursion 2 3 4 NDIVs 3 # of recursive panel fact. 0 1 2 RFACTs (0=left, 1=Crout, 2=Right) 1 # of broadcast 0 BCASTs (0=1rg,1=1rM,2=2rg,3=2rM,4=Lng,5=LnM) 1 # of lookahead depth 1 DEPTHs (>=0) 2 SWAP (0=bin-exch,1=long,2=mix) 64 swapping threshold 0 L1 in (0=transposed,1=no-transposed) form 0 U in (0=transposed,1=no-transposed) form 1 Equilibration (0=no,1=yes) 8 memory alignment in double (> 0)

B.3.5 Second Test Results with ATLAS

Below are the second test results using the ATLAS libraries.

================================================================ HPLinpack 1.0a - High-Performance Linpack benchmark - January 20, 2004 Written by A. Petitet and R. Clint Whaley, Innovative Computing Labs., UTK ================================================================ An explanation of the input/output parameters follows: T/V : Wall time / encoded variant. N : The order of the coefficient matrix A. NB : The partitioning blocking factor. P : The number of process rows. Q : The number of process columns. Time : Time in seconds to solve the linear system. Gflops : Rate of execution for solving the linear system. The following parameter values will be used: N : 1000 NB : 160 PMAP : Column-major process mapping P : 1 Q : 1 PFACT : Left Crout Right NBMIN : 1 2 4 8 NDIV : 2 3 4 RFACT : Left Crout Right BCAST : 1ring DEPTH : 1 SWAP : Mix (threshold = 64) L1 : transposed form


187 U : transposed form EQUIL : yes ALIGN : 8 double precision words ====================================================== == == == == == == == == == == == T/V N NB P Q Time Gflops ----------------------------------------------------------------WC10L2L1 1000 160 1 1 0.36 1.876e+00 WC10L3L1 1000 160 1 1 0.36 1.866e+00 WC10L4L1 1000 160 1 1 0.36 1.861e+00 WC10L2L2 1000 160 1 1 0.35 1.888e+00 WC10L3L2 1000 160 1 1 0.37 1.828e+00 WC10L4L2 1000 160 1 1 0.36 1.864e+00 WC10L2L4 1000 160 1 1 0.35 1.900e+00 WC10L3L4 1000 160 1 1 0.36 1.869e+00 WC10L4L4 1000 160 1 1 0.36 1.868e+00 WC10L2L8 1000 160 1 1 0.35 1.896e+00 WC10L3L8 1000 160 1 1 0.36 1.865e+00 WC10L4L8 1000 160 1 1 0.35 1.886e+00 WC10L2C1 1000 160 1 1 0.35 1.884e+00 WC10L3C1 1000 160 1 1 0.36 1.860e+00 WC10L4C1 1000 160 1 1 0.36 1.856e+00 WC10L2C2 1000 160 1 1 0.35 1.888e+00 WC10L3C2 1000 160 1 1 0.36 1.865e+00 WC10L4C2 1000 160 1 1 0.36 1.872e+00 WC10L2C4 1000 160 1 1 0.35 1.900e+00 WC10L3C4 1000 160 1 1 0.36 1.863e+00 WC10L4C4 1000 160 1 1 0.36 1.868e+00 WC10L2C8 1000 160 1 1 0.35 1.894e+00 WC10L3C8 1000 160 1 1 0.36 1.870e+00 WC10L4C8 1000 160 1 1 0.35 1.884e+00 WC10L2R1 1000 160 1 1 0.36 1.879e+00 WC10L3R1 1000 160 1 1 0.36 1.860e+00 WC10L4R1 1000 160 1 1 0.36 1.855e+00 WC10L2R2 1000 160 1 1 0.35 1.893e+00 WC10L3R2 1000 160 1 1 0.36 1.878e+00 WC10L4R2 1000 160 1 1 0.36 1.865e+00 WC10L2R4 1000 160 1 1 0.35 1.900e+00 WC10L3R4 1000 160 1 1 0.36 1.866e+00 WC10L4R4 1000 160 1 1 0.36 1.871e+00 WC10L2R8 1000 160 1 1 0.35 1.900e+00 WC10L3R8 1000 160 1 1 0.36 1.869e+00 WC10L4R8 1000 160 1 1 0.36 1.881e+00 WC10C2L1 1000 160 1 1 0.36 1.879e+00 WC10C3L1 1000 160 1 1 0.36 1.859e+00 WC10C4L1 1000 160 1 1 0.36 1.852e+00 WC10C2L2 1000 160 1 1 0.35 1.892e+00 WC10C3L2 1000 160 1 1 0.36 1.869e+00 WC10C4L2 1000 160 1 1 0.36 1.858e+00 WC10C2L4 1000 160 1 1 0.35 1.897e+00 WC10C3L4 1000 160 1 1 0.36 1.863e+00 WC10C4L4 1000 160 1 1 0.36 1.875e+00 WC10C2L8 1000 160 1 1 0.35 1.902e+00 WC10C3L8 1000 160 1 1 0.36 1.869e+00 WC10C4L8 1000 160 1 1 0.36 1.881e+00 WC10C2C1 1000 160 1 1 0.36 1.880e+00 WC10C3C1 1000 160 1 1 0.36 1.866e+00 WC10C4C1 1000 160 1 1 0.36 1.860e+00 WC10C2C2 1000 160 1 1 0.35 1.889e+00 WC10C3C2 1000 160 1 1 0.36 1.871e+00 WC10C4C2 1000 160 1 1 0.36 1.861e+00 WC10C2C4 1000 160 1 1 0.35 1.897e+00 WC10C3C4 1000 160 1 1 0.36 1.867e+00 WC10C4C4 1000 160 1 1 0.36 1.870e+00 WC10C2C8 1000 160 1 1 0.35 1.894e+00 WC10C3C8 1000 160 1 1 0.36 1.868e+00 WC10C4C8 1000 160 1 1 0.36 1.879e+00 WC10C2R1 1000 160 1 1 0.35 1.886e+00 WC10C3R1 1000 160 1 1 0.36 1.865e+00 WC10C4R1 1000 160 1 1 0.36 1.854e+00 WC10C2R2 1000 160 1 1 0.35 1.894e+00 WC10C3R2 1000 160 1 1 0.36 1.875e+00 WC10C4R2 1000 160 1 1 0.36 1.864e+00 WC10C2R4 1000 160 1 1 0.35 1.905e+00 WC10C3R4 1000 160 1 1 0.36 1.866e+00 WC10C4R4 1000 160 1 1 0.36 1.873e+00 WC10C2R8 1000 160 1 1 0.35 1.893e+00 WC10C3R8 1000 160 1 1 0.36 1.867e+00 WC10C4R8 1000 160 1 1 0.35 1.884e+00 WC10R2L1 1000 160 1 1 0.35 1.884e+00 WC10R3L1 1000 160 1 1 0.36 1.838e+00 WC10R4L1 1000 160 1 1 0.36 1.832e+00 WC10R2L2 1000 160 1 1 0.35 1.885e+00 WC10R3L2 1000 160 1 1 0.36 1.845e+00 WC10R4L2 1000 160 1 1 0.36 1.857e+00 WC10R2L4 1000 160 1 1 0.36 1.862e+00 WC10R3L4 1000 160 1 1 0.36 1.852e+00


WC10R4L4 1000 160 1 1 0.36 1.860e+00 WC10R2L8 1000 160 1 1 0.35 1.894e+00 WC10R3L8 1000 160 1 1 0.36 1.864e+00 WC10R4L8 1000 160 1 1 0.36 1.866e+00 WC10R2C1 1000 160 1 1 0.36 1.879e+00 WC10R3C1 1000 160 1 1 0.36 1.839e+00 WC10R4C1 1000 160 1 1 0.36 1.831e+00 WC10R2C2 1000 160 1 1 0.35 1.893e+00 WC10R3C2 1000 160 1 1 0.36 1.853e+00 WC10R4C2 1000 160 1 1 0.36 1.856e+00 WC10R2C4 1000 160 1 1 0.35 1.892e+00 WC10R3C4 1000 160 1 1 0.36 1.857e+00 WC10R4C4 1000 160 1 1 0.36 1.860e+00 WC10R2C8 1000 160 1 1 0.35 1.899e+00 WC10R3C8 1000 160 1 1 0.36 1.857e+00 WC10R4C8 1000 160 1 1 0.36 1.866e+00 WC10R2R1 1000 160 1 1 0.36 1.881e+00 WC10R3R1 1000 160 1 1 0.36 1.841e+00 WC10R4R1 1000 160 1 1 0.36 1.837e+00 WC10R2R2 1000 160 1 1 0.35 1.898e+00 WC10R3R2 1000 160 1 1 0.36 1.850e+00 WC10R4R2 1000 160 1 1 0.36 1.859e+00 WC10R2R4 1000 160 1 1 0.35 1.894e+00 WC10R3R4 1000 160 1 1 0.36 1.860e+00 WC10R4R4 1000 160 1 1 0.36 1.867e+00 WC10R2R8 1000 160 1 1 0.35 1.893e+00 WC10R3R8 1000 160 1 1 0.36 1.856e+00 WC10R4R8 1000 160 1 1 0.36 1.866e+00
================================================================
Finished 108 tests with the following results: 108 tests completed without checking, 0 tests skipped because of illegal input values.
----------------------------------------------------------------
End of Tests.
================================================================

B.3.6 Final Test with ATLAS Libraries

Below is the HPL.dat file used for the final large test using the ATLAS libraries.

HPLinpack benchmark input file Innovative Computing Laboratory, University of Tennessee HPL.out output file name (if any) 1 device out (6=stdout,7=stderr,file) 1 # of problems sizes (N) 10000 Ns 1 # of NBs 160 NBs 1 PMAP process mapping (0=Row-,1=Column-major) 1 # of process grids (P x Q) 1 Ps 1 Qs 16.0 threshold 1 # of panel fact 1 PFACTs (0=left, 1=Crout, 2=Right) 1 # of recursive stopping criterium 4 NBMINs (>= 1) 1 # of panels in recursion 2 NDIVs 1 # of recursive panel fact. 1 RFACTs (0=left, 1=Crout, 2=Right) 1 # of broadcast 0 BCASTs (0=1rg,1=1rM,2=2rg,3=2rM,4=Lng,5=LnM) 1 # of lookahead depth 1 DEPTHs (>=0) 2 SWAP (0=bin-exch,1=long,2=mix) 64 swapping threshold


0 L1 in (0=transposed,1=no-transposed) form 0 U in (0=transposed,1=no-transposed) form 1 Equilibration (0=no,1=yes) 8 memory alignment in double (> 0)

Below are the results from the final test with the ATLAS libraries.

================================================================ HPLinpack 1.0a - High-Performance Linpack benchmark - January 20, 2004 Written by A. Petitet and R. Clint Whaley, Innovative Computing Labs., UTK ================================================================ An explanation of the input/output parameters follows: T/V : Wall time / encoded variant. N : The order of the coefficient matrix A. NB : The partitioning blocking factor. P : The number of process rows. Q : The number of process columns. Time : Time in seconds to solve the linear system. Gflops : Rate of execution for solving the linear system. The following parameter values will be used: N : 10000 NB : 160 PMAP : Column-major process mapping P : 1 Q : 1 PFACT : Crout NBMIN : 4 NDIV : 2 RFACT : Crout BCAST : 1ring DEPTH : 1 SWAP : Mix (threshold = 64) L1 : transposed form U : transposed form EQUIL : yes ALIGN : 8 double precision words
----------------------------------------------------------------
The matrix A is randomly generated for each test. The following scaled residual checks will be computed: 1) ||Ax-b||_oo / ( eps ||A||_1 N ) 2) ||Ax-b||_oo / ( eps ||A||_1 ||x||_1 ) 3) ||Ax-b||_oo / ( eps ||A||_oo ||x||_oo ) The relative machine precision (eps) is taken to be 1.110223e-16 Computational tests pass if scaled residuals are less than 16.0
================================================================
T/V N NB P Q Time Gflops
----------------------------------------------------------------
WC10C2C4 10000 160 1 1 234.77 2.840e+00
----------------------------------------------------------------
||Ax-b||_oo / ( eps ||A||_1 N ) = 0.0988430 ...... PASSED ||Ax-b||_oo / ( eps ||A||_1 ||x||_1 ) = 0.0233892 ...... PASSED ||Ax-b||_oo / ( eps ||A||_oo ||x||_oo ) = 0.0052280 ...... PASSED
================================================================
Finished 1 tests with the following results: 1 tests completed and passed residual checks, 0 tests completed and failed residual checks, 0 tests skipped because of illegal input values.
----------------------------------------------------------------
End of Tests.
================================================================

B.3.7 HPL.dat File for Multi-processor Test

Below is the HPL.dat file used for the first multi-processor test using Goto's libraries.

HPLinpack benchmark input file


Innovative Computing Laboratory, University of Tennessee HPL.out output file name (if any) 1 device out (6=stdout,7=stderr,file) 1 # of problems sizes (N) 3500 Ns 1 # of NBs 128 NBs 1 PMAP process mapping (0=Row-,1=Column-major) 2 # of process grids (P x Q) 1 2 Ps 2 1 Qs -16.0 threshold 1 # of panel fact 1 PFACTs (0=left, 1=Crout, 2=Right) 1 # of recursive stopping criterium 8 NBMINs (>= 1) 1 # of panels in recursion 2 NDIVs 1 # of recursive panel fact. 1 RFACTs (0=left, 1=Crout, 2=Right) 1 # of broadcast 1 BCASTs (0=1rg,1=1rM,2=2rg,3=2rM,4=Lng,5=LnM) 1 # of lookahead depth 1 DEPTHs (>=0) 2 SWAP (0=bin-exch,1=long,2=mix) 64 swapping threshold 0 L1 in (0=transposed,1=no-transposed) form 0 U in (0=transposed,1=no-transposed) form 1 Equilibration (0=no,1=yes) 8 memory alignment in double (> 0)

B.3.8 Goto's Multi-processor Tests

Below are the results from the first test with Goto's BLAS routines.

================================================================ HPLinpack 1.0a - High-Performance Linpack benchmark - January 20, 2004 Written by A. Petitet and R. Clint Whaley, Innovative Computing Labs., UTK ================================================================ An explanation of the input/output parameters follows: T/V : Wall time / encoded variant. N : The order of the coefficient matrix A. NB : The partitioning blocking factor. P : The number of process rows. Q : The number of process columns. Time : Time in seconds to solve the linear system. Gflops : Rate of execution for solving the linear system. The following parameter values will be used: N : 3500 NB : 128 PMAP : Column-major process mapping P : 1 2 Q : 2 1 PFACT : Crout NBMIN : 8 NDIV : 2 RFACT : Crout BCAST : 1ringM DEPTH : 1 SWAP : Mix (threshold = 64) L1 : transposed form U : transposed form EQUIL : yes ALIGN : 8 double precision words
================================================================
T/V N NB P Q Time Gflops
----------------------------------------------------------------
WC11C2C8 3500 128 1 2 9.41 3.040e+00


WC11C2C8 3500 128 2 1 13.43 2.129e+00
================================================================
Finished 2 tests with the following results: 2 tests completed without checking, 0 tests skipped because of illegal input values.
----------------------------------------------------------------
End of Tests.
================================================================

B.3.9 HPL.dat File for Testing Broadcast Algorithms

Below is the HPL.dat file used to test the broadcast algorithms for the second test using Goto's routines.

HPLinpack benchmark input file Innovative Computing Laboratory, University of Tennessee HPL.out output file name (if any) 1 device out (6=stdout,7=stderr,file) 1 # of problems sizes (N) 3500 Ns 1 # of NBs 128 NBs 1 PMAP process mapping (0=Row-,1=Column-major) 1 # of process grids (P x Q) 1 Ps 2 Qs -16.0 threshold 1 # of panel fact 1 PFACTs (0=left, 1=Crout, 2=Right) 1 # of recursive stopping criterium 8 NBMINs (>= 1) 1 # of panels in recursion 2 NDIVs 1 # of recursive panel fact. 1 RFACTs (0=left, 1=Crout, 2=Right) 6 # of broadcast 0 1 2 3 4 5 BCASTs (0=1rg,1=1rM,2=2rg,3=2rM,4=Lng,5=LnM) 1 # of lookahead depth 1 DEPTHs (>=0) 2 SWAP (0=bin-exch,1=long,2=mix) 64 swapping threshold 0 L1 in (0=transposed,1=no-transposed) form 0 U in (0=transposed,1=no-transposed) form 1 Equilibration (0=no,1=yes) 8 memory alignment in double (> 0)

B.3.10 Final Test with Goto's Libraries

Below is one of the HPL.dat files used for the final tests using Goto's libraries.

HPLinpack benchmark input file Innovative Computing Laboratory, University of Tennessee HPL.out output file name (if any) 1 device out (6=stdout,7=stderr,file) 1 # of problems sizes (N) 14500 Ns 1 # of NBs 128 NBs 1 PMAP process mapping (0=Row-,1=Column-major)


192 1 # of process grids (P x Q) 1 Ps 2 Qs 16.0 threshold 1 # of panel fact 1 PFACTs (0=left, 1=Crout, 2=Right) 1 # of recursive stopping criterium 4 NBMINs (>= 1) 1 # of panels in recursion 2 NDIVs 1 # of recursive panel fact. 1 RFACTs (0=left, 1=Crout, 2=Right) 1 # of broadcast 2 BCASTs (0=1rg,1=1rM,2=2rg,3=2rM,4=Lng,5=LnM) 1 # of lookahead depth 1 DEPTHs (>=0) 2 SWAP (0=bin-exch,1=long,2=mix) 64 swapping threshold 0 L1 in (0=transposed,1=no-transposed) form 0 U in (0=transposed,1=no-transposed) form 1 Equilibration (0=no,1=yes) 8 memory alignment in double (> 0)
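For reference, a run with one of these HPL.dat files would typically be launched under MPICH with mpirun from the directory that holds the xhpl binary and HPL.dat. The sketch below is a hedged illustration only; the architecture directory name, process count, and machines file are assumptions, not values recorded in this appendix.

# Minimal sketch (assumed paths and node list), not a transcript of an actual run
cd ~/hpl/bin/Linux_ATHLON     # assumed HPL arch directory containing xhpl and HPL.dat
mpirun -np 2 -machinefile ~/machines ./xhpl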


APPENDIX C CALCULIX INSTALLATION

This section will list the Makefiles used to install CalculiX as described in Chapter 4.

C.1 ARPACK Makefile

Below is the Makefile used to compile ARPACK.

################################################################### ######## ## Program: ARPACK ## Module: ARmake.inc ## Purpose: Top-level Definitions ## Creation date: February 22, 1996 ## Modified: ## Send bug reports, comments or suggestions to arpack@caam.rice.edu #################################################################### ######### ## %---------------------------------% # | SECTION 1: PATHS AND LIBRARIES | # %---------------------------------% ### %--------------------------------------% # | You should change the definition of | # | home if ARPACK is built some place | # | other than your home directory. | # %--------------------------------------% ## Added by Paul Thu, 04 Mar 2004, 18:41:27 EST home = /home/apollo/hda8/ARPACK ## %--------------------------------------% # | The platform identifier to suffix to | # | the end of library names | # %--------------------------------------% ## Added by Paul Thu, 04 Mar 2004, 18:42:17 EST PLAT = Linux ## %------------------------------------------------------% # | The directories to find the various pieces of ARPACK | # %------------------------------------------------------% #BLASdir = $(home)/BLAS LAPACKdir = $(home)/LAPACK UTILdir = $(home)/UTIL SRCdir = $(home)/SRC #DIRS = $(BLASdir) $(LAPACKdir) $(UTILdir) $(SRCdir)


194 ## %------------------------------------------------------------------% # | Comment out the previous line and uncomment the following | # | if you already have the BLAS and LAPACK installed on your system. | # | NOTE: ARPACK assumes the use of LAPACK version 2 codes. | # %------------------------------------------------------------------% ##DIRS = $(UTILdir) $(SRCdir) ## %---------------------------------------------------% # | The name of the libraries to be created/linked to | # %---------------------------------------------------% #ARPACKLIB = $(home)/libarpack_$(PLAT).a LAPACKLIB = BLASLIB = #ALIBS = $(ARPACKLIB) $(LAPACKLIB) $(BLASLIB) ### %---------------------------------------------------------% # | SECTION 2: COMPILERS | # | | # | The following macros specify compilers, linker/loaders, | # | the archiver, and their options. You need to make sure | # | these are correct for your system. | # %---------------------------------------------------------% ### %------------------------------% # | Make our own suffixes' list. | # %------------------------------% #.SUFFIXES:.SUFFIXES: .f .o ## %------------------% # | Default command. | # %------------------% #.DEFAULT:@$(ECHO) "Unknown target $@, try: make help" ## %-------------------------------------------% # | Command to build .o files from .f files. | # %-------------------------------------------% #.f.o:@$(ECHO) Making $@ from $< @$(FC) -c $(FFLAGS) $< ## %-----------------------------------------% # | Various compilation programs and flags. | # | You need to make sure these are correct | # | for your system. | # %-----------------------------------------% ## Added by Paul Thu, 04 Mar 2004, 18:43:19 EST FC = g77 # Added by Paul Thu, 04 Mar 2004, 18:58:52 EST FFLAGS = -O LDFLAGS = CD = cd


ECHO = echo LN = ln LNFLAGS = -s MAKE = /usr/bin/make RM = rm RMFLAGS = -f SHELL = /bin/sh ## %---------------------------------------------------------------% # | The archiver and the flag(s) to use when building an archive | # | (library). Also the ranlib routine. If your system has no | # | ranlib, set RANLIB = touch. | # %---------------------------------------------------------------% #AR = ar ARFLAGS = rv #RANLIB = touch RANLIB = ranlib ## %----------------------------------% # | This is the general help target. | # %----------------------------------% #help:@$(ECHO) "usage: make ?"

C.2 CalculiX CrunchiX Makefile

Below is the Makefile used to compile CalculiX CrunchiX.

CFLAGS = -Wall -O -I ../../../SPOOLES.2.2 \ -DARCH="Linux" -L ../../../SPOOLES.2.2 FFLAGS = -Wall -O CC=gccFC=g77.c.o : $(CC) $(CFLAGS) -c $< .f.o : $(FC) $(FFLAGS) -c $< SCCXF = \ add_pr.f \ add_sm_ei.f \ add_sm_st.f \ allocation.f \ amplitudes.f \ anisonl.f \ anisotropic.f \ beamsections.f \ bounadd.f \ boundaries.f \ buckles.f \ calinput.f \ cfluxes.f \ changedepterm.f \ cloads.f \ conductivities.f \ controlss.f \ couptempdisps.f \ creeps.f \ cychards.f \ cycsymmods.f \ dasol.f \ datest.f \ datri.f \ defplasticities.f \ defplas.f \ densities.f \ depvars.f \ deuldlag.f \ dfluxes.f \ dgesv.f \ diamtr.f \ dloads.f \ dot.f \ dredu.f \ dsort.f \ dynamics.f \ dynsolv.f \


196 el.f \ elastics.f \ elements.f \ elprints.f \ envtemp.f \ equations.f \ expansions.f \ extrapolate.f \ e_c3d.f \ e_c3d_th.f \ e_c3d_rhs.f \ fcrit.f \ films.f \ finpro.f \ forcadd.f \ frd.f \ frdclose.f \ frequencies.f \ fsub.f \ fsuper.f \ gen3delem.f \ genran.f \ getnewline.f \ graph.f \ headings.f \ heattransfers.f \ hyperel.f \ hyperelastics.f \ hyperfoams.f \ ident.f \ ident2.f \ include.f \ incplas.f \ initialconditions.f \ inputerror.f \ isorti.f \ isortid.f \ isortidc.f \ isortii.f \ isortiid.f \ label.f \ linel.f \ lintemp.f \ lintemp_th.f \ loadadd.f \ loadaddt.f \ mafillpr.f \ mafillsm.f \ mafillsmcs.f \ massflowrates.f \ matdata_co.f \ matdata_he.f \ matdata_tg.f \ materialdata.f \ materials.f \ modaldampings.f \ modaldynamics.f \ mpcs.f \ nident.f \ nident2.f \ near2d.f \ noanalysis.f \ nodalthicknesses.f \ nodeprints.f \ nodes.f \ noelfiles.f \ noelsets.f \ nonlinmpc.f \ normals.f \ norshell.f \ number.f \ onf.f \ op.f \ openfile.f \ orientations.f \ orthonl.f \ orthotropic.f \ out.f \ parser.f \ physicalconstants.f \ planempc.f \ plastics.f \ plcopy.f \ plinterpol.f \ plmix.f \ polynom.f \ profil.f \ radflowload.f \ radiates.f \ ranewr.f \ rearrange.f \ rectcyl.f \ renumber.f \ results.f \ rhs.f \ rigidbodies.f \ rigidmpc.f \ rootls.f \ rubber.f \ saxpb.f \ selcycsymmods.f \ shape3tri.f \ shape4q.f \ shape4tet.f \ shape6tri.f \ shape6w.f \ shape8h.f \ shape8q.f \ shape10tet.f \ shape15w.f \ shape20h.f \ shellsections.f \ solidsections.f \ spcmatch.f \ specificheats.f \ statics.f \ steps.f \ stiff2mat.f \ stop.f \ str2mat.f \ straightmpc.f \ surfaces.f \ temperatures.f \ tempload.f \ ties.f \ transformatrix.f \ transforms.f \ ucreep.f \ uhardening.f \ umat.f \


197 umat_aniso_creep.f \ umat_aniso_plas.f \ umat_elastic_fiber.f \ umat_ideal_gas.f \ umat_lin_iso_el.f \ umat_single_crystal.f \ umat_tension_only.f \ umat_user.f \ umpc_mean_rot.f \ umpc_user.f \ usermaterials.f \ usermpc.f \ viscos.f \ wcoef.f \ writebv.f \ writeev.f \ writeevcs.f \ writempc.f \ writesummary.fSCCXC = \ arpack.c \ arpackbu.c \ arpackcs.c \ cascade.c \ dyna.c \ frdcyc.c \ insert.c \ mastruct.c \ mastructcs.c \ nonlingeo.c \ pcgsolver.c \ preiter.c \ prespooles.c \ profile.c \ remastruct.c \ spooles.c \ strcmp1.c \ strcpy1.c \ u_calloc.cSCCXMAIN = ccx_1.1.c OCCXF = $(SCCXF:.f=.o) OCCXC = $(SCCXC:.c=.o) OCCXMAIN = $(SCCXMAIN:.c=.o) DIR=/home/apollo/hda8/SPOOLES.2.2LIBS = \ $(DIR)/spooles.a \ $(DIR)/MPI/src/spoolesMPI.a \ /home/apollo/hda8/ARPACK/ \ libarpack_Linux.a -lm \ -lpthread ccx_1.1: $(OCCXMAIN) ccx_1.1.a $(LIBS) g77 -Wall -O -o $@ $(OCCXMAIN) \ ccx_1.1.a $(LIBS) ccx_1.1.a: $(OCCXF) $(OCCXC) ar vr $@ $?
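As a usage note, below is a hedged sketch of how these two Makefiles might be invoked. The directory paths repeat those hard-coded in the listings above; the "make lib" target follows the stock ARPACK build instructions and is an assumption here, since the exact commands are not reproduced in this appendix.

# Build the ARPACK library with the ARmake.inc settings of Section C.1 (assumed target name)
cd /home/apollo/hda8/ARPACK
make lib
# Build CalculiX CrunchiX against SPOOLES and ARPACK with the Makefile of Section C.2
cd /home/apollo/hda8/CalculiX/ccx_1.1/src
make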


APPENDIX D CALCULIX CRUNCHIX INPUT FILE

Below is the input deck for the example in Chapter 5.

*************************************************************HEADING User Baseline File ** Link to the element and node data *INCLUDE,input=../thesis/all.msh*************************************ORIENTATION, NAME=SO, SYSTEM=RECTANGULAR 1.000000,0.000000,0.000000,0.000000,1.000000,0.000000 *MATERIAL, NAME=M1 *ELASTIC, TYPE=ISO 3.000000e+07, 3.300000e-01, 300.000000 *EXPANSION, TYPE=ISO 1.400000e-05, 300.000000 *DENSITY7.200000e-04,300.000000*SOLID SECTION, ELSET=Eall, MATERIAL=M1,ORIENTATION=SO ************************************************************* Start Load Step (1) of (1) ***STEP, INC=100 ***STATIC,SOLVER=ITERATIVE CHOLESKY *STATIC,SOLVER=SPOOLES 1.000000, 1.000000, 1.000000e-05, 1.000000e+30 ** Link to the boundary condition files *INCLUDE,input=../thesis/fix.spc*INCLUDE,input=../thesis/load.frc****************************************NODE FILE U*EL FILE S*NODE PRINT, NSET=Nall U*EL PRINT, ELSET=Eall S************************************************************* End Load Step (1) of (1) *END STEP
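As a brief, hedged usage example: CalculiX CrunchiX is normally started with the job name alone, without the .inp extension. If this deck were saved as baseline.inp (an illustrative name, not one used in the thesis) next to the ../thesis/ include files referenced above, the serial run would look roughly as follows.

# Illustrative invocation; "baseline" is a hypothetical job name for the deck above
./ccx_1.1 baseline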


APPENDIX E SERIAL AND PARALLEL SOLVER SOURCE CODE

The sections below display the source code for both the serial and parallel solvers. The serial code is the original supplied by CalculiX, while the parallel code is the final optimized program.

E.1 Serial Code

Below is the source code for the serial solver of CalculiX CrunchiX.

/* CalculiX A 3-dimensional finite element program */ /* Copyright (C) 1998 Guido Dhondt */ /* This program is free software; you can redistribute it and/or */ /* modify it under the terms of the GNU General Public License as */ /* published by the Free Software Foundation(version 2); */ /* */ /* This program is distributed in the hope that it will be useful, */ /* but WITHOUT ANY WARRANTY; without even the implied warranty of */ /* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the */ /* GNU General Public License for more details. */ /* You should have received a copy of the GNU General Public License */ /* along with this program; if not, write to the Free Software */ /* Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. */ #include #include #include #include #include #include #include #include #include "CalculiX.h" void spooles(double *ad, double *au, double *b, int *icol, int *irow, int *neq, int *nzs){ char buffer[20]; int ipoint,ipo; DenseMtx *mtxB, *mtxX ; Chv *rootchv ;


200 ChvManager *chvmanager ; SubMtxManager *mtxmanager ; FrontMtx *frontmtx ; InpMtx *mtxA ; double tau = 100.; double cpus[10] ; ETree *frontETree ; FILE *msgFile, *inputFile, *densematrix, *inputmatrix; Graph *graph ; int jrow, jrhs, msglvl=0, ncol, nedges,error, nent, neqns, nrhs, nrow, pivotingflag=1, seed=7892713, symmetryflag=0, type=1,row, col,maxdomainsize,maxzeros,maxsize; int *newToOld, *oldToNew ; int stats[20] ; IV *newToOldIV, *oldToNewIV ; IVL *adjIVL, *symbfacIVL ; time_t t1, t2; /* solving the system of equations using spooles */ printf("Solving the system of equations using spooles\n\n"); /* Compute solve time */ (void) time(&t1); /* --------------------------------------------------all-in-one program to solve A X = B (1) read in matrix entries and form DInpMtx object (2) form Graph object (3) order matrix and form front tree (4) get the permutation, permute the matrix and front tree and get the symbolic factorization (5) compute the numeric factorization (6) read in right hand side entries (7) compute the solution created -98jun04, cca -------------------------------------------------*/ if ( (msgFile = fopen("spooles.out", "a")) == NULL ) { fprintf(stderr, "\n fatal error in spooles.c" "\n unable to open file spooles.out\n") ; } /* --------------------------------------------STEP 1: read the entries from the input file and create the InpMtx object -------------------------------------------*/


201 nrow=*neq;ncol=*neq;nent=*nzs+*neq;neqns=nrow;ipoint=0;mtxA = InpMtx_new() ; InpMtx_init(mtxA, INPMTX_BY_ROWS, type, nent, neqns) ; for(row=0;row 1 ) { fprintf(msgFile, "\n\n input matrix") ; InpMtx_writeForHumanEye(mtxA, msgFile) ; fflush(msgFile) ; }/*added to print matrix*/ /*sprintf(buffer, "inputmatrix.m"); inputmatrix = fopen(buffer, "w"); InpMtx_writeForMatlab(mtxA,"A", inputmatrix); fclose(inputmatrix);*/ /*---------------------------------------------------------------*//* -------------------------------------------------STEP 2 : find a low-fill ordering (1) create the Graph object (2) order the graph using multiple minimum degree ------------------------------------------------*/ graph = Graph_new() ; adjIVL = InpMtx_fullAdjacency(mtxA) ; nedges = IVL_tsize(adjIVL) ; Graph_init2(graph, 0, neqns, 0, nedges, neqns, nedges, adjIVL, NULL, NULL) ; if ( msglvl > 1 ) { fprintf(msgFile, "\n\n graph of the input matrix") ; Graph_writeForHumanEye(graph, msgFile) ; fflush(msgFile) ; }maxdomainsize=800;maxzeros=1000;maxsize=64;frontETree=orderViaBestOfNDandMS(graph,maxdomainsize,maxzeros,


202 maxsize,seed,msglvl,msgFile); if ( msglvl > 1 ) { fprintf(msgFile, "\n\n front tree from ordering") ; ETree_writeForHumanEye(frontETree, msgFile) ; fflush(msgFile) ; }/*---------------------------------------------------------------*//* -----------------------------------------------------STEP 3: get the permutation, permute the matrix and front tree and get the symbolic factorization ----------------------------------------------------*/ oldToNewIV = ETree_oldToNewVtxPerm(frontETree) ; oldToNew = IV_entries(oldToNewIV) ; newToOldIV = ETree_newToOldVtxPerm(frontETree) ; newToOld = IV_entries(newToOldIV) ; ETree_permuteVertices(frontETree, oldToNewIV) ; InpMtx_permute(mtxA, oldToNew, oldToNew) ; InpMtx_mapToUpperTriangle(mtxA) ; InpMtx_changeCoordType(mtxA,INPMTX_BY_CHEVRONS);InpMtx_changeStorageMode(mtxA,INPMTX_BY_VECTORS);symbfacIVL = SymbFac_initFromInpMtx(frontETree, mtxA) ; if ( msglvl > 1 ) { fprintf(msgFile, "\n\n old-to-new permutation vector") ; IV_writeForHumanEye(oldToNewIV, msgFile) ; fprintf(msgFile, "\n\n new-to-old permutation vector") ; IV_writeForHumanEye(newToOldIV, msgFile) ; fprintf(msgFile, "\n\n front tree after permutation") ; ETree_writeForHumanEye(frontETree, msgFile) ; fprintf(msgFile, "\n\n input matrix after permutation") ; InpMtx_writeForHumanEye(mtxA, msgFile) ; fprintf(msgFile, "\n\n symbolic factorization") ; IVL_writeForHumanEye(symbfacIVL, msgFile) ; fflush(msgFile) ; } /*---------------------------------------------------------------*//* ------------------------------------------STEP 4: initialize the front matrix object -----------------------------------------*/ frontmtx = FrontMtx_new() ; mtxmanager = SubMtxManager_new() ; SubMtxManager_init(mtxmanager, NO_LOCK, 0) ; FrontMtx_init(frontmtx, frontETree, symbfacIVL, type, symmetryflag, FRONTMTX_DENSE_FRONTS, pivotingflag, NO_LOCK, 0, NULL, mtxmanager, msglvl, msgFile) ; /*---------------------------------------------------------------*//*


203 -----------------------------------------STEP 5: compute the numeric factorization ----------------------------------------*/ chvmanager = ChvManager_new() ; ChvManager_init(chvmanager, NO_LOCK, 1) ; DVfill(10, cpus, 0.0) ; IVfill(20, stats, 0) ; rootchv = FrontMtx_factorInpMtx(frontmtx, mtxA, tau, 0.0, chvmanager, &error,cpus, stats, msglvl, msgFile) ; ChvManager_free(chvmanager) ; if ( msglvl > 1 ) { fprintf(msgFile, "\n\n factor matrix") ; FrontMtx_writeForHumanEye(frontmtx, msgFile) ; fflush(msgFile) ; }if ( rootchv != NULL ) { fprintf(msgFile, "\n\n matrix found to be singular\n") ; exit(-1) ; }if(error>=0){ fprintf(msgFile,"\n\nerror encountered at front %d",error); exit(-1); } /*---------------------------------------------------------------*//* --------------------------------------STEP 6: post-process the factorization -------------------------------------*/ FrontMtx_postProcess(frontmtx, msglvl, msgFile) ; if ( msglvl > 1 ) { fprintf(msgFile, "\n\n factor matrix after post-processing") ; FrontMtx_writeForHumanEye(frontmtx, msgFile) ; fflush(msgFile) ; } /*---------------------------------------------------------------*//* -----------------------------------------STEP 7: read the right hand side matrix B ----------------------------------------*/ nrhs=1;mtxB = DenseMtx_new() ; DenseMtx_init(mtxB, type, 0, 0, neqns, nrhs, 1, neqns) ; DenseMtx_zero(mtxB) ; for ( jrow = 0 ; jrow < nrow ; jrow++ ) { for ( jrhs = 0 ; jrhs < nrhs ; jrhs++ ) { DenseMtx_setRealEntry(mtxB, jrow, jrhs, b[jrow]) ; }


204 }if ( msglvl > 1 ) { fprintf(msgFile, "\n\n rhs matrix in original ordering") ; DenseMtx_writeForHumanEye(mtxB, msgFile) ; fflush(msgFile) ; } /*---------------------------------------------------------------*//* ---------------------------------------------------------STEP 8: permute the right hand side into the new ordering --------------------------------------------------------*/ DenseMtx_permuteRows(mtxB, oldToNewIV) ; if ( msglvl > 1 ) { fprintf(msgFile, "\n\n right hand side matrix in new ordering"); DenseMtx_writeForHumanEye(mtxB, msgFile) ; fflush(msgFile) ; } /*---------------------------------------------------------------*//* -------------------------------STEP 9: solve the linear system ------------------------------*/ mtxX = DenseMtx_new() ; DenseMtx_init(mtxX, type, 0, 0, neqns, nrhs, 1, neqns) ; DenseMtx_zero(mtxX) ; FrontMtx_solve(frontmtx,mtxX,mtxB,mtxmanager,cpus,msglvl,msgFile) ; if ( msglvl > 1 ) { fprintf(msgFile, "\n\n solution matrix in new ordering") ; DenseMtx_writeForHumanEye(mtxX, msgFile) ; fflush(msgFile) ; } /*---------------------------------------------------------------*//* --------------------------------------------------------STEP 10: permute the solution into the original ordering -------------------------------------------------------*/ DenseMtx_permuteRows(mtxX, newToOldIV) ; /* *ipb=DenseMtx_entries(mtxX); */ if ( msglvl > 1 ) { fprintf(msgFile, "\n\n solution matrix in original ordering") ; DenseMtx_writeForHumanEye(mtxX, msgFile) ; fflush(msgFile) ; } /*---------------------------------------------------------------*/ sprintf(buffer, "y.result");


205 inputFile=fopen(buffer, "w"); for ( jrow = 0 ; jrow < nrow ; jrow++ ) { b[jrow]=DenseMtx_entries(mtxX)[jrow];fprintf(inputFile, "%1.5e\n", b[jrow]); }fclose(inputFile); /* -----------free memory ----------*/ FrontMtx_free(frontmtx) ; DenseMtx_free(mtxX) ; DenseMtx_free(mtxB) ; IV_free(newToOldIV) ; IV_free(oldToNewIV) ; InpMtx_free(mtxA) ; ETree_free(frontETree) ; IVL_free(symbfacIVL) ; SubMtxManager_free(mtxmanager) ; Graph_free(graph) ; /*----------------------------------------------------------------* / fclose(msgFile); (void) time(&t2); printf("Time for Serial SPOOLES to solve: %d\n",(int) t2-t1); return; } E.2 Optimized P arallel Code F or this section, the optimized parallel code will be listed. E.2.1 P solver Mak ele Belo w is the Mak ele used to compile p solver .c CC = mpicc OPTLEVEL = -O MPE_INCDIR = /home/apollo/hda8/mpich-1.2.5.2/mpe/include INCLUDE_DIR = -I$(MPE_INCDIR) MPE_CFLAGS = -DMPI_LINUX -DUSE_STDARG -DHAVE_PROTOTYPES CFLAGS = $(OPTLEVEL) -I ../../../../SPOOLES.2.2 -DARCH="Linux" all: p_solver.o p_solver DIR=/home/apollo/hda8/SPOOLES.2.2


MPI_INSTALL_DIR = /home/apollo/hda8/mpich-1.2.5.2 MPI_LIB_PATH = -L$(MPI_INSTALL_DIR)/lib MPI_LIBS = $(MPI_LIB_PATH) -lmpich MPI_INCLUDE_DIR = -I$(MPI_INSTALL_DIR)/include #Uncomment the below two lines so that log file can be created MPE_LIBDIR = /home/apollo/hda8/mpich-1.2.5.2/mpe/lib LOG_LIBS = -L$(MPE_LIBDIR) -llmpe -lmpe LIBS = \ $(DIR)/MPI/src/spoolesMPI.a \ $(DIR)/spooles.a -lm \ -lpthread
p_solver: p_solver.o ${CC} p_solver.o -o $@ ${LIBS} ${LOG_LIBS}

E.2.2 P_solver Source Code

Below is the source code for the optimized parallel solver.

/* This program is free software; you can redistribute it and/or */ /* modify it under the terms of the GNU General Public License as */ /* published by the Free Software Foundation(version 2); */ /* */ /* This program is distributed in the hope that it will be useful */ /* but WITHOUT ANY WARRANTY; without even the implied warranty of */ /* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the */ /* GNU General Public License for more details. */ /* */ /* You should have received a copy of the GNU General Public */ /* License along with this program; if not, write to the Free */ /* Software Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, */ /* USA. */ #include #include #include #include #include #include #include "/home/apollo/hda8/CalculiX/ccx_1.1/src/CalculiX.h" #include "/home/apollo/hda8/SPOOLES.2.2/MPI/spoolesMPI.h" #include "/home/apollo/hda8/SPOOLES.2.2/SPOOLES.h" #include "/home/apollo/hda8/SPOOLES.2.2/timings.h" int main(int argc, char *argv[]) { char buffer[20]; DenseMtx *mtxX, *mtxB, *newB; Chv *rootchv ; ChvManager *chvmanager ;


207 SubMtxManager *mtxmanager, *solvemanager ; FrontMtx *frontmtx ; InpMtx *mtxA, *newA; double cutoff, droptol = 0.0, minops, tau = 100.; double cpus[20] ; double *opcounts ; DV *cumopsDV ; ETree *frontETree ; FILE *inputFile, *msgFile, *densematrix, *inputmatrix ; Graph *graph ; int jcol, jrow, error, firsttag, ncol, lookahead =0, msglvl=0, nedges, nent, neqns, nmycol, nrhs, ient,iroow, nrow, pivotingflag=0, root, seed=7892713, symmetryflag=0, type=1 ; int stats[20] ; int *rowind ; IV *newToOldIV, *oldToNewIV, *ownedColumnsIV, *ownersIV, *vtxmapIV ; IVL *adjIVL, *symbfacIVL ; SolveMap *solvemap ; double value; int myid, nproc; int namelen; char processor_name[MPI_MAX_PROCESSOR_NAME]; double starttime = 0.0, endtime; int maxdomainsize, maxsize, maxzeros; /* Solving the system of equations using Spooles */ /*----------------------------------------------------------------* / /* -----------------------------------------------------------------Find out the identity of this process and the number of processes ----------------------------------------------------------------*/MPI_Init(&argc, &argv); MPI_Comm_rank(MPI_COMM_WORLD, &myid) ; MPI_Comm_size(MPI_COMM_WORLD, &nproc) ; MPI_Get_processor_name(processor_name,&namelen);if(myid==0){printf("Solving the system of equations using SpoolesMPI\n\n"); }fprintf(stdout,"Process %d of %d on %s\n",myid+1,nproc, processor_name); /* Start a timer to determine how long the solve process takes */ starttime=MPI_Wtime();/*----------------------------------------------------------------* /


208 sprintf(buffer, "res.%d", myid) ; if ( (msgFile = fopen(buffer, "w")) == NULL ) { fprintf(stderr, "\n fatal error in spooles.c" "\n unable to open file res\n"); }/* --------------------------------------------STEP 1: Read the entries from the input file and create the InpMtx object -------------------------------------------*//* Read in the input matrix, A */ sprintf(buffer, "matrix.%d.input", myid) ; inputFile = fopen(buffer, "r") ; fscanf(inputFile, "%d %d %d", &neqns, &ncol, &nent) ; nrow = neqns; MPI_Barrier(MPI_COMM_WORLD) ; mtxA = InpMtx_new() ; InpMtx_init(mtxA, INPMTX_BY_ROWS, type, nent, 0) ; for ( ient = 0 ; ient < nent ; ient++ ) { fscanf(inputFile, "%d %d %lf", &iroow, &jcol, &value) ; InpMtx_inputRealEntry(mtxA, iroow, jcol, value) ; } fclose(inputFile) ; /* Change the storage mode to vectors */ InpMtx_sortAndCompress(mtxA);InpMtx_changeStorageMode(mtxA, INPMTX_BY_VECTORS) ; if ( msglvl > 1 ) { fprintf(msgFile, "\n\n input matrix") ; InpMtx_writeForHumanEye(mtxA, msgFile) ; fflush(msgFile) ; } /*----------------------------------------------------------------* / /* ----------------------------------------------------STEP 2: Read the right hand side entries from the input file and create the DenseMtx object for B ---------------------------------------------------*/sprintf(buffer, "rhs.%d.input", myid); inputFile = fopen(buffer, "r") ;


209 fscanf(inputFile, "%d %d", &nrow, &nrhs) ; mtxB = DenseMtx_new() ; DenseMtx_init(mtxB, type, 0, 0, nrow, nrhs, 1, nrow) ; DenseMtx_rowIndices(mtxB, &nrow, &rowind); for ( iroow = 0 ; iroow < nrow ; iroow++ ) { fscanf(inputFile, "%d", rowind + iroow) ; for ( jcol = 0 ; jcol < nrhs ; jcol++ ) { fscanf(inputFile, "%lf", &value) ; DenseMtx_setRealEntry(mtxB, iroow, jcol, value) ; } }fclose(inputFile) ; if ( msglvl > 1 ) { fprintf(msgFile, "\n\n rhs matrix in original ordering") ; DenseMtx_writeForHumanEye(mtxB, msgFile) ; fflush(msgFile) ; } /*----------------------------------------------------------------* / /* -------------------------------------------------------STEP 3 : Find a low-fill ordering (1) Processor 0 creates the Graph object (2) Processor 0 orders the graph using the better of Nested Dissection and Multisection (3) Optimal front matrix paremeters are chosen depending on the number of processors (4) Broadcast ordering to the other processors ------------------------------------------------------*/if (myid==0){ graph = Graph_new() ; adjIVL = InpMtx_fullAdjacency(mtxA) ; nedges = IVL_tsize(adjIVL) ; Graph_init2(graph, 0, neqns, 0, nedges, neqns, nedges, adjIVL, NULL, NULL) ; if ( msglvl > 1 ) { fprintf(msgFile, "\n\n graph of the input matrix") ; Graph_writeForHumanEye(graph, msgFile) ; fflush(msgFile) ; }/* Below choose the optimized values for maxdomainsize, */ /* maxzeros, and maxsize depending on the number of */


210 /* processors. */ if (nproc==2) {maxdomainsize=700; maxzeros=1000; maxsize=96; }else if (nproc==3) {maxdomainsize=900; maxzeros=1000; maxsize=64; }else{maxdomainsize=900; maxzeros=1000; maxsize=80; }/* Perform an ordering with the better of nested dissection and */ /* multi-section. */ frontETree = orderViaBestOfNDandMS(graph,maxdomainsize,maxzeros, maxsize, seed,msglvl,msgFile) ; Graph_free(graph) ; }else{}/* The ordering is now sent to all processors with MPI_Bcast. */ frontETree = ETree_MPI_Bcast(frontETree, root, msglvl, msgFile, MPI_COMM_WORLD) ; /*----------------------------------------------------------------* / /* -------------------------------------------------------STEP 4: Get the permutations, permute the front tree, permute the matrix and right hand side. ------------------------------------------------------*//* Very similar to the serial code */ oldToNewIV = ETree_oldToNewVtxPerm(frontETree) ; newToOldIV = ETree_newToOldVtxPerm(frontETree) ; ETree_permuteVertices(frontETree, oldToNewIV) ; InpMtx_permute(mtxA, IV_entries(oldToNewIV), IV_entries(oldToNewIV)) ; InpMtx_mapToUpperTriangle(mtxA) ; InpMtx_changeCoordType(mtxA, INPMTX_BY_CHEVRONS) ; InpMtx_changeStorageMode(mtxA, INPMTX_BY_VECTORS) ; DenseMtx_permuteRows(mtxB, oldToNewIV) ;


211 /*---------------------------------------------------------------*//* -------------------------------------------STEP 5: Generate the owners map IV object and the map from vertices to owners ------------------------------------------*//* This is all new from the serial code: */ /* Obtains map from fronts to processors. Also a map */ /* from vertices to processors is created that enables */ /* the matrix A and right hand side B to be distributed*/ /* as necessary. */ cutoff = 1./(2*nproc) ; cumopsDV = DV_new() ; DV_init(cumopsDV, nproc, NULL) ; ownersIV = ETree_ddMap(frontETree, type, symmetryflag, cumopsDV, cutoff) ; DV_free(cumopsDV) ; vtxmapIV = IV_new() ; IV_init(vtxmapIV, neqns, NULL) ; IVgather(neqns, IV_entries(vtxmapIV), IV_entries(ownersIV), ETree_vtxToFront(frontETree)) ; if ( msglvl > 1 ) { fprintf(msgFile, "\n\n map from fronts to owning processes") ; IV_writeForHumanEye(ownersIV, msgFile) ; fprintf(msgFile, "\n\n map from vertices to owning processes") ; IV_writeForHumanEye(vtxmapIV, msgFile) ; fflush(msgFile) ; }/*---------------------------------------------------------------*//* ---------------------------------------------------STEP 6: Redistribute the matrix and right hand side --------------------------------------------------*//* Now the entries of A and B are assembled and distributed */ firsttag = 0 ; newA = InpMtx_MPI_split(mtxA, vtxmapIV, stats, msglvl, msgFile, firsttag, MPI_COMM_WORLD) ; firsttag++ ; InpMtx_free(mtxA) ; mtxA = newA ; InpMtx_changeStorageMode(mtxA, INPMTX_BY_VECTORS) ; if ( msglvl > 1 ) { fprintf(msgFile, "\n\n split InpMtx") ;


212 InpMtx_writeForHumanEye(mtxA, msgFile) ; fflush(msgFile) ; }newB = DenseMtx_MPI_splitByRows(mtxB, vtxmapIV, stats, msglvl, msgFile, firsttag, MPI_COMM_WORLD) ; DenseMtx_free(mtxB) ; mtxB = newB ; firsttag += nproc ; if ( msglvl > 1 ) { fprintf(msgFile, "\n\n split DenseMtx B") ; DenseMtx_writeForHumanEye(mtxB, msgFile) ; fflush(msgFile) ; }/*---------------------------------------------------------------*//* ------------------------------------------STEP 7: Compute the symbolic factorization -----------------------------------------*/symbfacIVL = SymbFac_MPI_initFromInpMtx(frontETree, ownersIV, mtxA, stats, msglvl, msgFile, firsttag, MPI_COMM_WORLD) ; firsttag += frontETree->nfront ; if ( msglvl > 1 ) { fprintf(msgFile, "\n\n local symbolic factorization") ; IVL_writeForHumanEye(symbfacIVL, msgFile) ; fflush(msgFile) ; }/*---------------------------------------------------------------*//* -----------------------------------STEP 8: initialize the front matrix ----------------------------------*//* Very similar to the serial code. The arguments, myid and */ /* ownersIV tell the front matrix object to initialize only those */ /* parts of the factor matrices that it owns */ mtxmanager = SubMtxManager_new() ; SubMtxManager_init(mtxmanager, NO_LOCK, 0) ; frontmtx = FrontMtx_new() ; FrontMtx_init(frontmtx, frontETree, symbfacIVL, type, symmetryflag, FRONTMTX_DENSE_FRONTS, pivotingflag, NO_LOCK, myid, ownersIV, mtxmanager, msglvl, msgFile) ; /*---------------------------------------------------------------*//* ---------------------------------STEP 9: Compute the factorization ---------------------------------


213 *//* Similar to the serial code */ chvmanager = ChvManager_new() ; /* For the serial code, the 0 is replaced by a 1 */ ChvManager_init(chvmanager, NO_LOCK, 0) ; rootchv = FrontMtx_MPI_factorInpMtx(frontmtx, mtxA, tau, droptol, chvmanager, ownersIV, lookahead, &error, cpus, stats, msglvl, msgFile, firsttag, MPI_COMM_WORLD) ; ChvManager_free(chvmanager) ; firsttag += 3*frontETree->nfront + 2 ; if ( msglvl > 1 ) { fprintf(msgFile, "\n\n numeric factorization") ; FrontMtx_writeForHumanEye(frontmtx, msgFile) ; fflush(msgFile) ; }if ( error >= 0 ) { fprintf(stderr, "\n proc %d : factorization error at front %d", myid, error); MPI_Finalize() ; exit(-1) ; }/*---------------------------------------------------------------*//* ------------------------------------------------STEP 10: Post-process the factorization and split the factor matrices into submatrices -----------------------------------------------*//* Very similar to the serial code */ FrontMtx_MPI_postProcess(frontmtx, ownersIV, stats, msglvl, msgFile, firsttag, MPI_COMM_WORLD) ; firsttag += 5*nproc ; if ( msglvl > 1 ) { fprintf(msgFile, "\n\n numeric factorization after postprocessing"); FrontMtx_writeForHumanEye(frontmtx, msgFile) ; fflush(msgFile) ; }/*---------------------------------------------------------------*//* -----------------------------------STEP 11: Create the solve map object ----------------------------------*/solvemap = SolveMap_new() ; SolveMap_ddMap(solvemap, frontmtx->symmetryflag, FrontMtx_upperBlockIVL(frontmtx),FrontMtx_lowerBlockIVL(frontmtx),nproc, ownersIV, FrontMtx_frontTree(frontmtx),


214 seed, msglvl, msgFile); if ( msglvl > 1 ) { SolveMap_writeForHumanEye(solvemap, msgFile) ; fflush(msgFile) ; }/*---------------------------------------------------------------*//* ----------------------------------------------------STEP 12: Redistribute the submatrices of the factors ---------------------------------------------------*//* Now submatrices that a processor owns are local to that processor */ FrontMtx_MPI_split(frontmtx, solvemap, stats, msglvl, msgFile, firsttag, MPI_COMM_WORLD) ; if ( msglvl > 1 ) { fprintf(msgFile, "\n\n numeric factorization after split") ; FrontMtx_writeForHumanEye(frontmtx, msgFile) ; fflush(msgFile) ; }/*---------------------------------------------------------------*//* ------------------------------------------STEP 13: Create a solution DenseMtx object -----------------------------------------*/ownedColumnsIV = FrontMtx_ownedColumnsIV(frontmtx, myid, ownersIV, msglvl, msgFile) ; nmycol = IV_size(ownedColumnsIV) ; mtxX = DenseMtx_new() ; if ( nmycol > 0 ) { DenseMtx_init(mtxX, type, 0, 0, nmycol, nrhs, 1, nmycol) ; DenseMtx_rowIndices(mtxX, &nrow, &rowind) ; IVcopy(nmycol, rowind, IV_entries(ownedColumnsIV)) ; }/*---------------------------------------------------------------*//* --------------------------------STEP 14: Solve the linear system -------------------------------*//* Very similar to the serial code */ solvemanager = SubMtxManager_new() ; SubMtxManager_init(solvemanager, NO_LOCK, 0) ; FrontMtx_MPI_solve(frontmtx, mtxX, mtxB, solvemanager, solvemap, cpus, stats, msglvl, msgFile, firsttag, MPI_COMM_WORLD) ; SubMtxManager_free(solvemanager) ; if ( msglvl > 1 ) {


215 fprintf(msgFile, "\n solution in new ordering") ; DenseMtx_writeForHumanEye(mtxX, msgFile) ; }/*---------------------------------------------------------------*//* --------------------------------------------------------STEP 15: Permute the solution into the original ordering and assemble the solution onto processor 0 -------------------------------------------------------*/DenseMtx_permuteRows(mtxX, newToOldIV) ; if ( msglvl > 1 ) { fprintf(msgFile, "\n\n solution in old ordering") ; DenseMtx_writeForHumanEye(mtxX, msgFile) ; fflush(msgFile) ; }IV_fill(vtxmapIV, 0) ; firsttag++ ; mtxX = DenseMtx_MPI_splitByRows(mtxX, vtxmapIV, stats, msglvl, msgFile, firsttag, MPI_COMM_WORLD) ; /* End the timer */ endtime=MPI_Wtime();/* Determine how long the solve operation took */ fprintf(stdout,"Total time for %s: %f\n",processor_name, endtime-starttime); /* Now gather the solution the processor 0 */ if ( myid == 0) { printf("%d\n", nrow); sprintf(buffer, "x.result"); inputFile=fopen(buffer, "w"); for ( jrow = 0 ; jrow < ncol ; jrow++ ) { fprintf(inputFile, "%1.5e\n", DenseMtx_entries(mtxX)[jrow]); } fclose(inputFile); }/*----------------------------------------------------------------* / /* End the MPI environment */ MPI_Finalize() ; /* Free up memory */ InpMtx_free(mtxA) ; DenseMtx_free(mtxB) ; FrontMtx_free(frontmtx) ; DenseMtx_free(mtxX);


216 IV_free(newToOldIV);IV_free(oldToNewIV);ETree_free(frontETree);IVL_free(symbfacIVL);SubMtxManager_free(mtxmanager);return(0) ; }
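To close this appendix, below is a hedged sketch of how p_solver might be launched once the per-process input files exist. As the fscanf calls above show, each MPI rank N reads matrix.N.input and rhs.N.input from its working directory, and rank 0 writes the assembled solution to x.result. The process count and machines file name below are illustrative assumptions.

# Illustrative launch under MPICH; one rank per workstation listed in the machines file
mpirun -np 4 -machinefile machines ./p_solver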


REFERENCES [1] Message P assing Interf ace F orum. Message passing interf ace forum home page. Online, October 2003. http://www.mpiforum.org 2 12 136 [2] Mathematics and Computer Science Di vision Ar gonne National Laboratory Mpich a portable implementation of mpi. Online, October 2003. http://wwwunix.mcs.anl.gov/mpi/mpich/ 2 [3] LAM/MPI P arallel Computing. Lam/mpi parallel computing. Online, April 2005. http://www.lammpi.org 2 [4] Thomas L. Sterling, John Salmon, Donald J. Beck er and Daniel F Sa v arese. How to b uild a Beowulf: a guide to the implementation and application of PC cluster s MIT Press, Cambridge, MA, 1999. 2 [5] Beo wulf.or g. Beo wulf.or g: The beo wulf cluster site. Online, October 2003. http://www.beowulf.org 2 [6] Robert G. Bro wn. Engineering a beo wulf-style compute cluster Online, January 2005. http://www.phy.duke.edu/rgb/Beowulf/beowulf_book.php 3 60 [7] Thomas E. Anderson, Da vid E. Culler and Da vid A. P atterson. Case for netw orks of w orkstations. IEEE Micr o 15(1):54—64, February 1995. 3 [8] Fedora Project. Fedora project, sponsored by red hat. Online, April 2005. http://fedora.redhat.com 7 [9] The Linux K ernel Archi v es. The linux k ernel archi v es. Online, May 2005. http://www.kernel.org 7 [10] Redhat. Red hat enterprise linux documentation. Online, May 2005. http://www.redhat.com/docs/manuals/enterprise/ 8 [11] The Linux Documentation Project. Linux ip masquerade ho wto. Online, April 2005. http://en.tldp.org/HOWTO/IPMasqueradeHOWTO/index.html/ 8 [12] Netlter The netlger/iptables project. Online, April 2005. http://www.netfilter.org/ 8 9 217

PAGE 232

218 [13] Y Rekhter B. Mosk o witz, D. Karrenber g, and G. J. de Groot. Address allocation for pri v ate internets. Online, April 2005. ftp://ftp.rfceditor.org/innotes/rfc1918.txt 12 [14] Message P assing Interf ace F orum. MPI: A Messa g e-P assing Interface Standar d Message P assing Interf ace F orum, June 1995. http://mpiforum.org/docs/mpi11html/mpireport.html 12 13 144 [15] W illiam Gropp, Ewing Lusk, Nathan Doss, and Anthon y Skjellum. High-performance, portable implementation of the mpi standard. P ar allel Computing 22(6):789—829, 1996. 14 [16] W illiam Saphir A surv e y of mpi implementations. T echnical Report 1, The National HPCC Softw are Exchange, Berk ele y CA, No v ember 1997. 14 [17] The Linux Documentation Project. The linux documentation project. Online, April 2005. http://www.tldp.org/ 14 15 [18] W illiam Gropp and Ewing Lusk. Installation guide to mpich, a portable implementation of mpi. Online, September 2004. wwwunix.mcs.anl.gov/mpi/mpich/docs/install/paper.htm 16 [19] Scalable Computing Laboratory Netpipe. Online, June 2004. http://www.scl.ameslab.gov/netpipe/ 22 [20] W illiam Gropp, Ewing Lusk, and Thomas Sterling. Beowulf Cluster Computing with Linux The MIT Press, Cambridge, MA, second edition, 2003. 22 [21] Myricom. Myricom home page. Online, March 2005. http://www.myri.com/ 23 168 [22] J. Piernas, A. Flores, and Jose M. Garcia. Analyzing the performance of MPI in a cluster of w orkstations based on f ast ethernet. In Pr oceedings of the 4th Eur opean PVM/MPI User s' Gr oup Meeting on Recent Advances in P ar allel V irtual Mac hine and Messa g e P assing Interface v olume 1332, pages 17—24, Crack o w Poland, No v ember 1997. Springer 26 31 [23] Q. Snell, A. Mikler and J. Gustafson. Netpipe: A netw ork protocol independent performace e v aluator Online, June 2004. www.scl.ameslab.gov/netpipe/ 28 30 [24] Uni v ersity of T ennessee Computer Science Department. Hpl a portable implementation of the high-performance linpack benchmark for distrib uted-memory computers. Online, January 2004. http://www.netlib.org/benchmark/hpl/ 31 37 40 41 42 50

PAGE 233

219 [25] R. Clint Whale y and Jack Dong arra. Automatically tuned linear algebra softw are. T echnical Report UT -CS-97-366, Uni v ersity of T ennessee, December 1997. 33 [26] Kazushige Goto. High-performance blas. Online, October 2004. http://www.cs.utexas.edu/users/flame/goto 34 [27] J. Dong arra, J. Du Croz, S. Hammarling, and I. Duf f. A set of le v el 3 basic linear algebra subprograms. A CM T r ansactions on Mathematical Softwar e 16(1):1—17, March 1990. 50 [28] J. Dong arra, R. Geijn, and D. W alk er Lapack w orking note 43: A look at scalable dense linear algebra libraries. T echnical report, Uni v ersity of T ennessee, Knoxville, TN, 1992. 50 [29] C. La wson, R. Hanson, D. Kincaid, and F Krogh. Basic linear algebra subprograms for fortran usage. A CM T r ansactions on Mathematical Softwar e 5(3):308—323, September 1979. 50 [30] Guido Dhondt and Klaus W ittig. Calculix: A three-dimensional structural nite element program. Online, August 2003. http://www.calculix.de/ 63 95 [31] Free Softw are F oundation. The gnu operating system. Online, April 2005. http://www.gnu.org/ 63 [32] Rich Lehoucq, Kristi Maschhof f, Dann y Sorensen, and Chao Y ang. Arpack arnoldi package. Online, December 2004. http://www.caam.rice.edu/software/ARPACK 65 [33] Boeing Phantom W orks. Spooles 2.2: Sparse object oriented linear equations solv er Online, October 2003. http://www.netlib.org/linalg/spooles/spooles.2.2.html 65 153 [34] Cle v e Ashcraft and Roger Grimes. Solving linear systems using spooles 2.2. Online, March 2003. http://www.netlib.org/linalg/spooles/spooles.2.2.html 122 125 142 [35] Cle v e Ashcraft, Roger Grimes, Daniel Pierce, and Da vid W ah. SPOOLES: An Object-Oriented Spar se Matrix Libr ary Boeing Phantom W orks, 2002. 122 125 126 [36] W ikipedia. Object-oriented programming. Online, April 2005. http://en.wikipedia.org/wiki/Objectoriented 122


[37] Cleve Ashcraft, Daniel Pierce, David Wah, and Jason Wu. The Reference Manual for SPOOLES, Release 2.2: An Object Oriented Software Library for Solving Sparse Linear Systems of Equations. Boeing Phantom Works, 1999.
[38] William H. Press, Saul A. Teukolsky, William T. Vetterling, and Brian P. Flannery. Numerical Recipes in Fortran 77: The Art of Scientific Computing. Cambridge University Press, New York, NY, second edition, 2001.
[39] Cleve Ashcraft, Roger Grimes, and J. G. Lewis. Accurate symmetric indefinite linear equation solvers. Technical report, Boeing Computer Services, 1995.
[40] Cleve Ashcraft. Ordering sparse matrices and transforming front trees. Technical report, Boeing Shared Services Group, 1999.
[41] Alan George and Joseph W. Liu. Computer Solution of Large Sparse Positive Definite Systems. Prentice-Hall, Inc., Englewood Cliffs, NJ, 1981.
[42] Timothy Walsh and Leszek Demkowicz. A parallel multifrontal solver for hp-adaptive finite elements. Technical report, The Institute for Computational Engineering and Sciences, January 1999.
[43] Joseph W. H. Liu. Modification of the minimum-degree algorithm by multiple elimination. ACM Transactions on Mathematical Software, 11(2):141-153, 1985.
[44] Alan George and Joseph W. H. Liu. The evolution of the minimum degree ordering algorithm. SIAM Review, 31(1):1-19, 1989.
[45] B. Kumar, P. Sadayappan, and C.-H. Huang. On sparse matrix reordering for parallel factorization. In ICS '94: Proceedings of the 8th International Conference on Supercomputing, pages 431-438, New York, NY, 1994. ACM Press.
[46] Alan George. Nested dissection of a regular finite element mesh. SIAM Journal of Numerical Analysis, 10:345-363, 1973.
[47] Cleve Ashcraft and Joseph W. H. Liu. Robust ordering of sparse matrices using multisection. SIAM Journal on Matrix Analysis and Applications, 19(3):816-832, 1997.
[48] George Karypis. METIS: Family of multilevel partitioning algorithms. Online, October 2004. http://www-users.cs.umn.edu/karypis/metis/
[49] I. S. Duff, Roger G. Grimes, and John G. Lewis. Sparse matrix test problems. ACM Transactions on Mathematical Software, 15(1):1-14, 1989.


[50] Baris Guler, Munira Hussain, Tau Leng, and Victor Mashayekhi. The advantages of diskless HPC clusters using NAS. Online, April 2005. http://www.dell.com


BIOGRAPHICAL SKETCH

Paul C. Johnson was born on October 19, 1980, and grew up in Oppenheim, NY. He began his undergraduate education in the fall of 1998 and received his Bachelor of Science in Mechanical Engineering from Manhattan College in May of 2002. Paul continued his education by pursuing a Master of Science degree in Mechanical Engineering at the University of Florida. He worked in the Computational Laboratory for Electromagnetics and Solid Mechanics under the advisement of Dr. Loc Vu-Quoc. His research interests include adapting and optimizing a parallel system of equations solver to a cluster of workstations.


Permanent Link: http://ufdc.ufl.edu/UFE0012121/00001

Material Information

Title: Parallel Computational Mechanics with a Cluster of Workstations
Physical Description: Mixed Material
Copyright Date: 2008

Record Information

Source Institution: University of Florida
Holding Location: University of Florida
Rights Management: All rights reserved by the source institution and holding location.
System ID: UFE0012121:00001














PARALLEL COMPUTATIONAL MECHANICS WITH A CLUSTER OF
WORKSTATIONS












By

PAUL C. JOHNSON




















A THESIS PRESENTED TO THE GRADUATE SCHOOL
OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT
OF THE REQUIREMENTS FOR THE DEGREE OF
MASTER OF SCIENCE

UNIVERSITY OF FLORIDA


2005































Copyright 2005

by

Paul C. Johnson














ACKNOWLEDGEMENTS

I wish to express my sincere gratitude to Professor Loc Vu-Quoc for his support and

guidance throughout my master's study. His steady and thorough approach to teaching

has inspired me to accept any challenge with determination. I also would like to express

my gratitude to the supervisory committee members: Professors Alan D. George and

Ashok V. Kumar.

Many thanks go to my friends who have always been there for me whenever I

needed help or friendship.

Finally, I would like to thank my parents Charles and Helen Johnson, grandparents

Wendell and Giselle Cernansky, and the Collins family for all of the support that they

have given me over the years. I sincerely appreciate all that they have sacrificed and can

not imagine how I would have proceeded without their love, support, and encouragement.















TABLE OF CONTENTS

ACKNOWLEDGEMENTS

ABSTRACT

1 PARALLEL COMPUTING
    1.1 Types of Parallel Processing
        1.1.1 Clusters
        1.1.2 Beowulf Cluster
        1.1.3 Network of Workstations

2 NETWORK SETUP
    2.1 Network and Computer Hardware
    2.2 Network Configuration
        2.2.1 Configuration Files
        2.2.2 Internet Protocol Forwarding and Masquerading
    2.3 MPI-Message Passing Interface
        2.3.1 Goals
        2.3.2 MPICH
        2.3.3 Installation
        2.3.4 Enable SSH
        2.3.5 Edit Machines.LINUX
        2.3.6 Test Examples
        2.3.7 Conclusions

3 BENCHMARKING
    3.1 Performance Metrics
    3.2 Network Analysis
        3.2.1 NetPIPE
        3.2.2 Test Setup
        3.2.3 Results
    3.3 High Performance Linpack-Single Node
        3.3.1 Installation
        3.3.2 ATLAS Routines
        3.3.3 Goto BLAS Libraries
        3.3.4 Using either Library
        3.3.5 Benchmarking
        3.3.6 Main Algorithm
        3.3.7 HPL.dat Options
        3.3.8 Test Setup
        3.3.9 Results
        3.3.10 Goto's BLAS Routines
    3.4 HPL-Multiple Node Tests
        3.4.1 Two Processor Tests
        3.4.2 Process Grid
        3.4.3 Three Processor Tests
        3.4.4 Four Processor Tests
        3.4.5 Conclusions

4 CALCULIX
    4.1 Installation of CalculiX GraphiX
    4.2 Installation of CalculiX CrunchiX
        4.2.1 ARPACK Installation
        4.2.2 SPOOLES Installation
        4.2.3 Compile CalculiX CrunchiX
    4.3 Geometric Capabilities
    4.4 Pre-processing
        4.4.1 Points
        4.4.2 Lines
        4.4.3 Surfaces
        4.4.4 Bodies
    4.5 Finite-Element Mesh Creation

5 CREATING GEOMETRY WITH CALCULIX
    5.1 CalculiX Geometry Generation
        5.1.1 Creating Points
        5.1.2 Creating Lines
        5.1.3 Creating Surfaces
        5.1.4 Creating Bodies
        5.1.5 Creating the Cylinder
        5.1.6 Creating the Parallelepiped
        5.1.7 Creating Horse-shoe Section
        5.1.8 Creating the Slanted Section
    5.2 Creating a Solid Mesh
        5.2.1 Changing Element Divisions
        5.2.2 Delete and Merge Nodes
        5.2.3 Apply Boundary Conditions
        5.2.4 Run Analysis

6 OPEN SOURCE SOLVERS
    6.1 SPOOLES
        6.1.1 Objects in SPOOLES
        6.1.2 Steps to Solve Equations
        6.1.3 Communicate
        6.1.4 Reorder
        6.1.5 Factor
        6.1.6 Solve
    6.2 Code to Solve Equations
    6.3 Serial Code
        6.3.1 Communicate
        6.3.2 Reorder
        6.3.3 Factor
        6.3.4 Communicate B
        6.3.5 Solve
    6.4 Parallel Code
        6.4.1 Communicate
        6.4.2 Reorder
        6.4.3 Factor
        6.4.4 Solve

7 MATRIX ORDERINGS
    7.1 Ordering Optimization
    7.2 Minimum Degree Ordering
    7.3 Nested Dissection
    7.4 Multi-section

8 OPTIMIZING SPOOLES FOR A COW
    8.1 Installation
    8.2 Optimization
        8.2.1 Multi-Processing Environment MPE
        8.2.2 Reduce Ordering Time
        8.2.3 Optimizing the Front Tree
        8.2.4 Maxdomainsize
        8.2.5 Maxzeros and Maxsize
        8.2.6 Final Tests with Optimized Solver
    8.3 Conclusions
        8.3.1 Recommendations

A CPI SOURCE CODE

B BENCHMARKING RESULTS
    B.1 NetPIPE Results
    B.2 NetPIPE TCP Results
    B.3 High Performance Linpack
        B.3.1 HPL Makefiles
        B.3.2 HPL.dat File
        B.3.3 First Test Results with ATLAS
        B.3.4 HPL.dat for Second Test with ATLAS Libraries
        B.3.5 Second Test Results with ATLAS
        B.3.6 Final Test with ATLAS Libraries
        B.3.7 HPL.dat File for Multi-processor Test
        B.3.8 Goto's Multi-processor Tests
        B.3.9 HPL.dat File for Testing Broadcast Algorithms
        B.3.10 Final Test with Goto's Libraries

C CALCULIX INSTALLATION
    C.1 ARPACK Makefile
    C.2 CalculiX CrunchiX Makefile

D CALCULIX CRUNCHIX INPUT FILE

E SERIAL AND PARALLEL SOLVER SOURCE CODE
    E.1 Serial Code
    E.2 Optimized Parallel Code
        E.2.1 Psolver Makefile
        E.2.2 Psolver Source Code

REFERENCES

BIOGRAPHICAL SKETCH















LIST OF TABLES

3.1 Network hardware comparison
3.2 ATLAS BLAS routine results
3.3 Goto's BLAS routine results
3.4 Goto's BLAS routine results-2 processors
3.5 Goto's BLAS routine results-3 processors
3.6 Goto's BLAS routine results-4 processors
6.1 Comparison of solvers
6.2 Utility objects
6.3 Ordering objects
6.4 Numeric objects
8.1 maxdomainsize effect on solve time (seconds)
8.2 Processor solve time (seconds)-700 maxdomainsize
8.3 Processor solve time (seconds)-900 maxdomainsize
8.4 Results with optimized values
8.5 Results for large test















LIST OF FIGURES

1.1 Beowulf layout
2.1 Network hardware configuration
2.2 Original network configuration
2.3 cpi results
3.1 Message size vs. throughput
3.2 MPI vs. TCP throughput comparison
3.3 MPI vs. TCP saturation comparison
3.4 Decrease in effective throughput with MPI
3.5 Throughput vs. time
3.6 Block size effect on performance for 1 node
3.7 2D block-cyclic layout
3.8 Block size effect on performance for 2 nodes
3.9 Block size effect on performance for 3 nodes
3.10 Block size effect on performance for 4 nodes
3.11 Decrease in maximum performance
4.1 Opening screen
4.2 p1 with label
4.3 Spline
4.4 Surface
4.5 Body created by sweeping
5.1 Final part
5.2 Creating points
5.3 Selection box
5.4 Creating lines
5.5 Creating lines
5.6 Creating surfaces
5.7 Creating surface A001
5.8 Creating surface A002
5.9 Creating bodies
5.10 Plotting bodies
5.11 Creating the handle
5.12 Creating the cylinder points
5.13 Creating the cylinder lines
5.14 Creating the cylinder surfaces
5.15 Cylinder surfaces
5.16 Cylinder surfaces
5.17 Creating points for parallelepiped
5.18 Creating lines for parallelepiped
5.19 Creating lines for horse-shoe section
5.20 Surfaces
5.21 Creating body for horse-shoe section
5.22 Creating lines for the slanted section
5.23 Final part
5.24 Unaligned meshes
5.25 Changing line divisions
5.26 Pick multiple division numbers
5.27 Change all numbers to 9
5.28 Select line away from label
5.29 Change cylinder divisions
5.30 Change parallelepiped divisions
5.31 Change horse-shoe section divisions
5.32 Change horse-shoe section divisions
5.33 Improved element spacing
5.34 First nodal set
5.35 Selected nodes
5.36 Select more nodes
5.37 Selected wrong nodes
5.38 Correct set of nodes
5.39 Selected extra nodes
5.40 Select nodes to delete
5.41 Final node set
5.42 Select nodes
5.43 Plot nodes
5.44 Select nodes
5.45 Select nodes
5.46 Final node set
5.47 Select nodes from the side
5.48 Good node set
5.49 Final node set
5.50 Determine node distance
5.51 Create selection box
5.52 Final node set
5.53 Side view of handle with nodes plotted
5.54 Select nodes on handle inner surface
5.55 Add nodes to set load
5.56 von Mises stress for the part
6.1 Three steps to numeric factorization
6.2 Arrowhead matrix
7.1 Original A
7.2 Lower matrix
7.3 Upper matrix
7.4 Steps to elimination graph
7.5 Minimum degree algorithm
7.6 Modified minimum degree algorithm
8.1 von Mises stress of cantilever
8.2 p_solver MPI communication-2 processors
8.3 p_solver MPI communication zoomed-2 processors
8.4 p_solver MPI communication-4 processors
8.5 MPI communication for cpi
8.6 First optimization
8.7 Final optimization results for two processors
8.8 Final optimization results for four processors
8.9 Diskless cluster















Abstract of Thesis Presented to the Graduate School
of the University of Florida in Partial Fulfillment of the
Requirements for the Degree of Master of Science


PARALLEL COMPUTATIONAL MECHANICS WITH A CLUSTER OF
WORKSTATIONS


By

Paul Johnson

December 2005

Chairman: Loc Vu-Quoc
Major Department: Mechanical and Aerospace Engineering

Presented are the steps to creating, benchmarking, and adapting an optimized paral-

lel system of equations solver provided by SPOOLES to a Cluster of Workstations (CoW)

constructed from Commodity Off The Shelf (COTS) components. The parallel system of

equations solver is used in conjunction with the pre- and post-processing capabilities of

CalculiX, a freely available three-dimensional structural finite-element program. In the

first part, parallel computing is introduced with the different architectures explained and

compared. Chapter 2 explains the process of building a Cluster of Workstations. Ex-

plained is the setup of computer and network hardware and the underlying software that

allows interprocessor communication. Next, a thorough benchmarking of the cluster with

several applications that report network latency and bandwidth and overall system perfor-

mance is explained. In the last chapter, the parallel solver is optimized for our Cluster of

Workstations with recommendations to further improve performance.















CHAPTER 1
PARALLEL COMPUTING

Software has traditionally been for serial computation, performed by a single Cen-

tral Processing Unit (CPU). With computational requirements always increasing with the

growing complexity of software, harnessing more computational power is always de-

manded. One way is to increase the computational power of a single computer, but this

method can become very expensive and has its limits; alternatively, a supercomputer with vector

processors can be used, but that can also be very expensive. Parallel computing is an-

other method, which utilizes the computational resources of multiple processors

simultaneously by dividing the problem amongst the processors.

Parallel computing has a wide range of uses that may not be widely known but

affects a large number of people. Some uses include predicting weather patterns, deter-

mining airplane schedules, unraveling DNA, and making automobiles safer. By using

parallel computing, larger problems can be solved and the time to solve these problems

decreases.

1.1 Types of Parallel Processing

There are several types of parallel architectures: Symmetric MultiProcessing (SMP),

Massively Parallel Processing (MPP), and clusters. Symmetric multiprocessing systems

contain processors that share the same memory and memory bus. These systems are

limited to their number of CPUs because as the number of CPUs increases, so does the

requirement of having a very high speed bus to efficiently handle the data. Massively

parallel processing systems overcome this limitation by using a message passing system.











The message passing scheme can connect thousands of processors each with their own

memory by using a high speed, low latency network. Often the message passing systems

are proprietary but the MPI [1] standard can also be used.

1.1.1 Clusters

Clusters are distributed memory systems built from Commodity Off The Shelf

(COTS) components connected by a high speed network. Unlike MPP systems, how-

ever, clusters largely do not use a proprietary message passing system. They often use

one of the many MPI [1] standard implementations such as MPICH [2] and LAM/MPI

[3]. Clusters offer high availability, scalability, and the benefit of building a system with

supercomputer power at a fraction of the cost [4]. By using commodity computer systems

and network equipment along with the free Linux operating system, clusters can be built

by large corporations or by an enthusiast in their basement. They can be built from prac-

tically any computer, from an Intel 486 based system to a high end Itanium workstation.

Another benefit of using a cluster is that the user is not tied to a specific vendor or its

offerings. The cluster builder can customize the cluster to their specific problem using

hardware and software that presents the most benefit or what they are most familiar with.

1.1.2 Beowulf Cluster

A Beowulf cluster is a cluster of computers that is dedicated along with the network

only to parallel computing and nothing else [5]. The Beowulf concept began in 1993 with

Donald Becker and Thomas Sterling outlining a commodity component based cluster that

would be cost effective and an alternative to expensive supercomputers. In 1994, while

working at the Center of Excellence in Space Data and Information Sciences (CESDIS), the

Beowulf Project was started. The first Beowulf cluster was composed of sixteen Intel

DX4 processors connected by channel bonded Ethernet [5]. The project was an instant

success and led to further research in the possibilities of creating a high performance

system based on commodity products.











For a Beowulf cluster there are compute nodes and a master node which presides

over the compute nodes. The compute nodes of a Beowulf cluster may not even have

a monitor, keyboard, mouse, or video card. The compute nodes are all COTS comput-

ers, generally identical, that run open source software and a variant of the Linux or Unix

operating system [6]. Linux is a robust, multitasking derivative of the Unix operating

system that allows users to view the underlying source code, modify it to their needs if

necessary, and also escape some of the vendor lock-in issues of some proprietary oper-

ating systems. Some benefits of using Linux are that it is very customizable, runs under

multiple platforms, and it can be obtained from numerous websites for free.

For Beowulf clusters there is a master node that often has a monitor and keyboard

and also has a network connection to the outside world and another network card for

connecting to the cluster. The master node performs such activities as data backup, data

and workload distribution, gathering statistics on the nodes performance or state, and

allowing users to submit a problem to the cluster. Figure 1.1 is a sample configuration of

a Beowulf cluster.

1.1.3 Network of Workstations

Network of Workstations (NoW), is another cluster configuration that strives to

harness the power of underutilized workstations. This type of cluster is also similar to

a Cluster of Workstations (CoW) and Pile of PCs (PoPs) [7]. The workstations can be

located throughout a building or office and are connected by a high speed switched net-

work. This type of cluster is not a Beowulf cluster because the compute nodes are also

used for other activities, not just computation. A NoW cluster has the advantage of using

an existing high-speed LAN and with workstations always being upgraded, the technol-

ogy deployed in a NoW will stay current and not suffer the technology lag time as often

seen with traditional MPP machines [7].

The cluster that we use for our research is considered a Cluster of Workstations.

This type of cluster can be described as being in between a Beowulf cluster and a Network











Figure 1.1. Beowulf layout



of Workstations. Workstations are used for computation and other activities as with NoWs

but are also more isolated from the campus network as with a Beowulf cluster.















CHAPTER 2
NETWORK SETUP

In this chapter different aspects of setting up a high performance computational

network will be discussed. The steps taken to install the software so that the computers

can communicate with each other, how the hardware is configured, how the network is

secured, and also how the internal computers can still access the World Wide Web will be

explained.

2.1 Network and Computer Hardware

The cluster that was built in our lab is considered a Cluster of Workstations, or

CoW. Other similar clusters are Network of Workstations (NoW), and Pile of PCs (PoPs).

The cluster consists of Commodity Off The Shelf (COTS) components, linked together

by switched Ethernet.

The cluster consists of four nodes, apollo, euclid, hydra3, and hydra4 with apollo

being the master node. They are arranged as shown in Figure 2.1.

Hydra3 and hydra4 each have one 2.0 GHz Pentium 4 processor with 512 KB L2

cache, Streaming SIMD Extensions 2 (SSE2), and operate on a 400 MHz system bus.

Both hydra3 and hydra4 have 40 GB Seagate Barracuda hard drives, operating at 7200

rpm, with 2 MB cache. Apollo and euclid each have one 2.4 GHz Pentium 4 processor

with 512 KB L2 cache, SSE2, and also operate on a 400 MHz system bus. Apollo and

euclid each have a 30 GB Seagate Barracuda drive operating at 7200 rpm and with a 2

MB cache. Each computer in the cluster has 1 GB of PC2100 DDRAM. The computers

are connected by a Netgear FS605 5 port 10/100 switch. As you can probably tell by the

above specs, our budget is a little on the low side.











Figure 2.1. Network hardware configuration



When deciding to build a high performance computational network one aspect to

consider is whether there are sufficient funds to invest in network equipment. A bottleneck

for most clusters is the network. Without a high throughput and low latency network, a

cluster is almost useless for certain applications. Even though the price of networking

equipment is always falling, a network even for a small cluster can be expensive if Gigabit,

Myrinet, or other high performance hardware is used.

2.2 Network Configuration

This section will explain the steps on how to get the computers in the cluster com-

municating with each other. One of the most important parts of a cluster is the communi-











cation backbone on which data is transferred. By properly configuring the network, the

performance of a cluster is maximized and its construction is more justifiable. Each node

in the cluster is running Red Hat's Fedora Core One [8] with a 2.4.22-1.2115.nptl kernel

[9].

2.2.1 Configuration Files

Internet Protocol (IP) is a data-oriented method used for communication over a

network by source and destination hosts. Each host on the end of an IP communication

has an IP address that uniquely identifies it from all other computers. IP sends data

between hosts split into packets and the Transmission Control Protocol (TCP) puts the

packets back together.

Initially the computers in the lab were set up as shown in Figure 2.2.

Figure 2.2. Original network configuration











The computers could be set up as a cluster in this configuration but this presents

the problem of having the traffic between the four computers first go out to the campus

servers then back into our lab. Obviously the data traveling from the lab computers to the

campus servers adds a time and bandwidth penalty. All our parallel jobs would be greatly

influenced by the traffic of the University's network. To solve this problem, an internal

network was set up as illustrated in Figure 2.1.

The internal network is set up so that traffic for euclid, hydra3, and hydra4 goes

through apollo to reach the Internet. Apollo has two network cards, one connected to the

outside world and another that connects to the internal network. Since euclid, hydra3, and

hydra4 are all workstations used by students in the lab, they need to be allowed access to

the outside world. This is accomplished through IP forwarding and masquerading rules

within the Linux firewall, iptables.

IP masquerading allows computers with no known IP addresses outside their net-

work so that they can communicate with computers with known IP addresses. IP for-

warding allows incoming packets to be sent to another host. Masquerading allows euclid,

hydra4, and hydra3 access to the World Wide Web with the packets going through and

looking like they came from apollo and forwarding allows packets destined to one of

these computers be routed through apollo to their correct destination. Excellent tutorials

on configuring iptables, IP forwarding, and masquerading can be found in the Red Hat online documentation [10] and The Linux Documentation Project [11]. To enable forwarding and masquerading, there are several files that need to be edited. They are /etc/hosts, /etc/hosts.allow, and /etc/sysconfig/iptables. The hosts file maps IPs to hostnames before DNS is called, /etc/hosts.allow specifies which hosts are allowed to connect and also what services they can run, and iptables enables packet filtering, Network Address Translation (NAT), and other packet mangling [12].


The format for /etc/hosts is an IP address, followed by the host's name with domain











information, and an alias for the host. For our cluster, each computer has all the other

computers in the cluster listed and also several computers on the University's network.

For example, a partial hosts file for apollo is:

192.168.0.3 euclid.xxx.ufl.edu euclid

192.168.0.5 hydra4.xxx.ufl.edu hydra4

192.168.0.6 hydra3.xxx.ufl.edu hydra3

The file /etc/hosts.allow specifies which hosts are allowed to connect and what services they are allowed to use, e.g., sshd and sendmail. This is the first checkpoint for all incoming network traffic. If a computer that is trying to connect is not listed in hosts.allow, it will be rejected. Each line has the format of the name of the daemon access will be granted to, followed by the host that is allowed access to that daemon, and then ALLOW.

For example, a partial hosts.allow file for apollo is:

ALL: 192.168.0.3: ALLOW

ALL: 192.168.0.5: ALLOW

ALL: 192.168.0.6: ALLOW

This will allow euclid, hydra4, and hydra3 access to all services on apollo.

2.2.2 Internet Protocol Forwarding and Masquerading

When information is sent over a network, it travels from its origin to its destination

in packets. The beginning of the packet, header, specifies its destination, where it came

from, and other administrative details [12]. Using this information, iptables can, using

specified rules, filter the traffic, dropping/accepting packets according to these rules, and

redirect traffic to other computers. Rules are grouped into chains, and chains are grouped

into tables. By iptables, I am also referring to the underlying netfilter framework. The

netfilter framework is a set of hooks within the kernel that inspects packets while iptables

configures the netfilter rules.
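
For example, when hydra3 (192.168.0.6) opens a connection to a web server, the masquerading rule rewrites the packet's source address to apollo's public address before the packet leaves the lab; when the reply comes back, netfilter translates the destination back to 192.168.0.6 and forwards the packet onto the internal network. The internal computers therefore never need publicly routable addresses of their own.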











Because all the workstations in the lab require access to the Internet, iptables will

be used to forward packets to specified hosts and also allow the computers to have a

private IP address that is masqueraded to look like it has a public IP address. The goal

of this section is not to explain in detail all the rules specified in our iptables file but to

just explain how forwarding and masquerading are set up. For our network we have an

iptables script, named iptables-script, that sets the rules for iptables. The script is located

in /etc/sysconfig/. To run the script, simply type as root:

root@apollo> ./iptables-script

This will make active the rules defined in the script. To ensure that these rules

are loaded each time the system is rebooted, create the file iptables in the directory

/etc/sysconfig with the following command:

root@apollo> /sbin/iptables-save -c > iptables

To set up IP forwarding and masquerading, first open the file /etc/sysconfig/networking/devices/ifcfg-eth1. There are two network cards in apollo, eth0, which is connected to the external network, and eth1, which is connected to the internal network. Add the following lines to ifcfg-eth1:


IPADDR=192.168.0.4

NETWORK=192.168.0.0

NETMASK=255.255.255.0

BROADCAST=192.168.0.255

This will set the IP address of eth1 to 192.168.0.4. Next, open the file iptables-script and add the following lines to the beginning of the file:

# Disable forwarding
echo 0 > /proc/sys/net/ipv4/ip_forward

# load some modules (if needed)

# Flush
iptables -t nat -F POSTROUTING
iptables -t nat -F PREROUTING
iptables -t nat -F OUTPUT
iptables -F

# Set some parameters
LAN_IP_NET='192.168.0.1/24'
LAN_NIC='eth1'
FORWARD_IP='192.168.0.4'

# Set default policies
iptables -P INPUT DROP
iptables -P FORWARD DROP
iptables -P OUTPUT ACCEPT

# Enable masquerading and forwarding
iptables -t nat -A POSTROUTING -s $LAN_IP_NET -j MASQUERADE
iptables -A FORWARD -j ACCEPT -i $LAN_NIC -s $LAN_IP_NET
iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT

# Open SSH of apollo to LAN
iptables -A FORWARD -j ACCEPT -p tcp --dport 22
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 22 -j DNAT \
--to 192.168.0.4:22

# Enable forwarding
echo 1 > /proc/sys/net/ipv4/ip_forward

Following the above lines are the original iptables-script rules. First, IP forwarding is disabled and the currently loaded iptables rules are flushed. Next, some aliases are set that

just make it easier to read and write the rules. After that, the default policies are set. All

incoming and forwarded traffic will be dropped and all packets sent out will be allowed.

With just these rules in place, there would be no incoming traffic allowed. Now, with the

default policies in place, other rules will be appended to them so that certain connections

are allowed. By setting INPUT and FORWARD to ACCEPT, the network would allow

unrestricted access, NOT a good idea! The next three lines enable masquerading of the

internal network via NAT (Network Address Translation) so that all traffic appears to be

coming from a single IP address, apollo's, and forwarding of IP packets to the internal

network. Next, SSH is allowed between the computers on the internal network and apollo. Finally, IP forwarding is enabled in the kernel.











To allow the computers on the internal network to reach the outside world through apollo, the following lines need to be added to the file /etc/sysconfig/networking/devices/ifcfg-eth0 on euclid,

hydra3, and hydra4. The below IP address is for hydra3. The IP address for the internal

computers will be in the IP range for internal networks setup by RFC 1918 [13].

BROADCAST=192.168.0.255

IPADDR=192.168.0.6

NETWORK=192.168.0.0

GATEWAY=192.168.0.4

It is important to set GATEWAY to the IP address of apollo's internal network card

so that the cluster computers' traffic is routed through apollo.

2.3 MPI-Message Passing Interface

A common framework for many parallel machines is that they utilize message pass-

ing so that processes can communicate. The standardization of a message passing sys-

tem began in 1992 at the Workshop on Standards for Message Passing in a Distributed

Memory Environment sponsored by the Center for Research on Parallel Computing [14].

When the Message Passing Interface, or MPI [1], was conceived, it incorporated the at-

tractive features of several other message passing systems, and its development involved

about 60 people and 40 organizations from universities, government laboratories, and

industry [14].

2.3.1 Goals

By creating a message passing standard, portability between computer architectures

and ease-of-use are achieved. With a common base of routines, vendors can efficiently

implement those routines and it is also easier to provide support for hardware. To achieve

the aforementioned benefits, goals were set by the Forum. These goals are: [14]


Design an API, Application Programming Interface, that defines how software











communicates with one another.

Allow efficient communication by avoiding memory-to-memory copying and al-

lowing overlap of computation and communication.

Allow the software using MPI to be used in a heterogeneous environment.

Allow convenient C and Fortran 77 bindings for the interface.

Make the communication interface reliable

Define an interface that is not too different from other libraries and provide exten-

sions for greater flexibility.

Define an interface that can be run on many different hardware platforms such as

distributed memory multiprocessors and networks of workstations

Semantics of the interface should be language independent.

The interface should be designed to allow for thread safety.


The above goals offer great benefit for the programmer of all application sizes. By

keeping the logical structure of MPI language independent, new programmers to MPI

will more readily grasp the concepts while programmers of large applications will benefit

from the similarity to other libraries and also the C and F77 bindings.

There are some aspects that are not included in the standard. These include: [14]


Explicit shared-memory operations.

Program construction tools.

Debugging facilities.

Support for task management.











2.3.2 MPICH

There are many implementations of MPI: MPI/Pro, Chimp MPI, implementations

by hardware vendors IBM, HP, SUN, SGI, Digital, and others, with MPICH and LAM/MPI

being the two main ones. MPICH began in 1992 as an implementation that would track

the MPI standard as it evolved and point out any problems that developers may incur and

was developed at Argonne National Laboratory and Mississippi State University [15].

2.3.3 Installation

MPICH can be downloaded from the MPICH website at

http://www-unix.mcs.anl.gov/mpi/mpich/. The version that is run in our lab is 1.2.5.2.

The installation of MPICH is straightforward. Download the file mpich.tar.gz and uncompress it. The directory in which MPICH is installed on our system is /home/apollo/hda8.

redboots@apollo> gunzip mpich.tar.gz

redboots@apollo> tar -xvf mpich.tar

This creates the directory mpich-1.2.5.2.

The majority of the code for MPICH is device independent and is implemented on

top of an Abstract Device Interface or ADI. This allows MPICH to be more easily ported

to new hardware architectures by hiding most hardware specific details [16]. The ADI

used for networks of workstations is the ch_p4 device, where ch stands for "Chameleon",

a symbol of adaptability and portability, and p4 stands for "portable programs for parallel

processors" [15].

2.3.4 Enable SSH

The default process startup mechanism for the ch_p4 device on networks is remote

shell or rsh. Rsh allows the execution of commands on remote hosts [17]. Rsh works

only if you are allowed to log into a remote machine without a password. Rsh relies on

the connection coming from a known IP address on a privileged port. This creates a huge

security risk because of the ease in which hackers can spoof the connection. A more











secure alternative to rsh is to use the Secure Shell or SSH protocol, which encrypts the

connection and uses digital signatures to positively identify the host at the other end of the

connection [17]. If we were to just create a computational network that was not connected

to the Internet, rsh would be fine. Since all our computers in the lab are connected to the

Internet, using insecure communication could possibly result in the compromise of our

system by hackers.

To set up SSH to work properly with MPICH, several steps need to be done. First

make sure SSH is installed on the computers on the network. Most standard installa-

tions of Linux come with SSH installed. If it is not, SSH can be downloaded from

http://www.openssh.com. Next, an authentication key needs to be created. Go to the

.ssh folder located in your home directory and type ssh-keygen -f identity -t rsa. When

the output asks you for a passphrase, just press Enter twice.

redboots@apollo> ssh-keygen -f identity -t rsa

Generating public/private rsa key pair.

Enter passphrase (empty for no passphrase):

Enter same passphrase again:

Your identification has been saved in identity.

Your public key has been saved in identity.pub.

The key fingerprint is:

43:68:68:30:79:73:e2:03:d9:50:2b:f1:c1:5d:e7:60

redboots@apollo.xxx.ufl.edu

This will create two files, identity and identity.pub. Now place the identity.pub key in the file $HOME/.ssh/authorized_keys, where $HOME is the user's home directory. If the user's home directory is not a shared file system, authorized_keys should be copied into $HOME/.ssh/authorized_keys on each computer.











Also, if the file authorized_keys does not exist, create it.

redboots@apollo> touch authorized_keys

Finally, while in $HOME/.ssh, type:

redboots@apollo> ssh-agent $SHELL

redboots@apollo> ssh-add

The above commands will allow the user to avoid typing in the passphrase each time SSH is invoked [18].

Now enter into the main MPICH directory and type:

redboots@apollo> ./configure -rsh=ssh

This will configure MPICH to use SSH instead of rsh. The above steps of installing

MPICH need to be performed for all the computers that are to be in the cluster.

2.3.5 Edit Machines.LINUX

In order for the master node to know which computers are available for the cluster,

the file machines.LINUX needs to be edited. After MPICH is installed on all the comput-

ers, open the file

/home/apollo/hda8/mpich-1.2.5.2/util/machines/machines.LINUX on the master node, apollo

in our case, and edit it so that each node of the cluster is listed. In order to run an MPI

program, the number of processors to use needs to be specified:

redboots@apollo> mpirun -np 4 program

In the above example, 4 is the number of processors that are used to run program.

When the execution begins, mpirun reads the file machines.LINUX to see what machines

are available in the cluster. If the number of processors specified by the -np flag is more

than what is listed in machines.LINUX, the difference will be made up by some processors

doing more work. To achieve the best performance, it is recommended that the number











of processors listed in machines.LINUX is equal to or more than -np. The format for

machines.LINUX is very straightforward, hostname:number of CPUs. For each line, a

hostname is listed and, if a machine has more than one processor, a colon followed by

the number of processors. For example, if there were two machines in the cluster with

machine1 having one processor and machine2 having four processors, machines.LINUX

would be as follows:

machine1

machine2:4


The machines.LINUX file that apollo uses is:


apollo.xxx.ufl.edu

euclid.xxx.ufl.edu

hydra4.xxx.ufl.edu

hydra3.xxx.ufl.edu

Because all our compute nodes have single processors, a colon followed by a num-

ber is not necessary.

2.3.6 Test Examples

MPICH provides several examples to test whether the network and software

are setup correctly. One example computes Pi and is located in h/l,,e ,apollo/hda8/mpich-

1.2.5.2/examples/basic. The file cpi.c contains the source code. To calculate Pi, cpi solves

the Gregory-Leibniz series over a user specified number of intervals, n. MPI programs

can be really small, using just six functions or they can be very large, using over one hun-

dred functions. The four necessary functions that cpi uses are MPI_Init, MPI_Finalize, MPI_Comm_size, and MPI_Comm_rank, with MPI_Bcast and MPI_Reduce used to send and

reduce the returned data to a single number, respectively. The code for cpi.c is seen in














Appendix A. After the desired number of intervals n is defined in cpi.c, simply com-

pile cpi by typing at a command prompt:


redboots@apollo> make cpi

/home/apollo/hda8/mpich-1.2.5.2/bin/mpicc -c cpi.c

/home/apollo/hda8/mpich-1.2.5.2/bin/mpicc -o cpi cpi.o \

-lm


This will create the executable cpi. To run cpi, while at a command prompt in

/home/apollo/hda8/mpich-1.2.5.2/examples/basic, enter the following:

redboots@apollo> mpirun -np 4 cpi
Process 0 of 4 on apollo.xxx.ufl.edu
pi is approximately 3.1415926535899033, Error is 0.00000000000011
wall clock time = 0.021473
Process 3 of 4 on hydra3.xxx.ufl.edu
Process 2 of 4 on hydra4.xxx.ufl.edu
Process 1 of 4 on euclid.xxx.ufl.edu


Several tests were run while varying the number of intervals and processors. These

results are summarized in Figure 2.3.

2.3.7 Conclusions

Depending on the complexity of your application, MPI can be relatively simple to

integrate. If the problem is easy to split and divide the work evenly among the processors,

like the example that computes Pi, as little as six functions may be used. For all problems,

the user needs to decide how to partition the work, how to send it to the other processors,

decide if the processors have to communicate with one another, and decide what to do

with the solution that each node computes, which all can be done with a few functions if

the problem is not too complicated.
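
To make the structure of such a minimal MPI program concrete, the sketch below distributes the partial sums of the Gregory-Leibniz series across the available processes using only the six functions mentioned above. It is a simplified stand-in for the cpi.c listing in Appendix A, not a copy of it, and the variable names are our own.

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size;
    long i, n = 100000000;                   /* number of series terms */
    double local = 0.0, pi = 0.0;

    MPI_Init(&argc, &argv);                  /* start MPI */
    MPI_Comm_size(MPI_COMM_WORLD, &size);    /* number of processes */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* this process' rank */

    MPI_Bcast(&n, 1, MPI_LONG, 0, MPI_COMM_WORLD);   /* root shares n */

    /* each process sums every size-th term of 4*(-1)^i/(2i+1) */
    for (i = rank; i < n; i += size)
        local += (i % 2 == 0 ? 4.0 : -4.0) / (2.0 * i + 1.0);

    /* combine the partial sums on process 0 */
    MPI_Reduce(&local, &pi, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("pi is approximately %.16f\n", pi);

    MPI_Finalize();
    return 0;
}

Compiled with mpicc and launched with mpirun -np 4, each process works on an independent slice of the series, so the only communication is the initial broadcast and the final reduction.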

When deciding whether to parallelize a program, several things should be consid-

ered and performed. First, really understand how the serial code works. Study it and all














Figure 2.3. cpi results (wall-clock time versus interval size for one, two, three, and four processors)




its intricacies so that you have a good visual of how the data is moved around and op-

erated upon. Without doing this step, you will have no clue on where to even begin the

parallelization process. Also, clean up the code, remove any unnecessary functions, and

simplify it as much as possible. Next, determine if it's even possible to parallelize the

program. If there are not any sufficiently sized groups of data that can be independently

solved on a single processor, parallelizing the code may be impossible or not worth the

effort. Now, determine if parallelizing the code is going to give a speedup that justifies

effort. For example, with cpi solving problems less than one hundred million intervals

simply is not worth the effort of parallelizing the code. Even though it was relatively

easy to parallelize cpi, imagine trying to parallelize a code with several hundred thousand

lines of code, spending many hours in your effort with the end result of an insignificant

speedup. As the problem size increases with more effort being exerted by the processors










than the network, parallelization becomes more practical. With small problems, less than

one-hundred million intervals for the cpi example, illustrated by the results in Figure 2.3,

the penalty of latency in a network simply does not justify parallelization.















CHAPTER 3
BENCHMARKING

In this chapter cluster benchmarking will be discussed. There are several reasons

why benchmarking a cluster is important. One is determining the sensitivity of the

cluster to network parameters. Such parameters include bandwidth and latency. Another

reason to benchmark is to determine how scalable your cluster is. Will performance scale

with the addition of more compute nodes enough such that the price/performance ratio

is acceptable? Testing scalability will help determine the best hardware and software

configuration such that the practicality of using some or all of the compute nodes is de-

termined.

3.1 Performance Metrics

An early measure of performance that was used to benchmark machines was MIPS

or Million Instructions Per Second. This benchmark refers to the number of low-level

machine code instructions that a processor can execute in one second. This benchmark,

however, does not take into effect that all chips are not the same in the way that they

handle instructions. For example, a 2.0 GHz 32bit processor will have a 2000 MIPS

rating and a 2.0 GHz 64 bit processor will also have a 2000 MIPS rating. This is an

obviously flawed rating because software written specifically for the 64 bit processor will

solve a comparable problem much faster than a 32 bit processor with software written

specifically for it.

The widely accepted metric of processing power used today is FLOPS or Floating

Point Operations Per Second. This benchmark unit measures the number of calculations

that a computer can perform on a floating point number, or a number with a certain preci-

sion. A problem with this measurement is that it does not take into account the conditions











in which the benchmark is being conducted. For example, if a machine is being bench-

marked and also being subjected to an intense computation simultaneously, the reported

FLOPS will be lower. However, despite its shortcomings, the FLOPS metric is widely used to mea-

sure cluster performance. The actual answer to all benchmark questions is found when

the applications for the cluster are installed and tested. When the actual applications and

not just a benchmark suite are tested, a much more accurate assessment of the cluster's

performance is obtained.
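
As a rough illustration, a theoretical peak for the nodes described in Chapter 2 can be estimated as the clock rate multiplied by the number of floating-point operations completed per cycle. Assuming the Pentium 4 retires two double-precision operations per cycle with SSE2, a 2.4 GHz node would peak near 4.8 GFLOPS, and the sustained rates measured later in this chapter are necessarily only a fraction of that figure.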

3.2 Network Analysis

In this section, NetPIPE (Network Protocol Independent Performance Evaluator) will be discussed [19]. The first step to benchmarking a cluster is to determine if the network you are using is operating efficiently and to get an estimate of its performance.

From the NetPIPE website, "NetPIPE is a protocol independent tool that visually repre-

sents network performance under a variety of conditions" [19]. NetPIPE was originally

developed at the Scalable Computing Laboratory by Quinn Snell, Armin Mikler, John

Gustafson, and Guy Helmer. It is currently being developed and maintained by Dave

Turner.

A major bottleneck in high performance parallel computing is the network on which

it communicates [20]. By identifying which factors affect interprocessor communication

the most and reducing their effect, application performance can be greatly improved. The

two major factors that affect overall application performance on a cluster are network la-

tency, the delay between when a piece of data is sent and when it is received, and the maximum

sustainable bandwidth, the amount of data that can be sent over the network continuously

[20]. Some other factors that affect application performance are CPU speed, the CPU

bus, the cache size of the CPU, and the I/O performance of each node's hard drive.











Fine tuning the performance of a network can be a very time consuming and ex-

pensive process and requires a lot of knowledge on network hardware to fully utilize the

hardware's potential. For this section I will not go into too many details about network

hardware. The minimum network hardware that should be used for a Beowulf today is

one based on one-hundred Megabit per second (100 Mbps) Ethernet technology. With

the increase in popularity and decrease costs of one-thousand Megabit per second (1000

Mbps) hardware, a much better choice would be Gigabit Ethernet. If very low latency,

high and sustainable bandwidth is required for your application, and cost isn't too im-

portant, Myrinet [21] or other proprietary network hardware are often used. From the

comparison chart, Table 3.1, both Fast Ethernet and Gigabit Ethernet technologies have a

much higher latency than Myrinet hardware. The drawback of Myrinet tech-

nology, even for a small four node cluster, is its price. The data for the table was estimated

by using the computer hardware price searching provided by http://www.pricewatch.com.


Table 3.1. Network hardware comparison
Network Latency(microsecs) Max. Bandwidth (Mbps) Cost ($/4 nodes)
Fast Ethernet 70 100 110
Gigabit Ethernet 80 1000 200
Myrinet 7 2000 6000
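
A simple cost model makes the trade-off in Table 3.1 concrete: the time to deliver a message is roughly the latency plus the message size divided by the bandwidth. With the Fast Ethernet figures above, a 1 KB message costs about 70 microseconds of latency plus about 82 microseconds of transfer time (8192 bits at 100 Mbps), so latency is nearly half the total; for a 1 MB message the transfer term grows to roughly 84 milliseconds and the latency becomes negligible. Applications dominated by many small messages therefore gain the most from low-latency hardware such as Myrinet.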



3.2.1 NetPIPE

NetPIPE essentially gives an estimate of the performance for the network in a clus-

ter. The method in which NetPIPE analyzes network performance is by performing a

simple ping-pong test, bouncing messages between two processors of increasing size. A

ping-pong test, as the name implies, simply sends data to another processor which in turn

sends it back. Using the total time for the packet to travel between the processors and

knowing the message size, the bandwidth can be calculated. Bandwidth is the amount of











data that can be transferred through a network in a given amount of time. Typical units of

bandwidth are Megabits per second (Mbps) and Gigabits per second (Gbps).
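
The core of such a ping-pong measurement takes only a few MPI calls. The sketch below is our own simplified illustration, not NetPIPE's source: it bounces a fixed-size buffer between ranks 0 and 1 a number of times and converts the measured round-trip time into throughput; the buffer size and repetition count are arbitrary choices.

#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

#define REPS 100                        /* round trips to average over */

int main(int argc, char *argv[])
{
    int rank, r, nbytes = 1048576;      /* 1 MB test message */
    char *buf;
    double t0, t1, mbps;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    buf = malloc(nbytes);

    MPI_Barrier(MPI_COMM_WORLD);        /* start both ranks together */
    t0 = MPI_Wtime();
    for (r = 0; r < REPS; r++) {
        if (rank == 0) {                /* bounce the buffer to rank 1 and back */
            MPI_Send(buf, nbytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, nbytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD, &status);
        } else if (rank == 1) {
            MPI_Recv(buf, nbytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD, &status);
            MPI_Send(buf, nbytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    t1 = MPI_Wtime();

    if (rank == 0) {
        /* one-way time is the total time divided by 2*REPS */
        mbps = (8.0 * nbytes / 1.0e6) / ((t1 - t0) / (2.0 * REPS));
        printf("%d bytes: %.2f Mbps\n", nbytes, mbps);
    }

    free(buf);
    MPI_Finalize();
    return 0;
}

Run with mpirun -np 2, this reports a single point of the curve that NetPIPE sweeps automatically over many message sizes.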

To provide a complete and accurate test, NetPIPE uses message sizes at regular

intervals and at each data point, many ping-pong tests are carried out. This test will give

an overview of the unloaded CPU network performance. Applications may not reach the

reported maximum bandwidth because NetPIPE only measures the network performance

of unloaded CPUs; measuring the network performance with loaded CPUs is not yet

possible.

3.2.2 Test Setup

NetPIPE can be obtained from its website at

http://www.scl.ameslab.gov/netpipe/. Download the latest version and unpack. The in-

stall directory for NetPIPE on our system is /home/apollo/hda8.


redboots@apollo> tar -xvzf NetPIPE_3.6.2.tar.gz


To install, enter the directory NetPIPE_3.6.2 that was created after unpacking the

above file. Edit the file makefile with your favorite text editor so that it points to the

correct compiler, libraries, include files, and directories. The file makefile did not need

any changes for our setup. To make the MPI interface, make sure the compiler is set to

mpicc. Next, in the directory NetPIPE_3.6.2, type make mpi:


redboots@apollo> make mpi

mpicc -O -DMPI ./src/netpipe.c ./src/mpi.c -o NPmpi -I./src


This will create the executable NPmpi. To run NPmpi, simply type at a command

prompt:


mpirun -np 2 NPmpi -o np.out.mpi












This will run NetPIPE on the first two machines listed under

/home/apollo/hda8/mpich-1.2.5.2/util/machines/machines.LINUX. NetPIPE will by de-

fault print the results to the command prompt and also to the file np.out.mpi specified

after the -o option flag. Below is an example output between apollo and hydra4 printed

to the command prompt. The format of the data printed to the command prompt is as

follows: first column is the run number, second column is the message size, third col-

umn is the number of times it was sent between the two nodes, the fourth column is the

throughput, and the fifth column is the round trip divided by two. In Appendix B.1, the

file np.out.mpi for apollo and hydra4 is shown. The first column lists the test run, second

column is the message size in bytes, third column lists how many messages were sent,

the fourth column lists the throughput, and the last column is the round-trip time of the

messages divided by two. Below is a partial output from a test run.

redboots@apollo> mpirun -np 2 NPmpi -o np.out.mpi
0: apollo
1: hydra4
Now starting the main loop
0: 1 bytes 1628 times --> 0.13 Mbps in 60.98 usec
1: 2 bytes 1639 times --> 0.25 Mbps in 60.87 usec
2: 3 bytes 1642 times --> 0.37 Mbps in 61.07 usec
3: 4 bytes 1091 times --> 0.50 Mbps in 61.46 usec
4: 6 bytes 1220 times --> 0.74 Mbps in 61.48 usec
5: 8 bytes 813 times --> 0.99 Mbps in 61.86 usec
6: 12 bytes 1010 times --> 1.46 Mbps in 62.53 usec
7: 13 bytes 666 times --> 1.58 Mbps in 62.66 usec
8: 16 bytes 736 times --> 1.93 Mbps in 63.15 usec



116: 4194304 bytes 3 times --> 87.16 Mbps in 367126.34 usec
117: 4194307 bytes 3 times --> 87.30 Mbps in 366560.66 usec
118: 6291453 bytes 3 times --> 87.24 Mbps in 550221.68 usec
119: 6291456 bytes 3 times --> 87.21 Mbps in 550399.18 usec
120: 6291459 bytes 3 times --> 87.35 Mbps in 549535.67 usec
121: 8388605 bytes 3 times --> 87.32 Mbps in 732942.65 usec
122: 8388608 bytes 3 times --> 87.29 Mbps in 733149.68 usec
123: 8388611 bytes 3 times --> 87.37 Mbps in 732529.83 usec












3.2.3 Results

NetPIPE was run on apollo and hydra4 while both CPUs were idle with the follow-

ing command:


mpirun -np 2 NPmpi -o np.out.mpi


The results are found in Appendix B.1. The first set of data that is plotted compares

the maximum throughput and transfer block size. This is shown in Figure 3.1.



[Figure: throughput (Mbps) plotted against message size (bytes).]

Figure 3.1. Message size vs. throughput




This graph allows the easy visualization of maximum throughput for a network. For

the network used in our cluster, a maximum throughput of around 87 Mbps was recorded.

This is an acceptable rate for a 100 Mbps network. If the throughput dropped suddenly or did not reach an acceptable rate, that would indicate a problem with the network. It should be noted that a 100 Mbps network will not reach its nominal 100 Mbps maximum. This can be attributed to the overhead introduced by the different network layers: e.g., the Ethernet card driver, the TCP layer, and the MPI routines [22].











NetPIPE also allows for the testing of TCP bandwidth without MPI-induced overhead. To run this test, first create the NPtcp executable. Installing NPtcp on our cluster required no changes to the file makefile in the NetPIPE_3.6.2 directory. To create the NPtcp executable, simply type at the command prompt while in the NetPIPE_3.6.2 directory:


redboots@apollo> make tcp

cc -O ./src/netpipe.c ./src/tcp.c -DTCP -o NPtcp -I./src


This will create the executable NPtcp. The TCP benchmark requires both a sender and a receiver node. For example, in our TCP benchmarking test, hydra4 was designated the receiver node and apollo the sender. Unlike NPmpi, which does not require opening a terminal on the other tested machine, this test requires you to open a terminal and install NPtcp on both machines, in this case apollo and hydra4.

First, log into hydra4 and install NPtcp following the above example. On hydra4, the executable NPtcp is located in /home/hydra4/hda13/NetPIPE_3.6.2. While in this directory, at a command prompt type ./NPtcp:


redboots@hydra4> ./NPtcp

Send and receive buffers are 16384 and 87380 bytes

(A bug in Linux doubles the requested buffer sizes)


The above command starts hydra4 as the receiver. For each separate run, the command needs to be retyped. Next, log into apollo and enter the directory in which NPtcp is installed. For apollo, this is /home/apollo/hda8/NetPIPE_3.6.2.

While in this directory, at a command prompt start NPtcp while specifying hydra4 as the

receiver.

redboots@apollo> ./NPtcp -h hydra4 -o np.out.tcp
Send and receive buffers are 16384 and 87380 bytes
(A bug in Linux doubles the requested buffer sizes)











Now starting the main loop
0: 1 bytes 2454 times --> 0.19 Mbps in 40.45 usec
1: 2 bytes 2472 times --> 0.38 Mbps in 40.02 usec
2: 3 bytes 2499 times --> 0.57 Mbps in 40.50 usec
3: 4 bytes 1645 times --> 0.75 Mbps in 40.51 usec
4: 6 bytes 1851 times --> 1.12 Mbps in 41.03 usec
5: 8 bytes 1218 times --> 1.47 Mbps in 41.64 usec
6: 12 bytes 1500 times --> 2.18 Mbps in 42.05 usec
7: 13 bytes 990 times --> 2.33 Mbps in 42.54 usec



116: 4194304 bytes 3 times --> 89.74 Mbps in 356588.32 usec
117: 4194307 bytes 3 times --> 89.74 Mbps in 356588.50 usec
118: 6291453 bytes 3 times --> 89.75 Mbps in 534800.34 usec
119: 6291456 bytes 3 times --> 89.75 Mbps in 534797.50 usec
120: 6291459 bytes 3 times --> 89.75 Mbps in 534798.65 usec
121: 8388605 bytes 3 times --> 89.76 Mbps in 712997.33 usec
122: 8388608 bytes 3 times --> 89.76 Mbps in 713000.15 usec
123: 8388611 bytes 3 times --> 89.76 Mbps in 713000.17 usec


Running NPtcp will create the file np.out.tcp. The -h hydra4 option specifies the hostname of the receiver, in this case hydra4. You can use either the IP address or the hostname if you have the receiver's hostname and corresponding IP address listed in /etc/hosts. The -o np.out.tcp option specifies that the output file be named np.out.tcp. The format of this file is the same as that of the np.out.mpi file created by NPmpi. The file np.out.tcp is found in Appendix B.2.
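If name resolution is handled locally rather than by DNS, the receiver can simply be added to /etc/hosts on the sending machine. A minimal sketch of such an entry follows; the IP address shown is a placeholder, not our cluster's actual address.

# /etc/hosts entry mapping the receiver's hostname to its IP address
192.168.0.14    hydra4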

To compare the overhead cost of MPI on maximum throughput, the throughput of both the TCP and MPI runs was plotted against message size. The comparison of maximum throughput is seen in Figure 3.2. As expected, the plain TCP test consistently recorded a higher throughput than the MPI test throughout the message size range. Another interesting plot is message size versus the time the packets travel, seen in Figure 3.3.

This plot shows the saturation point between sending data through TCP and with

MPI routines atop of TCP. The saturation point is the position on the graph after which an

increase in block size results in an almost linear increase in transfer time [23]. This point

is more easily located as being at the "knee" of the curve. For both the TCP and MPI run,

















[Figure: throughput (Mbps) vs. message size (bytes) for the MPI and TCP runs.]

Figure 3.2. MPI vs. TCP throughput comparison


[Figure: message size (bytes) vs. transfer time (seconds) for the MPI and TCP runs.]

Figure 3.3. MPI vs. TCP saturation comparison




the saturation point occurred around 130 bytes. After that, both rose linearly together, with no distinction between the two after a message size of one Kilobyte. It can be concluded that the overhead induced by the MPI routines does not greatly affect latency performance for message sizes above one hundred thirty bytes.

The greatest decrease in throughput occurs for small message sizes. From Figure 3.4, there is a fairly consistent percentage decrease in throughput for message sizes below one Kilobyte, with as much as a 35 percent decrease in throughput when MPI routines are added on top of TCP.


[Figure: percentage decrease in throughput with MPI overhead vs. message size (bytes).]

Figure 3.4. Decrease in effective throughput with MPI




The next plot, Figure 3.5, is considered the network signature graph. It plots the transfer speed versus the elapsed time for the data to travel and is also considered an "acceleration" graph [23]. To construct this plot, elapsed time was plotted on the horizontal axis using a logarithmic scale and throughput on the vertical axis. In Figure 3.5, the latency is given by the first point on the graph, which for our network occurs around 61 usec. Since we are using Fast Ethernet, this is an acceptable latency












[Figure: throughput (Mbps) vs. time (seconds).]

Figure 3.5. Throughput vs. time




[22]. Figure 3.5 also allows the maximum throughput, around 87 Mbps for our network, to be read easily.


3.3 High Performance Linpack-Single Node


High Performance Linpack (HPL) is a portable implementation of the Linpack

benchmark for distributed memory computers. It is widely used to benchmark clusters

and supercomputers and is used to rank the top five-hundred computers in the world at

http://www.top500.org. HPL was developed at the Innovative Computing Laboratory at

the University of Tennessee Computer Science Department. The goal of this benchmark

is to provide a "testing and timing program to quantify the accuracy of the obtained solu-

tion as well as the time it took to compute it" [24].

3.3.1 Installation

To install HPL, first go to the project webpage at http://www.netlib.org/benchmark/hpl/index.html. Near the bottom of the page there is a











hyperlink for hpl.tgz. Download the package to the directory of your choice. Unpack hpl.tgz with the command:


tar -xvzf hpl.tgz


This will create the folder hpl.

Next, enter the directory hpl and copy the file Make.Linux_PII_CBLAS from the

setup directory to the main hpl directory and rename to Make.Linux_P4:


redboots@apollo> cp setup/Make.Linux_PII_CBLAS \
Make.Linux_P4


There are several other Makefiles located in the setup folder for different architec-

tures. We are using Pentium 4's, so the Make.Linux_PII_CBLAS Makefile was chosen and edited so that it points to the correct libraries on our system. The Makefile that was used

for the compilation is shown in Appendix B.3.1. First, open the file in your favorite text

editor and edit it so that it points to your MPI directory and MPI libraries. Also, edit the

file so that it points to your correct BLAS (Basic Linear Algebra Subprograms) library as

described below. BLAS are routines for performing basic vector and matrix operations.

The website for BLAS is found at http://www.netlib.org/blas/. The Makefile which was

used for our benchmarking is located in Appendix B.3.1. Note the libraries which were

used for the benchmarks were either those provided by ATLAS or Kazushige Goto, which

will be discussed shortly.
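For reference, the MPI-related section of Make.Linux_P4 ends up looking roughly like the lines below on our system. MPdir, MPinc, MPlib, and CC are the standard variables in the HPL Make.<arch> files, but the exact paths and library name depend on where and how MPICH was installed, so treat the values as an example rather than a required configuration.

MPdir  = /home/apollo/hda8/mpich_1.2.5.2
MPinc  = -I$(MPdir)/include
MPlib  = $(MPdir)/lib/libmpich.a
CC     = $(MPdir)/bin/mpicc

The BLAS-related variables, LAdir and LAlib, are covered in the following sections.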

After the Makefile is configured for your particular setup, HPL can now be com-

piled. To do this simply type at the command prompt:


make arch=Linux_P4


The HPL binary, xhpl will be located in $hpl/bin/Linux_P4. Also created is the file

HPL.dat which provides a way of editing parameters that affect the benchmarking results.











3.3.2 ATLAS Routines

For HPL, the most critical part of the software is the matrix-matrix multiplication

routine, DGEMM, that is a part of the BLAS. An optimized set of BLAS routines widely

used is ATLAS, or Automatically Tuned Linear Algebra Software [25]. The website

for ATLAS is located at http://math-atlas.sourceforge.net. The ATLAS routines strive to

create optimized software for different processor architectures.

To install precompiled ATLAS routines for a particular processor, first go to

http://www.netlib.org/atlas/archives. On this page are links for AIX, SunOS, Windows,

OS-X, IRIX, HP-IX, and Linux. Our cluster is using the Linux operating system so the

linux link was clicked. The next page lists precompiled routines for several processors,

including Pentium 4 with Streaming SIMD Extensions 2 (SSE2), the AMD Hammer

processor, PowerPC, Athlon, Itanium, and Pentium III. The processors that we are using

in the cluster are Pentium 4's so the file atlas3.6.0_Linux_P4SSE2.tgz was downloaded. The file was downloaded to /home/apollo/hda8 and unpacked.

redboots@apollo> tar -xvzf atlas3.6.0_Linux_P4SSE2.tgz

This creates the folder Linux_P4SSE2. Within this directory is the Makefile that

was used to compile the libraries, the folder containing the precompiled libraries, lib, and

in the include directory, the C header files for the C interface to BLAS and LAPACK. To

link to the precompiled ATLAS routines in HPL, simply point to the routines

LAdir = /home/apollo/hda8/Linux_P4SSE2/lib

LAlib = $(LAdir)/libcblas.a $(LAdir)/libatlas.a

Also, for the ATLAS libraries, uncomment the line that reads:

HPL_OPTS = -DHPL_CALL_CBLAS

Finally, compile the executable xhpl as shown above. Enter the main HPL directory

and type make arch=Linux_P4:











redboots@apollo> make arch=Linux_P4


This creates the executable xhpl and the configuration file HPL.dat

3.3.3 Goto BLAS Libraries

Initially, the ATLAS routines were used in HPL to benchmark the cluster. The re-

sults of the benchmark using the ATLAS routines were then compared to the results using

another optimized set of BLAS routines developed by Kazushige Goto [26]. The libraries

developed by Kazushige Goto are located at

http://www.cs.utexas.edu/users/kgoto/signupfirst.html. The libraries located at this web-

site are optimized BLAS routines for a number of processors including Pentium III, Pen-

tium IV, AMD Opteron, Itanium2, Alpha, and PPC. A more in depth explanation as to

why this library performs better than ATLAS is located at

http://www.cs.utexas.edu/users/flame/goto/. To use these libraries on our cluster, the routines optimized for Pentium 4's with 512 KB L2 cache were used, libgoto_p4_512-r0.96.so.gz. Also, the file xerbla.f needs to be downloaded, which is located at http://www.cs.utexas.edu/users/kgoto/libraries/xerbla.f. This file is simply an error handler for the routines.

To use these routines, first download the appropriate file for your architecture and download xerbla.f. For our cluster, libgoto_p4_512-r0.96.so.gz was downloaded to /home/apollo/hda8/goto_blas. Unpack the file libgoto_p4_512-r0.96.so.gz.


redboots@apollo> gunzip libgoto_p4_512-r0.96.so.gz


This creates the file libgoto_p4_512-r0.96.so. Next, download the file xerbla.f from the website listed above to /home/apollo/hda8/goto_blas. Then create the binary object file for xerbla.f:


redboots@apollo> g77 -c xerbla.f











This will create the file xerbla.o. These two files, libgoto_p4_512-r0.96.so and xerbla.o, need to be pointed to in the HPL Makefile.


LAdir = /home/apollo/hda8/goto_blas

LAlib = $(LAdir)/libgoto_p4_512-r0.96.so $(LAdir)/xerbla.o


Also, the following line needs to be commented out. Placing a pound symbol in front of a line tells make to treat that line as a comment and ignore it.


#HPL_OPTS = -DHPL_CALL_CBLAS


3.3.4 Using either Library

Two sets of tests were carried out with HPL: one using the ATLAS routines and the other using Kazushige Goto's routines. These tests were carried out for several reasons. One is to illustrate the effect of well written, well compiled software on a cluster's performance. Without well written software that is optimized for a particular hardware architecture or network topology, the performance of a cluster suffers greatly. Another reason two tests using the different BLAS routines were conducted is to get a more accurate assessment of our cluster's performance. A benchmark gives us an estimate of how our applications should perform. If the parallel applications that we use perform at a much lower level than the benchmark, we can conclude that our software either is not tuned properly for our particular hardware or contains inefficient code.

Below, the process of using either ATLAS or Kazushige Goto's BLAS routines will

be discussed. The first tests that were conducted used the ATLAS routines. Compile the

HPL executable, xhpl, as described above for the ATLAS routines using the file Makefile

in Appendix B.3.1. After the tests are completed using the ATLAS routines simply change

the links to Goto's BLAS routines and comment the line that calls the BLAS Fortran 77

interface. For example, the section of the Makefile which we use that determines which












library to use is seen below. For the below section of the Makefile, Goto's BLAS routines

are specified.

# Below the user has a choice of using either the ATLAS or Goto
# BLAS routines. To use Goto's BLAS routines, uncomment the
# first 2 lines below and comment the 3rd and 4th. To use the
# ATLAS routines, comment the first 2 lines and uncomment the
# 3rd and 4th.

# BEGIN BLAS specification
LAdir = /home/apollo/hda8/hpl/libraries
LAlib = $(LAdir)/libgoto_p4_512-r0.96.so $(LAdir)/xerbla.o
#LAdir = /home/apollo/hda8/Linux_P4SSE2/lib
#LAlib = $(LAdir)/libcblas.a $(LAdir)/libatlas.a
# END BLAS specification


If Goto's routines are to be used, just uncomment the two lines that specify those

routines and comment the two lines for the ATLAS routines. The line that specifies the

BLAS Fortran 77 interface is also commented when using Goto's BLAS routines.


#HPL_OPTS = -DHPL_CALL_CBLAS


If the ATLAS routines are to be used, the above line would be uncommented. After

that xhpl is recompiled using the method described above.


redboots@apollo> make arch=Linux_P4


3.3.5 Benchmarking

To determine a FLOPS value for the machine(s) to be benchmarked, HPL solves a random dense linear system of equations in double precision. HPL solves the random dense system by first computing the LU factorization with row-partial pivoting and then solving the upper triangular system. HPL is very scalable: it is the benchmark used on the supercomputers with thousands of processors found on the Top 500 List of Supercomputing Sites, and it can be used on a wide variety of computer architectures.
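For reference, HPL derives its Gflops figure from the standard LU operation count, (2/3)Ns^3 + 2Ns^2, divided by the wall-clock solve time. A quick sketch of that arithmetic is shown below; the time used here is an arbitrary example, not one of our measured results.

# Rough Gflops estimate for a problem size N solved in T seconds
N=10000
T=235.0    # example wall-clock time in seconds, not a measured value
awk -v n=$N -v t=$T 'BEGIN { printf "%.3f Gflops\n", (2.0/3.0*n*n*n + 2.0*n*n)/t/1e9 }'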











3.3.6 Main Algorithm

HPL solves a linear system of equations, Eq. 3.1, for x using LU factorization. It first factors the coefficient matrix into the product of a lower and an upper triangular matrix.


Ax = b (3.1)

Next, y is solved by forward substitution in Eq. 3.2


Ly = b (3.2)

Finally, x is solved by back substitution in Eq. 3.3.


Ux = y (3.3)

To distribute the data and provide an acceptable level of load balancing, a two-dimensional P-by-Q process grid is utilized. The n-by-n+1 coefficient matrix is partitioned into nb-by-nb blocks that are cyclically distributed onto the P-by-Q process grid. In each iteration, a panel of nb columns is factorized, and the trailing submatrix is updated [24]. After the factorization is complete and a value for x is solved, HPL regenerates the input matrix and vector and substitutes the computed value of x to obtain a residual. If the residual is less than a threshold value of the order of 1.0, the solution, x, is considered "numerically correct" [24]. A further explanation of the algorithm is found at the project's website [24].

3.3.7 HPL.dat Options

When HPL is compiled, a file HPL.dat is created that holds all the options which direct how HPL is run. Here, the format and main options of this file

will be discussed. Below is a sample HPL.dat file used during a benchmarking process.

HPLinpack benchmark input file

Innovative Computing Laboratory, University of Tennessee












HPL.out          output file name (if any)
1                device out (6=stdout,7=stderr,file)
3                # of problems sizes (N)
1000 4800 8000   Ns
4                # of NBs
60 80 120        NBs
1                PMAP process mapping (0=Row,1=Column)
2                # of process grids (P x Q)
1 2              Ps
4 2              Qs
16.0             threshold
3                # of panel fact
0 1 2            PFACTs (0=left, 1=Crout, 2=Right)
3                # of recursive stopping criterium
2 4 8            NBMINs (>= 1)
1                # of panels in recursion
2                NDIVs
3                # of recursive panel fact.
0 1 2            RFACTs (0=left, 1=Crout, 2=Right)
6                # of broadcast
0 1 2 3 4 5      BCASTs (0=1rg,1=1rM,2=2rg,3=2rM,4=Lng,5=LnM)
1                # of lookahead depth
1                DEPTHs (>=0)
2                SWAP (0=bin-exch,1=long,2=mix)
64               swapping threshold
0                L1 in (0=transposed,1=no-transposed) form
0                U in (0=transposed,1=no-transposed) form
1                Equilibration (0=no,1=yes)
8                memory alignment in double (> 0)


The first two lines of the file are not used. The third line lets the user choose the

name of the file the results will be saved to if desired, in this case HPL.out. The fourth

line directs the output to either the command terminal or to a file whose name is assigned

in the third line. The program will print to a file if the value in line four is anything other than

6 or 7. For the above example, output will be written to the file HPL.out because line four

is specified as 1.

The fifth line allows the user to specify how many linear systems of equations will be solved. Line six specifies the sizes, Ns, of the matrices. The generated dense matrix will therefore have the dimension Ns x Ns. The limiting factor in choosing a matrix size is the amount of physical memory on the computers to be benchmarked. A benchmark will return much better results if only the physical memory, Random Access Memory (RAM), is used and not the virtual memory. Virtual memory is a method of simulating RAM by using the hard drive for data storage. To calculate the maximum matrix size that should be used, first add up the amount of RAM on the computers in the cluster. For example, our cluster has four nodes with one Gigabyte of RAM each for a total of four Gigabytes of physical memory. Now, multiply the total physical memory by 1 element per 8 bytes, Eq. 3.4; each entry in the matrix has a size of eight bytes.


number_elements = 4000000000 bytes x (1 element / 8 bytes) (3.4)

The above result will give the total number of entries allowed in the matrix, in this

example, 500,000,000. Taking the square root, Eq. 3.5, will give the matrix size.


matrix_size = sqrt(number_elements) (3.5)











For this example the maximum matrix size is around 22,000 x 22,000. To leave enough memory for the operating system and other system processes, the above dimensions are reduced to a maximum allowable matrix dimension of around 20,000 x 20,000.
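The same arithmetic can be scripted as a quick check. The sketch below assumes four Gigabytes of total memory and eight bytes per matrix entry, as in the example above; substitute your own memory total.

# Largest square matrix of double-precision entries that fits in MEM_BYTES of memory
MEM_BYTES=4000000000
awk -v m=$MEM_BYTES 'BEGIN { printf "maximum Ns = %d\n", sqrt(m/8) }'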

Lines seven and eight, respectively, specify the number of block sizes to be tested in different runs and the sizes of those blocks. Block sizes, used during the data distribution, help determine the computational granularity of the problem. If the block size is too

small, much more communication between the nodes is necessary in order to transfer

the data. If the block size is too large, the compute node may not be able to handle the

computation efficiently. Block sizes typically range from 60 to 180 depending on network

and compute node architecture [24].

Line nine specifies how the MPI process should be mapped onto the compute nodes.

Mapping is a way of specifying which processors execute which threads. The two possi-

ble mappings are row and column major.

Lines ten through twelve allow the user to specify the number of grid configurations

to be run and the layout of the grid. For the above example, two grid configurations will

be run. The first one will have a 1 by 4 layout and the second will have a 2 by 2 layout. If

the user wanted to test HPL on a single computer, the number of process grids would be

1 and likewise the values of P and Q.


1 # of process grids (P x Q)

1 Ps

1 Qs


Line thirteen specifies the threshold to which the residuals should be compared.

The default value is sixteen and is the recommended value which will cover most cases

[24]. If the residuals are larger than the threshold value, the run will be marked as a

failure even though the results can be considered correct. For our benchmarking, the

default value of 16 will be used for large problem sizes and -16 for small problems. A











negative threshold will cause xhpl to skip checking of the results. The user may wish to

skip the checking of results if a quick benchmark is desired without having to resolve the

system of equations.

Lines fourteen through twenty-one allow the user to choose the panel factorization, PFACTs, and the recursive panel factorization, RFACTs. The panel factorization is matrix-matrix operation based and recursive, dividing the panel into NDIVs subpanels at each step [24]. The recursion continues until there are less than or equal to NBMINs columns

left. For the panel and recursive panel factorization, the user is allowed to test left-looking,

right-looking, and Crout LU factorization algorithms.


3 # of panel fact

0 1 2 PFACTs (0=left, 1=Crout, 2=Right)

2 # of recursive stopping criterium

2 4 NBMINs (>= 1)

1 # of panels in recursion

2 NDIVs

3 # of recursive panel fact.

0 1 2 RFACTs (0=left, 1=Crout, 2=Right)


The above example tests the three LU factorization algorithms for both recursive

panel factorization and panel factorization, tests two cases of stopping the recursion at

two and four columns in the current panel, and tests one case of dividing the panel into

two subpanels.

Lines twenty-two and twenty-three specify how the panels are to be broadcast to the other processors after factorization. The six available broadcast algorithms are increasing-ring, modified increasing-ring, increasing-2-ring, modified increasing-2-ring, long bandwidth-reducing, and modified long bandwidth-reducing [24].











The remaining lines specify options that further tune the benchmark tests. For our runs the recommended values will be used. A further explanation of the remaining options can be found at the HPL website [24].

3.3.8 Test Setup

Many tests were conducted with different parameters specified in the file HPL.dat.

Initially, a small problem size, Ns, was tested while varying the other options. From

there it was determined what options allowed xhpl to perform the best. A small problem

size was used primarily because it would take a lot of time to conduct all the tests if a

large size was used. If 5 block sizes, NBs, all 6 panel broadcast methods, BCASTs, and 2

process grids were tested, that would be 60 tests. Even though by using a small problem

size the reported Gflops will not be the highest that is attainable, we still are able to

determine what parameters affect relative performance. After a test with a small problem

size was conducted, the options that performed the worst were eliminated and tests with

the remaining good performing options were carried out. This process of eliminating the

worst performing options and rerunning the tests with the best options was continued until

a set of options that performed the best was obtained.

The first test conducted involved a problem size, Ns, of 1000 on apollo. Initially only apollo will be tested to get a base measurement of a single node's performance. After apollo is fully tested, two, then three, then all the nodes will be tested so that the scalability of the cluster can be determined. The HPL.dat file that was used for the initial testing of apollo is seen in Appendix B.3.2. It tests six block sizes; the left-looking, right-looking, and Crout factorization methods; three values for the number of subpanels to create; and four values for the number of columns at which recursive panel factorization stops. It should be noted that the threshold used for these tests will be -16. By using a negative threshold, checking of the results will not be performed. For larger tests and tests using the network,











the results will be checked to determine if the network is correctly transferring data. Also,

for a single CPU test, testing panel broadcast algorithms is not necessary because the net-

work is not being used.

3.3.9 Results

The results for the initial tests on apollo are shown in the Appendix B.3.3. From

the data and Figure 3.6 there is a very noticeable correlation between block size, NBs, and

performance. The best performance was achieved at a block size of 160.


[Figure: average Gflops vs. block size for a single node.]

Figure 3.6. Block size effect on performance for 1 node



The next test conducted used a single block size, 160, and left the other options as

they were for the previous test. The HPL.dat file used for this test is shown in Appendix

B.3.4. The test was rerun using the command:


redboots@apollo> mpirun -np 1 xhpl


The results of the second test using 160 as the block size are shown in Appendix

B.3.5. From the results, the most noticeable parameter that affects performance is NDIVs,











or the number of subpanels that are created during the recursive factorization. For NDIVs

equal to 2, the average Gflops is 1.891, for NDIVs equal to 3, the average Gflops is 1.861,

and for a NDIVs equal to 4, the average Gflops is 1.864.

The next test involved setting NDIVs to 2 and rerunning with the other parameters

unchanged. From the data, the parameter that affects performance the most for this run

is the value of NBMINs. An NBMINs value of 4 returns the best results with an average

of 1.899 Gflops compared to an average of 1.879 Gflops for NBMINs equal to 1, 1.891

Gflops for NBMINs equal to 2, and 1.892 Gflops for NBMINs equal to 8.

The remaining parameters that can be changed are the algorithms used for the panel

factorization and recursive panel factorization. For this test, NBMINs was set to 4 and

rerun. This test was run as before.

redboots@apollo> mpirun -np 1 xhpl

From the results, using any of the algorithms for panel factorization and recursive

panel factorization produced very similar results. Since the three factorization algorithms

produced similar results, for the final test using a large problem size, Crout's algorithm for

both factorizations will be used mainly because the algorithm implemented in SPOOLES

for factorization is Crout's.

The final test involving one processor will use the optimum parameters determined

as shown above and also the maximum problem size allowed by system memory. Checking of the solution will also be enabled by changing the threshold value, line 13, to a

positive 16. To calculate the maximum problem size, Eqs. 3.4 and 3.5 will be used.



number_elements = 1000000000 bytes x (1 element / 8 bytes) = 125000000

matrix_size = sqrt(125000000)

matrix_size = 11180











A matrix of size 11180 x 11180 is the maximum that can fit into system memory.

To ensure that slow virtual memory is not used, a matrix of size 10000 x 10000 will

be used for the final test. The HPL.dat file used for this test is found in Appendix B.3.6.

Using the ATLAS BLAS routines, apollo achieves a maximum Gflops of 2.840.

The theoretical peak performance of a Pentium 4 2.4 GHz is calculated as follows. The

processors we are using include SSE2 instructions which allow 2 floating point operations

per CPU cycle. The theoretical peak performance is then calculated by multiplying 2

floating point operations per CPU cycle by the processor frequency, 2.4 GHz, Eq. 3.6.


Theoretical Peak = (2 FP ops / CPU cycle) x 2.4 GHz = 4.8 Gflops (3.6)
The 2.840 Gflops reported by xhpl using the ATLAS BLAS routines is approxi-

mately 59 percent of the theoretical peak, 4.8 Gflops, of the processor. The results of the

tests using the ATLAS BLAS routines are shown in Table 3.2. Table 3.2 lists problem

size, block size, NDIVs, NBMINs, PFACTs, RFACTs, the average Gflops for all tested

options during each run, and the average Gflops percentage of the theoretical maximum

4.8 Gflops.













Table 3.2. ATLAS BLAS routine results
Ns     Block Size             NDIVs   NBMINs    PFACTs  RFACTs  Gflops  % T. Max
1000   32,64,96,128,160,192   2,3,4   1,2,4,8   L,C,R   L,C,R   1.626   33.88
1000   160                    2,3,4   1,2,4,8   L,C,R   L,C,R   1.872   39.00
1000   160                    2       1,2,4,8   L,C,R   L,C,R   1.890   39.38
1000   160                    2       4         L,C,R   L,C,R   1.893   39.44
10000  160                    2       4         C       C       2.840   59.17











3.3.10 Goto's BLAS Routines

Next, Goto's BLAS routines will be tested. First, compile xhpl using Goto's BLAS

routines as shown above. Edit the file Make.Linux_P4 so that the following lines are

uncommented


LAdir = /home/apollo/hda8/goto_blas

LAlib = $(LAdir)/libgoto_p4_512-r0.96.so $(LAdir)/xerbla.o


and the following line is commented


#HPL_OPTS = -DHPL_CALL_CBLAS


The tests will be carried out in the same manner as the ATLAS routines were tested.

First, all options will be tested. The options that have the most noticeable influence on performance will be selected and the tests rerun until a final set of optimized parameters is selected. The results for the tests using Goto's BLAS routines are shown in Table

3.3. The results clearly show that Goto's BLAS routines perform much better than the

ATLAS routines, around a 29.6% increase.













Table 3.3. Goto's BLAS routine results
Ns     Block Size             NDIVs   NBMINs    PFACTs  RFACTs  Gflops  % T. Max
1000   32,64,96,128,160,192   2,3,4   1,2,4,8   L,C,R   L,C,R   2.141   44.60
1000   128                    2,3,4   1,2,4,8   L,C,R   L,C,R   2.265   47.19
1000   128                    2       1,2,4,8   L,C,R   L,C,R   2.284   47.58
1000   128                    2       8         L,C,R   L,C,R   2.287   47.65
10000  128                    2       8         L       C       3.681   76.69











3.4 HPL-Multiple Node Tests

For this section, multiple nodes will be used. By knowing apollo's performance,

running xhpl on more than one node will allow us to see the scalability of our cluster.

Ideally, a cluster's performance should scale linearly with the number of nodes present. For example, apollo achieved a maximum of 3.681 Gflops using Goto's BLAS routines. However, because of network-induced overhead, adding another identical node to the cluster

will not increase the performance to 7.362 Gflops. By determining how scalable a cluster

is, it is possible to select the appropriate cluster configuration for a particular problem

size.

3.4.1 Two Processor Tests

For tests using more than one node, the initial matrix size tested will be increased

from that of the matrix tested in the single node runs. The reasoning behind this is illus-

trated in the tests of MPICH after it was installed. When running cpi, for small problem

sizes a single processor performed better than a multi-node run. By using a small prob-

lem size and multiple nodes, the network would add significant overhead, so much that

it would be difficult to accurately measure the clusters performance. For small problem

sizes, the full potential of a processor isn't being used, instead time is wasted on inter-

processor communication.

For testing multiple nodes, two other options are added to those tested for a single

node, these being the process grid and panel broadcast algorithm. If these options were

added to the options tested for a single node, this would bring the total number of initial

tests to almost eight-thousand. Instead of running all tests at once, the process grid layout

and panel broadcast algorithms will be tested separately. Once it is determined which

parameters for the process grid layout and panel broadcast perform the best for a general

problem regardless of the other options, the same testing methodology applied to a single

node can be applied to multiple nodes.











3.4.2 Process Grid

The first test for multiple nodes will determine which process grid layout achieves

the best performance. A grid is simply a splitting of the work for matrix computations

into blocks and these blocks are then distributed among processors. A block matrix is a

submatrix of the original matrix. This is more easily illustrated by Eq. 3.7.


[ A11 A12 ]   [ L11  0  ] [ U11 U12 ]
[ A21 A22 ] = [ L21 L22 ] [  0  U22 ]        (3.7)

A is divided into four submatrices, A11, A12, A21, and A22, of block size NBs, i.e., A11 is of size NBs x NBs. After LU factorization, A is represented by the matrices on the

right hand side of Eq. 3.7. By using blocked matrices, Level 3 BLAS can be employed

which are much more efficient than Level 1 and Level 2 routines [27]. This efficiency can

be attributed to allowing blocked submatrices to fit into a processor's high speed cache. Level 3 BLAS allows for efficient use of today's processors' hierarchical memory: shared memory, cache, and registers [28]. Level 1 BLAS operates on only one or two columns, or vectors, of a matrix at a time; Level 2 BLAS performs matrix-vector operations; and Level 3 BLAS handles matrix-matrix operations [27][29].

There are several different block partitioning schemes: one-, two-, and three-dimensional.

HPL employs a two-dimensional block-cyclic P-by-Q grid of processes so that good load

balance and scalability are achieved [24]. This layout is more easily visualized from

Figure 3.7.

For argument's sake, let Figure 3.7 illustrate a 36x36 matrix. Here 1, 2, 3, and 4 are the processor ids arranged in a 2x2 grid, which would equate to a block size, NBs, of six. This

blocked cyclic layout has been shown to possess good scalability properties [28].

The first test for a multi-computer benchmark will determine which grid layout

achieves the best results. A problem size, Ns, of 3500, and two process grid layouts, 1x2

and 2x1, were tested first. The remaining options do not matter for this test for it is only


























[Figure: a 36x36 matrix divided into 6x6 blocks, with rows of blocks alternating between process ids 1 2 1 2 1 2 and 4 3 4 3 4 3 to show the cyclic distribution.]

Figure 3.7. 2D block-cyclic layout




being used to determine which grid layout returns the best relative result. Also, Goto's

BLAS library will be used for the remaining tests because it performs the best on our

architecture.

The HPL.dat file used for the first multi-computer test is seen in Appendix B.3.7.

To run a multi-computer test, an additional node needs to be specified for the arguments

to mpirun. To run two nodes, the following command is entered.


redboots@apollo> mpirun -np 2 xhpl


The 2 after -np specifies the number of nodes the test will be run on. The nodes are

listed in the file machines.LINUX located in /home/apollo/hda8/mpich_1.2.5.2/util/machines/. The two nodes used will be the first two listed in this file, apollo and euclid.
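For reference, the machines file is simply a list of hostnames, one per line, and mpirun takes hosts from the top of the list. A sketch consistent with the description above (the ordering and the trailing entries are illustrative):

apollo
euclid
hydra4
# ...remaining cluster nodes follow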

From the results in Appendix B.3.8, it is clear that a "flat" grid, 1x2, performs

much better than the 2x1 layout. For the remaining tests, a 1x2 process grid layout will

be used. The next test involves selecting which panel broadcast algorithm performs the

best. The HPL.dat file used for this test is seen in Appendix B.3.9. For this test, all 6

panel broadcast algorithms will be tested. The Increasing-2-ring algorithm performed

the best for this test.














The next test will use the Increasing-2-ring algorithm. The option that returned the

best results for this run was the block size. As seen in Figure 3.8, a block size of 128

clearly performed better than the others.


[Figure: average Gflops vs. block size for 2 processors.]

Figure 3.8. Block size effect on performance for 2 nodes




The next run will use a block size of 128 and test the remaining options. From the

results, a NDIVs of 2 returned the best performance. Next, NDIVs will be set to 2 and the

test rerun with the command:


redboots@apollo> mpirun -np 2 xhpl


The following results were obtained after this run. NBMINs affected performance the most, though only slightly: an NBMINs of 1 had an average of 3.0304 Gflops, 2 had an average of 3.0308 Gflops, 4 had an average of 3.0312 Gflops, and 8 had an average of 3.0306 Gflops.

The final test will determine the maximum performance of the two computer cluster

by using the largest matrix size that will fit into memory. By using Eqs. 3.4 and 3.5, with a maximum system memory of 2 GB, the maximum matrix size is around 15800x15800.










To account for memory to run the operating system, a matrix of size 14500x14500 will

be used. The HPL.dat file used for this test is in Appendix B.3.10. This run achieved

5.085 Gflops. Out of curiosity different matrix sizes were also tested. The results of these

tests are as follows: 13500x13500 achieved 5.245 Gflops, 12500x12500 achieved 5.364

Gflops, and 11500x11500 achieved 5.222 Gflops. The results are summarized in Table

3.4.













Table 3.4. Goto's BLAS routine results-2 processors

Ns     Grid layout  Bcast algorithm  Block Size             NDIVs   NBMINs    PFACTs  RFACTs  Gflops  % T. Max
3500   2x1, 1x2     1                128                    2       8         C       C       2.585   26.93
3500   1x2          0,1,2,3,4,5      128                    2       8         C       C       2.891   30.12
3500   1x2          2                32,64,96,128,160,192   2,3,4   1,2,4,8   L,C,R   L,C,R   2.927   30.49
3500   1x2          2                128                    2,3,4   1,2,4,8   L,C,R   L,C,R   3.025   31.51
3500   1x2          2                128                    2       1,2,4,8   L,C,R   L,C,R   3.027   31.53
3500   1x2          2                128                    2       4         L,C,R   L,C,R   3.029   31.55
11500  1x2          2                128                    2       4         C       C       5.222   54.40
12500  1x2          2                128                    2       4         C       C       5.364   55.88
13500  1x2          2                128                    2       4         C       C       5.245   54.64
14500  1x2          2                128                    2       4         C       C       5.085   52.97












3.4.3 Three Processor Tests

This section will go through the steps of testing three processors and discuss the results. The steps for testing three processors are the same as for testing just two. First the best

grid layout is determined, then the best broadcast algorithm, and finally the remaining

variables that return the best results are found.

The first test determined the grid layout. As with the two processor test, a "flat"

grid of 1x3 performed the best as shown in the results. For broadcasting the messages

to the other nodes, the modified increasing-ring algorithm performed the best. Next, all the

options were tested. As in previous tests, the block size affected performance the most.

For the three processor test, the block size that achieved the overall best performance was

96. The results are shown in Figure 3.9.

[Figure: average Gflops vs. block size for 3 processors.]

Figure 3.9. Block size effect on performance for 3 nodes




The next test used a block size of 96 and tested the remaining variables. The results

from this test were inconclusive as to which parameter affects performance the most. This

test was run several times with similar results. The highest recorded performance for the










test was 2.853 Gflops which occurred four times. Since no conclusion could be made

from the results, the remaining variables were chosen to be Crout's algorithm for both the panel factorization and the recursive panel factorization, an NBMINs of 4, and an NDIVs of 2.

The final test will determine the maximum performance of our three node cluster.

To determine the maximum matrix size to use, Eqs. 3.4 and 3.5 will be used. For three

gigabytes of total system memory and accounting for operating system memory require-

ments, a maximum matrix size of 17800x17800 will be used. As with the two processor

tests, different matrix sizes will be tested to see which achieves maximum performance.













Table 3.5. Goto's BLAS routine results-3 processors

Ns     Grid layout  Bcast algorithm  Block Size             NDIVs   NBMINs    PFACTs  RFACTs  Gflops  % T. Max
3500   3x1, 1x3     1                128                    2       8         C       C       2.203   16.20
3500   1x3          0,1,2,3,4,5      128                    2       8         C       C       2.489   18.30
3500   1x3          1                32,64,96,128,160,192   2,3,4   1,2,4,8   L,C,R   L,C,R   2.778   20.43
3500   1x3          1                96                     2,3,4   1,2,4,8   L,C,R   L,C,R   2.847   20.93
17800  1x3          1                96                     2       4         C       C       6.929   50.95
16800  1x3          1                96                     2       4         C       C       6.798   49.99
15800  1x3          1                96                     2       4         C       C       6.666   49.02
19000  1x3          1                96                     2       4         C       C       6.778   49.84
18500  1x3          1                96                     2       4         C       C       7.013   51.57
18250  1x3          1                96                     2       4         C       C       7.004   51.50
18750  1x3          1                96                     2       4         C       C       6.975   51.29












From Table 3.5, the three node cluster achieved a maximum of 7.013 Gflops. Using

Equation 3.6, the three node cluster has a theoretical peak of 14.4 Gflops but only achieved

around fifty percent of that.

3.4.4 Four Processor Tests

The method of testing four processors is the same as the previous tests. The first

test will determine which grid layout delivers the best results. Besides the obvious layouts

of 1x4 and 4x1, a 4 processor test can also have a 2x2 grid layout. From the results a 2x2

grid clearly outperformed the other layouts.

From Figure 3.10, a block size of 96 overall performed much better than the others.

Several tests using a block size of 64 came close to that of 96 but the average Gflops was

much lower. To determine the maximum matrix size to be tested, Eqs. 3.4 and 3.5 are used.

[Figure: average Gflops vs. block size for 4 processors.]

Figure 3.10. Block size effect on performance for 4 nodes




The maximum matrix size that can fit into RAM is around 21500 x 21500. Results

for the four processor tests are summarized in Table 3.6.













Table 3.6. Goto's BLAS routine results-4 processors

Ns     Grid layout     Bcast algorithm  Block Size             NDIVs   NBMINs    PFACTs  RFACTs  Gflops  % T. Max
4000   4x1, 1x4, 2x2   1                128                    2       8         C       C       2.546   14.47
4000   2x2             0,1,2,3,4,5      128                    2       8         C       C       3.083   17.52
4000   2x2             2                32,64,96,128,160,192   2,3,4   1,2,4,8   L,C,R   L,C,R   3.172   18.02
4000   2x2             2                96                     2,3,4   1,2,4,8   L,C,R   L,C,R   3.272   18.59
21500  2x2             2                96                     2       4         C       C       8.717   49.53
21000  2x2             2                96                     2       4         C       C       8.579   48.74
21700  2x2             1                96                     2       4         C       C       6.939   39.43
21250  2x2             1                96                     2       4         C       C       8.642   49.10











3.4.5 Conclusions

Several conclusions can be made from the benchmarking tests. First, from the HPL test results, there is a strong correlation between how well software is optimized for a particular processor architecture and its performance. Clearly, the BLAS routines provided by Kazushige Goto, with their optimizations for a processor's cache, outperform the optimized ATLAS routines by as much as 29%. While no changes will be made to the algorithm used

by SPOOLES to solve linear equations, there is always room for improvement in compu-

tational software. The ATLAS routines were long considered the best BLAS library until

Goto's routines made further improvements.

Figure 3.11 illustrates another important point. Depending on the problem type, the maximum performance of a cluster relative to its theoretical performance generally decreases as the number of processors increases [6]. As the number of processors increases, so does the time for each compute node to intercommunicate and share data. If a cluster was only able

to reach a small percentage of its theoretical peak, there may be an underlying problem

with the network or compute node setup. If the cluster is used to solve communication

intensive problems the network may require Gigabit Ethernet or a proprietary network

that is designed for low latency and high bandwidth. On the other hand, if the problems

are computationally intensive, increasing the RAM or processor speed are two possible solutions. Although the performance ratio decreases as the number of nodes increases, the advantage

of using more nodes is that the maximum problem size that can be solved increases. A

problem size of 12000x12000 was tested under one processor and took around two and

a half hours with the virtual memory being used extensively. Using four processors, the

same problem size was solved in less than three minutes with no virtual memory being

used. Although the solve efficiency of multiple processors is less than that of a single processor, spreading the work among compute nodes allows large problems to be solved, if not efficiently, then in a much more reasonable time frame.














[Figure: maximum achieved performance as a percentage of theoretical peak vs. number of processors (1 through 4).]

Figure 3.11. Decrease in maximum performance




The block size, NB, is used by HPL to control the data distribution and computa-

tional granularity. If the block size becomes too small, the number of messages passed

between compute nodes increases, but if the block size is too large, messages will not be

passed efficiently. When just apollo was benchmarked with HPL, the best results were

obtained with a block size of 128, but as the number of nodes increases, so does the im-

portance of passing data between nodes. With multiple nodes, a block size of 96 allowed

the blocked matrix-multiply routines in HPL to return the best results.

Depending on the problem type and network hardware, parallel programs will perform strikingly differently. For example, when running cpi for the largest problem tested, one node took 53 seconds to calculate a solution. When running two compute nodes, it took around 27 seconds, while four nodes only decreased the run-time to around 20 seconds. Running two nodes nearly increased the performance two-fold, but four nodes didn't

achieve the same performance. When running a test problem under HPL, one compute










node completed a run in three minutes for a problem size of 10000x10000, two compute

nodes completed the same test run in two minutes and fifteen seconds while four nodes

took one minute forty-nine seconds. For computationally intensive problems such as those solving linear systems of equations, a fast network is critical to achieving good scalability.

For problems that just divide up the work and send it to the compute nodes without a lot

of interprocessor communication like cpi, a fast network isn't as important to achieving

acceptable speedup.
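The cpi timings quoted above can be turned into speedup and parallel-efficiency figures with a few lines of arithmetic; the sketch below simply reuses the 53, 27, and 20 second numbers from this paragraph.

# Speedup and efficiency relative to the 53 s single-node cpi run
awk 'BEGIN {
  t1 = 53;                      # one node, seconds
  split("2 27 4 20", r, " ");   # pairs of node count and run time
  for (i = 1; i <= 3; i += 2)
    printf "%d nodes: speedup %.2f, efficiency %.0f%%\n", r[i], t1/r[i+1], 100*t1/(r[i]*r[i+1]);
}'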















CHAPTER 4
CALCULIX

CalculiX [30] is an Open Source software package that provides the tools to create

two-dimensional and three-dimensional geometry, create a mesh out of the geometry,

apply boundary conditions and loadings, and then solve the problem. CalculiX is free software and can be redistributed and/or modified under the terms of the GNU General Public License [31]. CalculiX was developed by a team at MTU AeroEngines in their spare time; the team was granted permission to publish their work.

4.1 Installation of CalculiX GraphiX

There are two separate programs that make up CalculiX: cgx (CalculiX GraphiX)

and ccx (CalculiX CrunchiX). cgx is the graphical pre-processor that creates the geometry

and finite-element mesh, and also the post-processor used to view the results. ccx is the solver that

calculates the displacement or temperature of the nodes.

First go to http://www.dhondt.de/ and scroll to near the bottom of the page. Select the link under Available downloads for the graphical interface (CalculiX GraphiX: cgx) that reads a statically linked Linux binary. Save the file to a folder, in our case /home/apollo/hda8. Unzip the file by typing:

redboots@apollo> gunzip cgx_1.1.exe.tar.gz

redboots@apollo> tar -xvf cgx_1.1.exe.tar

This will create the file cgx_1.1.exe. Rename the file cgx_1.1.exe to cgx, become root, move the file to /usr/local/bin, and make it executable.

redboots@apollo> mv cgx_1.1.exe cgx

redboots@apollo> su











Password:

root@apollo> mv cgx /usr/local/bin

root@apollo> chmod a+rx /usr/local/bin/cgx


To view a list of commands that cgx accepts, type cgx at a command prompt.

redboots@apollo> cgx


CALCULIX
GRAPHICAL INTERFACE
Version 1.200000


A 3-dimensional pre- and post-processor for finite elements
Copyright (C) 1996, 2002 Klaus Wittig

This program is free software; you can redistribute it and/or
modify it under the terms of the GNU General Public License as
published by the Free Software Foundation; version 2 of
the License.

This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.

You should have received a copy of the GNU General Public
License along with this program; if not, write to the Free
Software Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139,
USA.


usage: cgx [-b|-g|-c|-duns2d|-duns3d] filename

-b build-mode, geometry file must be provided
-c read an solver input file (ccx)
-duns2d read duns result files (2D)
-duns3d read duns result files (3D)
-g use element-group-numbers from the result-file


4.2 Installation of CalculiX CrunchiX


First go to http://www.dhondt.de/ and scroll to near the bottom of the page. Select the link under Available downloads for the solver (CalculiX CrunchiX: ccx) that reads the source code. Save this file to a folder, in our case /home/apollo/hda8, and unzip.











redboots@apollo> gunzip ccx_1.1.src.tar.gz

redboots@apollo> tar -xvf ccx_1.1.src.tar

This creates the folder CalculiX with the source code inside CalculiX/ccx_1.1/src.

In order to build the serial solver for CalculiX, SPOOLES and ARPACK need to be

installed. ARPACK is a collection of Fortran 77 subroutines designed to solve large scale

eigenvalue problems [32] while SPOOLES provides the solver for sparse, linear systems

of equations [33].

4.2.1 ARPACK Installation

To download and install the ARnoldi PACKage (ARPACK), first go to the home-

page at http://www.caam.rice.edu/software/ARPACK/ and click the link for Download

Software. Download the zipped file arpack96.tar.gz to a directory, in our case /home/apollo/hda8/, and unpack.

redboots@apollo> gunzip arpack96.tar.gz

redboots@apollo> tar -xvf arpack96.tar

This will create the folder ARPACK. Enclosed in ARPACK is the source code, lo-

cated in SRC, examples, documentation, and several Makefiles for different architectures

located in ARMAKES. Copy the file $ARPACK/ARMAKES/ARmake.SUN4 to the main

ARPACK directory and rename it to ARmake.inc.

redboots@apollo> cp ARMAKES/ARmake.SUN4 ARmake.inc

Edit ARmake.inc so that it points to the main ARPACK directory, the correct Fortran

compiler, and specifies Linux as the platform. The ARmake.inc file used for our system is

located in Appendix C.1. Finally, while in the main ARPACK directory, make the library.

redboots@apollo> make lib

This will create the ARPACK library file libarpack_Linux.a in the current directory.
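For reference, the handful of ARmake.inc variables that typically need editing are shown below. The values are assumptions based on the description above (they are not a copy of Appendix C.1), so adjust them for your own directory layout and compiler.

home   = /home/apollo/hda8/ARPACK    # main ARPACK directory
PLAT   = Linux                       # platform name appended to the library file
FC     = g77                         # Fortran 77 compiler
FFLAGS = -O                          # compiler flags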











4.2.2 SPOOLES Installation

To install the SParse Object Oriented Linear Equations Solver (SPOOLES), first go

to the project's website at

http://www.netlib.org/linalg/spooles/spooles.2.2.html. Near the middle of the webpage,

there are links for documentation and software downloads. Download the file

spooles.2.2.tgz, place it in the folder SPOOLES.2.2, and unpack. The main directory for

SPOOLES is located in /home/apollo/hda8 for our installation.


redboots@apollo> cd /home/apollo/hda8

redboots@apollo> mkdir SPOOLES.2.2

redboots@apollo> mv spooles.2.2.tgz SPOOLES.2.2

redboots@apollo> cd SPOOLES.2.2

redboots@apollo> tar -xvzf spooles.2.2.tgz


This will unpack spooles.2.2.tgz in the folder SPOOLES.2.2. First, edit the file

Make.inc so that it points to the correct compiler, MPICH install location, MPICH li-

braries, and include directory. The MPICH options are only for building the parallel

solver which will be discussed later.
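As a rough sketch, the lines in Make.inc that usually need attention look like the following. The variable names follow the stock SPOOLES Make.inc, but the values here are assumptions for our setup rather than a verbatim copy of the file we used.

CC              = gcc                                  # C compiler
OPTLEVEL        = -O3                                  # optimization level
MPI_INSTALL_DIR = /home/apollo/hda8/mpich_1.2.5.2      # MPICH install location (parallel solver only)
MPI_LIB_PATH    = -L$(MPI_INSTALL_DIR)/lib
MPI_INCLUDE_DIR = -I$(MPI_INSTALL_DIR)/include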

To create the library, while in the directory /home/apollo/hda8/SPOOLES.2.2, type:


redboots@apollo> make lib


This will create the library spooles.a in $SPOOLES.2.2.

4.2.3 Compile CalculiX CrunchiX

After ARPACK and SPOOLES are installed, enter into the CalculiX CrunchiX

source directory.


redboots@apollo> cd /home/apollo/hda8/CalculiX/ccx_1.1/src











Edit the file Makefile so that it points to the ARPACK and SPOOLES main direc-

tories and to their compiled libraries, and also to the correct C and Fortran 77 compilers.

The Makefile used for our setup is seen in Appendix C.2.
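As an illustration of the kind of lines to check, a minimal sketch is given below; the variable names and values are illustrative assumptions, not the exact contents of the Makefile in Appendix C.2.

# Libraries built in the previous sections (illustrative paths)
SPOOLES = /home/apollo/hda8/SPOOLES.2.2/spooles.a
ARPACK  = /home/apollo/hda8/ARPACK/libarpack_Linux.a
LIBS    = $(SPOOLES) $(ARPACK) -lpthread -lm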

After the Makefile has been edited, type:

redboots@apollo> make

This will compile the solver for CalculiX. Now copy the solver, ccx_1.1, to /usr/local/bin and make it executable and readable by users.

redboots@apollo> su

Password:

root@apollo> cp ccx_1.1 /usr/local/bin

root@apollo> chmod a+rx /usr/local/bin/ccx_1.1

4.3 Geometric Capabilities

This section describes briefly the steps from creating a model to viewing the results

of a finite-element analysis. The first step is to create a model. For two-dimensional

geometries, points, lines, and surfaces are defined, while with three-dimensional geometries,

points, lines, surfaces, and also bodies are defined.

The easiest way to create a point in CalculiX is by defining its location in a three-

dimensional space. Each point can be assigned a name, or you can use the wild card character ! and let CalculiX name the point for you.

After points, the next geometric entities that are created are lines. A line can be straight, an arc, or a spline. To create a straight line, 2 points are selected. For an arc, a beginning point and an endpoint are selected along with the arc's center point. For a spline, multiple points are defined and then each point is selected to create the spline.

Surfaces are defined by selecting 3 to 5 lines. By using the command qsur the













mouse can be used to select lines and generate surfaces. Surfaces may be flat or curved

depending on the boundary defining lines.

Bodies are defined by 5 to 7 surfaces. After a surface is created, it can be copied

and translated or swept about a trajectory to create a body. Also, bodies can be created

by creating all the necessary surfaces and using the command qbod to select them all and

create the desired body.

4.4 Pre-processing

CalculiX allows for the creation of fairly complex models. Points, lines, surfaces,

and bodies are created, in that order, by either typing them or by using the mouse for

selection through the interface. Another option, since the file format is straightforward,

one can simply write the geometry file directly through your favorite text editor. If a point

is modified, the line that is defined partially by that point will also be modified and all

related surfaces and bodies. This also holds true for modifying lines, surfaces and bodies

and their associated geometric entities.

To begin creating geometry with CalculiX, issue the command:


redboots@apollo> cgx -b all.fbd


where all.fbd is the geometry file being created. This brings up the main screen seen in

Figure 4.1.

Creation of geometric entities can be performed in a multitude of ways as described

in the following.

4.4.1 Points

Points are created by entering its location in three-dimensional space or by the

splitting of lines. To create a point, simply type, while the mouse cursor is within the

CalculiX window, the following:





























Figure 4.1. Opening screen



pnt p1 0.5 0.2 10

This creates a point named p1 at 0.5 in the x, 0.2 in the y, and 10 in the z-direction. If the user wanted CalculiX to assign names to points, instead of typing p1, the user would replace it with !. Instead of using the CalculiX display, the following can be entered in the file all.fbd to create the same point.

PNT p1 0.50 0.20 10.0

In Figure 4.2, p1 is plotted with its name.

4.4.2 Lines

Lines are created by creating points then selecting these points or by entering a

command. To create a line by selection, at least 2 points must be created. First, enter the

command:

qlin

A little selection box appears. When one of the points is within the selection box

press b for begin and l for line when the selection box is over the second point. b is always


Figure 4.2. p1 with label




If an arc is desired, press b over the first point, c for center over the second, and l

over the third to create the arc. If a spline is desired, press s over all the defining points

between pressing b and l. Figure 4.3 shows a spline passing through 4 points.

Figure 4.3. Spline











4.4.3 Surfaces

Surfaces can be created by using the command gsur or by using the mouse, but are

limited to 3 to 5 sides. To create a surface using the mouse, the user enters the command

qsur, places a selection box over a line and presses 1 for the first line, 2 for the second,

and so on until up to 5 sides are selected. Figure 4.4 shows a surface created by 3 straight

lines and the spline.
















Figure 4.4. Surface




4.4.4 Bodies

The final geometric entities that can be created with CalculiX are bodies. Bodies

are defined by selecting five to seven surfaces, either by mouse selection or by

command-line input. Another method of creating bodies is by sweeping a surface along an

axis or rotating a surface about a center line. Figure 4.5 shows a body created by sweeping

the surface created in Figure 4.4 along the vector 2, 10, 3.
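
As a sketch of how such a sweep might be issued from the command line, assuming the surface of Figure 4.4 was automatically named A001 and using illustrative set names, the surface is first added to a set with seta and that set is then swept along a translation vector with swep:


seta handle s A001

swep handle new tra 2 10 3


This sweeps the surface along the vector (2, 10, 3) and generates the connecting surfaces and the resulting body.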

Figure 4.5. Body created by sweeping



4.5 Finite-Element Mesh Creation

This section describes the finite-element mesh creation capabilities of CalculiX.

CalculiX is able to create and handle several two-dimensional and three-dimensional

elements. Two-dimensional elements that CalculiX can create are:


2 node beam element

3 node beam element

3 node triangular element

6 node triangular element

4 node shell element

8 node shell element


Three-dimensional elements that CalculiX can create are:


8 node brick element


20 node brick element


Although CalculiX cannot create other elements at this time, the finite-element

solver is able to handle them, and CalculiX can then post-process the results. The three-

dimensional elements that can be solved but not created are:


4 node tetrahedral element

10 node tetrahedral element

6 node wedge element

15 node wedge element


After the geometry is created, CalculiX allows the user to create a finite-element

mesh. To create a mesh, the user specifies the number of elements along each edge and then

selects an element type. For 8-node brick elements, issue the command:


elty all he8


Twenty-node brick elements can be created by simply replacing he8 with he20.
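
Two-dimensional models would presumably select their element type the same way; as an assumption based on cgx's usual type codes (not listed here), qu4 and qu8 would correspond to the 4- and 8-node shell elements, tr3 and tr6 to the triangles, and be2 and be3 to the beams, for example:


elty all qu8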

Next, to create the mesh, type:


mesh all
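
Putting these steps together, a typical session for meshing the whole model and writing the mesh out for the solver might look like the following sketch; the send command and the abq format keyword are assumed from standard cgx usage and write the mesh of the set all to a file (all.msh) that the ccx input deck can include.


elty all he8

mesh all

send all abq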

CHAPTER 5
CREATING GEOMETRY WITH CALCULIX.


In this chapter, a tutorial is given on creating a three-dimensional part, applying a

finite-element mesh and boundary conditions, and solving the problem. The various

methods for creating and modifying geometric entities will be covered, along with

CalculiX's usefulness as a finite-element pre- and post-processor.

5.1 CalculiX Geometry Generation


The part that we will be creating is shown in Figure 5.1.


Figure 5.1. Final part



We will be using many different commands in CalculiX to create points, lines,

surfaces, and bodies, and to modify these entities. It is a relatively simple part, but it will

help illustrate the flexibility CalculiX gives the user when creating and editing parts,

and also its finite-element capabilities.


5.1.1 Creating Points

The first feature that will be created is the handle section. To begin, first create

points that define the boundary of the cross section.
pnt p1 -0.93181 -0.185 0.37
pnt p2 -0.93181 -0.185 0.87
pnt p3 -0.93181 -0.185 1.37
pnt p4 -1.4 -0.185 0.37
pnt p5 -1.4 -0.185 0.62
pnt p6 -1.4 -0.185 0.87
pnt p7 -1.4 -0.185 1.12
pnt p8 -1.4 -0.185 1.37
pnt p9 -1.9 -0.185 0.87
pnt p10 -1.65 -0.185 0.87
pnt p11 -1.15 -0.185 0.87
pnt p12 -1.22322 -0.185 1.04678
pnt p13 -1.22322 -0.185 0.69322
pnt p14 0 0 0.37
pnt p15 0 0 0.87
pnt p16 0 0 1.37
Now type:


plot pa all


where p stands for point, a plots the name of the point, and all plots all the points.
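
The same entity letters are used with the other plotting commands that appear later in this chapter; for example (plus, unlike plot, adds to what is already displayed rather than starting a new plot):


plot l all

plus sa all

plus ba all


where l, s, and b stand for lines, surfaces, and bodies, and appending a to the entity letter also plots the names.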

If the points are hard to see, you may move them on the screen by pressing and holding the right

mouse button and moving the mouse. If you wish to rotate the points, press and hold the left mouse button

and move the mouse. By pressing and holding the middle mouse button and

moving it, you can zoom in or out on the entities in the screen.

If a mistake was made in the typing of the coordinates, you may delete them using

the qdel command.


qdel


After you type the qdel command, press Enter. A tiny square will appear

around your mouse pointer. To make this selection box bigger, move your mouse halfway
between the upper left corner and the center of the CalculiX screen and press r. Now move

your mouse about halfway between the lower right corner and the center and press r again.

This should make the selection box bigger. If you wish to change the size and shape of

the selection box, press r in one position, move your mouse to another position,

and press r again. Anything that falls within the box can be selected. If you want to select

multiple points, press:


a


This brings the selection box into mode:a. To select a point, enclose a point in the

selection box and press:


p


where p stands for point. Other entities that may be selected are lines, l, surfaces, s, and

bodies, b. To select multiple points, enclose several points within the selection box and

press p. If you wish to go back to selecting only one point at a time, press:


i


Now that all the points are created, your screen should look similar to Figure 5.2.

The next step is to copy these points and translate them 0.37 in the positive y-

direction. To copy points, add the points to a set, then use the copy command to copy and

translate that set. To create a set, type the following:


qadd set1


which creates the set set1. Go to the left side of the CalculiX screen and press the left mouse

button. Select Orientation and then +y view. This will orient the screen so that you are

looking in the positive y-direction. Notice that a selection box appears at the tip of the

mouse pointer.

Figure 5.2. Creating points


Figure 5.3. Selection box







Now make the selection box bigger by using r to resize it. Make the box

about the same size as shown in Figure 5.3.






Now press:


a


to enter multiple selection mode. Then press:


p


to add all points within the box to set1. Now press:


q


to exit from the selection mode. Make sure points fourteen through sixteen are not

selected. Now that the points are added to the set, we can copy and translate them.


copy set1 set2 tra 0 0.37 0
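
Alternatively, the same set could be filled from the command line rather than with qadd; a sketch assuming the seta command accepts an entity-type letter (p for points) followed by the point names:


seta set1 p p1 p2 p3 p4 p5 p6 p7 p8 p9 p10 p11 p12 p13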


5.1.2 Creating Lines

Now create lines that connect these points by using the qlin command. The qlin

command works by either selecting the beginning and end points to create a line or by

selecting the two end points and the center to create an arc. First, type:

qlin

Notice that a selection box appears at the tip of the mouse pointer. Make this

box a little bigger, but small enough that it can select only one point. Place the box over p1 and press:


b


for begin. Place the selection box overp4 and press:


l


for line. Press:


q


to quit. Now plot the line that was created. A line should be plotted as shown in Figure 5.4.


Figure 5.4. Creating lines

To create an arc, type qlin.

Make the selection box a little bigger. Select p1 by putting the selection box over

the point and pressing:



b



Put the selection box over p14 and press:



c



for center. Finally, place the selection box over the copy of p1 created earlier (its name was assigned automatically by CalculiX) and press:



l



to create an arc as shown in Figure 5.5.

Now create the remaining lines so that your screen looks like Figure 5.6. If an

incorrect line was created, you may delete it using the qdel command as described above.




Figure 5.5. Creating lines





5.1.3 Creating Surfaces

The next step is to create surfaces which are made up of four lines. To create a

surface, the command qsur and a selection box will be used to select lines that belong

to the surface. qsur works by placing the selection box over a line, pressing the key 1,

placing the selection box over another line, pressing the key 2, and repeating until all the

lines that make up the surface are selected. First plot all lines with their labels:



plot la all



Now type:



qsur



A selection box appears at the tip of the mouse pointer. Make the selection box a

little bigger and place it over line L003 as shown in Figure 5.6.

Figure 5.6. Creating surfaces


Press:


1




and the line will turn red. Put the selection box over line L005 and press:


2


Now put the selection box over line L00K and press:


3


Finally, put the selection box over line L004 and press:


4


Now press:


g


to generate the surface. Press:


q


to end the command qsur. If a line is hard to select, rotate or zoom the screen so that

the line is easier to pick. Now plot the surface that was just created with plus sa all. The

screen should look similar to Figure 5.7 with the surface A001 plotted.


Figure 5.7. Creating surface A001

Now create the surface that is bounded by lines L00X, L00G, L001, and L010 in the

same manner as the first surface. The screen should now look like Figure 5.8 after the

commands:

plot l all

plus sa all
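
If typing is preferred over picking, this second surface could presumably also be defined directly with the gsur command mentioned in Section 4.4.3; a sketch assuming gsur takes a surface name, an orientation sign, a blend flag, and the signed bounding lines (the name A002 is illustrative):


gsur A002 + BLEND + L00X + L00G + L001 + L010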


5.1.4 Creating Bodies



The next step is to create a body out of surfaces. To create a body, the command qbod is

used. qbod requires 5 to 7 surfaces to define a body, or exactly 2. For this example we

will use only 2 surfaces, which are connected by single lines, to create the body. If the

other method were used, we would have to create 6 surfaces by selecting their 4 bounding

lines, then select the six surfaces that define the body, which would be a longer, more

tedious approach.

Figure 5.8. Creating surface A002




To create a body, first type:


qbod


Make the selection box a little bigger so that selecting a surface is easier. Place the

selection box over surface A001 as shown in Figure 5.9.

Press:


s


to select the surface. Now place the selection box over surface A002 and press:


s


Now press:


g

to generate the body and remaining surfaces. Your screen should look similar to

Figure 5.10 after typing the following commands:



Figure 5.9. Creating bodies


plot p all

plus l all

plus sa all

plus ba all


Figure 5.10. Plotting bodies














Now create the remaining surfaces and bodies so that your screen looks similar to

Figure 5.11.
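
Bodies that are awkward to pick can presumably also be typed directly; a sketch assuming the body command accepts a body name followed by its surfaces (here the two-surface form used with qbod above, with an illustrative name B001):


body B001 A001 A002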

Figure 5.11. Creating the handle
5.1.5 Creating the Cylinder

The next feature that will be created is the cylinder. First, create points that are on

the cylinder boundary.


x          y

 0          0
 0         -0.95
 0          0.95
 0         -0.625
 0          0.625
-0.93181    0.185
-0.93181   -0.185
-0.59699   -0.185
-0.59699    0.185
 0.71106   -0.63
 0.71106    0.63
 0.4678     0.41447
 0.4678    -0.41447











Notice that instead of giving each point a different name, we used ! to have CalculiX

generate a point name automatically. Your screen should look similar to Figure 5.12

after the commands:

plot s all

plus pa all


Figure 5.12. Creating the cylinder points


Now create the lines that define the boundary of the cylinder using the qlin

command as shown above. First, type the command:


plus l all


The lines should be created from the points as shown in Figure 5.13.

The next step is to create the surfaces that are bounded by these lines. Use the

qsur command; instead of placing the selection box over the name of each line and

selecting it, place the selection box over any part of the line. As long as the selection box

covers only one line, that line will be selected.