
Design Space Exploration of Virtual Machine Appliances for Wide-Area Distributed Computing

ACKNOWLEDGMENTS

I have many to thank for successfully getting this far. By far the most important is Donna, who continually encourages me to pursue my passions. I deeply appreciate Professor Figueiredo for giving me my first graduate assistantship and following me through to graduation. Further, I thank both Professor Figueiredo and Professor Boykin for their hard work and dedication toward my research and related research. I also thank Professor Lam for taking a chance on me what seems like an eternity ago, and my parents, who have always encouraged me even though, as time goes on, they understand less and less of what I am doing. Finally, I know deep in my heart that without a loving and forgiving God, none of this would have been possible.

TABLE OF CONTENTS

ACKNOWLEDGMENTS
LIST OF TABLES
LIST OF FIGURES
ABSTRACT

CHAPTER
1 INTRODUCTION
  1.1 Problem Statement
  1.2 Defining the Solution
  1.3 Thesis Outline
2 BACKGROUND
  2.1 Grid Computing
  2.2 Virtualization
  2.3 Virtual Networking
3 THE CONSTRUCTION OF THE GRID APPLIANCE
  3.1 Virtual Machines
    3.1.1 Virtual Machine Independence
    3.1.2 Data Portability
  3.2 Condor
  3.3 IPOP
    3.3.1 Architecture
    3.3.2 Services
  3.4 Security
    3.4.1 Virtual Machines
    3.4.2 Firewalls and Virtual Networking
    3.4.3 IPsec
    3.4.4 The Fall Back
  3.5 Administration
  3.6 User Interfaces
    3.6.1 Application Access
    3.6.2 Data Access
4 RELATED WORK
5 SYSTEM VALIDATION AND PERFORMANCE
  5.1 Validation
    5.1.2 Ping Test
    5.1.3 Simple Nodes on Planet-Lab
    5.1.4 Grid Appliance System Independence
  5.2 Performance Evaluation
    5.2.1 SimpleScalar
    5.2.2 PostMark
    5.2.3 Iperf
    5.2.4 Discussion
6 CONCLUSION
  6.1 Current Deployments
  6.2 Conclusion
REFERENCES
BIOGRAPHICAL SKETCH

LIST OF TABLES

3-1 IP address to hostname mapping using IPOP's DNS server.
5-1 The ping test results dating from 02/24/07 to 03/14/07. The first four loss entries refer to the runs, while the final entry contains data for individual peer-to-peer communication for all runs.

LIST OF FIGURES

1-1 High-level architectural overview of the Grid Appliance.
3-1 UnionFS layout in the Grid Appliance.
3-2 IPOP deployed in the same domain as the VM.
3-3 IPOP deployed in a separate execution domain from the VM.
3-4 VNC session powered by the Grid Appliance web interface running CACTI.
3-5 AJAX session powered by the Grid Appliance web interface running SimpleScalar.
5-1 Example run of condor_status.
5-2 Grid Appliance running on Windows using VMware Server.
5-3 Grid Appliance running on Linux using VMware Server.
5-4 Grid Appliance running on Mac OS X using VMware Fusion.
5-5 SimpleScalar results show the overall execution times (in minutes) for the execution of the Go benchmark in three different configurations.
5-6 PostMark I/O throughput results, in read/write MB/s.
5-7 Iperf results are given in megabits per second; bigger is better.
6-1 Current deployment of Grid Appliances from the Grid Appliance Web Interface.

PAGE 9

9

Permanent Link: http://ufdc.ufl.edu/UFE0020420/00001

Material Information

Title: Design Space Exploration of Virtual Machine Appliances for Wide-Area Distributed Computing
Physical Description: Mixed Material
Copyright Date: 2008

Record Information

Source Institution: University of Florida
Holding Location: University of Florida
Rights Management: All rights reserved by the source institution and holding location.
System ID: UFE0020420:00001





Full Text





DESIGN SPACE EXPLORATION OF VIRTUAL MACHINE APPLIANCES FOR
WIDE-AREA DISTRIBUTED COMPUTING



















By

DAVID ISAAC WOLINSKY


A THESIS PRESENTED TO THE GRADUATE SCHOOL
OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT
OF THE REQUIREMENTS FOR THE DEGREE OF
MASTER OF SCIENCE

UNIVERSITY OF FLORIDA

2007

































© 2007 David Isaac Wolinsky



































I dedicate this to Donna; I deeply appreciate her understanding and pray that the next

time I am not so overwhelmed.









ACKNOWLEDGMENTS

I have many to thank for successfully getting this far. By far the most important

is Donna, who continually encourages me to pursue my passions. I deeply appreciate

Professor Figueiredo for giving me my first graduate assistantship and following me

through to graduation. Further, I thank both Professor Figueiredo and Boykin for

their hard work and dedication toward my research and related research. Also, I thank

Professor Lam for taking a chance on me what seems like an eternity ago. Also, my

parents who have always encouraged me even though, as time goes on, they understand

less and less of what I am doing. Finally, I know deep in my heart that without a loving

and forgiving God, that none of this would have been possible.









TABLE OF CONTENTS


page


ACKNOWLEDGMENTS ..................................................... 4

LIST OF TABLES ...................................................... 7

LIST OF FIGURES ..................................................... 8

ABSTRACT ............................................................ 9

CHAPTER

1 INTRODUCTION ...................................................... 10

     1.1 Problem Statement ........................................... 10
     1.2 Defining the Solution ....................................... 11
     1.3 Thesis Outline .............................................. 13

2 BACKGROUND ........................................................ 14

     2.1 Grid Computing .............................................. 14
     2.2 Virtualization .............................................. 17
     2.3 Virtual Networking .......................................... 19

3 THE CONSTRUCTION OF THE GRID APPLIANCE ............................ 21

     3.1 Virtual Machines ............................................ 21
          3.1.1 Virtual Machine Independence ......................... 22
          3.1.2 Data Portability ..................................... 22
     3.2 Condor ...................................................... 23
     3.3 IPOP ........................................................ 24
          3.3.1 Architecture ......................................... 24
          3.3.2 Services ............................................. 27
     3.4 Security .................................................... 29
          3.4.1 Virtual Machines ..................................... 30
          3.4.2 Firewalls and Virtual Networking ..................... 31
          3.4.3 IPsec ................................................ 31
          3.4.4 The Fall Back ........................................ 33
     3.5 Administration .............................................. 33
     3.6 User Interfaces ............................................. 35
          3.6.1 Application Access ................................... 35
          3.6.2 Data Access .......................................... 37

4 RELATED WORK ...................................................... 42

5 SYSTEM VALIDATION AND PERFORMANCE ................................. 44

     5.1 Validation .................................................. 44
          5.1.1 Condor ............................................... 44
          5.1.2 Ping Test ............................................ 44
          5.1.3 Simple Nodes on Planet-Lab ........................... 45
          5.1.4 Grid Appliance System Independence ................... 45
     5.2 Performance Evaluation ...................................... 46
          5.2.1 SimpleScalar ......................................... 50
          5.2.2 PostMark ............................................. 51
          5.2.3 Iperf ................................................ 51
          5.2.4 Discussion ........................................... 53

6 CONCLUSION ........................................................ 54

     6.1 Current Deployments ......................................... 54
     6.2 Conclusion .................................................. 55

REFERENCES .......................................................... 57

BIOGRAPHICAL SKETCH ................................................. 62









LIST OF TABLES
Table page

3-1 IP Address to hostname mapping using IPOP's DNS server. ......................... 28

5-1 The ping test results dating from 02/24/07 to 03/14/07. The first four loss entries
    refer to the runs, while the final entry contains data for individual peer to peer
    communication for all runs. .................................................... 46









LIST OF FIGURES
Figure page

1-1 High-level architectural overview of the Grid Appliance. ........................ 12

3-1 UnionFS layout in the Grid Appliance. ........................................... 23

3-2 IPOP deployed in the same domain as the VM. ..................................... 25

3-3 IPOP deployed in a separate execution domain from the VM. ....................... 26

3-4 VNC session powered by the Grid Appliance web interface running CACTI. .......... 40

3-5 AJAX session powered by the Grid Appliance web interface running SimpleScalar. .. 41

5-1 Example run of condor_status. ................................................... 45

5-2 Grid Appliance running on Windows using VMware Server. .......................... 47

5-3 Grid Appliance running on Linux using VMware Server. ............................ 48

5-4 Grid Appliance running on MAC OS/X using VMware Fusion. ......................... 49

5-5 SimpleScalar results show the overall execution times (in minutes) for the execution
    of the Go benchmark in three different configurations. ......................... 50

5-6 PostMark I/O throughput results, in read / write MB/s. .......................... 51

5-7 Iperf results are given in megabits per second, that is bigger is better. ....... 52

6-1 Current deployment of Grid Appliances from the Grid Appliance Web Interface. .... 55









Abstract of Thesis Presented to the Graduate School
of the University of Florida in Partial Fulfillment of the
Requirements for the Degree of Master of Science

DESIGN SPACE EXPLORATION OF VIRTUAL MACHINE APPLIANCES FOR
WIDE-AREA DISTRIBUTED COMPUTING

By

David Isaac Wolinsky

May 2007

Chair: Renato J. Figueiredo
Major: Electrical and Computer Engineering

Virtual machines (VMs) provide an ideal way to package complete systems in a file

and deploy complete application environments without hindering pre-existing software

on a computer. By using VMs, the ability to develop, deploy, and manage distributed

systems has been greatly improved. This paper explores the design space of VM-based

sandboxes, where the following techniques facilitate the usability of secure nodes for

grid computing: grid schedulers, DHCP-based virtual IP address allocation on virtual

LANs, self-configuring virtual networks supporting peer-to-peer NAT traversal, stacked

file systems, IPsec-based host authentication and end-to-end encryption of communication

channels, and user interfaces. Experiments with implementations of single-image VM

sandboxes, which incorporate the above features and are easily deployable on hosted I/O

VMMs, show execution time overheads of 10.6% or less for a batch-oriented CPU-intensive

benchmark.









CHAPTER 1
INTRODUCTION

1.1 Problem Statement

International Business Machines Corporation (IBM) proposes the idea of autonomic

computing [1], which states that systems should be self-configuring, self-healing,

self-optimizing, and self-protecting. This relates to many issues existing in grid computing

[2], such as the deployment, maintenance, and accessibility of grid resources. Deployment

of grid resources focuses primarily on the complexity of the software stack and its

dependencies. Maintenance involves the minimum number of tasks required by an

administrator in order to keep a well running grid system. The ability of users to interact

with grid resources depends on how accessible the grid resource is. The problem is "What

are the requirements for the grid software stack and how are they best met?"

The process of setting up a computing grid can be very detailed. Grid software tends

to have many different complex configurations with relatively few plug and play systems;

furthermore, many grid software packages still require other dependencies which are not

included in the core software package. The issue can be complicated by requiring a specific

or limited set of hardware configurations. Oftentimes, grid software requires dedication of

the underlying hardware leading to system utilization inefficiencies.

Though developers ideally want a grid system which never needs any updates in the

software stack, often times issues are discovered after release or new, desirable features are

made available. The issue of making these updates available to the system often times can

be a complex process at a minimum requiring that the administrator at each grid node

download the update, apply it to the system, and then tweak each system accordingly.

There are a few systems which have automatic updating features, but oftentimes these
require super-user-level services which may interfere with using the hardware

resource for other purposes. If an update breaks pre-existing software or configurations,

this can lead to an even larger nightmare.









Users prefer common interfaces and few are willing to learn the complexities of

text-based grid interfaces, that is, they are most comfortable with a graphical "point

and click" interface where only user level configuration data is visible and all system

level configuration data is transparent to them. Even if a graphical user interface (GUI)

was available, each different application would need its own unique interface and most

GUIs are complex to program, which limits the amount of applications developers or

administrators can make available via a "point and click" system. In an age of frustrated

information technology (IT) professionals and computer users, people are less likely
to install foreign software on their computers, whether it be an ssh, X11, Virtual Network

Computing (VNC), or grid system specific client.

1.2 Defining the Solution

The solution proposed in this thesis is called Grid Appliance and deals with the

above issues while still limiting any compromise in terms of software or hardware

configuration and with limited performance overhead. At the heart of Grid Appliance

lies virtualization, primarily through packaged virtual machines (VMs). As suggested by
previous research [2], the Grid Appliance takes advantage of technologies such as the

Internet, distributed computing, and peer-to-peer networking.

The benefits of using virtual machines in grid computing [3] "include security,

isolation, customization, legacy support, and resource control" with very limited overhead

in processor-intensive applications. The Grid Appliance has been designed to run on the two
most popular virtual machine technologies, VMware [4] and Xen [5], and support is in
progress for VMs such as VirtualBox [6], Parallels [7], Qemu [8], and Kernel-based Virtual

Machine (KVM) [9].

The most important use of virtualization in the Grid Appliance is encapsulation: not only
are disks virtualized, but a full system is also able to execute inside another in
a non-obtrusive way. With respect to virtual disks, VMs allow for the creation of file

systems which are placed into a single portable file. With this capability, all the software

































Figure 1-1. High-level architectural overview of the Grid Appliance.


used in the system is installed onto the virtual disk, and since the software runs on
abstracted hardware, no reconfiguration is required. The key software features of the Grid
Appliance fall into these categories (Figure 1-1): system services (a), miscellaneous services

(b), administrative services (c), web interface (d), and network services (e).

System services are varied and include portable file systems via UnionFS and virtual

network through IPOP (IP over Peer-to-Peer). Miscellaneous services contain software

which can easily be interchanged such as the grid scheduler and local user interface library.

Administrative services include automatic updates and administrative ssh. The web interface
provides the capability of publishing content as well as user access to the system. Users

can access the command line and their files through the network services.
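
To make the stacked file-system idea concrete, the short sketch below models how a union
mount such as UnionFS resolves reads and writes across a read-only base layer and a
writable user layer. It is a simplified illustration under assumed names (the UnionModel
class and the sample path are hypothetical), not the Grid Appliance implementation, which
relies on the actual UnionFS module described in Chapter 3.

```python
# Simplified model of a two-layer union mount: reads fall through from the
# writable user layer to the read-only base layer; writes always land in the
# user layer (copy-on-write), leaving the shared base image untouched.
# Hypothetical illustration only -- not the Grid Appliance implementation.

class UnionModel:
    def __init__(self):
        self.base = {"/etc/condor/condor_config": "base configuration"}  # read-only layer
        self.user = {}                                                   # writable layer

    def read(self, path):
        # The topmost layer that contains the path wins.
        if path in self.user:
            return self.user[path]
        if path in self.base:
            return self.base[path]
        raise FileNotFoundError(path)

    def write(self, path, data):
        # Writes never modify the base layer, so one base image can be shared
        # (and updated) independently of per-user or per-site state.
        self.user[path] = data


fs = UnionModel()
fs.write("/etc/condor/condor_config", "site-specific override")
print(fs.read("/etc/condor/condor_config"))   # "site-specific override"
print(fs.base["/etc/condor/condor_config"])   # base layer remains unchanged
```

This is the property that lets one base image be distributed and updated while user and
site layers travel separately.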

Some of this material has been presented before [10], which is the work of the same

author. While this thesis presents many new concepts and a wider breadth of information, it is









closely related to the original paper. Due to this, that paper will only be referenced here,
acknowledging that it was the premise for further and future investigation.

1.3 Thesis Outline

This thesis is outlined as follows: Chapter 2 reviews background projects, Chapter 3
discusses the construction of the Grid Appliance, Chapter 4 overviews related work,
Chapter 5 validates and tests the performance of the Grid Appliance, and finally Chapter 6
discusses the current deployment, concludes this thesis, and sheds light on future work.









CHAPTER 2
BACKGROUND

The primary features of the Grid Appliance are virtual computing, grid computing,

and virtual networking. This chapter reviews projects in those fields and presents

arguments as to why a specific project was chosen over others for inclusion in the Grid

Appliance.

2.1 Grid Computing

The Grid Appliance provides the ability to create wide area distributed systems with

the focus on high throughput computing [11], in other words, the ability to provide a large

amount of processing power over a large period of time. This is because wide area systems

tend to not provide the best environment for parallel applications due to high latency;

however, they are ideal for sequential applications.
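
A rough back-of-the-envelope calculation helps illustrate why latency, rather than raw
bandwidth, is the deciding factor here. The numbers below (1 ms LAN versus 100 ms wide-area
round trips, and the synchronization counts) are assumed purely for illustration; they are
not measurements from the Grid Appliance.

```python
# Illustrative only: assumed latencies and synchronization counts, not measured data.
lan_rtt = 0.001      # seconds per synchronization on a local cluster (assumed)
wan_rtt = 0.100      # seconds per synchronization across the wide area (assumed)
compute_time = 3600  # one hour of useful computation per task (assumed)

# A tightly coupled parallel task that synchronizes 100,000 times:
syncs = 100_000
print("parallel, LAN:", compute_time + syncs * lan_rtt, "s")   # ~3,700 s
print("parallel, WAN:", compute_time + syncs * wan_rtt, "s")   # ~13,600 s

# An independent sequential (high-throughput) task needs only a handful of
# round trips to fetch input and return output, so WAN latency is negligible:
io_round_trips = 10
print("sequential, WAN overhead:", io_round_trips * wan_rtt, "s")  # 1 s
```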

The need for grid computing is clear: users want to be able to submit many tasks in

a secure environment with the knowledge that they will be processed in due time. This

creates many obstacles focused primarily on users' ability to access the resources and

ensuring there are enough resources available given the amount of users in the system.

Most modern systems use a central manager scheme, in which all users connect to these
managers and submit their jobs to the cluster through them. The pitfall of this design is that it creates

a single point of failure, where if the manager or managers go down, access to the system

is denied. Furthermore, the effort associated with maintaining the system by IT personnel

can be costly.

Adding new resources and managing existing resources is complex, error-prone,

and costly; this happens to be the bulk cost of a system. Systems like PlanetLab [12]

have reduced the complexity of adding new resources by providing a CD image and

a configuration file that can be copied to a floppy drive or a USB device; however,

this system still requires a central manager in terms of person and computing. The

main computer provides the only way users can gain access to the individual nodes;

furthermore, this central system also contains the main system image which all nodes









must download before they can be used. Administrators are required to allow new users to

access resources, initiate the process of resource adding, and to fix configuration bugs in

existing resources.

The cluster computing software stack consists primarily of job managers such as

PBS and its forks [13-15], Condor [16-18], Sun's Grid Engine [19], Load Sharing Facility

[20], and RES. This software requires a central manager to which all worker nodes must

have direct communication. In order to submit to the nodes, a user must have access to

a machine which can communicate to all the worker nodes, where both machines are in

the same address space. This removes the ability for a default-configured Condor or PBS
system to talk from one private network to another. This is an

issue where universities may want to set up a shared distributed computing grid but are

unwilling to give each machine in the pool a public IP address due to the lack of public IP

addresses or security issues. The solution proposed for this problem is virtual networking,

which is discussed further in chapter 3.5.

The basic needs for the job manager [21, 22] in the Grid Appliance's case are:
1. the ability to handle hundreds of nodes and more jobs than workers;
2. the ability for any node that can execute jobs to also submit them;
3. an open-source project with a free, full-featured version;
4. graceful handling of system and network issues;
5. support for user-level checkpointing;
6. support for shared cycles from desktops;¹
7. the ability to prevent rogue nodes from submitting jobs.

There are three products based upon PBS (Portable Batch System): OpenPBS [14], PBS Pro [13], and Torque [15]. OpenPBS is an older version of PBS that was released to the open source community. According to Altair, it is not well suited for systems that want to run more than 100 jobs or more than 32 nodes. Altair's flagship job manager, PBS Pro, is closed source and is free only to the academic community, requiring a paid license otherwise.


¹ That is, if a machine is taken over by the local user, the job will suspend and restart later or migrate to an available machine.









TORQUE is a fork of OpenPBS. TORQUE and PBS Pro have a similar feature set and are capable of handling thousands of nodes and more jobs than workers, with support for interactive jobs as well. PBS and its derivatives were designed to run on dedicated resources and thus do not provide for point 6 mentioned above; in addition, there is only kernel-level support for checkpointing.

Sun's Grid Engine has a feature set similar to PBS, except that it supports many more operating systems and allows for the execution of parallel applications. Another bonus for the Grid Engine is that it supports user-level checkpointing and handles point 6 well. The Grid Engine comes in both a free and a support-backed version. The source is available from the Internet, and the packaging is robust and supported on many operating systems.

Condor is meant for shared resources: a job will suspend if the computer is accessed by a local user. Furthermore, Condor is free and open source, allowing for changes to be made to it. Condor is the only scheduler of these three to have a strong presence in academia, in large part because it is still actively developed by the University of Wisconsin. The Grid Appliance's primary users are expected to come from academia; because Condor is developed by a university, and for the other reasons listed, it was selected as the default scheduler in the Grid Appliance. The faculty involved in the development of the Grid Appliance also had strong ties to and background with Condor, so using it reduced the overhead of learning how to configure and use the system.

The last two mentioned, Load Sharing Facility and RES, are both closed source and require paid licenses, which goes against the principles of the Grid Appliance. While Condor was selected as the default manager for now, there was significant interest in features that Grid Engine provides that Condor does not, and it remains a possibility if those features are needed in the future.

Thanks to the Globus Alliance, pools can be shared in a secure fashion. Their software, the Globus Toolkit [23], provides the ability to build and unite grid applications and systems. For example, the schedulers Condor and PBS have released tools, Condor-G and PBS-Globus respectively, that allow external connections to their pools. In fact, the current deployment of the Grid Appliance allows access to the Condor pool via Condor-G.

Imagine a system where a user installs a lightweight, non-intrusive application that connects directly to a distributed cluster running on homogeneous systems. This application requires no user input besides pressing the "virtual power on" button, and updates are invisible to the user. The software requires no additional network cards besides an active connection to the Internet. If the user ever breaks the application or it has internal strife, all the user has to do is restart the application; often, however, the application will be able to recover on its own. This application is a "black box": the user's system has no idea what is running inside, nothing outside the system can affect it, and reciprocally nothing inside of it can affect the system. System managers can turn on the software and walk away, while users are given multiple levels of entry depending on their needs and skill sets, such as a web interface, console access, and direct file system access. This is all available in the form of the Grid Appliance. The remaining chapters discuss the merits of the components of the Grid Appliance and their related works.

2.2 Virtualization

Back in the late 1960s and early 1970s, IBM led research into the use of virtual computing to time-multiplex fully complete, isolated systems to individual users. The system was called CP/CMS and later VM/CMS, which stood for Control Program (Virtual Machine) / Cambridge Monitor System. The hypervisor approach used in CP/CMS relies on a minimal kernel that provides virtual interfaces to the underlying hardware. This approach only works on fully virtualizable hardware [24] or with kernels that have been programmed to work around machine shortcomings, as Xen [5] has done. The rise of workstations and cheap computing saw the end of mainstream research into, and use of, virtual computing.

In the late 1990s and early 2000s, virtual computing made a comeback led by VMware [4]. Virtual computing is now being used as a way to increase reliability and security and to reduce machine count. The mainstream processor at this time is based upon the x86 ISA, which is not completely virtualizable because not all sensitive instructions are a subset of the privileged instructions [24]. In this case, the virtual machine is interpreted: each instruction is converted into compatible code and executed on an emulated processor, which can be extremely slow. VMware has spent a considerable amount of research time making this process faster by means of dynamic recompilation [25], in which code is read ahead of run time and rewritten to run on the existing hardware. This enables non-virtualizable systems to be virtualized. VMware's two free virtualization environments, Player and Server, are based upon a standalone process running in a host computer, whereas ESX is based upon the hypervisor concept. The main problem with ESX is that it is supported on a limited number of hardware configurations, and to this date SATA hard drives are still not supported.

There is still a considerable amount of overhead in dynamic recompilation, and so research at Cambridge came up with the idea of using paravirtualization. Paravirtualization takes advantage of the fact that a large fraction of system calls does not require the use of privileged instructions. What Xen does is change the composition of the guest operating system's system calls so that, instead of executing privileged instructions, they execute hypercalls that trigger the host operating system to deal with the privileged instructions. The disadvantage occurs in systems that have several system calls that trigger hypercalls over and over. Furthermore, as stated earlier, Xen does require the use of a non-standard kernel, the installation of which is daunting for even experienced users. Xen is based upon the hypervisor concept, but this is for the most part invisible to users, as Xen uses drivers from Linux and is therefore as compatible with differing hardware as Linux is.

Hardware companies are recognizing the need to support virtualization at the hardware level and have begun to add instructions to assist virtual machines. Xen was the first software suite to show this publicly. This was followed by a project called KVM (Kernel-based Virtual Machine), whose purpose was to show how efficient a VM can be thanks to hardware virtualization instructions and already existing operating system code. Specifically, KVM uses the Linux kernel's subsystems to fill the VMM role. Thanks to this, its code base can be kept to a minimum, while the major features of the VMM have been thoroughly tested over a long time as part of a well-validated kernel. Shortly after the introduction of hardware virtualization, VMware published a paper [26] showing that hardware virtualization was still not on par with their software approach.

Another aspect of virtual machines is the ability to multiplex network interface cards. This is done in two different ways: bridging and network address translation (NAT). Bridging multiplexes the device at OSI layer 2, such that the VM's network devices have an IP address on the local area network. NAT, on the other hand, multiplexes right above the network layer, such that the virtual machine has a private address unknown to the host machine's local area network. The two primary advantages of NAT are security and no loss of local IP addresses.

2.3 Virtual Networking

The basis for the "Grid pi .lI!, i. is "flexible, secure, coordinated resource sharing

among dynamic collections of individuals, institutions, and resources" [2]. This works

fine in homogeneous situations; however, the Internet does not provide a homogeneous

environment given firewalls and NATs (network address translators).

Firewalls provide a l-v-r of security by permitting and denying traffic from entering

a network. NATs allow multiple hosts on a private network to share a common public

address. Tightly controlled firewalls and NATs are usually made such that the host(s) are

unable to receive network messages until they have made communication with a public

host. This connectivity will only last so long as the firewall or NAT allows the state to

exist, commonly 30 seconds. Often times these two devices are used as a means of security

which TCP/IP does not inherently give, so it would be unacceptable to remove their use.

The problem in this case is defined as how to gain access to resources that are behind









incompatible networks. The solution is virtual private networks (VPNs) with support for

TCP/IP.

The simplest form of VPN is the centrally managed VPN, such as CiscoVPN [27] and OpenVPN [28]. These systems require that a user have a certificate, user name, and password to gain connectivity to the network. Several similar offerings have been provided specifically for the grid community, such as ViNe [29], VNet [30], and Violin [31]; however, none of these solutions provide encrypted communication. All these technologies share a common issue: a centralized system that requires administrators to set up addressing and routing rules, and having this central system allows the network to be easily compromised.

This gives motivation for distributed systems, such as IPOP (Internet Protocol over Peer-to-Peer, or IP over P2P). IPOP is built upon the principle of peer-to-peer networking, where all nodes are created equal, acting as both server and client. In peer-to-peer models, new peers start by connecting to known good peers listed in a preconfigured file. This initial list has no size limit and is encouraged to be very large. As long as one of those peers is alive, new nodes can join the network, and because the system is peer to peer, even if all the initial peers go down, already connected peers will remain connected. After connecting to the first peer, nodes attempt to discover other nodes that are close to them.









CHAPTER 3
THE CONSTRUCTION OF THE GRID APPLIANCE

The basis for the Grid Appliance has been defined through the use of virtual

machines, grid scheduling software, and virtual networking. In this chapter, the discussion

focuses on implementation of these features into the Grid Appliance. Furthermore, topics

including security, administration, and user interfaces are introduced and discussed in

depth. No one feels safe using products that do not present any form of system security; through the techniques it employs, the Grid Appliance presents multiple layers of security, which provide a truly safe environment. Without administration features, Grid Appliance systems would likely not be redeployed outside of the initial system. Providing a user interface means that even users who are new to grid environments will not feel alienated and will have access to a rapid task-scheduling environment.

3.1 Virtual Machines

Modern virtual machines are highly portable, encapsulated systems in a file that

provide a homogeneous virtual system running on heterogeneous hosts, which can be

run on any system with supporting software. As of this date, VMware has support for

the three major operating systems, Windows, Linux, and MacOS (more information is available in Appendix A), and the Grid Appliance has successfully run on all three. The Grid Appliance is distributed as a compressed file weighing in at 229 megabytes. VMs are capable of running pre-built operating systems that require no configuration from the user; because of this, the Grid Appliance can be used independently of the underlying hardware and software configurations. One thing not mentioned in any paper is that the homogeneous environment provided by virtual machines is limited by the instruction set of the processor; for this reason, the software included in the Grid Appliance is compiled to run on at least the 686 (or Pentium Pro / II) architecture. Knowing this, testing of the Grid Appliance concerns only components inside the system.









3.1.1 Virtual Machine Independence

From the ground up, the Grid Appliance was developed with a focus on being a

perfect fit for generic virtual machines. This presented quite a conflict, because VMware's hardware configuration differs slightly from those of rival virtualization products; further, VMware has developed a disk format that has only recently been opened to the public, but not much work on compatibility has been done.¹ To date, the Grid Appliance has successfully booted in VMware and Xen, and testing will soon begin for Qemu, KVM, Parallels, and VirtualBox.

3.1.2 Data Portability

As mentioned earlier, VMs provide encapsulation in the form of VM disks; the Grid Appliance takes advantage of that by supporting multiple layers of disk access provided by UnionFS [32]. The idea is that there are three layers: a base operating system layer, a developers' layer, and a users' layer. These layers, or stacks, are combined into a single file system from the user's perspective (see Figure 3-1). The base layer contains the core configuration of the Grid Appliance and can be shared by multiple virtual machines, reducing disk space costs; this layer is read-only. The developers' or site-specific layer is used by local system administrators to add special functionality to the Grid Appliance that is not provided in the default image. To access this, a user need only select at boot that they would like to go into development mode. At the end of the session, the developer runs a script, which blanks out any Grid Appliance configuration, and is then able to release the new image to users. The user layer affords users the ability to migrate their data from one machine to another without worrying about the underlying configuration of that Grid Appliance.



¹ A utility provided by the Qemu group called qemu-img allows for the conversion of single-file, growable VMware images; however, this is only one of the four possible VMware disk configurations.






























Figure 3-1. UnionFS layout in the Grid Appliance.


3.2 Condor

The decision to use Condor is based on it being open source, extremely fault tolerant, and easily configurable. Since previous sections have gone into detail about Condor's features, this section focuses on its configuration in the Grid Appliance. One issue with Condor is that it binds to an IP address instead of a device. Because of this inconvenience, a script in the Grid Appliance periodically checks the IP address and, if needed, restarts Condor with the new address. Making a Grid Appliance a manager, worker, or submission-only node requires only the creation of a specific file; as the Condor scripts created for the Grid Appliance start, they check this file and start Condor appropriately. A similar mechanism is used for the Condor manager's IP address, which is stored in a file so that users can easily change that configuration value as well.
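As an illustration of this check-and-restart logic, the following sketch polls the current address and restarts Condor when it changes. The state-file path, the way the address is obtained, and the init-script location are assumptions made for the example, not the actual Grid Appliance scripts.

    #!/usr/bin/env python
    # Minimal sketch of the "restart Condor when the IP changes" idea.
    # The state-file path, address lookup, and init script below are
    # illustrative assumptions, not the actual Grid Appliance scripts.
    import socket
    import subprocess

    STATE_FILE = "/var/run/condor_last_ip"   # hypothetical location

    def current_ip():
        # A real script would query the IPOP TAP device; resolving the
        # hostname is merely a stand-in for "whatever Condor binds to".
        return socket.gethostbyname(socket.gethostname())

    def last_ip():
        try:
            return open(STATE_FILE).read().strip()
        except IOError:
            return None

    if __name__ == "__main__":
        ip = current_ip()
        if ip != last_ip():
            with open(STATE_FILE, "w") as f:
                f.write(ip)                       # remember the new address
            subprocess.call(["/etc/init.d/condor", "restart"])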

To validate the Condor installation, the following tasks are performed (a submission sketch for the first task follows this list):
* batch execution of 100 standard output jobs,
* a test case of Condor's DAG scheduler,
* checkpointing jobs,
* Condor flocking.²
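The first validation task can be approximated with a short script that writes a Condor submit description for 100 jobs and hands it to condor_submit; the executable used here is a placeholder, and only standard submit-description keywords are assumed.

    #!/usr/bin/env python
    # Sketch: submit a batch of 100 standard-output jobs to Condor.
    # The executable is a placeholder; only standard submit keywords
    # (universe, executable, output, error, log, queue) are used.
    import subprocess
    import tempfile

    SUBMIT_LINES = [
        "universe   = vanilla",
        "executable = /bin/hostname",            # placeholder test binary
        "output     = test_$(Process).out",
        "error      = test_$(Process).err",
        "log        = test_batch.log",
        "queue 100",
    ]

    def main():
        with tempfile.NamedTemporaryFile("w", suffix=".submit",
                                         delete=False) as f:
            f.write("\n".join(SUBMIT_LINES) + "\n")
            path = f.name
        # A non-zero exit also doubles as a schedd-connectivity check.
        subprocess.check_call(["condor_submit", path])

    if __name__ == "__main__":
        main()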


3.3 IPOP

Because the Grid Appliance uses IPOP, there is very limited centralized management of the virtual network. A detailed discussion of the IPOP architecture is addressed by related work [33, 34]; this thesis only lightly touches on the IPOP architecture prior to the Grid Appliance and the changes made for the Grid Appliance and for user-friendliness. For a reliable and self-managing network, three services were added to IPOP: DHCP (Dynamic Host Configuration Protocol), DNS (Domain Name System), and ARP (Address Resolution Protocol).

3.3.1 Architecture

IPOP contains four major assemblies: Brunet, libtuntap, IPRouter, and SimpleNode; this is done to separate major features from each other. The Brunet library contains the peer-to-peer connectivity. Libtuntap is used to read and write from the virtual network adapter, TAP [35]. IPRouter binds libtuntap and Brunet such that it is responsible for sending and receiving data over Brunet and interfacing with libtuntap. SimpleNodes are bound to Brunet and are used to set up static peers for inclusion on the initial list of IPOP peers from which other nodes connect to the system. The traditional mechanism for deploying IPOP can be seen in Figure 3-2. The application sends and receives on the TAP device, while IPOP reads and writes to the TAP device. Incoming packets are received by IPOP and written to TAP. Outgoing packets are read by IPOP, converted, and sent over the physical Ethernet device. The IPOP virtual network address space is 10.128.0.0/255.128.0.0 in this example.

One of the major problems with peer-to-peer technologies is that NATs and firewalls tend to make connectivity difficult. To get around this, Brunet employs UDP (user datagram protocol) and NAT hole-punching techniques.



² Condor flocking allows a job submitter to schedule jobs to execute on another Condor manager's system when all the nodes on the submitter's system are occupied.









Figure 3-2. IPOP deployed in the same domain as the VM.

Furthermore, if nodes go down, Brunet is able to reconnect the system without user interference, making it self-healing. The focus now switches to the architecture that is relevant to the Grid Appliance's internals, the local and remote network stacks, setting aside what happens in the peer-to-peer networking overlay, i.e., Brunet.

In the original papers discussing IPRouter [33, 34], it required a lot of user input to start successfully. This is because it needed to know details such as the hardware and IP addresses of the TAP device; that is, there was no true DHCP process. This also meant that the TAP device needed to be completely set up prior to starting IPRouter. A lot of effort went into making it such that the only requirement for starting IPRouter is that a TAP device exists on the system. IPRouter now learns these addresses by reading them out of the packets going over the TAP device.

Enabling a robust IPRouter that can learn its addresses allows for dynamic changes to system configurations, such as a change in IP address; furthermore, it allows for two interesting possibilities: IPRouter running in a different execution domain than the virtual network communication (Figure 3-3) and multiple virtual network adapters per IPRouter. The application sends and receives over its logical Ethernet device, which is exposed to the host as a VIF (virtual interface). The IPOP bridge connects the VIF to the TAP device, which IPOP reads and writes. IPOP sends and receives packets for the 10.128.0.0/255.128.0.0 virtual network over the host's physical Ethernet device. At this point in time, only the first feature has been implemented and tested. The second feature is similar to how the ViNe router [29] allows multiple machines behind a single domain to communicate over the regular local area network adapter.


Figure 3-3. IPOP deployed in a separate execution domain from the VM.


The work on running IPRouter in a different execution domain includes testing with Xen and VMware. In Xen, the idea is to change Xen's bridging rules to use the TAP device instead of the Internet-facing network devices, typically eth0. In VMware, this is done by bridging the host-only network connection to the TAP device. In these cases, the operating system used was Linux; the TAP device is used primarily to bridge data into IPRouter, while the actual virtual network interface in the virtual machine connects over the virtual network (IPOP) without the VM's knowledge.
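A minimal sketch of the VMware variant of this setup on a Linux host is shown below: the host-only interface and the TAP device are attached to a common software bridge so that guest traffic reaches IPRouter. The interface and bridge names are assumptions for illustration and may not match the actual Grid Appliance configuration.

    #!/usr/bin/env python
    # Sketch: on a Linux host, bridge VMware's host-only interface to the
    # IPOP TAP device so guest traffic reaches IPRouter.  The interface
    # and bridge names are assumptions, and the script must run as root.
    import subprocess

    BRIDGE = "ipopbr"
    MEMBERS = ["vmnet1", "tap0"]   # host-only adapter and IPOP's TAP device

    def run(*cmd):
        subprocess.check_call(list(cmd))

    if __name__ == "__main__":
        run("brctl", "addbr", BRIDGE)
        for dev in MEMBERS:
            run("brctl", "addif", BRIDGE, dev)
            run("ip", "link", "set", "dev", dev, "up")
        run("ip", "link", "set", "dev", BRIDGE, "up")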

The final portion of IPOP is libtuntap, which contains the operating-system-dependent code for communicating with the TAP device. The interface has been written in such a way that the same IPRouter and Brunet assemblies can be run on any system as long as a libtuntap library is available for that system. To date, this has been coded for Linux and Windows.

3.3.2 Services

In creating a self-configuring grid environment, two important services are required: DHCP [36, 37] and DNS [38, 39]. Both of these are found in IPOP. DHCP, or Dynamic Host Configuration Protocol, allows for the assignment of a unique IP address from a central server. The Domain Name System provides a way to map machine names to their IP addresses and vice versa. In the original IPRouter, the local machine needed routing rules to access the network; to get around this, IPRouter now features an ARP [40] service.

In IPOP, there are two forms of DHCP: one works similarly to the standard format, using a centralized server with communication over SOAP (Simple Object Access Protocol), while the other works in a distributed manner, using distributed hash table [41] creates to obtain IP addresses. The former has been in place since Summer 2006 and will eventually be phased out as the latter becomes more stable. When IPRouter receives a packet from the TAP device that has DHCP port numbers associated with it, it recognizes it as a DHCP packet. This is allowable because the DHCP port numbers are defined in an RFC [37] and reserved by the system specifically for this purpose. Libraries have been written to support the decoding (and later encoding) of these types of packets. Whether SOAP or DHT (distributed hash table) DHCP is enabled results in different behavior. In the case of SOAP, the client sends the SOAP server a request, and the SOAP server maps an IP address to the requesting client with a configurable lease time. In the case of DHT, the client attempts to use Brunet's DHT [41] feature to









IP Address     Hostname
10.128.0.1     C128000001
Table 3-1. IP address to hostname mapping using IPOP's DNS server.


make exclusive creates in the DHT space to reserve an IP address; if this is successful, the node has obtained that IP address. The major benefit of DHT is that it is distributed and does not contain a single point of failure; however, DHT requires that the Brunet system be in very good condition, with all nodes accessible to all nodes. Thus the SOAP option remains viable until there is enough stability in the Brunet system. By this mechanism, the DHCP clients that exist in current operating systems are capable of setting the IP address of the TAP device, further reducing user configuration.
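The port-based recognition of DHCP traffic can be sketched as follows. The function assumes an untagged IPv4 Ethernet frame as read from the TAP device and is only an illustration of the idea, not the decoder used inside IPOP.

    # Sketch: recognize a DHCP packet read from the TAP device by its UDP
    # ports.  Assumes Python 3 (frame is bytes) and an untagged IPv4
    # Ethernet frame; the real IPOP decoder handles more cases than this.
    import struct

    DHCP_PORTS = {67, 68}   # server and client ports reserved by RFC 2131

    def is_dhcp(frame):
        if len(frame) < 42:                     # Ethernet + IPv4 + UDP headers
            return False
        ethertype = struct.unpack("!H", frame[12:14])[0]
        if ethertype != 0x0800:                 # not IPv4
            return False
        if frame[23] != 17:                     # IP protocol field: 17 = UDP
            return False
        ihl = (frame[14] & 0x0F) * 4            # IP header length in bytes
        src, dst = struct.unpack("!HH", frame[14 + ihl:14 + ihl + 4])
        return src in DHCP_PORTS and dst in DHCP_PORTS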

Many classic grid applications require the use of DNS hostnames, thus motivating the creation of the DNS service. This service is executed outside of the IPOP execution domain for modularity purposes and is written in Python. One problem noticed when building the DNS and DHCP system was that, by default, the DNS server list is overwritten by the most recent DHCP call. For Linux, this has been fixed by the use of the resolvconf package; however, later versions of resolvconf have made it such that if a DNS server is running on localhost, no other DNS servers are needed. This problem required the use of an older version of resolvconf than the one provided with the Grid Appliance's current operating system. In older incarnations of the Grid Appliance, an edited hosts file was used instead of DNS; since there must be a mapping for each host name to IP address, this file could be in excess of megabytes. Using the DNS server method, name lookups shrank from measurable seconds to milliseconds. The DNS server used is of a simple design, where each IP address is mapped directly to a single hostname; for an example, see Table 3-1.
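The mapping in Table 3-1 suggests a purely computed translation: each of the last three octets of the virtual IP is zero-padded to three digits and prefixed with "C". The helper below is inferred from the table rather than taken from the IPOP source, so treat it as a sketch of the convention.

    # Sketch of the hostname convention implied by Table 3-1: the last
    # three octets of the 10.x.x.x virtual address, each zero-padded to
    # three digits, prefixed with "C".  Inferred from the table, not
    # taken from the IPOP source.
    def ip_to_hostname(ip):
        octets = ip.split(".")
        assert octets[0] == "10", "only the 10.x.x.x virtual space is mapped"
        return "C" + "".join("%03d" % int(o) for o in octets[1:])

    def hostname_to_ip(name):
        body = name[1:]                      # drop the leading "C"
        return "10." + ".".join(str(int(body[i:i + 3])) for i in range(0, 9, 3))

    assert ip_to_hostname("10.128.0.1") == "C128000001"
    assert hostname_to_ip("C128000001") == "10.128.0.1"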

Prior to the inclusion of ARP, the user needed to add a rule to the routing table to add a gateway, and the machine needed a fake hardware address added to the ARP cache. The new implementation responds to ARP requests, so that the nodes now think they are on the same layer 2 network.
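A rough illustration of such an ARP service is given below: it listens for ARP requests on the TAP device through a raw socket and answers each one with a fixed, fabricated gateway MAC address. The interface name and MAC value are assumptions, and the real service lives inside IPRouter rather than in a standalone script.

    #!/usr/bin/env python
    # Sketch: answer every ARP request seen on the TAP device with a
    # fabricated gateway MAC so peers believe the virtual gateway sits on
    # their layer-2 network.  Linux-only (AF_PACKET), must run as root;
    # the interface name and MAC value are assumptions.
    import socket
    import struct

    IFACE = "tap0"
    FAKE_MAC = bytes.fromhex("fe0000000001")     # made-up hardware address

    def main():
        s = socket.socket(socket.AF_PACKET, socket.SOCK_RAW,
                          socket.htons(0x0806))  # ARP frames only
        s.bind((IFACE, 0))
        while True:
            frame = s.recv(2048)
            if len(frame) < 42:
                continue
            op = struct.unpack("!H", frame[20:22])[0]
            if op != 1:                          # only answer requests
                continue
            sender_mac, sender_ip = frame[22:28], frame[28:32]
            target_ip = frame[38:42]
            reply = (sender_mac + FAKE_MAC + b"\x08\x06" +        # Ethernet
                     struct.pack("!HHBBH", 1, 0x0800, 6, 4, 2) +  # ARP reply
                     FAKE_MAC + target_ip +                       # sender pair
                     sender_mac + sender_ip)                      # target pair
            s.send(reply)

    if __name__ == "__main__":
        main()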

3.4 Security

In providing a public grid system, security is of the utmost importance. The Grid

Appliance's goals in this arena are to prevent users of the Grid Appliance from affecting the host machine and from affecting remote machines. The two areas focused on in the Grid Appliance are sandboxes and centralized, tightly controlled systems. It is also important to provide the ability to create completely private Grid Appliance systems, which focuses on the use of encrypted channels.

Sandboxes [42] provide truly isolated computing environments at the cost of limited usability. The fundamental idea is that once the task is crunching, nothing is let in or out until the task is completed. This makes true sandboxes non-ideal for user interaction and positions them as execution environments only. This is done by removing any network interfaces from the virtual machine.

Having a centralized environment makes management simple, because the administrator can easily see when users are misbehaving and deal with it accordingly. The downside presents itself when the system's servers are hacked, either stealing user data or, more likely, causing a denial of access that essentially takes the servers offline. If this were the case, the execution environments would complete jobs and have nowhere to send them, possibly causing the results to be lost; furthermore, no new jobs could be added, since users would be unable to connect. User data would also be unavailable during this downtime.

Taking the idea of the sandbox, the Grid Appliance employs virtual machines, firewalls, virtual networking, and IPsec to create one. VMs provide an abstracted unit separated from the underlying system. Firewalls along with the virtual network ensure that incoming and outgoing Internet traffic pertains specifically to the grid system. IPsec makes it such that only trusted nodes are allowed into secure pools. This approach is significantly different from that proposed by OurGrid [43], which uses a complex system for deploying tasks to VMs and receiving files over NFS.

3.4.1 Virtual Machines

If a VM is ever overtaken, the worst thing an intruder can do is compromise the data inside the virtual machine and cause the virtual machine to behave erratically; this is because of the abstraction between the VM and the host machine. In computers there are at least two levels of privilege: system and user modes. The system level has control over everything and can only be affected by system-privileged tasks, whereas the user mode can be controlled by all levels; the operating system kernel, however, usually creates independent execution platforms for each application. VMs execute in user mode.

There is a lot of difficulty in keeping applications from affecting each other, which motivates running applications in different domains altogether; this can be achieved using a VM. The VM is software that runs in user mode but allows guest operating systems to run on a virtual CPU believing that they are the sole owner of the underlying processor. This makes it such that processes inside the VM do not know about processes running outside on the host and vice versa. Therefore, software bugs in the VM cannot affect software running outside the VM. Of course, this places some reliance on the VMM being secure and stable software. The advantage VMMs have over operating systems lies in their simplicity. Also, most VMM software developers are constantly looking for holes and fixing them as soon as possible.

The VM still allows users to take advantage of resources given to them by the VMM, such as Internet-connected network devices and the ability to run dangerous processes. Network issues are covered in the next two sections, and the effect of dangerous processes is ameliorated by the tweakable parameters VMMs provide, which allow users to specify memory and processor allocations for specific VMs. A user can also shut down a faulty VM at any time.









3.4.2 Firewalls and Virtual Networking

In order to prevent communication from outside the Grid Appliance to inside and vice versa, a strict firewall is implemented. A firewall is a super-user-level application that acts as a middle man in network communication. In its simplest form, it takes either a send or a receive and checks whether the transmission should be accepted based upon one or more of the following parameters: sender port, receiver port, sender IP address, receiver IP address, local transmission device, and protocol.

Firewalls by themselves are limited, because there still need to be open ports and allowed protocols for grid software to work. The approach taken by the Grid Appliance is to consolidate all grid software traffic onto a virtual network, IPOP, which runs over a single network port and protocol, and to block all other network traffic via the firewall. Since the operating system prevents new processes from using a bound port and all other ports are closed, only a super user could start new network services. Thus security comes down to how well defended super-user access is.
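The consolidation described above boils down to a handful of firewall rules: default-deny, allow loopback and the TAP device, and open the single UDP port that IPOP uses. The sketch below expresses that policy with iptables; the port number and interface names are placeholders rather than the appliance's actual rule set.

    #!/usr/bin/env python
    # Sketch: lock the appliance down to IPOP's single UDP port with
    # iptables.  The port number and interface names are placeholders;
    # run as root.
    import subprocess

    IPOP_UDP_PORT = "15000"   # hypothetical port used by the IPOP node

    RULES = [
        ["iptables", "-P", "INPUT", "DROP"],
        ["iptables", "-P", "FORWARD", "DROP"],
        ["iptables", "-A", "INPUT", "-i", "lo", "-j", "ACCEPT"],
        # Traffic arriving over the virtual network itself (the TAP device).
        ["iptables", "-A", "INPUT", "-i", "tap0", "-j", "ACCEPT"],
        # Replies to connections the appliance itself initiated.
        ["iptables", "-A", "INPUT", "-m", "state",
         "--state", "ESTABLISHED,RELATED", "-j", "ACCEPT"],
        # The one hole to the outside world: IPOP's UDP transport.
        ["iptables", "-A", "INPUT", "-p", "udp",
         "--dport", IPOP_UDP_PORT, "-j", "ACCEPT"],
    ]

    if __name__ == "__main__":
        for rule in RULES:
            subprocess.check_call(rule)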

In the Grid Appliance there are only two ways to obtain super-user access: know the password of the local user, or have the administrator's ssh certificate and password; furthermore, the only way to access root remotely is via an ssh server that requires the use of certificates to obtain access. The purpose of the administrator's ssh access is discussed in more detail in Section 3.5 and will probably be removed once the project is more mature.

Another form of virtual networking used is the host-only network interface provided by VMware, which allows the creation of network adapters in guest operating systems that can only communicate with the local host. This configuration allows all forms of networking to transmit across it. This is further discussed in Section 3.6.

3.4.3 IPsec

The basis for IPsec (IP security) [44] is that communication amongst a group of nodes occurs in an encrypted and secure manner. IPsec is based on the SSL (secure sockets layer) method. There is a two-step process for nodes entering an IPsec system: obtaining a signed certificate and handshaking with the other nodes in the pool.

Consider a system whose certificate authority is named Alice and to which a new node, Bob, would like to connect. Bob starts by creating a certificate request and includes in it a unique identifier, perhaps his land-line number. This is sent to Alice, who will decide whether or not to sign the request and return it to Bob. If Alice approves, she will send Bob back a signed certificate, along with her own public certificate, that will gain him access to the system.

With this certificate, Bob can now talk to other members of Alice's system. Maybe he wants to talk to Carol, because Carol has some unused resources that Bob would like to take advantage of. Bob begins by contacting Carol, stating that he wants to begin secure communication. The first part of the operation is to confirm that the users are who they say they are, through the use of their signed certificates and Alice's public certificate. The idea is that Carol asks Bob for his phone number, compares it to what the caller identification states, and follows that up with the reasoning: the caller ID says it is Bob, Alice trusts that it is Bob, so therefore I trust Bob. The second phase involves Carol and Bob setting up encryption and decryption keys over which they will pass their data. This communication ends when either Carol or Bob decides to end it, or when they go for a long period of time without talking and are disconnected.

What the method above lacks is a description of an automatic way for Bob to get his certificate request to Alice and for Alice in turn to respond to Bob's request. In the Grid Appliance, the method is still being refined but works by starting Bob in an insecure pool. At any time, Bob can start a script that will automatically create the certificate request and send it to Alice. Alice is presented with Bob's credentials and simply clicks an accept or deny button which, if accepted, will automatically sign Bob's certificate and send it, along with Alice's public certificate, back. At that point, Bob is transferred into the secure pool where only certified nodes can play. While it is entirely possible to allow Alice to automatically accept nodes, at this point in time the option is still left as a conscious decision for Alice to make.
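The exchange can be sketched with standard OpenSSL commands: Bob generates a key and a certificate request whose common name carries his virtual IP, and Alice signs the request with the pool's CA key. File names and the subject layout are assumptions; the Grid Appliance scripts wrap an equivalent flow.

    #!/usr/bin/env python
    # Sketch of the Bob/Alice exchange using the openssl command line.
    # File names and the subject layout are illustrative assumptions;
    # the Grid Appliance scripts wrap an equivalent flow.
    import subprocess

    def run(*cmd):
        subprocess.check_call(list(cmd))

    def bob_makes_request(virtual_ip):
        # Bob: a new RSA key plus a certificate request whose common name
        # is his virtual IP (the "phone number" in the analogy above).
        run("openssl", "req", "-new", "-newkey", "rsa:2048", "-nodes",
            "-keyout", "bob.key", "-out", "bob.csr",
            "-subj", "/CN=%s" % virtual_ip)

    def alice_signs_request():
        # Alice: sign the request with the pool CA (ca.crt/ca.key must
        # already exist), then return bob.crt and ca.crt to Bob.
        run("openssl", "x509", "-req", "-in", "bob.csr",
            "-CA", "ca.crt", "-CAkey", "ca.key", "-CAcreateserial",
            "-out", "bob.crt", "-days", "365")

    if __name__ == "__main__":
        bob_makes_request("10.128.0.2")
        # ... bob.csr travels to Alice over the insecure pool ...
        alice_signs_request()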

To summarize, IPsec provides a secure way for a system of nodes to talk amongst each other using a common public certificate, given by Alice. Nodes cannot impersonate other nodes unless they obtain both the signed certificate from Alice and the node's address. Furthermore, nodes can be excluded from pools via an interface to Condor or by adding them to the exclusion list included with Alice's certificate. In the Grid Appliance, the phone number is replaced by the virtual IP address, allowing point-to-point authentication in a simple, scalable manner. This technique is ideal for creating independent, private pools of Grid Appliances and also for ensuring user authentication in public systems.

3.4.4 The Fall Back

In the case that all else fails and the network becomes uncontrollable, there are many methods by which the system can be retaken. This all depends on the level of corruption, but as a fail-safe scenario, it is possible to restart the entire grid system within minutes, thanks to the convenient system-in-a-file packaging presented by virtual machines; this, of course, only works for nodes which are under the direct control of the administrator. The downside is that nodes not directly in the control of the main system administrator would need to be restarted by their local administrators. However, there is a significant set of easy-to-use scripts that will stop and delete bad instances and create and start clean instances. As another step, the Brunet namespace could be changed such that infested nodes would be unable to reconnect to the now clean system. This is a scenario that will most likely never be seen but has been prepared for in case of such an emergency.

3.5 Administration

Management of clusters is not trivial, and adding the complexity of a distributed system complicates the matter further. In most clusters, different machines are assigned some unique identification by the administrator, so that if a machine acts up, the administrator can easily identify it and diagnose the problem. In the case of the Grid Appliance this is impossible, since nodes are given IP addresses via DHCP, and since the deployment of SimpleNodes is required for initial connectivity. The problem is further extended by allowing users to add new nodes to the system at will.

To deal with these issues, the Grid Appliance has been designed to be self-healing. In cases where a machine is removed from Internet access for long periods of time, IPOP will continue to seek connectivity to nodes until Internet access is restored. If jobs are lost over Condor, they will be re-executed upon reconnecting to the server. In fact, jobs are given a two-hour window for nodes to reconnect before they are dropped from the node executing them. Updates are handled through Debian's "dpkg" interface and will be rolled back upon failure. As in conventional physical machine environments, there are situations in which the system state is such that it requires a reboot to re-initialize the virtual machine; however, if the disk is corrupted, there is no other solution besides starting a new virtual machine. That is as simple as copying new files over the old virtual machine's files.

There have also been investigations into how to provide support for a global administrator. The primary method thus far relies on each Grid Appliance running an ssh server that listens only on the virtual network and accepts only a pre-defined ssh key. An administrator can access any virtual machine in the pool and diagnose issues as needed. This approach has also been used in PlanetLab. The main downside is that if the problem lies in IPOP, this approach is useless. As a workaround, it has been proposed that an external application run on the host, which could provide an ssh tunnel directly into the virtual machine. There are also scripts available that help in the deployment of the Grid Appliance on systems that run VMware Server and have ssh connectivity.

Finally, Condor provides a way to check the status of the current pool by running the "condor_status" application. An administrator can monitor the count of nodes to ensure that machines are able to connect, stay connected, and are functioning properly. Other management features of Condor include "Condor Quill", which provides a history of job submissions, and "CondorView", which provides a graphical representation of current and past utilization.
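In its simplest form, watching the pool reduces to counting the machine entries that condor_status reports. The sketch below assumes only the standard condor_status command; where the count is logged, and what counts as an alarming drop, are left open.

    #!/usr/bin/env python
    # Sketch: record how many machines the pool currently reports.
    # Uses condor_status -format to print one Name per line; where the
    # count is logged and what counts as a problem are left open.
    import subprocess
    import time

    def pool_size():
        out = subprocess.check_output(
            ["condor_status", "-format", "%s\n", "Name"]).decode()
        return sum(1 for line in out.splitlines() if line.strip())

    if __name__ == "__main__":
        print(time.strftime("%Y-%m-%d %H:%M:%S"), pool_size())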

The work done so far is only an initial foray into the topic of administration, and since the environment of the Grid Appliance is still in development, this topic has not yet been covered in as much depth as it could be.

3.6 User Interfaces

Traditionally, access to grids has been limited to tools like SSH, telnet, SFTP, and FTP. In the worst case, these tools require experience with a console and knowledge of their special commands. SFTP and FTP at least have the benefit of GUIs coded for them, making it easier to navigate file systems, whereas SSH and telnet force users to remain in the console and learn the hosted operating system's command-line interface. All these systems also require that a user be given direct access to remote machines by adding a user account on the local machine or network. This chapter's focus is a discussion of user interfaces as an enabler for regular users on grid systems.

3.6.1 Application Access

To access most grid resources, a user must be comfortable with a console environment, as the majority of tools are only available there. This can present a large and undesirable learning curve for users who are only interested in a single application run with different data sets. By having a user interface, the grid environment becomes more accessible and hence should attract a more diverse population of users. Two examples of grid systems providing user interfaces are InVIGO [45] and NanoHub [46].

Most grid systems offer only user control features and a system overview in a user-friendly interface. This is the case for sites like the TeraGrid Portal [47] and PlanetLab. Even Condor provides utilities to easily view system statistics, such as the previously described "CondorView" [48]. The challenge faced by grid systems is that there is no single user interface for the vast library of user applications. The issue is complicated by the difficulties of user interface programming.

InVIGO provides a user front-end for job submission; however, developers of interfaces are forced into working with form-based, static content and a limited API. The concept is good; however, InVIGO is difficult to port, given that most features have been designed from the ground up to support InVIGO. Users prefer more lively and dynamic content, which at this point in time can only be provided through a Java or Flash runtime or natively through Javascript, none of which are supported in InVIGO.

NanoHub relies on VNC [49] sessions using Rappture [50]. VNC provides a desktop session on a remote computer, where tasks execute remotely but results are displayed locally. Rappture is a Tcl/Tk-based application which takes as input an XML (extensible markup language) file and displays a graphical user interface. The user is instructed to give some inputs and submit them for execution. This calls up a script, specified in the XML file, to execute given the variables and return the result. These scripts can be written in many of the most popular languages, such as Perl, Python, C, and Tcl, and are easily portable to other languages. The two main advantages of Rappture are the simplicity of separating displayed content from task execution and the provision of an easy-to-work-with framework that does not require the user to code any graphical user interface content.

The OurGrid software suite includes MyGrid, which provides a user interface for submitting jobs over the Internet. The GUI portion of it provides tabs for adding new jobs, which come in predefined files, for the status of current jobs, and for the status of the system; however, according to the latest documentation, it does not run on Windows.

By itself, the Grid Appliance provides an X11-based window manager from which the user interacts with their system. Furthermore, the web interface for the Grid Appliance also supports VNC sessions. The main benefit of this is that users can code for the native APIs, whether they be Windows classes, Windows Forms, X11, TK, GTK, QT, etc., rather than coding for the web. Graphics APIs by nature provide dynamic presentations and do not require the developer of an application to adapt already written software for a web interface. Furthermore, the Grid Appliance employs Rappture and provides a library that makes it easy to submit Rappture jobs to a scheduler. One other nice feature of VNC is that, by default, sessions will not expire until they are shut off.

The web interface for the Grid Appliance also provides for AJAX [51]-based user interfaces. AJAX, or asynchronous JavaScript and XML, provides a way to create interactive web applications that have been supported in most web browsers since the early 2000s, including Internet Explorer, Firefox, Mozilla, and Opera, to name a few. The benefit of this over VNC is that the JavaScript runs on the client side, reducing the extra resources required for VNC sessions, which have been measured to be at least 10 MB of main memory per instance. Furthermore, when the server machine crashes there is no way to save the state of a VNC session; the user variables in a web page, however, are easily stored to disk, providing for session handling even in the case of a hardware failure.

Local users are given multiple ways to interface with the machine. As mentioned before, there is a default X11-based window manager called IceWM, which is similar to the Windows 9x user interface. Users can also access the system through SSH (secure shell), for those desiring not to be confined to a graphical user interface. Work is underway to make each Grid Appliance capable of hosting web interfaces available only to the local host; this is going to be based off the web interface in a module-based format.

3.6.2 Data Access

Another major facet of user interfaces is access to user data. InVIGO provides a web interface to the user's file system, whereas NanoHUB uses DavFS [53]. Another widely accepted remote file sharing platform is Samba, which is the default file sharing system for Windows.

The only way to access user files through InVIGO is through the web, via a Perl application called Drall [52]. The downside to this is that the user is forced to download any files prior to accessing them locally. Furthermore, it is very difficult to write scripts that would automatically retrieve files. The positive side, however, is that InVIGO developers do not need to concern themselves with how to "mount" their file system in other operating systems, because as long as there is a web browser, their file system works.

DavFS [53] is an HTTP (hypertext transfer protocol) based file system. It allows web systems to integrate DavFS directly into a web interface and take advantage of the user data there, as opposed to requiring extra user accounts through software such as PAM (pluggable authentication modules) and LDAP (lightweight directory access protocol), similar to what FTP (file transfer protocol) and SSH require. Currently, to access DavFS shares in Windows, a user needs to add a new remote folder, which is not trivial for regular users; in most flavors of Linux, the remote folder must be mounted, which is even more complicated than the Windows procedure. On the positive side, there are many web applications that can read a DavFS share over the Internet, acting as remote DavFS clients.

The most widely integrated file system to date is Samba, which is the basis for Windows file sharing. The downside of Samba is that the versions supported on Windows have a weak level of security, and they too require integration with a password authentication system, like FTP and SSH. Samba's saving grace is that a client is integrated into almost every modern file system manager to date.

Given this wealth of options, the Grid Appliance supports all these various data access modes. File sharing for the web interface uses DavFS and includes a web interface to access files; these are based on implementations for PHP. The advantage of DavFS in this system is that virtual user directories are set up for individuals as they register accounts on the web interface; the DavFS server used in the Grid Appliance is made specifically for this purpose. This method provides a secure file system without requiring direct access to the remote machine. For local instantiations of the Grid Appliance, users are able to access their files through Samba without knowledge of a password or other complications.





















Figure 3-4. VNC session powered by the Grid Appliance web interface running CACTI.




Figure 3-5. AJAX session powered by the Grid Appliance web interface running SimpleScalar.









CHAPTER 4
RELATED WORK

The Grid Appliance is not the first project that has attempted to put a grid system into a virtual machine as a redeployable image. Similar projects include Globus Workspaces and OurGrid. Other projects have also sought to bring grid computing in a non-intrusive way, notably the Minimal intrusion Grid.

Globus Workspaces [54, 55] are abstract units that are not a one-size-fits-all solution, proposing customized VMs for different applications. There is some discussion on connectivity regarding certificates, but not on how they are distributed nor on how well VMs must be connected to interoperate. Their grid system is based upon Globus Toolkit 4 [23]; how this is configured is not discussed. On a final note, the purpose of Globus Workspaces is to run in a well-defined environment, as software must exist to start, stop, and pause the VMs, and there is no discussion of direct user interaction. A logical conclusion is that adding new execution machines is not a simple, distributed process.

OurGrid [43] has goals similar to those of the Grid Appliance: a peer-to-peer oriented grid structure that makes it easy to use grid resources. The OurGrid project has three different tiers: middleware managers called OurGrid Peers, user interfaces called MyGrid, and execution nodes called SWAN (sandbox without a name). The OurGrid Peers help enable the peer-to-peer system and broker fair trade amongst the different sites. MyGrid and SWAN were discussed earlier in the chapters regarding user interfaces and security, respectively. The Grid Appliance system requires only two entities: the SimpleNodes, which are maintained by the main administration units of the system, and the Grid Appliance itself. Users can turn any machine into an execution node without reconfiguring the system, as is required for SWAN. The Grid Appliance has the local installation capabilities of MyGrid; however, there is a firm belief that any nodes submitting jobs should also be executing them as well.

The Minimal intrusion Grid (MiG) [56] proposes the creation of new grid software, leaving behind existing technologies. The advantage of this approach is a simplified, adaptable code base providing the minimal feature set that the grid system requires. This is one of the downsides of the Grid Appliance: containing so many different projects means that the size and complexity of the system are increased; however, the benefit of using standard software sets is their maturity and features that would be difficult to add to new software. The MiG project has recently released execution nodes that can run as a Windows screen saver, akin to the SETI [57] and FOLDING [58] at-home projects. The basis for trust is the use of SSL certificates, which can only be obtained directly from their website. There is no discussion on creating independent MiGs, nor of the safety of execution nodes from hostile software; however, the website references the screen saver as a sandbox. Submission occurs from the website, an idea similar to how the Grid Appliance Web Interface works. There is no discussion of how jobs are scheduled or how scheduling occurs. They make claims of decentralized grid systems but do not go into details on the matter. The project seems to have stagnated since April 2006.









CHAPTER 5
SYSTEM VALIDATION AND PERFORMANCE

This chapter overviews the validation and performance evaluation of the Grid Appliance system.

5.1 Validation

Having a large distributed system makes it a nightmare to ensure that there is good connectivity among all nodes in the pool. In fact, a user may start an instance that goes unreported because it is unable to connect. To guarantee at least some sanity in the system, tools and methods have been developed which test the connectivity of all workers to the master, of all nodes to each other, and the state of the SimpleNodes running on Planet-Lab. Just because the nodes have good connectivity does not mean that Condor is working; to ensure that it is, the tasks outlined in Section 3.2 are performed. The focal point of this section is the state of the distributed system.

5.1.1 Condor

Through the use of "condor_status", the state of all currently connected nodes is listed (Figure 5-1). The idea is to watch the change in system size to determine whether there may be connectivity issues. Using the Grid Appliance Web Interface, this information is already available from a web site, making the task of watching the status of the system very easy.

5.1.2 Ping Test

Just because a system is responding well to the manager node, which is what "condor_status" tells us, does not mean that the system is well connected. For that case, a script has been deployed on all Grid Appliances that pings every node in the system 3 times every twelve hours to determine connectivity. Thanks to the work of Professor P. Oscar Boykin, the task of obtaining relevant data from the ping test's logs has been made much easier (Table 5-1).
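A reduced version of that test is sketched below: each known peer is pinged three times and the run is bucketed by its loss rate, mirroring the categories of Table 5-1. The node-list location is a placeholder, and only the standard ping flags for count and timeout are assumed.

    #!/usr/bin/env python
    # Sketch: ping every known peer 3 times and bucket each run by loss,
    # mirroring the 0/33/66/100% categories of Table 5-1.  The node-list
    # path is a placeholder.
    import subprocess

    NODE_LIST = "/etc/grid-appliance/nodes"   # hypothetical list of virtual IPs

    def loss_for(host, count=3):
        failures = 0
        for _ in range(count):
            rc = subprocess.call(["ping", "-c", "1", "-W", "2", host],
                                 stdout=subprocess.DEVNULL,
                                 stderr=subprocess.DEVNULL)
            if rc != 0:
                failures += 1
        return 100 * failures // count

    if __name__ == "__main__":
        buckets = {0: 0, 33: 0, 66: 0, 100: 0}
        with open(NODE_LIST) as f:
            hosts = [line.strip() for line in f if line.strip()]
        for host in hosts:
            buckets[loss_for(host)] += 1
        for loss, runs in sorted(buckets.items()):
            print("%3d%% loss: %d runs" % (loss, runs))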



































Figure 5-1. Example run of condor_status.


5.1.3 SimpleNodes on Planet-Lab

Because of Brunet's ability to self-heal, even if all the SimpleNodes go offline, the system will reform and stay connected, although no new nodes will be able to connect. To ensure that a majority of the Planet-Lab SimpleNodes are active, a script that checks the status of every node in the pool runs every other day, stating whether each SimpleNode is on or not. In the event of an upgrade or of finding nodes down, another script will update or reinstall SimpleNode on all the Planet-Lab nodes in parallel.

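A sketch of the status-check idea is shown below; it is not the deployed script, and the host list file, SSH slice name, and process name are assumptions. It probes each Planet-Lab host in parallel over SSH and reports whether a SimpleNode process is running.

    #!/usr/bin/env python
    # simplenode_check.py -- hypothetical sketch: probe Planet-Lab hosts in parallel
    # and report whether the SimpleNode process is running on each.
    import subprocess
    import threading

    SSH_USER = "uf_grid_appliance"   # assumed Planet-Lab slice name
    PROCESS = "SimpleNode"           # assumed process name to look for

    def check(host, results):
        cmd = ["ssh", "-o", "ConnectTimeout=10", "%s@%s" % (SSH_USER, host),
               "pgrep -f %s > /dev/null && echo up || echo down" % PROCESS]
        try:
            results[host] = subprocess.check_output(cmd).decode().strip()
        except subprocess.CalledProcessError:
            results[host] = "unreachable"

    def main(host_file="planetlab_nodes.txt"):
        with open(host_file) as f:
            hosts = [line.strip() for line in f if line.strip()]
        results = {}
        threads = [threading.Thread(target=check, args=(h, results)) for h in hosts]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        for host in sorted(results):
            print("%-40s %s" % (host, results[host]))

    if __name__ == "__main__":
        main()

An update or reinstall pass would follow the same pattern, replacing the remote command with the appropriate installation steps.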
5.1.4 Grid Appliance System Independence

A major goal for the Grid Appliance is true system independence; this is provided by the system-in-a-file concept that can be run on x86 virtual machines and emulators, such as VMware and Qemu, respectively. To date, the Grid Appliance has successfully been run









Type                               Occurrences    Percent of total
Runs                               272,918        100.0%
0% loss                            272,003        99.7%
33% loss                           446            0.16%
66% loss                           74             0.03%
100% loss                          395            0.14%
100% loss for an entire execution  0              0%
Table 5-1. The ping test results dating from 02/24/07 to 03/14/07. The first four loss
entries refer to the runs, while the final entry contains data for individual peer
to peer communication for all runs.


on Microsoft Windows, Apple's Mac OS X, and Linux systems supporting VMware. For illustration, screenshots of these test cases have been provided (Figures 5-2 through 5-4).

5.2 Performance Evaluation

To evaluate the performance of the sandbox, three benchmarks were used. The benchmarks are:
* SimpleScalar [59], a CPU-intensive computer architecture simulator which models the execution of applications with cycle-level accuracy1
* PostMark [61], a file system benchmark
* Iperf [62], a TCP throughput benchmark

The performance of these applications is evaluated with 3 different platforms: Grid

Appliance as both VMware Server and Xen VMs, and Linux on the physical host. The

purpose of these benchmarks is not to compare VMware to Xen or the physical hardware,

but to investigate the cost of using virtual machines and networking for the appliance,

with focus on machine configurations that would be expected in a desktop-Grid type

environment. The host configuration was:
* a desktop-class system
* Pentium IV 1.7GHz CPU with 256KB on-chip cache
* 512MB PC133 RAM
* PATA Hard drive at 66 MBps
* 100 Mbit Ethernet
* VMware Server 1.0.1
* Xen Testing Nightly Snapshot 09/26/06




1 The experiments employed the SPEC 2000 [60] Go benchmark.
































Figure 5-2. Grid Appliance running on Windows using VMware Server.




Figure 5-3. Grid Appliance running on Linux using VMware Server.



















































Figure 5-4. Grid Appliance running on Mac OS X using VMware Fusion.





















* Debian Etch for the physical host
* Debian Sarge for the Grid Appliance

The benchmarks were conducted on identically configured VMs, where only one uniprocessor virtual machine per physical CPU was deployed and no other active processes were running on the machines. A total of 256 MB of memory was given to each virtual machine. For networking tests, the VMs were run on two distinct, identically configured physical machines connected over 100 Mbit Ethernet.

5.2.1 SimpleScalar

SimpleScalar (version 3.0d) is used for benchmarking CPU performance. For the SimpleScalar test, SPEC 2000's Go was run using the alpha binaries found at [26]. The tests were executed over Condor. The parameters for go were "13 29 2stone9.in" (Figure 5-5). The SimpleScalar executable used was Sim-Cache. The results are consistent with previous findings; that is, virtual machines show low overhead in the case of processor-intensive tasks (under 1% for Xen and roughly 10% for VMware).
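For readers unfamiliar with submitting such a run through Condor, the sketch below generates a submit description for the benchmark described above and hands it to condor_submit. It is illustrative only; the executable and input file names (sim-cache, go.alpha, 2stone9.in) are assumptions, and the actual submission used in these tests may have differed.

    #!/usr/bin/env python
    # submit_go.py -- hypothetical sketch: write a Condor submit description for the
    # SimpleScalar Go run and submit it with condor_submit.
    import subprocess

    SUBMIT = """\
    universe   = vanilla
    executable = sim-cache
    arguments  = go.alpha 13 29 2stone9.in
    transfer_input_files = go.alpha, 2stone9.in
    should_transfer_files = YES
    when_to_transfer_output = ON_EXIT
    output     = go.out
    error      = go.err
    log        = go.log
    queue
    """

    def main():
        with open("go.submit", "w") as f:
            f.write(SUBMIT)
        subprocess.check_call(["condor_submit", "go.submit"])

    if __name__ == "__main__":
        main()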

[Bar chart: overall execution times of approximately 79.92, 80.18, and 89.35 minutes across the Physical, VMware, and Xen configurations.]

Figure 5-5. SimpleScalar results show the overall execution times (in minutes) for the
execution of the Go benchmark in three different configurations.










5.2.2 PostMark

PostMark (version 1.51) is used for benchmarking disk performance, mainly heavy I/O on many small files. For this test, the minimum and maximum file sizes were 500 bytes and 5,000,000 bytes (4.77 megabytes), respectively. To obtain steady-state results, PostMark was configured with 5,000 file transactions. Results are shown in Figure 5-6. The read/write ratio remains the same for all values.
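PostMark is driven by a small command script; a sketch matching the parameters above is shown below. The set/run commands are standard PostMark commands, but the invocation details here are illustrative and the exact script used in these tests is not reproduced in this document.

    #!/usr/bin/env python
    # run_postmark.py -- hypothetical sketch: drive the postmark binary with the
    # parameters used above (500 byte to 5,000,000 byte files, 5,000 transactions).
    import subprocess

    COMMANDS = "\n".join([
        "set size 500 5000000",     # minimum and maximum file size in bytes
        "set transactions 5000",    # number of transactions for steady-state results
        "run",                      # execute the benchmark and print throughput
        "quit",
    ]) + "\n"

    def main():
        proc = subprocess.Popen(["postmark"], stdin=subprocess.PIPE,
                                stdout=subprocess.PIPE)
        out, _ = proc.communicate(COMMANDS.encode())
        print(out.decode())

    if __name__ == "__main__":
        main()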


[Bar chart of read and write throughput in MB/s for the Host, VMware, and Xen configurations.]


Figure 5-6. PostMark I/O throughput results, in read / write MB/s.


The use of file-backed VM drives and file system stacks greatly facilitates the deployment of the sandbox, but comes with an associated performance cost. The measured performance of the sandbox with this I/O-intensive benchmark is approximately 55% to 65% of the physical host.

5.2.3 Iperf

Iperf is used to benchmark TCP network throughput. In this case, a 30-second transfer takes place and the throughput is measured at the end of this period. Iperf was run with the parameter "-t 30". Given that the tests were conducted on 100 Megabit Ethernet, the results are not meant to suggest the sandbox results in a wide-area



















[Bar chart of TCP throughput in Mbps for the Host, IPsec, VMware (superscripts 1 and 2), and Xen (superscripts 1 and 2) configurations.]


Figure 5-7. Iperf results, given in megabits per second (higher is better).


environment, but to show the expected peak bandwidth of the sandbox configured with the IPOP user-level virtual networking software. The results establish that the VMware configuration where the virtual networking runs on the guest VM, without IPsec, delivers a bandwidth of 11.8 Mbps, whereas with IPsec the bandwidth decreases to 10.3 Mbps. When the IPOP virtual network runs on the host, the virtual network bandwidth improves substantially to 26.5 Mbps. This can be explained by the fact that the IPOP software is network-intensive; when it runs on the host, it is not subject to the virtualization overhead during its execution and can deliver better performance. Xen delivers 11.9 Mbps with IPOP on domU, and 14.1 Mbps with IPOP on dom0.2 Results are shown in Figure 5-7, where superscript 1 implies virtual networking inside the VM and superscript 2 means virtual networking on the host.




2 As of this writing, Xen responds with a warning message stating that negative segmentation is not
supported when running IPOP. It is conceivable that the mono runtime environment uses negative
segments and performance may be degraded due to this fact.
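For reference, the measurement amounts to a standard Iperf client/server pair; the sketch below runs the 30-second TCP transfer described above from a client toward a machine already running "iperf -s". The server address shown is a placeholder.

    #!/usr/bin/env python
    # run_iperf.py -- hypothetical sketch of the throughput measurement: a 30 second
    # TCP transfer from this machine to a server already running "iperf -s".
    import subprocess
    import sys

    def measure(server, seconds=30):
        # -c <host> runs iperf as a client; -t sets the transfer duration in seconds.
        output = subprocess.check_output(["iperf", "-c", server, "-t", str(seconds)])
        print(output.decode())

    if __name__ == "__main__":
        # Usage: python run_iperf.py <server-address>
        measure(sys.argv[1] if len(sys.argv) > 1 else "192.168.0.2")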









5.2.4 Discussion

The experimental results show a small overhead of the sandbox for compute-intensive applications, a conclusion that has also been observed in previous work [3, 42]. A

substantial source of overhead in network throughput performance comes from the IPOP virtual network implementation, which currently runs entirely in user space.

Nonetheless, the sandbox network throughput performance levels are acceptable for the

intended application of this sandbox for compute-intensive applications and in wide-area

environments. Running the IPOP software on the host allows for stronger isolation of

the virtual network traffic because the process that captures and tunnels packets resides

in a separate domain from the compute sandbox. Furthermore, virtual networking on

the host provides the best observed throughput. However, it comes with the downside of

requiring a user to install additional software. An alternative that does not require IPOP

to run on the host but still provides strong virtual network isolation is to add a second

level of virtualization (e.g., by running the Xen appliance within a VMware hosted I/O

environment). This is the subject of on-going investigations.









CHAPTER 6
CONCLUSION

This chapter discusses current deployments of the Grid Appliance and future work, and offers a brief conclusion.

6.1 Current Deployments

The Grid Appliance is currently being used in different application domains.

A Condor pool is available to the nanoHUB for the execution of batch jobs, and

customizations are underway to enable an application development environment

intended to foster the addition of graphical, interactive applications to the nanoHUB

cyber-infrastructure by its community.

Grid Appliances are also being deployed in support of hurricane storm surge models as part of the SCOOP [34] project. In this context, they are used in two different ways: on-line, dynamic data-driven execution of models, and off-line retrospective analysis. In the event-driven scenario, computations on appliances are triggered by data streams made available by sources such as the National Hurricane Center, and model simulation results are published to the SCOOP community through SCOOP's data transport system (UniData's LDM [63]). Event-triggered jobs can be scheduled to run locally on individual appliances, or submitted to other nodes in the pool through Condor. In the retrospective analysis scenario, data is retrieved from the SCOOP archive, simulation model executions are dispatched to a Condor pool, and results are published back to the SCOOP archive through LDM. Currently, a total of 32 appliances serving these purposes have been deployed at four SCOOP sites.

Another usage scenario, in the domain of coastal sciences, where Grid Appliances are being developed is an appliance for education and training of researchers, students, and the public at large in cyber-infrastructure techniques. In this usage scenario, the appliance integrates surge simulation models, Condor middleware, and visualization software, as well as tutorials and educational material on cyber-infrastructure and coastal and estuarine science. As a result, it enables end-to-end usage, including application development, data











[Screenshot: map of the United States showing the locations of currently deployed Grid Appliance nodes.]
Last update: Wed Mar 14 18:26:55 EDT 2007
Node count: 103

Figure 6-1. Current deployment of Grid Appliances from the Grid Appliance Web
Interface.


input, simulation execution, data post-processing and visualization. Users who are not

familiar with Grid computing can download and install an appliance, and submit a sample

simulation from their own home or office computer to other Grid Appliances, all within

minutes. In contrast, configuring physical machines to run the appropriate middleware

and simulation models takes a level of familiarity with installing and configuring various

software and middleware packages that is a significant barrier to adoption by many

scientists, engineers and students.

There is also a general-use deployment containing over 90 nodes. The web interface monitors and submits jobs to this pool and displays a map of the United States showing the current location of different nodes (Figure 6-1). The web site is also used to monitor the other Grid Appliance pools. The website is available at http://wow.acis.ufl.edu.

6.2 Conclusion

Nearly a year ago, the Grid Appliance was only a virtual machine with Condor and an unmanageable IPOP, weighing over 2 gigabytes. Since then, significant features have been added, such as a graphical user interface, network file access through Samba, DHT-based DHCP-capable IPOP using the standard operating system DHCP client, an interactive web interface, stackable file systems, and user-friendly reconfigurability. The Grid Appliance provides grid capabilities for users of all technology backgrounds, having both console and web interfaces. Most importantly, the Grid Appliance has low overhead for the tasks that it was designed to run. While there may be other solutions for virtual grid workstations, a review of related work establishes that none have the same depth that the Grid Appliance provides.

Even with all these features, there is still much room for future research in the Grid Appliance. A few focal points for research include a BitTorrent-based file system, distributed Condor, nested virtualization for superior sandboxing, and tools for rapid development of simple interactive web pages.









REFERENCES


[1] J. O. Kephart and D. M. Chess, "The vision of autonomic computing,"
Computer, vol. 36, no. 1, pp. 41-50, 2003. [Online]. Available: http:
//ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=1160055

[2] I. Foster, C. Kesselman, and S. Tuecke, "The anatomy of the grid:
Enabling scalable virtual organizations," Int. J. High Perform. Comput.
Appl., vol. 15, no. 3, pp. 200-222, August 2001. [Online]. Available:
http://portal.acm.org/citation.cfm?id=1080667

[3] R. J. Figueiredo, P. A. Dinda, and J. A. B. Fortes, "A case for grid computing on
virtual machines," in ICDCS '03: Proceedings of the 23rd International Conference on
Distributed Computing Systems. Washington, DC, USA: IEEE Computer Society,
2003. [Online]. Available: http://portal.acm.org/citation.cfm?id=851917

[4] VMware. (2007, March) Vmware workstation, vmware server, vmware player, vmware
esx. [Online]. Available: http://www.vmware.com

[5] P. Barham, B. Dragovic, K. Fraser, S. Hand, T. Harris, A. Ho, R. Neugebauer,
I. Pratt, and A. Warfield, "Xen and the art of virtualization," in SOSP
'03: Proceedings of the nineteenth ACM symposium on Operating systems principles.
New York, NY, USA: ACM Press, 2003, pp. 164-177. [Online]. Available:
http://portal.acm.org/citation.cfm?id=945462

[6] InnoTek. (2007, March) Virtualbox. [Online]. Available: http://www.virtualbox.org

[7] Parallels. (2007, March) Parallels workstation, parallels desktop. [Online]. Available:
http://www.parallels.com

[8] F. Bellard. (2007, March) Qemu. [Online]. Available: http://fabrice.bellard.free.fr/
qemu/

[9] Qumranet. (2007, March) Kernel-based virtual machine for linux. [Online]. Available:
http://kvm.qumranet.com/kvmwiki

[10] D. I. Wolinsky, A. Agrawal, P. O. Boykin, J. Davis, A. Ganguly, V. Paramygin,
P. Sheng, and R. J. Figueiredo, "On the design of virtual machine sandboxes for
distributed computing in wide area overlays of virtual workstations," in First Workshop
on Virtualization Technologies in Distributed Computing (VTDC), November 2006.

[11] C. S. Yeo, R. Buyya, H. Pourreza, R. Eskicioglu, P. Graham, and F. Sommers, "Cluster
computing: High-performance, high-availability, and high-throughput processing on
a network of computers," in Handbook of Nature-Inspired and Innovative Computing:
Integrating Classical Models with Emerging Technologies, A. Y. Zomaya, Ed. New
York, NY: Springer, 2006, ch. 16, pp. 521-551.

[12] L. Peterson and D. Culler. (2007, March) Planet-lab. [Online]. Available:
http://www.planet-lab.org









[13] Altair. (2007, March) Pbs pro. [Online]. Available: http://www.altair.com/software/
pbspro.htm

[14] (2007, March) Openpbs. [Online]. Available: http://www.openpbs.org/

[15] C. Resources. (2007, March) Torque resource manager. [Online]. Available:
http://www.clusterresources.com/pages/products/torque-resource-manager.php

[16] M. Livny, J. Basney, R. Raman, and T. Tannenbaum, "Mechanisms for high throughput
computing," SPEEDUP Journal, vol. 11, no. 1, June 1997.

[17] J. Basney and M. Livny, "Deploying a high throughput computing cluster," in High
Performance Cluster Computing: Architectures and Systems, Volume 1, R. Buyya, Ed.
Prentice Hall PTR, 1999.

[18] D. Thain, T. Tannenbaum, and M. Livny, "Distributed computing in practice: the
condor experience." Concurrency: Practice and Experience, vol. 17, no. 2-4, pp.
323-356, 2005.

[19] Sun. (2007, March) gridengine. [Online]. Available: http://gridengine.sunsource.net/

[20] P. Computing. (2007, March) Load sharing facility, Isf. [Online]. Available:
http://www.platform.com/

[21] A. Staicu, J. Radzikowski, K. Gaj, N. Alexandridis, and T. El-Ghazawi, "Effective
use of networked reconfigurable resources," in Military Applications of Programmable
Logic Devices, 2001.

[22] R. Vogelbacher. (2007, March) Job scheduler/resource manager evaluation.
[Online]. Available: http://gridweb.cti.depaul.edu/twiki/bin/view/IBG/
SchedulerProductEvaluation

[23] G. Alliance. (2007, March) Globus toolkit. [Online]. Available: http:
//www.globus.org/toolkit/

[24] G. J. Popek and R. P. Goldberg, "Formal requirements for virtualizable third
generation architectures," in SOSP '73: Proceedings of the fourth ACM symposium on
Operating system principles, vol. 7, no. 4. New York, NY, USA: ACM Press, October
1973. [Online]. Available: http://portal.acm.org/citation.cfm?id=808061

[25] V. Bala, E. Duesterwald, and S. Banerjia, "Dynamo: a transparent
dynamic optimization system," in PLDI '00: Proceedings of the ACM SIGPLAN
2000 conference on Programming language design and implementation, vol. 35,
no. 5. ACM Press, May 2000, pp. 1-12. [Online]. Available: http:
//portal.acm.org/citation.cfm?id=349303

[26] K. Adams and O. Agesen, "A comparison of software and hardware techniques for x86
virtualization," in ASPLOS-XII: Proceedings of the 12th international conference on









Architectural support for programming languages and operating systems. New York,
NY, USA: ACM Press, 2006, pp. 2-13.

[27] Cisco. (2007, March) Cisco vpn. [Online]. Available: http://www.cisco.com/en/US/
products/sw/secursw/1p- -' i /index.html

[28] J. Yonan. (2007, March) Openvpn. [Online]. Available: http://openvpn.net/

[29] M. Tsugawa, A. Matsunaga, and J. A. B. Fortes, "Virtualization technologies in
transnational dg," in dg.o '06: Proceedings of the 2006 international conference on
Digital government research. New York, NY, USA: ACM Press, 2006, pp. 456-457.

[30] A. Sundararaj and P. Dinda, "Towards virtual networks for virtual machine grid
computing," 2004. [Online]. Available: http://citeseer.ist.psu.edu/645578.html

[31] I. X. Jiang, "Violin: Virtual internetworking on overlay." [Online]. Available:
http://citeseer.ist.psu.edu/714412.html

[32] C. P. Wright and E. Zadok, "Unionfs: Bringing file systems together," Linux Journal,
no. 128, pp. 24-29, December 2004.

[33] A. Ganguly, A. Agrawal, O. P. Boykin, and R. Figueiredo, "Ip over p2p: Enabling
self-configuring virtual ip networks for grid computing," in Proceedings of International
Parallel and Distributed Processing Symposium (IPDPS). To appear, Apr 2006.

[34] A. Ganguly, A. Agrawal, P. O. Boykin, and R. Figueiredo, "WOW: Self-organizing
wide area overlay networks of virtual workstations," in Proceedings of the 15th IEEE
International Symposium on High Performance Distributed Computing (HPDC), June
2006, pp. 30-41.

[35] M. Krasnyansky. (2007, March) Universal tun/tap device driver. [Online]. Available:
http://www.kernel.org/pub/linux/kernel/people/marcelo/linux-2.4/Documentation/
networking/tuntap.txt

[36] R. Droms, RFC 2131 Dynamic Host Configuration Protocol, March 1997.

[37] S. Alexander and R. Droms, RFC 2132 DHCP Options and BOOTP Vendor Extensions,
March 1997.

[38] P. Mockapetris, RFC 1034 Domain names - concepts and facilities, November 1987.

[39] --, RFC 1035 Domain names - implementation and specification, November 1987.

[40] D. C. Plummer, RFC 0826 Ethernet Address Resolution Protocol: Or converting
network protocol addresses to 48.bit Ethernet address for transmission on Ethernet
hardware, November 1982.

[41] A. Ganguly, D. I. Wolinsky, P. O. Boykin, and R. J. Figueiredo, "Decentralized
dynamic host configuration in wide-area overlay networks of virtual workstations," in
WORKSHOP ON LARGE-SCALE, VOLATILE DESKTOP GRIDS, March 2007.









[42] S. Santhanam, P. Elango, A. A. Dusseau, and M. Livny, "Deploying virtual machines
as sandboxes for the grid," in WORLDS, 2005.

[43] N. Andrade, L. Costa, G. Germoglio, and W. Cirne, "Peer-to-peer grid computing with
the ourgrid community," May 2005.

[44] S. Kent, RFC 2401 Security Architecture for the Internet Protocol, November 1998.

[45] S. Adabala, V. Chadha, P. Chawla, R. Figueiredo, J. Fortes, I. Krsul, A. Matsunaga,
M. Tsugawa, J. Zhang, M. Zhao, L. Zhu, and X. Zhu, "From virtualized resources to
virtual computing grids: the in-vigo system," Future Gener. Comput. Syst., vol. 21,
no. 6, pp. 896-909, 2005.

[46] N. F. C. Nanotechnology. (2007, March) nanohub. [Online]. Available:
http://www.nanohub.org/about/

[47] E. Roberts, M. Dahan, J. Boisseau, and P. Hurley. (2007, March) Teragrid user portal.
[Online]. Available: https://portal.teragrid.org/gridsphere/gridsphere

[48] (2007, March) Condorview. [Online]. Available: http://www.cs.wisc.edu/condor/
manual/v6.7/3_4Contrib_Module.html

[49] A. Harter and T. Richardson. (2007, March) Virtual network computing. [Online].
Available: http://www.cl.cam.ac.uk/research/dtg/attarchive/vnc/index.html

[50] M. McLennan, D. Kearney, W. Qiao, D. Ebert, and C. S. R. Kent Smith. (2007,
March) Rappture toolkit. [Online]. Available: https://developer.nanohub.org/projects/
rappture/

[51] J. J. Garrett. (2005, February) Ajax: A new approach to web applications. [Online].
Available: http://www.adaptivepath.com/publications/essays/archives/000385.php

[52] H. Edlund. (2007, March) Drall. [Online]. Available: http://home.gna.org/drall/

[53] Y. Goland, E. Whitehead, A. Faizi, S. Carter, and D. Jensen, RFC 2518 HTTP
Extensions for Distributed Authoring -WEBDAV, February 1999. [Online]. Available:
http://dav.sourceforge.net/

[54] K. Keahey, I. Foster, T. Freeman, X. Zhang, and D. Galron, "Virtual workspaces in
the grid," in Proceedings of Europar 2005, September 2006.

[55] K. Keahey, K. Doering, and I. Foster, "From sandbox to playground: Dynamic virtual
environments in the grid," in Proceedings of 5th International Workshop in Grid
Computing, November 2004.

[56] R. Andersen and B. Vinter, "Minimum intrusion grid - the simple model," in 14th IEEE
International Workshops on Enabling Technologies: Infrastructures for Collaborative
Enterprises (WETICE 2005), June 2005.









[57] University of California, Berkeley. (2007, March) Seti@home. [Online]. Available:
http://setiathome.berkeley.edu/

[58] V. Pande and Stanford. (2007, March) Folding@home distributed computing. [Online].
Available: http://folding.stanford.edu/

[59] S. LLC. (2007, March) Simplescalar 3.0d. [Online]. Available: http:
//www.simplescalar.com
[60] S. P. E. Corporation. (2007, March) Spec cpu 2000. [Online]. Available:
http://www.spec.org/cpu

[61] J. Katcher, "Postmark: a new file system benchmark," in Network Appliance, October
1997.

[62] T. B. of Trustees of the University of Illinois. (2007, March) Iperf. [Online]. Available:
http://dast.nlanr.net/projects/iperf/

[63] G. P. Davis and R. K. Rew, "The Unidata LDM: Programs and protocols for flexible
processing of data products," in Tenth International Conference on Interactive
Information and Processing Systems for Meteorology, Oceanography, and Hydrology,
October 1996, pp. 131-136.









BIOGRAPHICAL SKETCH

David Wolinsky was born October 31, 1982. He attended the University of Florida for a 4-year Bachelor of Science degree in computer engineering, followed by a 2-year Master of Science degree in electrical engineering. The last 2 years of his life have been consumed by virtual machines.