Advance Reservation and Scheduling of Bulk File Transfers in E-Science

University of Florida Institutional Repository
Permanent Link: http://ufdc.ufl.edu/UFE0021168/00001

Material Information

Title: Advance Reservation and Scheduling of Bulk File Transfers in E-Science
Physical Description: 1 online resource (77 p.)
Language: english
Creator: Rajah, Kannan
Publisher: University of Florida
Place of Publication: Gainesville, Fla.
Publication Date: 2007

Subjects

Subjects / Keywords: admission, advance, bulk, e, grid, optical, scheduling
Computer and Information Science and Engineering -- Dissertations, Academic -- UF
Genre: Computer Engineering thesis, M.S.
bibliography   ( marcgt )
theses   ( marcgt )
government publication (state, provincial, territorial, dependent)   ( marcgt )
born-digital   ( sobekcm )
Electronic Thesis or Dissertation

Notes

Abstract: The advancement of optical networking technologies has enabled e-science applications that often require transport of large volumes of scientific data. In support of such data-intensive applications, we develop and evaluate control plane algorithms for admission control and scheduling of bulk file transfers. Each file transfer request is made in advance to the central network controller by specifying a start time and an end time. If admitted, the network guarantees to begin the transfer after the start time and complete it before the end time. We formulate the scheduling problem as a special type of multi-commodity flow problem. To cope with the start and end time constraints of the file-transfer jobs, we divide time into uniform time slices. Bandwidth is allocated to each job on every time slice and is allowed to vary from slice to slice. This enables periodic adjustment of the bandwidth assignment to the jobs so as to improve a chosen performance objective: the throughput of the concurrent transfers. In this thesis, we study the effectiveness of using multiple time slices, the performance criterion being the trade-off between achievable throughput and the required computation time. Furthermore, we investigate using multiple paths for each file transfer to improve throughput. We show that using a small number of paths per job is generally sufficient to achieve near-optimal throughput with a practical execution time, and that this throughput is significantly higher than that of a simple scheme that uses a single shortest path for each job. The thesis combines the following novel elements into a cohesive framework of network resource management: advance reservation, multi-path routing, rerouting, and flow reassignment via periodic re-optimization. We evaluate our algorithm in terms of both network efficiency and the performance level of individual transfers. We also evaluate the feasibility of our scheme by studying the algorithm execution time.
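The abstract describes a per-time-slice, multi-path bandwidth-allocation formulation. A minimal sketch of a linear program with that structure is given below; the notation (job set $J$, file size $D_j$, transfer window $[s_j, e_j]$, candidate path set $P_j$, slice length $\tau$, link capacity $c_\ell$, and per-slice rate variables $x_{j,p,t}$) is illustrative and assumed here rather than taken from the thesis itself.

\begin{align*}
\max \quad & \sum_{j \in J} \sum_{t=1}^{T} \sum_{p \in P_j} \tau\, x_{j,p,t}
  && \text{(total volume moved, i.e., throughput)} \\
\text{s.t.} \quad & \sum_{t=1}^{T} \sum_{p \in P_j} \tau\, x_{j,p,t} \le D_j
  && \forall j \in J \quad \text{(never exceed the file size)} \\
& \sum_{j \in J} \sum_{p \in P_j:\, \ell \in p} x_{j,p,t} \le c_\ell
  && \forall \ell,\ \forall t \quad \text{(link capacity in every slice)} \\
& x_{j,p,t} = 0 \ \text{for } t \notin [s_j, e_j], \qquad x_{j,p,t} \ge 0.
\end{align*}

Under this reading, admitting a new request corresponds to checking that the program remains feasible when the new job's completion constraint ($\sum_{t,p} \tau\, x_{j,p,t} = D_j$) is added, and the per-slice variables are what allow bandwidth to be re-optimized from slice to slice.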
General Note: In the series University of Florida Digital Collections.
General Note: Includes vita.
Bibliography: Includes bibliographical references.
Source of Description: Description based on online resource; title from PDF title page.
Source of Description: This bibliographic record is available under the Creative Commons CC0 public domain dedication. The University of Florida Libraries, as creator of this bibliographic record, has waived all rights to it worldwide under copyright law, including all related and neighboring rights, to the extent allowed by law.
Statement of Responsibility: by Kannan Rajah.
Thesis: Thesis (M.S.)--University of Florida, 2007.
Local: Adviser: Ranka, Sanjay.

Record Information

Source Institution: UFRGP
Rights Management: Applicable rights reserved.
Classification: lcc - LD1780 2007
System ID: UFE0021168:00001

F20101115_AABOBU rajah_k_Page_59.jp2
a9e62911d77b667c4e3dc05229779c47
b8b0587bc841d1ad246bd95a8d8a0fe6a09d3dd1
F20101115_AABNUW rajah_k_Page_03.tif
5ec692bc7fd19510008ac3e608d70a6f
631429dcfee029655951331b5c4a159e176d84f7
59998 F20101115_AABOGR rajah_k_Page_54.pro
adae0403d058742fab2708faa17f8881
62e2f9ebea79ffaa2e67a4a4be57af1b093d12b1
5223 F20101115_AABOLP rajah_k_Page_66thm.jpg
155fbb9c63e6d04778d41cbbdd03a066
e91d4210c78c0b3655e7ac9deb5db5ed05e7f941
1040848 F20101115_AABOBV rajah_k_Page_60.jp2
5d6cb67fd1d31e270b6f7150274b4fe1
3ef5e9994715108a34a79d00a0a1596a097995da
22934 F20101115_AABNUX rajah_k_Page_24.QC.jpg
cf4f82d89a83048931d9d1e2105ae08f
007b4ae01471d4cf33b1211132024863ddbf24ae
59704 F20101115_AABOGS rajah_k_Page_55.pro
8f740ae050ee15a1d9acc600dbb2359f
df2e3e1fab3b74dadeae598e351f92e064bf98e9
25980 F20101115_AABOLQ rajah_k_Page_54.QC.jpg
c09d693ec1848ecbac8590326e1f09e9
d10de74070dcdcec01014f7a6879753172597ea1
F20101115_AABOBW rajah_k_Page_61.jp2
5cfbce458602d55ad1a4ae95099f7a94
fdb81ae7fda05a186ef06affa9305788d0b6a7c1
31852 F20101115_AABNUY rajah_k_Page_39.pro
5cd4d6e0537f37718128f0689e3a0720
577ee07f1aa07fba93e91ef4e352d467835bc08e
30308 F20101115_AABOGT rajah_k_Page_57.pro
2392d8c68d9e6415e6eccd46377ae08c
52ddee7829debc663cbd937f7970a8109f67b550
4863 F20101115_AABNZV rajah_k_Page_02.jp2
a1e7235756d9474bae66b48fc2ae3818
0aac21900fd5d436982a4ad5915d4dcd39707ff8
6677 F20101115_AABOLR rajah_k_Page_14thm.jpg
34174ade49c930f606ad77861f3442bb
2d0432defe70817203713117a979dacca09efdd8
998435 F20101115_AABOBX rajah_k_Page_62.jp2
6c3f81050bbb04f010a47897ae348120
474e378a26025f5b6d1ef3afbf92bdad0c473b62
6509 F20101115_AABNUZ rajah_k_Page_74thm.jpg
8a76134710ce05bba7493fdce8f9ed95
90f0df47daccacf8125c4ef12897d847835bd8f5
43419 F20101115_AABOGU rajah_k_Page_58.pro
99023d22645aad0c5522b53d2a1ae7cc
39ca344807c12ccb544644ae2af69d0a42fa7ef3
4728 F20101115_AABNZW rajah_k_Page_03.jp2
e1895a1de91edc7951d7b57c355b0d6c
f38789d6b064a9bd962f649777d22e6016ceccd0
3331 F20101115_AABOLS rajah_k_Page_18thm.jpg
a468582c4a3eaa2bf4ad0742a7202634
50f07a75d7973fa2cceff9ecfdf3e20a188ff0fb
F20101115_AABOBY rajah_k_Page_63.jp2
65b3e09afef6c9e821cefbef1d6fda67
c688f0f54aef17496af6cd6af828f44d2412bd06
54683 F20101115_AABOGV rajah_k_Page_59.pro
cc1333c6795e1746b1eeaa1a0fca1fb9
7ea969491ead0749e55dc710a1278ca82d98d036
29698 F20101115_AABNZX rajah_k_Page_04.jp2
2d57c513ad84c7a90914b922a71f9fc0
9f738916351b919f14ea6b5cfc857ac0e682bdda
5345 F20101115_AABOLT rajah_k_Page_26thm.jpg
d8143836ae45719a4c8446994b5e6b51
a3ffd0385889f65a20f32b81a5a6510e1be8c44a
1037309 F20101115_AABOBZ rajah_k_Page_64.jp2
a7733ef10e4ffefb9ed995f0a3cd637b
32fddf9ea788d290353f4ac8b1b9ab4a66d35106
45231 F20101115_AABOGW rajah_k_Page_60.pro
c051901cb90f1a38b86cc6bb2253d079
b1e0e9bba3a4cd4bf413e715e9437822e273853b
4044 F20101115_AABNXA rajah_k_Page_68thm.jpg
00a70760c9cff24aeaa7a6eb3f210297
c41ee70a371ff3218e948f69307aabf8ab07db60
411818 F20101115_AABNZY rajah_k_Page_06.jp2
9fa360b89271bd23c014e12f2807336d
8baff3e67da89d10972186bf684790267d8e48c0
1850 F20101115_AABOLU rajah_k_Page_69thm.jpg
8b52630fd28b42774875e19aca96ddad
8161d686fcf799c394650f09de6f38fa91a8e588
57216 F20101115_AABOGX rajah_k_Page_63.pro
c4f5e6ce11b56c815238e0472b5f26fc
de6647ab0bfd6ca3ddfe105b452f100381a1f489
90152 F20101115_AABNXB UFE0021168_00001.mets
d2795e8b0e5845cf39a5729701056cc5
87d876c1b20fddfac21b1e37359bbf00703222c7
494984 F20101115_AABNZZ rajah_k_Page_07.jp2
70ff14991e1eb4848e6efbe147d7d5d9
69e09cfbc08d5292183f0499e00e4bd831e07b13
27190 F20101115_AABOLV rajah_k_Page_71.QC.jpg
2375bf3f0a4cf5ee03cf65761bee9d85
1a332ddf396c66fa43f9f2aca034d06cc8a4a06c
47512 F20101115_AABOGY rajah_k_Page_65.pro
18b971a34b97cba1d987dd3521d85dc0
d500847759a449de7ba54025fa9a7315d34c48f4
5983 F20101115_AABOLW rajah_k_Page_29thm.jpg
c3654a4451e5e3c05fa55343b6d59264
4d699d965d52cccf57094e9c128ce41fd32a130a
F20101115_AABOEA rajah_k_Page_51.tif
eb52f3cc1e7668e791be4f02b2f9583c
92f4d3690a7b3f2f9a529766af66c26c51c242f8
46481 F20101115_AABOGZ rajah_k_Page_66.pro
ebbfb9896d6970884e5d6b894f65fa50
acc2d5623e8c7582c556ed47daa8a7079e79567b
6894 F20101115_AABOLX rajah_k_Page_17thm.jpg
56f155d8679d1f5b5a7ced81c6b02fae
26da595bd5264c269e0ca8433710e02a374eae34
F20101115_AABOEB rajah_k_Page_52.tif
cf3098da4f7dc9cd784042dcbe08b544
d7eac08e025a1f82f5a20480c59934bddcdeae44
22507 F20101115_AABNXE rajah_k_Page_01.jpg
240d7b4dbcf1b7eb3880703a85c13d4f
ee50095898e6b100a8f4bdba60a182952d186ff7
27103 F20101115_AABOLY rajah_k_Page_38.QC.jpg
65ac17c0bfaa15fb2c215bb31e73db8e
07803d8ddd670d21178a46fce9dca74ee0bf0812
F20101115_AABOEC rajah_k_Page_53.tif
111aedf881c295d438522c228a84d897
3ce7ca1d5866e772a43bda13d71ae21c1f1f3aad
9785 F20101115_AABNXF rajah_k_Page_02.jpg
0d8c5e25a3d914fc4efdbf1bad9f31c9
3c4efea656976319c43c37c72e0f0f8f62579363
2182 F20101115_AABOJA rajah_k_Page_53.txt
e17d705352df031ed41f8f263b278635
59599d855343e61382d0ac9c953119a261544d29
25972 F20101115_AABOLZ rajah_k_Page_70.QC.jpg
3e6c76d53947cd9db453739cfce4bb5a
fd3653498dee44eb7049f218461011df6221f10a
F20101115_AABOED rajah_k_Page_55.tif
9a89705bb2d432b8005b125ea56aed5a
5ade5609b041667ef41ac9f186a25bd951aa1852
9533 F20101115_AABNXG rajah_k_Page_03.jpg
58d0538e1b6770c3e9879d9a32939bca
04dcdb487c7086f79014a7da9912ad2b8f23c0f9
2374 F20101115_AABOJB rajah_k_Page_54.txt
10a2e834e1550d47b2290c63204db30e
dbd272903e18e45d82ec01e6283b436f4af5ed9b
F20101115_AABOEE rajah_k_Page_56.tif
6eb4c83d9f84bc73acffd3c00d14aa44
923d06a19e21c8d52bbb8fb36a443e6af425b9c5
23613 F20101115_AABNXH rajah_k_Page_04.jpg
7883f1ccdc896447ff7ecbe48edc094d
52258dc1e8b875d0244b962d9e196ec4e0e3601b
2421 F20101115_AABOJC rajah_k_Page_55.txt
5b537f999690b2f0d9f48e372813160e
6245bd03d7f9dbbdc4027390bd4ee027500d7bd7
6892 F20101115_AABOOA rajah_k_Page_54thm.jpg
ea87489ec73314453fb63c5d4335d537
d3fd3ff8c26852adfb704ddedbaa81da1c90bace
F20101115_AABOEF rajah_k_Page_57.tif
5fec4365d47129d9d0009ae4fd4db24c
ad23d88f37543749df8b5f372455438d190fc29c
77893 F20101115_AABNXI rajah_k_Page_05.jpg
ac45f6e6a1d586b34d47037b0bf46ccd
e13f96261ef0a66842d7d71436104e3b5b15e225
1835 F20101115_AABOJD rajah_k_Page_58.txt
51e4db84c0da4ad4459adb68541526d5
f61d3e85c94f3207cd8aa6f5b8cde09c56721974
26592 F20101115_AABOOB rajah_k_Page_55.QC.jpg
41b97be71fd086f28519ca14f3d576eb
d308edc5e4b5f420e76e75402e45a1f15a65d832
8423998 F20101115_AABOEG rajah_k_Page_58.tif
9825812cdcb118b5a42e1770ff479bbd
10119e6dbece5d784ae3258640b32444f3b86c9d
24411 F20101115_AABNXJ rajah_k_Page_06.jpg
12e402bb34b35565fb2a93f5d9270acf
bd83c7629757ef324bf69dba3490fe8aac5f94a5
6521 F20101115_AABOOC rajah_k_Page_55thm.jpg
3c269df90472d4ae41765a2eec415eb9
10c074ffeef4135b6948e986ec98a4457b1b9d75
F20101115_AABOEH rajah_k_Page_59.tif
4ff9d77c02a3958a362c8613037fbffa
e383fbbcc299d8b254dcb85314522a0bfe119c07
99790 F20101115_AABNXK rajah_k_Page_08.jpg
5aae6d4c66c5c1183b3409189ea58a2d
4f9e73cbffe7dda4cc92f21dcce7b04b53428365
1983 F20101115_AABOJE rajah_k_Page_60.txt
2516e9174901deb2f6833056d0134b2c
e00d20c873b797225400498e49c7c54fa4f5dc01
15537 F20101115_AABOOD rajah_k_Page_56.QC.jpg
786b74f10a7ae8fd86a55b0e7f55214f
a62cf9fdb9bd9cdc64937e5f178f75901e564545
F20101115_AABOEI rajah_k_Page_60.tif
f4022c720077f4c834e489f33be6f379
1b2b09782c95eaa07465ff427882823ac8ca5511
78344 F20101115_AABNXL rajah_k_Page_09.jpg
651d7780cd5fde680d399793ea42ad4c
6f5ab97e5a0d0a53d0676591ee659937707ee7df
2198 F20101115_AABOJF rajah_k_Page_61.txt
8f297a83535dcb00172e2b02b2b863cf
dc370e5f5b441c88402ad47e1da91e04a2d3a29a
4278 F20101115_AABOOE rajah_k_Page_56thm.jpg
19093dfe80d38ba59218d3f42d054b35
42c60750a707e8b7c1eb02ccbd1550ecffcb0e39
F20101115_AABOEJ rajah_k_Page_61.tif
90af0952c96e42d20f2662dcc6a337e8
12aeb170566f6354597d1d1277dc1c11e183a5dd
69166 F20101115_AABNXM rajah_k_Page_10.jpg
49bdcd8dd6a3c2b2450341b4516b96fa
e0ab2fe75708d840aff7f6c15f9cc40972983d0e
2105 F20101115_AABOJG rajah_k_Page_62.txt
694116e7829d0ec4b5eda55a01438dde
00a8c192b2b42b4cfa5e7c7a701f1df55510f5c0
5297 F20101115_AABOOF rajah_k_Page_57thm.jpg
209096cdcdbab6f8dd8286355c5ca8e8
b8bc7689619582155300d5f5fc22d8825c0c0194
F20101115_AABOEK rajah_k_Page_62.tif
d89408aa09ad20be0331017b9f9cfe44
06f617edd57ee5b5fca0dbd3ccc943ba3fd0839e
17310 F20101115_AABNXN rajah_k_Page_11.jpg
67fb98bc0d79164229a5fd5168f2b8fa
5357cfc7652c85ad340ed412840f6c7a682a6382
2301 F20101115_AABOJH rajah_k_Page_63.txt
c6537ae01d43ec599d27a7b96f745cd9
cd2cfd01ce8a90a8692535bb7de63d03af698ea6
24520 F20101115_AABOOG rajah_k_Page_59.QC.jpg
e819231e753c92862e57b99ff0c4ba0c
596274d8145cb43502d63a63924aad72b284f521
F20101115_AABOEL rajah_k_Page_63.tif
63c2e54e817352f60405a89457c1cf68
e7e74b8e0198dedd9fc64d4dbb32dc11314b8bea
86365 F20101115_AABNXO rajah_k_Page_12.jpg
8e588444bb0c2c592e3f3b57fb51d1e5
f81dc3eab8e4adb00c7c08c69788d65f6612fb52
2076 F20101115_AABOJI rajah_k_Page_64.txt
d8ab59f887b445727ba57fd2409a08b7
11cc7ee2be7aa84bbcb901d27c2c910da83f8e50
6610 F20101115_AABOOH rajah_k_Page_59thm.jpg
15d7e75da240ec7c071fe4d3e913ead7
4272530df39ffe31e5f6d38f08a6928012398d64
F20101115_AABOEM rajah_k_Page_64.tif
a75993335212742218a24d4cb2e6f68c
94f4e7514a773904dd57910e61513ed31e98500d
90174 F20101115_AABNXP rajah_k_Page_13.jpg
abf2d7f3a836848baef2c23c06ee5172
22f0ea2bfd3e024194fa0fa0a14b3e425e80a2d2
F20101115_AABOJJ rajah_k_Page_65.txt
4f2b4d4f5ca715c40c1cc5bbf8eb9c13
32e622e07db265837f3ddd8fdbacd89c011c8a3e
5830 F20101115_AABOOI rajah_k_Page_60thm.jpg
529b19bc590261b243dc682723682b02
d304d2e8f4f035ce2299c028ce9cb1a5135cd789
F20101115_AABOEN rajah_k_Page_66.tif
3dfa9990df7b040795dcbf8985b3855f
f3e4eac2ce3b1d545802eaedcbb3685fb88cc4e8
89551 F20101115_AABNXQ rajah_k_Page_14.jpg
43f9b5e47753957c7f86f4e242740b4c
92ffd0ab9422ea2ff10d86664079ab3231d5966c
2698 F20101115_AABOJK rajah_k_Page_66.txt
3b2ca6df16d6d7c9b09a25aae14a0b70
e2c0ed7420abc52c0ee42b0c4b6adb6fba0cc9e2
F20101115_AABOEO rajah_k_Page_67.tif
b36999f51b52b3a2249ceb4724a3c66e
bf1eb9aba28c87814715ed30c45c5ad58ed81848
90061 F20101115_AABNXR rajah_k_Page_15.jpg
b226b7e867bb102e7b164fb5cab85d57
c8a18344218b3bbbbb82e9d2fa6bb0d0405e2e79
1422 F20101115_AABOJL rajah_k_Page_68.txt
c3ccbb26733cb7ec65a34e1376c333df
8d76d340fc0c12e8ff0087ce3eef7e2711392a6c
26097 F20101115_AABOOJ rajah_k_Page_61.QC.jpg
a9b81418b24c34440b13f0443fb6ae82
ec3cafc9850a4e5af2acee3a7d4e1f1c54b42bc4
F20101115_AABOEP rajah_k_Page_70.tif
a69a9f60e42b46fd96c953757c981fd4
7b53a2217ac65468cbec62b121098f1816a8a9d0
87953 F20101115_AABNXS rajah_k_Page_16.jpg
d0116a92271a6e6e4c0fe6a91f4aa519
57906c93f8cc98cf602026f1689603a3fefdfd77
394 F20101115_AABOJM rajah_k_Page_69.txt
9ced7623b884a3de561c837a14d1c1ad
c4fa87c01d3fc841892a468dde6fc972155541d3







ADVANCE RESERVATION AND


SCHEDULING OF BULK FILE TRANSFERS IN
E-SCIENCE


By
KANNAN RAJAH


A THESIS PRESENTED TO THE GRADUATE SCHOOL
OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT
OF THE REQUIREMENTS FOR THE DEGREE OF
MASTER OF SCIENCE

UNIVERSITY OF FLORIDA

2007


































© 2007 Kannan Rajah



































To my Mom and Dad









ACKNOWLEDGMENTS

I would like to express my sincere gratitude to my advisors Dr. Sanjay Ranka and

Dr. Ye Xia for their continuous support and encouragement throughout my research work.

I am thankful to Dr. Sartaj Sahni for being a vital member of my thesis committee and

providing valuable comments on my thesis. I would also like to thank Dr. Rick Cavanaugh

and Dr. Paul Avery from the Physics department for several discussions on the Ultralight

project.










TABLE OF CONTENTS


page


ACKNOWLEDGMENTS

LIST OF TABLES.

LIST OF FIGURES

ABSTRACT

CHAPTER

1 INTRODUCTION

1.1 Related Work.

2 CONCURRENT FILE TRANSFER PROBLEM


2.1 Problem Definition
2.2 The Time Slice Structure
2.3 Node-Arc Form
2.4 Edge-Path Form
2.4.1 Shortest Paths
2.4.2 Shortest Disjoint Paths
2.5 Evaluation
2.5.1 Single Slice Scheduling (SSS)
2.5.1.1 Performance comparison of the formulations
2.5.1.2 Comparison of algorithm execution time
2.5.1.3 Algorithm scalability with network size
2.5.1.4 Average results over random network instances
2.5.2 Multiple Slice Scheduling (MSS)
2.5.2.1 Performance comparison of different formulations
2.5.2.2 Comparison of algorithm execution time
2.5.2.3 Optimal time slice


3 ADMISSION CONTROL AND SCHEDULING ALGORITHM

3.1 The Setup
3.2 The Time Slice Structure
3.3 Admission Control
3.4 Scheduling Algorithm
3.5 Putting It Together: The AC and Scheduling Algorithm
3.6 Non-uniform Slice Structure
3.6.1 Nested Slice Structure
3.6.2 Variant of Nested Slice Structure
3.7 Evaluation
3.7.1 Comparison of Algorithm Execution Time











3.7.2 Performance Comparison of the Algorithms .. .. .. 62
3.7.3 Single vs Multi-path Scheme .... ... .. .. 64
3.7.4 Comparison with Typical AC/Scheduling Algorithm .. .. .. .. 66
3.7.5 Scalability of AC/Scheduling Algorithm .. .. .. .. 67

4 CONCLUSION ........ ... .. 70

REFERENCES ............. ............. 73

BIOGRAPHICAL SKETCH ... 77










LIST OF TABLES


Table page

3-1 Frequently used notations and definitions ... 43

3-2 Average admission control/scheduling algorithm execution time (s) ... 60

3-3 Comparison of ITS and NS (τ = 5 minutes) ... 62

3-4 Average number of slices of ITS and NS (τ = 5 minutes) ... 62

3-5 Performance comparison of different algorithms ... 63

3-6 Rejection ratio of the simple scheme ... 67










LIST OF FIGURES


Figure page

2-1 Examples of stringent rounding. The unshaded rectangles are time slices. The
shaded rectangles represent jobs. The top ones show the requested start and
end times. The bottom ones show rounded start and end times. .. .. .. .. 21

2-2 A network with 11 nodes and 13 bi-directional links, each of capacity 1GB shared
in both directions. ... 24

2-3 The Abilene network with 11 backbone nodes. A and B are stub networks. ... 31

2-4 Z for different formulations on Abilene network using SSS. A) 121 jobs; B) 605
jobs; C) 1210 jobs; D) 6050 jobs. ... 33

2-5 Z for different formulations on a random network with 100 nodes using SSS. A)
100 jobs; B) 500 jobs; C) 1000 jobs; D) 5000 jobs. ... .. .. 34

2-6 Execution time for different formulations on the Abilene network using SSS. A)
121 jobs; B) 605 jobs; C) 1210 jobs; D) 6050 jobs. ... .. .. 35

2-7 Execution time for different formulations on a random network with 100 nodes
using SSS. A) 100 jobs; B) 500 jobs; C) 1000 jobs; D) 5000 jobs. ... 35

2-8 Random network with k = 8. Execution time for different network sizes. ... 36

2-9 Average Z for different formulations on a random network with 100 nodes and
1000 jobs using SSS. The result is the average over 50 instances of the random
network. ......... ... .. 37

2-10 Average execution time for different formulations on a random network with
100 nodes and 1000 jobs using SSS. The result is the average over 50 instances
of the random network. ... 37

2-11 Average throughput ratio for different formulations on a random network with
100 nodes and 1000 jobs using SSS. The result is the average over 50 instances
of the random network. ... 37

2-12 Z for different formulations on the Abilene network with 121 jobs using MSS.
A) Time slice = 60 min; B) Time slice = 30 min; C) Time slice = 15 min; D)
Time slice = 10 min. ... 39

2-13 Z for different algorithms on a 100-node random network with 100 jobs using
MSS. A) Time slice = 60 min; B) Time slice = 30 min; C) Time slice = 15 min;
D) Time slice = 10 min. ... 39

2-14 Execution time for different formulations on the Abilene network with 121 jobs
using MSS. A) Time slice = 60 min; B) Time slice = 30 min; C) Time slice =
15 min; D) Time slice = 10 min. ... 40










2-15 Execution time for different formulations on a 100-node random network with
100 jobs using MSS. A) Time slice = 60 min; B) Time slice = 30 min; C) Time
slice = 15 min; D) Time slice = 10 min. ...... .. . 41

2-16 The Abilene network with 121 jobs and k = 8. A) Z for different time slices; B)
Execution time for different time slice sizes. ..... .. . 41

3-1 Uniform time slice structure ......... .. .. 44

3-2 Two rounding policies. The unshaded rectangles are time slices. The shaded
rectangles represent jobs. The top ones show the requested start and end times.
The bottom ones show the rounded start and end times. .. .. .. 46

3-3 Two-level nested time-slice structure. τ = 2, Δ1 = 4 and Δ2 = 1. The anchored
slice sets shown are for t = τ, 2τ and 3τ, respectively. At-Most-α Design. α2 = 8. ... 56

3-4 Three-level nested time-slice structure. τ = 2, Δ1 = 16, Δ2 = 4 and Δ3 = 1.
The anchored slice sets shown are for t = τ, 2τ and 8τ, respectively. At-Most-α
Design. α3 = 8, α2 = 2. ... 57

3-5 Three-level nested slice structure, Almost-α Variant. τ = 2, Δ1 = 16, Δ2 = 4
and Δ3 = 1. The anchored slice sets shown are for t = τ, 2τ and 3τ, respectively.
α3 = 8, α2 = 2. The shaded areas are also slices, but are different in size from
any level-j slice, j = 1, 2 or 3. ... 58

3-6 Rejection ratio for different co's under SR. ..... .. 64

3-7 Single vs. multiple paths under different traffic load. A) Response time; B) Rejection
ratio .... ......... ............... 65

3-8 Single vs. multiple paths under medium traffic load for different algorithms. A)
Response time for QF; B) Response time for LB; C) Rejection ratio. .. .. .. 66

3-9 Scalability of the execution times with the number of jobs. .. .. .. 68

3-10 Scalability of the execution times with the number of time slices. .. .. .. 68

3-11 Scalability of the execution times with the network size. .. .. .. 69










Abstract of Thesis Presented to the Graduate School
of the University of Florida in Partial Fulfillment of the
Requirements for the Degree of Master of Science

ADVANCE RESERVATION AND SCHEDULING OF BULK FILE TRANSFERS IN
E-SCIENCE

By

Kannan Rajah

August 2007

Chair: Dr. Sartaj Sahni
Major: Computer Engineering

The advancement of optical networking technologies has enabled e-science applications

that often require transport of large volumes of scientific data. In support of such

data-intensive applications, we develop and evaluate control plane algorithms for

admission control and scheduling of bulk file transfers. Each file transfer request is

made in advance to the central network controller by specifying a start time and an end

time. If admitted, the network guarantees to begin the transfer after the start time and

complete it before the end time. We formulate the scheduling problem as a special type

of the multi-commodity flow problem. To cope with the start and end time constraints

of the file-transfer jobs, we divide time into uniform time slices. Bandwidth is allocated

to each job on every time slice and is allowed to vary from slice to slice. This enables

periodical adjustment of the bandwidth assignment to the jobs so as to improve a chosen

performance objective: throughput of the concurrent transfers. In this thesis, we study

the effectiveness of using multiple time slices, the performance criterion being the trade-off

between achievable throughput and the required computation time. Furthermore, we

investigate using multiple paths for each file transfer to improve the throughput. We

show that using a small number of paths per job is generally sufficient to achieve near

optimal throughput with a practical execution time, and this is significantly higher than

the throughput of a simple scheme that uses single shortest path for each job. The thesis

combines the following novel elements into a cohesive framework of network resource










management: advance reservation, multi-path routing, rerouting and flow reassignment via

periodic re-optimization. We evaluate our algorithm in terms of both network efficiency

and the performance level of individual transfer. We also evaluate the feasibility of our

scheme by studying the algorithm execution time.










CHAPTER 1
INTRODUCTION

The advancement of optical communication and networking technologies, together

with the computing and storage technologies, is dramatically changing the way

scientific research is conducted. A new term, e-science, has emerged to describe the

"large-scale science carried out through distributed global collaborations enabled by

networks, requiring access to very large scale data collections, computing resources, and

high-performance visualization". Well-quoted e-science (and the related grid computing

[22]) examples include high-energy nuclear physics [10], radio astronomy, geoscience and

climate studies.

The need for transporting large volume of data in e-science has been well-argued

[1, 10, 33]. For instance, the HENP data is expected to grow from the current petabytes

(PB) (10^15 bytes) to exabytes (10^18 bytes) by 2012 to 2015. Similarly, the Large Hadron Collider

(LHC) facility at CERN is expected to generate petabytes of experimental data every

year, for each experiment. In addition to the large volume, as noted in [17], "e-scientists

routinely request schedulable high-bandwidth low-latency connectivity with known and

knowable characteristics". Instead of relying on the public Internet, national governments

are sponsoring a new generation of optical networks to support e-science. Examples of

such research and education networks include the Internet2 related National Lambda Rail

and Abilene networks in the U.S., CA*net4 in Canada, and SURFnet in the Netherlands.

To meet the need of e-science, this thesis examines admission control and scheduling

of high-bandwidth data transfers in the research networks. Admission control and network

resource allocation are among the toughest classical problems for the Internet or any

global-scale networks (See [16, 28] and their references.). There are three important

aspects that motivate us to re-examine this issue, namely, specialized applications, fewer

quality of service (QoS) classes and much smaller network size. Research networks are

different from the public Internet as they typically have less than 10^2 core nodes in the










backbone. This makes it possible to have a centralized network controller for managing

the network resources and for providing user service quality guarantee. With the central

controller, there is more flexibility in designing sophisticated, efficient algorithms for

scheduling user reservation requests, setting up network paths, and allocating bandwidth.

Our work assumes that the optical network contains enough IP routers for traffic

grooming, which is true for current research networks. Such a network allows fine-grained

multiplexingf of traffic for better network resource utilization.

The objective of this thesis is to develop and evaluate control plane algorithms for

admission control (AC) and scheduling of large file transfers (also known as jobs) over

optical networks. We assume that job requests are made in advance to a central network

controller. Each request specifies a start time, an end time and the total file (demand)

size. Such a request is satisfied as long as the network begins the transfer after the start

time and completes it before the end time. There is, however, flexibility in how soon the

transfer should be completed. It can be completed as soon as possible or, alternatively, be

stretched until the requested end time. Our algorithms allow both possibilities and we will

examine the consequences.

The network controller determines the admissibility of the new jobs by a process

known as admission control (AC). Any admitted job will be guaranteed the performance

level in accordance with its traffic class. The user of a rejected request may subsequently

modify and re-submit the request. Once the jobs are admitted, the network controller

has the flexibility in deciding the manner in which the files are transferred, i.e., how the

bandwidth assignment to each job varies over time. This decision process is known as

scheduling. Bulk transfer is not sensitive to the network delay but may be sensitive to the

delivery time. It is useful for distributing high volumes of scientific data, which currently

often relies on ground transportation of the storage media.

In Chapter 2, we focus on the scheduling problem at a single scheduling instance and

compare different variations of the algorithm. Here, all file transfer requests are known in










advance; they can have different start and end times. We call this scheduling problem the

concurrent file transfer problem (CFTP). There is no AC phase. We will formulate CFTP

as a special type of the multi-commodity flow problem, known as the maximum concurrent

flow (MCF) problem [24, 36]. While MCF is concerned with allocating bandwidth to

persistent concurrent flows, CFTP has to cope with the start and end time constraints of

the jobs. For this purpose, our formulations for CFTP involve dividing time into uniform

time slices (Section 2.2) and allocating bandwidth to each job on every time slice. Such a

setup allows an easy representation of the start and end time constraints, by setting the

allocated bandwidth of a job to zero before the start time and after the end time. More

importantly, in between the start and end times, the bandwidth allocated for each job is

allowed to vary from time slice to time slice. This enables periodical adjustment of the

bandwidth assignment to the jobs so as to improve some performance objective.

Motivated by the MCF problem, the chosen objective is the throughput of the

concurrent transfers. For fixed traffic demand, it is well known that such an objective is

equivalent to minimizing the worst-case link congestion, a form of network load balancing

[36]. A balanced traffic load enables the network to accept more future job requests,

and hence, achieve higher long-term resource utilization. In addition to the problem

formulation, other contributions of this thesis are as follows. First, in scheduling file

transfers over multiple time slices, we focus on the tradeoff between achievable throughput

and the required computation time. Second, we investigate using multiple paths for each

file transfer to improve the throughput. We will show that using a small number of paths

per job is generally sufficient to achieve near optimal throughput, and this is shown to be

significantly higher than the throughput of a simple scheme that uses single shortest path.

In addition, the computation time for the formulation with a small number of paths is

considerably shorter than that for the optimal scheme, which utilizes all possible paths for

each job.










In Chapter 3, we describe a suite of algorithms for admission control and scheduling

and compare their performance. Here, the file transfer requests arrive at different times;

a decision needs to be taken at run time on which requests to be accepted and scheduled.

Again, the key methodology is the discretization of time into a time slice structure so

that the problems can be put into the linear programming framework. A highlight of our

scheme is the introduction of non-uniform time slices, which can dramatically shorten the

execution time of the AC and scheduling algorithms, making them practical (Section 3.6).

Our system handles two classes of jobs, bulk data transfer and those that require a

minimum bandwidth guarantee (MBG). A request for the MBG class specifies a start time,

an end time and the minimum bandwidth that the network should guarantee throughout

the duration from the start to the end times. We assume that, once the bandwidth is

granted, the optical network can be configured to achieve the desired low-latency for

e-science. Such service is useful for realtime rendering or visualization of large volumes

of data. In our framework, the algorithms for handling bulk transfer contain the main

ingredients of the algorithms for handling the MBG class. For this reason, we will only

give light treatment to the MBG class.

The e-science setting provides both new challenges and new possibilities for resource

management that are not considered in the classical setting. The novel features of our

work are as follows. First, bulk transfer is usually regarded as low-priority best-effort

traffic, not subject to admission control in most QoS-provisioning frameworks such as

IntServ [8], DiffServ [6], the ATM network [32], or MPLS [34]. The deadline-based AC

and scheduling for the entire transfer (not each packet) has generally not been considered

in traditional QoS frameworks. Second, our scheme allows each transfer session to take

multiple paths rather than a single path. Third, the route and bandwidth assignment can

be periodically re-evaluated and reassigned. This is in contrast to earlier schemes where

such assignment remains fixed throughout the lifetime of the session.










To elaborate, we take the optimization approach for AC and scheduling on the en-

semble of the jobs in the system. At each of the periodic AC and scheduling instances,

AC is first administered. The admission of new jobs is formulated as a feasibility

problem subject to the constraint that the existing jobs admitted earlier must retain

their performance guarantee. However, to increase the admission rate, the routes and

bandwidth of the existing jobs can be reassigned. In the second step, scheduling, the

network controller assigns the actual routes and bandwidth to all jobs in the system so

as to optimize a performance objective. Examples that we consider in this chapter are

to minimize the worst case link utilization or to minimize an objective that encourages

earlier completion of the jobs. The result of scheduling in turn affects the admission rate

for future jobs. The classical AC schemes do not conduct periodic rerouting or bandwidth

re-allocation of existing jobs. They only ask if the remaining network capacity is sufficient

to handle new jobs. Furthermore, there is no additional scheduling step for performance

optimization on all jobs in the system.

The rest of this thesis is organized as follows. The related work is shown in Section

1.1. There are two main technical contributions of this thesis: CF TP, described in

Chapter 2 and Admission Control/Scheduling algorithms described in Chapter 3. In addition to the

proposed formulations, we present a rigorous discussion on their experimental results in

Sections 2.5 and 3.7, respectively. Finally, the conclusions are drawn in Chapter 4.

1.1 Related Work

Our work is focused on building an efficient scheduling framework to perform

advance reservation of bulk file transfer requests with admission control. The main

technical contributions of this thesis are as follows: Path-based scheduling is close to

the optimal solution and also fast; Use of multiple paths and multiple time slices for

scheduling; Non-uniform time slice structure to enable long coverage of reservation;

Periodic re-optimization of flows to achieve better network utilization. Similar to our

work, the authors of [5] also advocate periodic re-optimization to determine new routes









and bandwidth in optical networks. They also use a multi-commodity flow formulation.

However, they do not assume users making advance reservations with requested start and

end times. As a result, the scheduling problem is for a single time instance, rather than

over multiple time slices. Furthermore, it does not consider the edge-path formulation with

limited number of paths per job.

Several earlier studies [9, 11, 13, 15, 35, 37, 38] consider advance bandwidth

reservation with start and end times at an individual link for traffic that requires

minimum bandwidth guarantee (MBG). The concern is typically about designing

efficient data structures for keeping track of and querying bandwidth usage at the link

on different time intervals. New jobs are admitted one at a time without changing the

bandwidth assignment of the existing jobs in the system. The admission of a new job

is based on the availability of the requested bandwidth between its start time and end

time. [11, 14, 19, 25, 37] and [15] all go beyond single-link advance reservation and

tackle the more general path-finding problem for the MBG traffic class, but typically

only for the new requests, one at a time. The routes and bandwidth of the existing jobs

are unchanged. [12] discusses architectural and signaling-protocol issues about advance

reservation of network resources. [30] considers a network with known routing in which

each admitted job derives a profit. It gives approximation algorithms for admitting a

subset of the jobs so as to maximize the total profit.

[14, 25] touch upon advance reservation for bulk transfer. [14] proposes a malleable

reservation scheme. The scheme checks every possible interval between the requested start

time and end time for the job and tries to find a path that can accommodate the entire

job on that interval. It favors intervals with earlier deadlines. [25] studies the computation

complexity of a related path-finding problem and suggests an approximation algorithm.

[31] starts with an advance reservation problem for bulk transfer. Then, the problem is

converted into a bandwidth allocation problem at a single time instance to maximize the

job acceptance rate. This is shown to be an NP-hard combinatorial problem. Heuristic










algorithms are then proposed. Many papers study advance reservation, re-routing, or

re-optimization of lightpaths, at the granularity of a wavelength, in WDM optical networks

[4, 7, 40]. They are complementary to our study.

In the control plane, [27] and [26] present architectures for advance reservation of

intra and interdomain lightpaths. The DRAGON project [29] develops control plane

protocols for multi-domain traffic engineering and resource allocation on GMPLS-capable

[18] optical networks. GARA [23], the reservation and allocation architecture for the

grid computing toolkit, Globus, supports advance reservation of network and computing

resources. [20] adapts GARA to support advance reservation of lightpaths, MPLS paths

and DiffSery paths.









CHAPTER 2
CONCURRENT FILE TRANSFER PROBLEM

2.1 Problem Definition

A network is represented as a directed graph G = (V, E) where V is the set of nodes

and E is the set of edges (or arcs). Each edge e ∈ E represents a link whose capacity is

denoted by Ce. A path p is understood as a collection of links with no cycles. Job requests

are submitted to the network using a 6-tuple representation (Ai, si, di, Di, Si, Ei), where Ai

is the arrival time of the request, si and di are the source and destination nodes, respectively,

Di is the size of the file, and Si and Ei are the requested start service time and end service time,

where Ai < Si < Ei. The meaning of the 6-tuple is: request i is made at time t = Ai,

asking the network to transfer a file of size Di from source node si to destination node di

over the time interval [Si, Ei].
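To make the request format concrete, the following minimal sketch (in Python, with field names of our own choosing rather than thesis notation) records one such 6-tuple:

```python
from dataclasses import dataclass

@dataclass
class JobRequest:
    """One file-transfer request (A_i, s_i, d_i, D_i, S_i, E_i)."""
    arrival: float   # A_i: time the request is made
    src: int         # s_i: source node
    dst: int         # d_i: destination node
    size_gb: float   # D_i: file size
    start: float     # S_i: requested start service time
    end: float       # E_i: requested end service time

# Example: a request made at t = 0 to move 8000 GB from node 1 to node 9
# anytime within [0, 60] minutes.
job = JobRequest(arrival=0, src=1, dst=9, size_gb=8000, start=0, end=60)
```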

In our framework, the network resource is managed by a central network controller.

File transfer requests arrive following a random process and are submitted to the network

controller. The network controller verifies admissibility of the jobs through a process

known as admission control (AC). Admitted jobs are thereafter scheduled with a guarantee

of the start and end time constraints. Chapter 3 is devoted to a discussion on how the

AC and scheduling algorithms work together. In this chapter, we focus on the scheduling

problem at a single scheduling instance and compare different variations of the algorithm.

There is no AC phase.

More specifically, we have the following scheduling problem. At a scheduling instance

t, we have a network G = (V, E) and the link capacity vector C = (Ce)e6E. The network

may have some on-going file transfers, it may also have some jobs that were admitted

earlier but yet to be started. The capacity C is understood as the remaining capacity,

obtained by removing the bandwidth committed to all unfinished jobs admitted prior to










t. The network controller has a collection of new job requests, denoted by J.1 The task

of the network controller is to schedule the transfer of the jobs in J so as to optimize a

network efficiency measure. The chosen measure, which will be further explained later, is

the value Z such that, if the demands are all scaled by Z (i.e., from Di to ZDi for every

job i), they can be carried by the network without exceeding any link capacity. Such a Z

is known as the throughput.

2.2 The Time Slice Structure

At any scheduling time t, the timeline from t onward is divided into uniform time

slices (intervals). The set of time slices starting from time t is denoted as Θt. The

bandwidth assignment to each job is done on every time slice. In other words, the

bandwidth reserved for a job remains constant throughout the time slice, but it can

vary across time slices. At the scheduling time t, let the time slices in Θt be indexed as

1, 2, ... in increasing order of time. Let the start and end time of slice i be denoted by

STt(i) and ETt(i), respectively, and let its length be LENt(i). We say a time instance

t' > t falls into slice i if STt(i) < t' < ETt(i). The index of the slice that t' falls in is

denoted by It(t').

The time slice structure is useful for bulk file transfers, wherein a request is satisfied

as long as the network transfers the entire file between the start and end time. Such jobs

offer a high degree of flexibility to the network in modulating the bandwidth assignment

across time slices. This is in contrast to applications that require minimum bandwidth

guarantee, for which the network must maintain the minimum required bandwidth from

the start to the end time.

Rounding of the start and end time. While working with the time slice

structure, the start and end time of the jobs should be adjusted to align on the slice




1 We no longer need to consider the request arrival times, Ai, for i ∈ J. We may take
Ai = t for i ∈ J.










boundaries. This is required because bandwidth assignment is done on a slice level. To

illustrate, consider a file request (Ai, si, di, Di, Si, Ei). Let the rounded start and end time

be denoted as $\bar{S}_i$ and $\bar{E}_i$, respectively. We round the requested start time Si to be the

maximum of the current time or the end time of the slice in which Si falls, i.e.,


$\bar{S}_i = \max\{t, ET_t(I_t(S_i))\}$.   (2.1)

For rounding of the requested end time, we follow a stringent policy, wherein the end

time is rounded down, subject to the constraint that $\bar{E}_i > \bar{S}_i$. That is, there has to be at

least one-slice separation between the rounded start and end time. Otherwise, there is no

way to schedule the job. More specifically,


\[
\bar{E}_i =
\begin{cases}
ET_t(I_t(\bar{S}_i) + 1) & \text{if } ST_t(I_t(E_i)) < \bar{S}_i \\
E_i & \text{else if } ET_t(I_t(E_i)) = E_i \\
ST_t(I_t(E_i)) & \text{otherwise.}
\end{cases}
\tag{2.2}
\]

Fig. 2-1 shows several rounding examples. In practice, several variations of this strategy

can be adopted. From the definition of uniform slices, the slice set anchored at t, Θt,

contains infinitely many slices. In general, only a finite subset of Θt is useful to us. Let

$\bar{m}_t$ be the index of the last slice in which the rounded end time of some job falls. That is,

$\bar{m}_t = I_t(\max_{i \in J} \bar{E}_i)$. Let Lt ⊆ Θt be the collection of time slices {1, 2, ..., $\bar{m}_t$}. It is

sufficient to consider Lt for scheduling.

Figure 2-1. Examples of stringent rounding. The unshaded rectangles are time slices. The
shaded rectangles represent jobs. The top ones show the requested start and
end times. The bottom ones show rounded start and end times.









The maximum concurrent file transfer problem is formulated as a special type of

network linear programs (LP), known as the maximum concurrent flow problem (MCF)

[24, 36]. We consider both the node-arc form and the edge-path form of the problem.
2.3 Node-Arc Form

Let $f^i_{(l,k)}(j)$ be the total amount of data transfer on link (l, k) ∈ E that is assigned to

job i ∈ J on the time slice j ∈ Lt. We will loosely call it the flow for job i on arc (l, k) on

time slice j.



Node-Arc(t, J)

\[
\max \; Z \tag{2.3}
\]

subject to

\[
\sum_{k:(l,k) \in E} f^i_{(l,k)}(j) - \sum_{k:(k,l) \in E} f^i_{(k,l)}(j) =
\begin{cases}
y_i(j) & \text{if } l = s_i \\
-y_i(j) & \text{if } l = d_i \\
0 & \text{otherwise}
\end{cases}
\quad \forall i \in J, \forall l \in V, \forall j \in L_t \tag{2.4}
\]

\[
\sum_{j=1}^{\bar{m}_t} y_i(j) = Z D_i \quad \forall i \in J \tag{2.5}
\]

\[
\sum_{i \in J} f^i_{(l,k)}(j) \le C_{(l,k)}(j) \, LEN_t(j) \quad \forall (l,k) \in E, \forall j \in L_t \tag{2.6}
\]

\[
f^i_{(l,k)}(j) = 0, \quad j < I_t(\bar{S}_i) \text{ or } j > I_t(\bar{E}_i), \quad \forall i \in J, \forall (l,k) \in E \tag{2.7}
\]

\[
f^i_{(l,k)}(j) \ge 0 \quad \forall i \in J, \forall j \in L_t, \forall (l,k) \in E. \tag{2.8}
\]

Condition (2.4) is the flow conservation equation that is required to hold on every

time slice j ∈ Lt. It says that, for each job i, if node l is neither the source node for job

i nor its destination, then the total flow of job i that enters node l must be equal to the









total flow of job i that leaves node l. Moreover, on each time slice, the supply of job i from

its source must be equal to the demand at job i's destination. This common quantity is

denoted by yi(j) for job i on time slice j. Condition (2.5) says that, for each job, the total

supply (or, equivalently, total demand), when summed over all time slices, must be equal

to Z times the job size, where Z is the variable to be maximized. Condition (2.6) says

that the capacity constraints must be satisfied for all edges on every time slice. Note that

the allocated rate on link (1, k) for job i on slice j is fil,k)(j)/LEVt~j), where LEI~t(j) is

the length of slice j. The rate is assumed to be constant on the entire slice. Here, C~l~(1,)

is the capacity of link (1, k) on slice j. In all the experiments in this paper, each link

capacity is assumed to be a constant across the time slices, i.e., C(1,k)() (1,kI~) foT all

j. But, the formulation allows the more general time-varying link capacity. (2.7) is the

start and end time constraint for every job on every link. The flow must be zero before the

rounded start time and after the rounded end time.

The linear program asks, what is the largest constant scaling factor Z such that, after

every job size is scaled by Z, the link capacity constraints, as well as the start and end

time constraints, are still satisfied for all time slices? Let the optimal flow vector for the

linear program be denoted by $f = (f^i_{(l,k)}(j))_{i,(l,k),j}$. If Z > 1, then the flow Zf can still

be handled by the network without the link capacity constraints being violated. If, in

practice, the flow vector Zf is used instead of f, the file transfer can be completed faster.

If Z < 1, it is not possible to satisfy the deadline of all the jobs. However, if the file sizes

are reduced by a common factor, from Di to ZDi for all i, then the requests can all be satisfied.

There exists a different perspective to our optimization objective. Maximizing the

throughput of the concurrent flow is equivalent to finding a concurrent flow that carries

all the demands and also minimizes the worst-case link utilization, i.e., link congestion.

To see this, we make the following substitution, $\hat{f} = f/Z$. For our case, the largest link

utilization over all links and across all time slices is minimized. The result is that the

traffic load is balanced over the whole network and across all time slices. This feature










is desirable if the network also carries other types of traffic that is sensitive to network

load bursts, such as real-time traffic or traffic requiring minimum bandwidth guarantee.

In addition, by reserving only the minimum bandwidth in each time slice, more future

requests can potentially be accommodated.

The problem formulated here is related to the MCF problem. The difference is that,

in the MCF problem, the time dimension does not exist. Our problem becomes exactly

the MCF problem if $\bar{m}_t = 1$ (i.e., there is only one time slice) and if the constraints for

the start and end times of the jobs, (2.7), are removed. In the MCF problem, the variable

Z is called the throughput of the concurrent flow. The MCF problem has been studied

in a sequence of papers, e.g., [2, 3, 21, 24, 36]. Several approximation algorithms have

been proposed, which run faster than the usual simplex or interior point methods. For our

problem, we can replicate the graph G into a sequence of temporal graphs representing the

network at different time slices and use virtual source and destination nodes to connect

them. We then have an MCF problem on the new graph and we can apply the fast

approximation algorithms to this MCF instance.
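A sketch of this graph replication, assuming the networkx package and our own naming, is shown below; each time slice gets its own copy of G, and per-job virtual terminals connect a job only to the slices between its rounded start and end times.

```python
import networkx as nx

def time_expanded_graph(G, jobs, num_slices, slice_len):
    """Replicate G once per time slice and add per-job virtual terminals.

    G:     directed nx.DiGraph with a 'cap' attribute (rate capacity) on each edge
    jobs:  list of (job_id, src, dst, first_slice, last_slice)
    Returns a graph on which a single-instance concurrent-flow problem can be posed.
    """
    H = nx.DiGraph()
    for j in range(1, num_slices + 1):
        for (u, v, data) in G.edges(data=True):
            # capacity of an edge within slice j = rate * slice length
            H.add_edge((u, j), (v, j), cap=data["cap"] * slice_len)
    for (jid, s, d, lo, hi) in jobs:
        src, dst = ("src", jid), ("dst", jid)
        for j in range(lo, hi + 1):   # a job may only use its admissible slices
            H.add_edge(src, (s, j), cap=float("inf"))
            H.add_edge((d, j), dst, cap=float("inf"))
    return H
```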


Figure 2-2. A network with 11 nodes and 13 bi-directional links, each of capacity 1GB
shared in both directions.



Example-1: Consider the network shown in Fig. 2-2 with two file transfer requests, J1:

(0, 1, 9, 8000, 0, 60) and J2: (0, 3, 6, 1000, 0, 60). Here, we have used our 6-tuple convention

to represent the requests. Both job requests arrive at time 0. The start and end times

are t = 0 and t = 60, respectively. The job size is measured in GB and the time










in minutes. When we schedule using a single slice of length 60 minutes, the node-arc

formulation gives the following flow reservation for each job on edges e1 through e13:

J1 : {3600, 0, 0, 0, 0, 0, 3600, 3600, 3600, 3600, 3600, 0, 3600}

J2 : {0, 0, 900, 900, 900, 0, 0, 0, 0, 0, 0, 0, 0}

The throughput Z is 0.9, which is optimal.

The number of variables required to solve the node-arc model is Θ(|E| × |Lt| ×

|J|), because, for every job, there is an arc flow variable associated with every link for

every time slice. The resulting problem is computationally expensive even with the fast

approximation algorithms. In Section 2.4, we will consider the edge-path form of the

problem, where every job is associated with a set of path-flow variables corresponding to a

small number of paths, for every time slice.

2.4 Edge-Path Form

The edge-path formulation uses a set of simple paths for each i ∈ J and determines

the flow on each of these paths on every time slice. The number of possible simple paths

can actually be higher than the number of arcs and therefore the edge-path form has no

computational advantage over the node-arc form. To avoid the computational complexity,

we consider sub-optimal formulations where we allow only a small number of paths for

each job. In such a setting, the edge-path form is an appropriate formulation.

Let Pt(si, di) be the set of allowed paths for job i (from the source node si to the

destination di). Let $f^i_p(j)$ be the total amount of data transfer on path p ∈ Pt(si, di) that

is assigned to job i ∈ J on the time slice j ∈ Lt. We will loosely call it the flow for job i on

path p on time slice j.












Edge-Path(t, J)

\[
\max \; Z \tag{2.9}
\]

subject to

\[
\sum_{j=1}^{\bar{m}_t} \sum_{p \in P_t(s_i, d_i)} f^i_p(j) = Z D_i \quad \forall i \in J \tag{2.10}
\]

\[
\sum_{i \in J} \sum_{\substack{p \in P_t(s_i, d_i) \\ p: e \in p}} f^i_p(j) \le C_e(j) \, LEN_t(j) \quad \forall e \in E, \forall j \in L_t \tag{2.11}
\]

\[
f^i_p(j) = 0, \quad j < I_t(\bar{S}_i) \text{ or } j > I_t(\bar{E}_i), \tag{2.12}
\]
\[
\forall i \in J, \forall p \in P_t(s_i, d_i) \tag{2.13}
\]

\[
f^i_p(j) \ge 0 \quad \forall i \in J, \forall j \in L_t, \forall p \in P_t(s_i, d_i). \tag{2.14}
\]

Condition (2.10) says that, for every job, the sum of all the flows assigned on all

time slices for all allowed paths must be equal to Z times the job size, where Z is the

variable to be maximized. (2.11) says that the capacity constraints must be satisfied for

all edges on every time slice. Note that the allocated rate on path p for job i on slice j
is $f^i_p(j)/LEN_t(j)$, where LENt(j) is the length of slice j. Ce(j) is the capacity of link e

on slice j. (2.13) is the start and end time constraint for every job on every allowed path.

The flow must be zero before the rounded start time and after the rounded end time.

The edge-path formulation allows an explicitly defined collection of paths for each

file-transfer job and flow reservations are done only on these paths. The number of

variables required to solve the edge-path model is Θ(k × |Lt| × |J|), where k is the

maximum number of paths allowed for each job. We will examine two possible collections

of paths, k-shortest paths and k-shortest disjoint paths.
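To illustrate how the edge-path program can be assembled for a small instance, the sketch below uses the open-source PuLP modeling package as a stand-in for the CPLEX solver used in our experiments; the data structures and names are illustrative assumptions, not part of the thesis implementation.

```python
import pulp

def edge_path_lp(jobs, paths, cap, slices, slice_len):
    """Maximize the concurrent throughput Z in the edge-path form.

    jobs:      {i: (D_i, first_slice_i, last_slice_i)}
    paths:     {i: [list of paths, each a list of edges (u, v)]}
    cap:       {edge: capacity per unit time}
    slices:    list of slice indices, e.g. [1, ..., m]
    slice_len: {j: LEN_t(j)}
    """
    prob = pulp.LpProblem("EdgePath", pulp.LpMaximize)
    Z = pulp.LpVariable("Z", lowBound=0)
    f = {}
    for i, (_, lo, hi) in jobs.items():
        for p in range(len(paths[i])):
            for j in slices:
                if lo <= j <= hi:   # flow is fixed to 0 outside [lo, hi] (start/end constraint)
                    f[i, p, j] = pulp.LpVariable(f"f_{i}_{p}_{j}", lowBound=0)

    prob += Z                                           # objective (2.9)
    for i, (D, lo, hi) in jobs.items():                 # demand constraint (2.10)
        prob += pulp.lpSum(f[i, p, j] for p in range(len(paths[i]))
                           for j in slices if lo <= j <= hi) == Z * D
    for e, c in cap.items():                            # capacity constraint (2.11)
        for j in slices:
            prob += pulp.lpSum(f[i, p, j]
                               for i, (_, lo, hi) in jobs.items()
                               for p, path in enumerate(paths[i])
                               if e in path and lo <= j <= hi) <= c * slice_len[j]

    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return pulp.value(Z), {k: v.value() for k, v in f.items()}
```

Only the flows of paths that actually contain edge e enter each capacity row, which is what keeps the model small when k is small.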
2.4.1 Shortest Paths

We use the algorithm in [39] to generate k-shortest paths. This algorithm is not

the fastest one, but is easy to implement. Also, in Section 2.4.2, we will use it as a









building block in our algorithm for finding k-shortest disjoint paths. The key steps of the

k-shortest-path algorithm are

1. Compute the shortest path using Dijkstra's algorithm. This path is called the ith
shortest path for i = 1. Set B = ∅.

2. Generate all possible deviations to the ith shortest path and add them to B. Pick the
shortest path from B as the (i + 1)th shortest path.

3. Repeat step 2) until k paths are generated or there are no more paths possible (i.e.,
B = ∅).

Given a sequence of paths p1, p2, ..., pk from node s to d, the deviation to pk at its jth

node is defined as a new path, p, which is the shortest path under the following constraint.

First, p overlaps with pk up to the jth node, but the (j + 1)th node of p cannot be the

(j + 1)th node of pk. In addition, if p also overlaps with pl up to the jth node, for any

l = 1, 2, ..., k − 1, then the (j + 1)th node of p cannot be the (j + 1)th node of pl.
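For experimentation, a Yen-style enumeration equivalent in spirit to the steps above is available in the networkx package; the short sketch below is a convenience stand-in, not the implementation of [39].

```python
import itertools
import networkx as nx

def k_shortest_paths(G, s, d, k):
    """Return up to k loop-free shortest paths from s to d
    (fewer if the graph does not contain k simple paths)."""
    gen = nx.shortest_simple_paths(G, s, d, weight="weight")
    return list(itertools.islice(gen, k))
```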

Example-2: Let us apply the edge-path formulation with k-shortest paths to the file

transfer requests in Example-1 for the network shown in Fig. 2-2. The case of k = 1

corresponds to using the single shortest path for each job. Let $p^i_j$ denote the jth shortest

path for job i. The shortest paths are,


$p^1_1$ : 1 - 11 - 10 - 9        $p^2_1$ : 3 - 2 - 7 - 6

Flow reservation for each job is given by


$f^1_{p_1}(1) = 3600$        $f^2_{p_1}(1) = 450$

The throughput is 0.45, which is only half the optimal value obtained from the node-arc

formulation.









For the case k = 2, i.e., with two shortest paths per job, we have,


$p^1_1$ : 1 - 11 - 10 - 9        $f^1_{p_1}(1) = 3600$

$p^1_2$ : 1 - 2 - 10 - 9         $f^1_{p_2}(1) = 0$

$p^2_1$ : 3 - 2 - 7 - 6          $f^2_{p_1}(1) = 450$

$p^2_2$ : 3 - 4 - 5 - 6          $f^2_{p_2}(1) = 0$

The total flow for J1 is $f^1_{p_1}(1) + f^1_{p_2}(1) = 3600$. The total flow for J2 is $f^2_{p_1}(1) + f^2_{p_2}(1) = 450$.

The throughput is 0.45.

From k = 1 to 2, we do not find any throughput improvement. This is because for

J1, the second path shares an edge with the first, and hence, the total flow reaching the

destination node is limited to 3600. By increasing the number of paths per job from 2 to

4, we get the following results.


$p^1_1$ : 1 - 11 - 10 - 9                    $p^2_1$ : 3 - 2 - 7 - 6

$p^1_2$ : 1 - 2 - 10 - 9                     $p^2_2$ : 3 - 4 - 5 - 6

$p^1_3$ : 1 - 2 - 7 - 8 - 9                  $p^2_3$ : 3 - 2 - 10 - 9 - 8 - 7 - 6

$p^1_4$ : 1 - 11 - 10 - 2 - 7 - 8 - 9        $p^2_4$ : 3 - 2 - 1 - 11 - 10 - 9 - 8 - 7 - 6

The flow of each job now splits over paths that no longer all share a common bottleneck edge.

The total flow for J1 is 7200, the total flow for J2 is 900. The throughput is 0.9. This is

equal to the optimal value achieved by the node-arc formulation.










2.4.2 Shortest Disjoint Paths

One interesting aspect that we noticed in Example-2 is that, while the k-shortest path

algorithm minimizes the number of links used, the k-shortest paths for each job have a

tendency to overlap on some links. As a result, the addition of new paths does not necessarily

improve the throughput. This motivates us to consider the k-shortest disjoint paths.

The algorithm for finding the k-shortest disjoint paths from node s to d is straightforward

if such k paths indeed exist. Given the directed graph G, in the first step of the algorithm,

we find the shortest path from node s to d, and then we remove all the edges on the path

from the graph G. In the next step, we find the shortest path in the remaining graph,

and then remove those edges on the selected path to create a new remaining graph. The

algorithm continues until we find k paths.

When the number of disjoint paths is less than k, we first find all the disjoint paths

and then resort to the following heuristics to select additional paths so that the total

number of selected paths is k. Let S be the list of selected disjoint paths.

1. Set S to be an empty list. Set B = ∅.

2. Find all the disjoint paths between the source s and destination d and append them
to S in the order they are found. Let p be the first path in the list S.

3. Generate the deviations for p and add them to B.

4. Select the path in B that has the least number of overlapped edges with the paths in
S, and append it to S.

5. Set p to be the next path in the list S.

6. Repeat from step 3) until S contains k paths or there are no more paths possible
(i.e., B = ∅).

In the above steps, the set B contains short paths, generated from the deviations of

some already selected disjoint paths. The newly selected path from B has the least overlap

with the already selected ones. It should be noted that while this approach reduces the

overlap between the k paths of each job, it does not guarantee the same for paths across

jobs. This is because the average path length of k-shortest disjoint paths tends to be









greater than that of the k-shortest paths, potentially causing the shortest disjoint paths of

one job to heavily overlap with those of other jobs. This can have a negative effect on the

overall throughput.
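The sketch below illustrates the straightforward case in which k edge-disjoint paths exist, using networkx and our own function name; the deviation-based fallback of steps 3)-6) is omitted.

```python
import networkx as nx

def k_shortest_disjoint_paths(G, s, d, k):
    """Greedily pick up to k edge-disjoint shortest paths from s to d."""
    H = G.copy()
    paths = []
    for _ in range(k):
        try:
            path = nx.shortest_path(H, s, d, weight="weight")
        except nx.NetworkXNoPath:
            break                                   # fewer than k disjoint paths exist
        paths.append(path)
        H.remove_edges_from(zip(path, path[1:]))    # drop the edges just used
    return paths
```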

Example-3: Let us apply the k-shortest disjoint paths to Example-1. For k = 2, we have,



$p^1_1$ : 1 - 11 - 10 - 9        $p^2_1$ : 3 - 2 - 7 - 6

$p^1_2$ : 1 - 2 - 7 - 8 - 9      $p^2_2$ : 3 - 4 - 5 - 6

$f^1_{p_1}(1) = 3600$            $f^2_{p_1}(1) = 450$

$f^1_{p_2}(1) = 3600$            $f^2_{p_2}(1) = 450$

The total flow for J1 is $f^1_{p_1}(1) + f^1_{p_2}(1) = 7200$. The total flow for J2 is $f^2_{p_1}(1) + f^2_{p_2}(1) = 900$.

The throughput is 0.9. Hence, the optimal throughput is achieved with k = 2.

2.5 Evaluation

This section shows the performance results of the edge-path formulation using the single and multi-path schemes. We compare its throughput with the optimal solution obtained from the node-arc formulation. The scalability of the formulations is evaluated based on their required computation time.

The experiments were conducted on random networks and Abilene, an Internet2 high-performance backbone network (Fig. 2-3). The random networks have between 100 and 1000 nodes with a varying node degree of 5 to 10. Our instance of the Abilene network consists of a backbone with 11 nodes, in which each node is connected to a randomly generated stub network of average size 10. The backbone links are each 10 Gbps. The entire network has 121 nodes and 490 links. We use the commercial CPLEX package for solving linear programs on Intel-based workstations.2 In order to simulate the file



2 Since fast approximation algorithms are not the focus of this thesis, we use the
standard LP solver for the evaluations.









size distribution of Internet traffic, we resort to the widely accepted heavy-tailed Pareto distribution, with the distribution function F(x) = 1 - (x/b)^(-α), where x ≥ b and α > 1. As the α value gets closer to 1, the distribution becomes more heavy-tailed and there is a higher probability of generating large file sizes. All the experiments described in this section were done using the Pareto parameter α = 1.8 and an average job size of 50 GB. The plots use the following acronyms: S (Shortest path), SD (Shortest Disjoint path) and NA (Node-Arc).
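For concreteness, file sizes with this distribution can be drawn by inverse-transform sampling. The short sketch below is our own illustration (not the simulator's code); it picks the scale b so that the Pareto mean α·b/(α-1) equals the 50 GB average used here.

import random

def pareto_file_sizes(n, alpha=1.8, mean_gb=50.0, seed=1):
    # Draw n file sizes (in GB) from F(x) = 1 - (x/b)^(-alpha), x >= b, alpha > 1.
    rng = random.Random(seed)
    b = mean_gb * (alpha - 1) / alpha        # scale chosen so the mean is mean_gb
    # Inverse CDF: x = b * (1 - u)^(-1/alpha), u uniform in [0, 1)
    return [b * (1.0 - rng.random()) ** (-1.0 / alpha) for _ in range(n)]

sizes = pareto_file_sizes(10000)
print(sum(sizes) / len(sizes))               # close to 50 GB for large n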

While configuring the simulation environment, we can ignore the connection setup (path setup for the edge-path form) time for the following reasons. First, the small network size allows us to pre-compute the allowed paths for every possible request. Second, in the actual operation, the scheduling algorithm runs every few minutes or every tens of minutes. There is plenty of time to re-configure the control parameters for the paths in the small research network.












Figure 2-3. The Abilene network with 11 backbone nodes. A and B are stub networks.


2.5.1 Single Slice Scheduling (SSS)

When there is only a single time slice in the node-arc and edge-path formulations, we call the situation single slice scheduling (SSS). In this experiment, we keep the time-slice structure simple in order to examine how other factors affect the performance of the different formulations. All jobs start at the 0th minute and end at the 60th minute. Scheduling is done at time 0 with a (single) time slice size equal to 60 minutes.










2.5.1.1 Performance comparison of the formulations

Fig. 2-4 shows the throughput improvement on the Abilene network with increasing number of paths for the shortest (S) and shortest disjoint (SD) schemes, respectively. The optimal throughput obtained from the node-arc (NA) form is shown as a horizontal line. Similar plots are shown in Fig. 2-5 for a random network with 100 nodes.3

Single vs. multiple paths. Moving from a single path to multiple paths per job, we observe a drastic throughput increase. A small number of paths per job is sufficient to realize such throughput improvement. On the Abilene network, the throughput is increased by up to 10 times with 4 to 8 paths per job. Simply by switching from a single path to two paths per job, we observe a substantial throughput gain. On the random network, the throughput is increased by 10 to 30 times with 4 or more paths. In most of our examples, the S and SD schemes reach the optimal throughput with k = 8 or less.

In summary, the optimal throughput obtained from our multi-path scheme is significantly higher than that of a simple scheme, which uses a single shortest path for every job. Throughput improvement by an order of magnitude can be expected with only a small number of paths. The performance gains saturate at around 8 paths in most of our simulations; the exact number in general depends on the topology and the actual traffic.

Shortest (S) vs. shortest disjoint (SD) paths. For random networks, SD tends to perform better than S. In most of our examples, the throughput of SD is several times higher than that of S for k = 2 to 8. For the Abilene network, the opposite trend can often be observed. This behavior can be explained as follows. As we have mentioned in Section 2.4, the paths for different jobs have a higher chance to overlap in the SD case, potentially causing throughput degradation. In a well-connected random network, disjoint or nearly


3 The node-arc case is not shown in Fig. 2-5 (d) and in several subsequent figures
because the problem size becomes too large to be solved on our workstations with 2 to
4 GB of memory, mainly due to the large memory requirement.












disjoint paths are more abundant and also tend to be short. The throughput benefit from the disjoint paths exceeds the throughput degradation from the longer average path length. On the other hand, in the Abilene network, the backbone network has few disjoint paths between each pair of nodes. Insisting on having (nearly) disjoint paths leads to a longer average path length due to the lack of choices. Hence, the throughput penalty from longer path length is more pronounced in a small network such as Abilene. Therefore, it is often more beneficial to use the shortest paths instead.

In summary, we expect SD to be preferable in large, well-connected networks. In a small network with few disjoint paths, the performance of S and SD is generally comparable, with S sometimes being better. Finally, the difference between S and SD disappears quickly as the number of paths per job increases.


Figure 2-4. Z for different formulations on Abilene network using SSS. A) 121 jobs; B) 605
jobs; C) 1210 jobs; D) 6050 jobs.



2.5.1.2 Comparison of algorithm execution time

Recall that our motivation to move from the node-arc formulation to the edge-path

formulation is that the latter allows us to restrict the number of permitted paths for each

job, resulting in lower algorithm execution time. Fig. 2-6 and Fig. 2-7 show the execution












Figure 2-5. Z for different formulations on a random network with 100 nodes using SSS.
A) 100 jobs; B) 500 jobs; C) 1000 jobs; D) 5000 jobs.


time for the Abilene network and for a random network with 100 nodes, respectively.4 The horizontal axis is the number of selected paths for the shortest (S) and shortest disjoint (SD) cases. The execution time for the node-arc (NA) form is shown as a flat line. We observe that the execution time for S or SD increases roughly linearly when the number of permitted paths per job is small (up to 16 paths in the figures). With several hundred jobs or more, even the longest execution time (at 16 paths) is much shorter than that for the node-arc case, by an order of magnitude. We expect this difference in execution time to increase with more jobs and larger networks.

In Fig. 2-6 C and D, we see that the scheduling time for the node-arc formulation approaches or exceeds the actual 60-minute transfer time of the files. On the other hand, the edge-path formulation, with a small number of allowed paths, is much more scalable with traffic intensity. Fast approximation algorithms in [2, 3, 21, 24, 36], if used, should





4 Unless mentioned otherwise, the execution time for the edge-path formulations does not include the path computation time for finding the shortest paths. This is because the shortest paths are computed only once, and the computation can be carried out off-line.













improve the execution time for all formulations. But the significant difference between the node-arc case and the shortest or shortest disjoint cases should still remain.


Figure 2-6. Execution time for different formulations on the Abilene network using SSS. A) 121 jobs; B) 605 jobs; C) 1210 jobs; D) 6050 jobs.


Figure 2-7. Execution time for different formulations on a random network with 100 nodes using SSS. A) 100 jobs; B) 500 jobs; C) 1000 jobs; D) 5000 jobs.



2.5.1.3 Algorithm scalability with network size


Fig. 2-8 shows the variation of the algorithm execution time with network size.


In our simulations, we schedule 100 jobs using SSS for a period of 60 minutes. The










edge-path algorithms (S and SD) with 8 paths have an execution time under 10 seconds for networks with less than 800 nodes. On the other hand, the execution time for the node-arc algorithm is nearly 15 minutes for a network size of 500 nodes. We conclude that the node-arc formulation is unsuitable for real-time scheduling of file transfers on networks of more than several hundred nodes.







Figure 2-8. Random network with k = 8. Execution time for different network sizes.


2.5.1.4 Average results over random network instances

When the experiments are conducted on random networks, unless mentioned

otherwise, each plot typically presents the results obtained from a single network instance

rather than an average result over many network instances. To demonstrate that the

single-instance results are not anomalies but representative, we repeated the experiments

in Section 2.5.1 for a 100-node random network and plotted the data points averaged

over 50 network instances. Due to space limitation, we present only the results for 1000

jobs in Fig. 2-9. This should be compared with Fig. 2-5 C, which is for a single network

instance. Besides the fact that the curves in Fig. 2-9 are smoother, the two figures show

similar characteristics. All the observations that we have made about Fig. 2-5 C remain

essentially true for Fig. 2-9. We should point out that, in order to run the experiment on

many network instances in a reasonable amount of time, the networks for Fig. 2-9 were

generated with fewer links than that for Fig. 2-5 C. This accounts for the difference in the

throughput values between the two cases. Finally, the corresponding average execution

time is shown in Fig. 2-10 on semilog scale.

We further confirmed the validity of our data and results by computing the confidence

interval of the mean values plotted in Fig. 2-9. For instance, the mean and standard














Figure 2-9. Average Z for different formulations on a random network with 100 nodes and
1000 jobs using SSS. The result is the average over 50 instances of the random
network.




Figure 2-10. Average execution time for different formulations on a random network with
100 nodes and 1000 jobs using SSS. The result is the average over 50
instances of the random network.



deviation of the throughput for the node-arc formulation are 0.1489 and 0.0807, respectively. The 95% confidence interval for the mean is ±0.0188 around the mean. This is a good indicator of the accuracy of our results.


In addition, we also computed the average of the throughput ratio of the S and SD schemes to the node-arc formulation. In Fig. 2-11, both the S and SD schemes achieve a large fraction of the optimal throughput by switching from a single path to 2 paths. The throughput approaches the optimum with 8 paths. For k < 4, SD performs better than S. The plot is consistent with our earlier results shown in Fig. 2-9.






Figure 2-11. Average throughput ratio for different formulations on a random network
with 100 nodes and 1000 jobs using SSS. The result is the average over 50
instances of the random network.










2.5.2 Multiple Slice Scheduling (MSS)

When there is more than one time slice in the node-arc and edge-path formulations, we call the situation multiple slice scheduling (MSS). In this experiment, 121 jobs are scheduled for a period of 1 day using multiple slices of identical size. The intervals between the start times of the jobs are independently and identically distributed exponential random variables with a mean of 1 minute. We have tried four time-slice sizes: 60, 30, 15 and 10 minutes.

2.5.2.1 Performance comparison of different formulations

Fig. 2-12 shows the throughput improvement for the Abilene network with increasing number of paths for the S and SD schemes, respectively. The throughput of the node-arc formulation is shown as a flat line.

For each fixed slice size, the general behavior of the throughput follows the same pattern as the SSS case discussed in Section 2.5.1.1. In particular, the throughput improvement is significant as the number of paths per job increases. In Fig. 2-12, we observe more than 50% throughput increase with 4 or fewer paths, and a further increase with 8 or more paths. When comparing across different slice sizes, we see that smaller slice sizes have a throughput advantage, because they lead to more accurate quantization of time. Having more time slices in a fixed scheduling interval offers more opportunities to adjust the flow assignment to the jobs. In Fig. 2-12, the throughput values at 16 paths per job are 9 for the 10-min slice size and 6 for the 60-min slice size. This shows the benefit of having a fine-grained slice size, since in this experimental setup, 16 paths are sufficient for the S and SD schemes to reach the optimal throughput. We observed more significant throughput improvement from using smaller time slices in other settings. For instance, with 605 jobs, the throughput obtained from the 10-min slice size is nearly twice the throughput from the 60-min slice size.

Fig. 2-13 shows similar results for a 100-node random network with 100 jobs. The maximum throughput at 16 paths is nearly the same for all cases. However, for situations with a small number of paths per job, smaller time slice sizes have a throughput













advantage. More throughput improvement has been observed under other experimental settings. For instance, with 500 jobs and 16 paths, a noticeable improvement is observed when using 10-minute slices instead of 60-minute slices.


Figure 2-12. Z for different formulations on the Abilene network with 121 jobs using MSS. A) Time slice = 60 min; B) Time slice = 30 min; C) Time slice = 15 min; D) Time slice = 10 min.


Figure 2-13. Z for different algorithms on a 100-node random network with 100 jobs using MSS. A) Time slice = 60 min; B) Time slice = 30 min; C) Time slice = 15 min; D) Time slice = 10 min.











2.5.2.2 Comparison of algorithm execution time

Fig. 2-14 and Fig. 2-15 show the execution time for the Abilene network with 121 jobs and for a 100-node random network with 100 jobs, respectively. For each fixed time slice size, we continue to observe the linear or faster increase of the execution time as the number of paths increases in the S and SD schemes. Again, the execution time for the node-arc form is much greater than that for the S and SD cases; in most cases, it is too large to be observed in our experiments. Finally, the throughput advantage of using smaller slice sizes is achieved at the expense of a significantly longer execution time.

Figure 2-14. Execution time for different formulations on the Abilene network with 121 jobs using MSS. A) Time slice = 60 min; B) Time slice = 30 min; C) Time slice = 15 min; D) Time slice = 10 min.



2.5.2.3 Optimal time slice

The tradeoff of the three scheduling algorithms lies in two metrics, throughput

and execution time. Fig. 2-16 helps to identify a suitable time slice size for which the

throughput is high and the execution time is acceptable. We observe that the throughput

begins to saturate when the time slice size is 15 minutes and the execution time is under

half a minute. Note the sharp rise of the execution time as the slice size decreases. It is

therefore essential to choose an appropriate slice size.




















Figure 2-15. Execution time for different formulations on a 100-node random network with 100 jobs using MSS. A) Time slice = 60 min; B) Time slice = 30 min; C) Time slice = 15 min; D) Time slice = 10 min.


Figure 2-16. The Abilene network with 121 jobs and k = 8. A) Z for different time slice sizes; B) Execution time for different time slice sizes.









CHAPTER 3
ADMISSION CONTROL AND SCHEDULING ALGORITHM

3.1 The Setup

For easy reference, notations and definitions frequently used in this chapter are summarized in Table 3-1. The notations for the network and job requests are the same as those discussed in Section 2.1. In addition, a request from the MBG class is a 6-tuple (Ai, si, di, Bi, Si, Ei), where Bi is the requested minimum bandwidth on the interval [Si, Ei]. It may optionally specify a maximum bandwidth, but we will ignore this option in the presentation.

The network controller performs admission control (AC) by evaluating the available network capacity to satisfy new job requests. It admits only those jobs whose required performance can be guaranteed by the network and rejects the rest. The network controller also performs file transfer scheduling for all admitted jobs, which determines how each job is transferred over time, i.e., how much bandwidth is allocated to each path of the job at every time instance.

In the basic scheme, AC and scheduling are done periodically, every τ time units, where τ is a positive number. More specifically, at time instances kτ, k = 1, 2, ..., the controller collects all the new requests that arrived on the interval ((k-1)τ, kτ], makes the admission control decision, and schedules the transfer of all admitted jobs. Both AC and scheduling must take into account the old jobs, i.e., those jobs that were admitted earlier but remain unfinished. The value of τ should be small enough so that new job requests can be checked for admission and scheduled as early as possible.1 However, τ should be larger than the computation time required for AC and scheduling.




1 In this scheme, a request generally needs to wait a duration no longer than τ for the admission decision. We will comment on how to conduct realtime admission control later.









Table 3-1. Frequently used notations and definitions
Ce              Capacity of link e
Di              Demand size of job i
Si, S̄i          Start time and rounded start time of job i
Ei, Ēi          End time and rounded end time of job i
τ               Interval between consecutive AC/scheduling runs
In the following, assume t = kτ.
Θk              Slice set anchored at time kτ
ĵk              Index of the last slice in which some rounded end time falls
Θ̄k ⊂ Θk         Finite slice set {1, ..., ĵk}
STk(i), ETk(i)  Start and end times of slice i
LENk(i)         Length of slice i
Ik(t)           Index of the slice that time t falls in
Sk^o            Set of the old jobs
Sk^n            Set of the new jobs
Pk(s, d)        Allowable paths from node s to d
Rk(i)           Remaining demand of job i
fi(p, j)        Total flow allocated to job i on path p on slice j
Ce(j)           Remaining capacity of link e on slice j


3.2 The Time Slice Structure

At each scheduling instance, t = kτ, the timeline from t onward is partitioned into time slices, i.e., closed intervals on the timeline, which are not necessarily uniform in size. A set of time slices, Θk, is said to be anchored at t = kτ if all slices in Θk are mutually disjoint and their union forms an interval [t, t'] for some t'. The set {Θk}, k = 1, 2, ..., is called a slice structure if each Θk is a set of slices anchored at t = kτ.

Definition 1. A slice structure {Θk} is said to be congruent if the following property holds for every pair of positive integers k and k', where k' > k ≥ 1: for any slice s' ∈ Θk', if s' overlaps in time with a slice s ∈ Θk, then s' ⊆ s.

In words, any slice in a later anchored slice collection must be completely contained in a slice of any earlier collection, if it overlaps in time with the earlier collection. Alternatively speaking, if a slice s ∈ Θk overlaps in time with Θk', then either s ∈ Θk' or s is partitioned into multiple slices all belonging to Θk'.
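The containment condition is easy to check mechanically. The following sketch is our own helper (not from the thesis): slices are represented as (start, end) intervals, and a later anchored set is accepted only if each of its slices that overlaps an earlier slice is contained in it.

def is_congruent(earlier, later):
    # Each slice is a (start, end) interval. Returns True if every slice of the later
    # anchored set that overlaps (with positive length) a slice of the earlier set
    # is contained in that earlier slice.
    for s2, e2 in later:
        for s1, e1 in earlier:
            overlaps = s2 < e1 and s1 < e2
            contained = s1 <= s2 and e2 <= e1
            if overlaps and not contained:
                return False
    return True

# Uniform slices of size 4 anchored at t = 4 and at t = 8 (cf. Fig. 3-1)
theta_1 = [(4, 8), (8, 12), (12, 16), (16, 20)]
theta_2 = [(8, 12), (12, 16), (16, 20)]
print(is_congruent(theta_1, theta_2))        # True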










The motivation for the definition of the congruent slice structure will become clearer later. In a nutshell, the AC and scheduling algorithm introduced in this thesis applies to any congruent slice structure; the congruent slice structure is the key construct that allows us to guarantee the performance of old jobs admitted previously while admitting new jobs, when a non-uniform slice structure is used. In this thesis, we focus on two simple congruent slice structures, the uniform slices (US) and the nested slices (NS), as shown in Fig. 3-1 and 3-3, respectively. For ease of presentation, we use the uniform slices as an example to explain the AC and scheduling algorithm. Discussion of the more sophisticated nested slices is deferred to Section 3.6.

In US, the timeline is divided into equal-sized time slices of duration τ (coinciding with the AC/scheduling interval length). The set of slices anchored at any t = kτ consists of all the slices after t. Figure 3-1 shows the uniform slice structure at two time instances, t = τ and t = 2τ. In this example, τ = 4 time units. The arrows point to the scheduling instances. The two collections of rectangles are the time slices anchored at t = τ and t = 2τ, respectively. It is easy to check the congruent property of this slice structure.


Figure 3-1. Uniform time slice structure


At any AC/scheduling time t = kτ, let the time slices anchored at t, i.e., those in Θk, be indexed 1, 2, ... in increasing order of time. Let the start and end times of slice i be denoted by STk(i) and ETk(i), respectively, and let its length be LENk(i). We say a time instance t' ≥ t falls into slice i if STk(i) ≤ t' ≤ ETk(i). The index of the slice that t' falls in is denoted by Ik(t').










At t = kτ, let the set of jobs in the system yet to be completed be denoted by Jk. Jk contains two types of jobs: the new requests (also known as new jobs) made on the interval ((k-1)τ, kτ], denoted by Sk^n, and the old jobs admitted at or before (k-1)τ, denoted by Sk^o. The old jobs have already been admitted and should not be rejected by the admission control conducted at t. But some of the new requests may be rejected.

Rounding of the start and end times. With the time slice structure and the advancement of time, we adjust the start and end times of the requests. The main objective is to align the start and end times with the slice boundaries. After such rounding, the start and the end times will be denoted by S̄i and Ēi, respectively. For a new request i, let the requested response time be Ti = Ei - Si. We round the requested start time to the maximum of the current time and the end time of the slice in which the requested start time Si falls, i.e.,

  S̄i = max{t, ETk(Ik(Si))}.                                        (3.1)

For rounding of the requested end time, we allow two policy choices, the stringent policy and the relaxed policy. In the stringent policy, if the requested end time does not coincide with a slice boundary, it is rounded down, subject to the constraint that Ēi > S̄i.2 This constraint ensures that there is at least a one-slice separation between the rounded start time and the rounded end time. Otherwise, there is no way to schedule the job. In the relaxed policy, the end time is first shifted by Ti with respect to the rounded start time, and then rounded up. More specifically,



2 In the more sophisticated non-uniform slice structure introduced in Section 3.6, we
allow the end time to be re-rounded at different scheduling instances. This way, the
rounded end time can become closer to the requested end time, as the slice sizes become
finer over time.













stringent:

  Ēi = ETk(Ik(S̄i) + 1),   if Ik(Ei) ≤ Ik(S̄i) + 1;
  Ēi = Ei,                 else if ETk(Ik(Ei)) = Ei;                (3.2)
  Ēi = STk(Ik(Ei)),        otherwise.

relaxed:

  Ēi = ETk(Ik(S̄i + Ti)).


Figure 3-2 shows the effect of the two policies after three jobs are rounded.


Figure 3-2. Two rounding policies. The unshaded rectangles are time slices. The shaded
rectangles represent jobs. The top ones show the requested start and end
times. The bottom ones show the rounded start and end times.
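To make the rounding concrete, the sketch below specializes (3.1) and (3.2) to the uniform slice structure, where slice j anchored at t covers [t + (j-1)τ, t + jτ]. This is our own illustration; the helper names are ours, and the first stringent case uses our reading of the one-slice-separation condition above.

import math

def round_request(t, tau, S, E, policy="stringent"):
    # Round the requested (S, E) onto uniform slices of size tau anchored at t.
    def idx(x):                     # I_k(x): index of the slice that x falls in
        return max(1, math.ceil((x - t) / tau))
    def ET(j):                      # end time of slice j
        return t + j * tau
    def ST(j):                      # start time of slice j
        return t + (j - 1) * tau

    S_bar = max(t, ET(idx(S)))                          # (3.1)
    T = E - S                                           # requested response time
    if policy == "relaxed":
        E_bar = ET(idx(S_bar + T))                      # shift by T, then round up
    else:                                               # stringent: round down, keep E_bar > S_bar
        if idx(E) <= idx(S_bar) + 1:
            E_bar = ET(idx(S_bar) + 1)
        elif ET(idx(E)) == E:
            E_bar = E
        else:
            E_bar = ST(idx(E))
    return S_bar, E_bar

print(round_request(t=4, tau=4, S=5, E=17))                    # (8, 16)
print(round_request(t=4, tau=4, S=5, E=17, policy="relaxed"))  # (8, 20)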


If a job i is an old one, its rounded start time S̄i is replaced by the current time t. The remaining demand is updated by subtracting from it the total amount of data transferred for job i on the previous interval, ((k-1)τ, kτ].

From the definition of uniform slices, the slice set anchored at each t = kτ, Θk, contains an infinite number of slices. In general, only a finite subset of Θk is useful to us. Let ĵk be the index of the last slice in which the rounded end time of some job falls. That is, ĵk = Ik(max_{i∈Jk} Ēi). Let Θ̄k ⊂ Θk be the collection of time slices 1, 2, ..., ĵk. We call the slices in Θ̄k the active time slices. We will also think of Θ̄k as an array of slices when there is no ambiguity. Clearly, the collection {Θ̄k} inherits the congruent property from {Θk}. Therefore, it is sufficient to consider {Θ̄k} for AC and scheduling.

3.3 Admission Control

For each pair of nodes s and d, let the collection of allowable paths from s to d be denoted by Pk(s, d). In general, the set may vary with k. For each job i, let the remaining demand at time t = kτ be denoted by Rk(i), which is equal to the total demand Di minus the amount of data transferred until time t.

At t = kτ, let J ⊆ Jk be a subset of the jobs in the system. Let fi(p, j) be the total flow (total data transfer) allocated to job i on path p, where p ∈ Pk(si, di), on time slice j, where j ∈ Θ̄k. As part of the admission control algorithm, the solution to the following feasibility problem is used to determine whether the jobs in J can all be admitted.




AC(k, J)

  Σ_{j∈Θ̄k} Σ_{p∈Pk(si,di)} fi(p, j) = Rk(i),   ∀ i ∈ J                            (3.3)

  Σ_{i∈J} Σ_{p∈Pk(si,di): e∈p} fi(p, j) ≤ Ce(j) LENk(j),   ∀ e ∈ E, ∀ j ∈ Θ̄k       (3.4)

  fi(p, j) = 0,   for j < Ik(S̄i) or j > Ik(Ēi),   ∀ i ∈ J, ∀ p ∈ Pk(si, di)         (3.5)

  fi(p, j) ≥ 0,   ∀ i ∈ J, ∀ j ∈ Θ̄k, ∀ p ∈ Pk(si, di).                              (3.6)

(3.3) says that, for every job, the sum of all the flows assigned on all time slices for all paths must be equal to its remaining demand. (3.4) says that the capacity constraints must be satisfied for all edges on every time slice. Note that the allocated rate on path p for job i on slice j is fi(p, j)/LENk(j), where LENk(j) is the length of slice j. The rate is assumed to be constant on the entire slice. Here, Ce(j) is the remaining link capacity










of link e on slice j. (3.5) is the start and end time constraint for every job on every path: the flow must be zero before the rounded start time and after the rounded end time.

Recall that we are assuming every job to be a bulk transfer for simplicity. If job i is of the MBG class, then the demand constraint (3.3) will be replaced by a minimum bandwidth guarantee condition.
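Since AC(k, J) is an ordinary linear program over the variables fi(p, j), it can be set up directly with any LP solver. The sketch below is our own toy instance (two jobs, two slices, hand-picked paths; scipy is assumed to be available), not the thesis implementation; a zero objective turns the solver into a feasibility check.

import numpy as np
from scipy.optimize import linprog

cap_per_slice = 10.0                  # C_e(j) * LEN_k(j): 1 GB/min over a 10-minute slice
slices = [1, 2]
links = [("a", "b"), ("b", "c"), ("a", "d"), ("d", "c")]

jobs = [                              # remaining demand, allowed paths (edge lists), slice window
    {"R": 10.0, "paths": [[("a", "b"), ("b", "c")], [("a", "d"), ("d", "c")]], "window": {1, 2}},
    {"R": 4.0,  "paths": [[("b", "c")]],                                        "window": {1}},
]

var = {}                              # index the variables f_i(p, j)
for i, job in enumerate(jobs):
    for p in range(len(job["paths"])):
        for j in slices:
            var[(i, p, j)] = len(var)
n = len(var)

A_eq = np.zeros((len(jobs), n))       # (3.3): flows sum to the remaining demand
b_eq = np.array([job["R"] for job in jobs])
for (i, p, j), col in var.items():
    A_eq[i, col] = 1.0

A_ub = np.zeros((len(links) * len(slices), n))   # (3.4): per-link, per-slice capacity
b_ub = np.full(len(links) * len(slices), cap_per_slice)
for (i, p, j), col in var.items():
    for e in jobs[i]["paths"][p]:
        A_ub[links.index(e) * len(slices) + (j - 1), col] = 1.0

bounds = [(0.0, 0.0) if j not in jobs[i]["window"] else (0.0, None)   # (3.5)-(3.6)
          for (i, p, j) in var]

res = linprog(c=np.zeros(n), A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print("admissible" if res.status == 0 else "not admissible")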






The AC/scheduling algorithm is triggered every τ time units, with the AC part before the scheduling part. AC examines the newly arrived jobs and determines their admissibility. In doing so, we need to ensure that the earlier commitments to the old jobs are not broken. This can be achieved by adopting one of the following AC procedures.

1. Subtract-Resource (SR): An updated (remaining) network is obtained by subtracting the bandwidth assigned to the old jobs on future time slices from the link capacities. Then, we determine a subset of the new jobs that can be accommodated in this remaining network. This method is helpful for performing quick admission tests.3 However, it runs the risk of rejecting new jobs that could actually be accommodated by reassigning the flows of the old jobs to different paths and time slices.

2. Reassign-Resource (RR): This method attempts to reassign flows to the old jobs. First, we cancel the existing flow assignment to the old jobs on the future time slices and restore the network to its original capacity. Then, we determine a subset of the new jobs that can be admitted along with all the old jobs under the original network capacity. This method is expected to have a better acceptance ratio than SR. However, it is computationally more expensive because the flow assignment is computed for all the jobs in the system, both the old and the new.



3 We can perform realtime admission with this method.









The actual admission control is as follows. In the SR scheme, the remaining capacity of link e on slice j, Ce(j), is computed by subtracting from Ce (the original link capacity) the total bandwidth allocated on slice j to all paths crossing e during the previous run of the AC/scheduling algorithm (at t = (k-1)τ). In the RR scheme, simply let Ce(j) = Ce for all e and j.

In the SR scheme, we list the new jobs, Sk^n, in a sequence, 1, 2, ..., m. The particular order of the sequence is flexible, possibly dependent on some customizable policy. For instance, the order may be arbitrary, or based on the priority of the jobs, or based on increasing order of the request times. We apply a binary search to the sequence to find the last job j, 1 ≤ j ≤ m, in the sequence such that all jobs before and including it are admissible. That is, j is the largest index for which the subset of the new jobs J = {1, 2, ..., j} is feasible for AC(k, J). All the jobs after j are rejected.

In the RR scheme, at time t = kτ, all the jobs are listed in a sequence where the old jobs Sk^o are ahead of the new jobs Sk^n. The order among the old jobs is arbitrary. The order among the new jobs is again flexible. Denote this sequence by 1, 2, ..., m, in which jobs 1 through l are the old ones. We then apply a binary search to the sequence of new jobs, l+1, l+2, ..., m, to find the last job j, l < j ≤ m, such that all jobs before and including it are admissible. That is, j is the largest index for which the resulting subset of the jobs J = {1, 2, ..., l, l+1, ..., j} is feasible for AC(k, J) under the original network capacity.
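Both variants reduce admission control to the same primitive: given an ordered job sequence and a feasibility oracle for AC(k, J), find the longest admissible prefix. A short sketch of that binary search (the function and argument names are ours; the oracle is supplied by the caller):

def longest_admissible_prefix(sequence, n_old, is_feasible):
    # sequence: old jobs (the first n_old entries) followed by the new jobs.
    # is_feasible(J): True if AC(k, J) has a solution for the job set J.
    # Returns the admitted prefix; all jobs after it are rejected.
    lo, hi = n_old, len(sequence)          # the old jobs are always kept
    while lo < hi:
        mid = (lo + hi + 1) // 2           # candidate prefix length
        if is_feasible(sequence[:mid]):
            lo = mid                       # jobs 1..mid fit; try a longer prefix
        else:
            hi = mid - 1
    return sequence[:lo]

For SR, n_old = 0 and the oracle uses the reduced capacities Ce(j); for RR, the old jobs occupy the first n_old positions and the oracle uses the original capacities.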

Discussion. The binary search technique assumes a pre-defined list of jobs and

identifies the first j jobs that can be admitted into the system without violating the

deadline constraints. The presence of an exceptionally large job with unsatisfiable

demand will cause other jobs following it to be rejected, even though it may be possible

to accommodate them after removing the large job. The rejection ratio tends to be higher

when the large job lies closer to the head of the list. An interesting problem is how to

admit as many new jobs as possible, after all the old jobs are admitted. A solution to this









problem is orthogonal to the main issues addressed in this thesis, but can be incorporated

into our general scheduling framework.

3.4 Scheduling Algorithm

Given the set of admitted jobs, Sk, which always includes the old jobs, the scheduling algorithm allocates flows to these jobs to optimize a certain objective. We consider two objectives, Quick-Finish (QF) and Load-Balancing (LB). Given a set of admissible jobs J, the problem associated with the former is


Quick-Finish(k, J)

  min Σ_{j∈Θ̄k} γ(j) Σ_{i∈J} Σ_{p∈Pk(si,di)} fi(p, j)

  subject to (3.3)-(3.6).


In the above, γ(j) is a weight function increasing in j, which is chosen to be γ(j) = j + 1 in our experiments. In this problem, the cost increases as time increases. The intention is to finish a job early rather than late, when it is possible. The solution tends to pack more flow into the earlier slices but leaves the load light in later slices. The problem associated with the LB objective is


Load-Balancing(k, J)

  max Z                                                                     (3.9)

  subject to  Σ_{j∈Θ̄k} Σ_{p∈Pk(si,di)} fi(p, j) = Z Rk(i),   ∀ i ∈ J        (3.10)

              (3.4)-(3.6).


Let the optimal solution be Z* and fi*(p, j) for all i, j, and p. The actual flows assigned are fi*(p, j)/Z*. Note that (3.10) ensures that the fi*(p, j)/Z* satisfy (3.3). Also, Z* ≥ 1 must be true since J is admissible. Hence, the fi*(p, j)/Z* are a feasible solution to the AC(k, J) problem. The Load-Balancing(k, J) problem above is written in the maximizing-concurrent-throughput form. It reveals its load-balancing nature when written in the equivalent minimizing-congestion form. For that, make a substitution of variables, fi(p, j) ← fi(p, j)/Z, and let μ = 1/Z.

We have,


Load-Balancing-1(k, J)

  min μ                                                                                  (3.11)

  subject to  Σ_{i∈J} Σ_{p∈Pk(si,di): e∈p} fi(p, j) ≤ μ Ce(j) LENk(j),   ∀ e ∈ E, ∀ j ∈ Θ̄k   (3.12)

              (3.3), (3.5) and (3.6).

Hence, the solution minimizes the worst link congestion across all time slices in Θ̄k.

The scheduling algorithm applies J = Sk to Quick-Finish(k, J) or Load-Balancing(k, J). This determines an optimal flow assignment to all jobs on all allowed paths and on all time slices. Given the flow assignment fi(p, j), the allocated rate on each time slice is xi(p, j) = fi(p, j)/LENk(j) for all j ∈ Θ̄k. The remaining capacity of each link on each time slice is given by

  Ce(j) = Ce - Σ_{i∈Sk} Σ_{p∈Pk(si,di): e∈p} xi(p, j)   if SR                (3.13)
  Ce(j) = Ce                                            if RR.
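Turning the per-slice flow amounts into rates and into the remaining capacities of (3.13) is simple bookkeeping after the LP is solved. The sketch below uses our own data layout (paths are tuples of edges) and is only an illustration of that post-processing step.

def post_process(flows, slice_len, cap, admitted, use_sr=True):
    # flows[(i, p, j)]: flow (e.g., GB) for job i on path p (a tuple of edges) in slice j.
    # slice_len[j]: LEN_k(j); cap[e]: original capacity rate C_e of link e.
    # Returns (rates, remaining) with rates[(i, p, j)] = x_i(p, j) and
    # remaining[(e, j)] = C_e(j) as in (3.13).
    rates = {key: f / slice_len[key[2]] for key, f in flows.items()}
    remaining = {(e, j): cap[e] for e in cap for j in slice_len}
    if use_sr:                                   # SR: subtract the allocated rates
        for (i, p, j), x in rates.items():
            if i in admitted:
                for e in p:
                    remaining[(e, j)] -= x
    return rates, remaining                      # RR: remaining stays at C_e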


3.5 Putting It Together: The AC and Scheduling Algorithm

In this section, we integrate various algorithmic components and present the complete

AC and scheduling algorithm.

On the interval ((k-1)τ, kτ], the system keeps track of the new requests arriving on that interval. It also keeps track of the status of the old jobs. If an old job is completed, it is removed from the system. If an old job is serviced on the interval, the amount of data transferred for that job is recorded. At t = kτ, the steps described in Algorithm 1 are taken.

Algorithm 1 Admission Control and Scheduling
1: Construct the anchored slice set at t = kτ, Θk.
2: Construct the job sets Jk, Sk^o and Sk^n, which are the collection of all jobs, the
   collection of old jobs, and the collection of new jobs in the system, respectively.
3: For each old job i, update the remaining demand Rk(i) by subtracting from it the
   amount of data transferred for i on the interval ((k-1)τ, kτ]. Round the start time
   as S̄i ← t.
4: For each new job l, let Rk(l) = Dl. Round the requested start and end times according
   to (3.1) and (3.2), depending on whether the stringent or relaxed rounding policy is
   used. This produces the rounded start and end times, S̄l and Ēl.
5: Derive ĵk = Ik(max_{i∈Jk} Ēi). This determines the finite collection of slices Θ̄k =
   {1, 2, ..., ĵk}, the first ĵk slices of Θk.
6: Perform admission control as in Algorithm 2. This produces the list of admitted jobs
   Sk.
7: Schedule the admitted jobs as in Algorithm 3. This yields the flow amount fi(p, j) for
   each admitted job i ∈ Sk, over all paths for job i and all time slices j ∈ Θ̄k.
8: Compute the remaining network capacity by (3.13).



Algorithm 2 AC (Step 6 of Algorithm 1)
1: if Subtract-Resource is used then
2:   Sequence the new jobs (Sk^n) in the system. Denote the sequence by (1, 2, ..., m).
3:   Apply binary search to the sequence. Find the last job j in the sequence so that the
     set of jobs J = {1, 2, ..., j} is admissible by AC(k, J).
4: else if Reassign-Resource is used then
5:   Sequence all the jobs (Jk) in the system, so that the old jobs (Sk^o) are ahead of the
     new jobs (Sk^n). Denote the sequence of jobs by (1, 2, ..., l, l+1, ..., m), where the first
     l jobs are the old jobs, followed by the new jobs.
6:   Apply binary search to the subsequence of new jobs (l+1, l+2, ..., m). Find the
     last job j in the subsequence so that the set of jobs J = {1, 2, ..., j} is admissible by
     AC(k, J).
7: end if
8: Return the admissible set, Sk = J.


3.6 Non-uniform Slice Structure

The number of time slices directly affects the number of variables in our AC and scheduling linear programs, and in turn the execution speed of our algorithm. We face the problem of covering a large enough segment of the timeline for advance reservations with









Algorithm 3 Scheduling (Step 7 of Algorithm 1)
1: if Quick-Finish is preferred then
2:   Run Quick-Finish(k, Sk)
3: else
4:   Run Load-Balancing(k, Sk)
5: end if


a small number of slices, say about 100. In order to cover a 30-day reservation period with 100 slices, the slice size in the US structure is 7.2 hours, which is too coarse for small to medium sized jobs. In this section, we design a new slice structure with non-uniform slice sizes. The slice sizes contain a geometrically increasing subsequence and, therefore, are able to cover a large timeline with a small number of slices. The challenge is that the slice structure must remain congruent.

Recall that the congruent property means that, if a slice in an earlier anchored slice set overlaps in time with a later anchored slice set, it either remains a slice, or is partitioned into smaller slices, in the later slice set. The definition is motivated by the need to maintain consistency in the bandwidth assignment across time. As an example, suppose at time (k-1)τ, a job is assigned a bandwidth x on a path on the slice j_{k-1}. At the next scheduling instance t = kτ, suppose the slice j_{k-1} is partitioned into two slices. Then, we understand that a bandwidth x has been assigned on both slices. Without the congruent property, it is likely that a slice, say jk, in the slice set anchored at kτ cuts across several slices in the slice set anchored at (k-1)τ. If the bandwidth assignments at (k-1)τ are different for these latter slices, the bandwidth assignment for slice jk is not well defined just before the AC/scheduling run at time kτ.

3.6.1 Nested Slice Structure

In the nested slice structure, there are l types of slices, known as level-i slices, i = 1, 2, ..., l. Each level-i slice has a duration Δi, with the property that Δi = κi Δi+1, where κi > 1 is an integer, for i = 1, ..., l-1. Hence, the slice size increases at least geometrically as i decreases. For practical applications, a small number of levels suffices. We also require that, for the level i such that Δi+1 ≤ τ < Δi, τ is an integer multiple of Δi+1 and Δi is an integer multiple of τ. This ensures that each scheduling interval contains an integral number of slices and that the sequence of scheduling instances does not skip any level-j slice boundaries, for 1 ≤ j ≤ i.

The nested slice structure can be defined by construction. At t = 0, the timeline is partitioned into level-1 slices. The first j1 level-1 slices, where j1 ≥ 1, are each partitioned into level-2 slices. This removes j1 level-1 slices but adds j1 κ1 level-2 slices. Next, the first j2 level-2 slices, where j2 ≤ j1 κ1, are each partitioned into level-3 slices. This removes j2 level-2 slices but adds j2 κ2 level-3 slices. The process continues until, in the last step, the first j_{l-1} level-(l-1) slices are partitioned into level-l slices; that is, the first j_{l-1} level-(l-1) slices are removed and j_{l-1} κ_{l-1} level-l slices are added at the beginning. In the end, the collection of slices at t = 0 contains α_l = j_{l-1} κ_{l-1} level-l slices, α_{l-1} = j_{l-2} κ_{l-2} - j_{l-1} level-(l-1) slices, ..., α_2 = j1 κ1 - j2 level-2 slices, followed by an infinite number of level-1 slices. The sequence of ji's must satisfy j2 ≤ j1 κ1, j3 ≤ j2 κ2, ..., j_{l-1} ≤ j_{l-2} κ_{l-2}. This collection of slices is denoted by Θ0.

As an example, to cover a maximum 30-day period, we can take Δ1 = 1 day, Δ2 = 1 hour, and Δ3 = 10 minutes. Hence, κ1 = 24 and κ2 = 6. The first two days are first divided into a total of 48 one-hour slices, out of which the first 8 hours are further divided into 48 10-minute slices. The final slice structure has 48 level-3 (10-minute) slices, 40 level-2 (one-hour) slices, and as many level-1 (one-day) slices as needed, in this case, 28. The total number of slices is 116.
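The slice counts in this example follow directly from the construction: with the first j1 days refined into hours and the first j2 hours refined into 10-minute slices, there are j2 κ2 level-3, j1 κ1 - j2 level-2, and 30 - j1 level-1 slices. A quick arithmetic check (our own snippet):

kappa1, kappa2 = 24, 6
j1, j2 = 2, 8                    # refine the first 2 days into hours, the first 8 hours into 10-min slices

level3 = j2 * kappa2             # 48 ten-minute slices
level2 = j1 * kappa1 - j2        # 40 one-hour slices
level1 = 30 - j1                 # 28 one-day slices
print(level3, level2, level1, level3 + level2 + level1)   # 48 40 28 116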

In designing the slice structure, sometimes one wishes to begin by specifying the set of αj's. To have a nested slice structure, the αj's should satisfy the following property. First, λ_l = α_l must be an integer multiple of κ_{l-1}, and λ_{l-1} = λ_l/κ_{l-1} + α_{l-1} must be an integer multiple of κ_{l-2}. In general, for i from l-1 down to 2, define λ_i = λ_{i+1}/κ_i + α_i;4 λ_i should be an



4 For each i, 2 < i < 1, As has the meaning that the length of the portion of the timeline
covered by level-j slices, for all i < j < 1, is equivalent to the length of As level-i slices.










integer multiple of κ_{i-1}. The αi's can be determined one by one in decreasing order of i. In the previous example, we can first choose α3 = 48, since 48 is a multiple of κ2 = 6. This gives λ2 = 48/6 + α2. If we choose α2 = 40, then λ2 = 48 is divisible by κ1 = 24.

For the subsequent scheduling instances, the objective is to maintain the same number of slices as Θ0 at the different levels. But this cannot be done while satisfying the slice congruent property. Hence, we allow the number of slices at each level to deviate from αj, for j = 2, ..., l. This can be done in various ways. Let zj be the current number of level-j slices at t = kτ, for j = 1, 2, ..., l. Set z1 = ∞.

1. At-Least-α: For j from l down to 2, if the number of slices at level j, zj, is less than αj, bring in (and remove) the next level-(j-1) slice and partition it into κ_{j-1} level-j slices. This scheme maintains at least αj and at most αj + κ_{j-1} - 1 level-j slices for j = 2, ..., l.

2. At-Most-α: In this scheme, we try to bring the current number of slices at level j, zj, to αj, for j = 2, ..., l, subject to the constraint that new slices at level j can only be created if t is an integer multiple of Δ_{j-1}.

More specifically, at t = kτ, the following is repeated for j from l down to 2. If t is not an integer multiple of Δ_{j-1}, then nothing is done. Otherwise, if zj < αj, we try to create level-j slices out of a level-(j-1) slice. In the creation process, if a level-(j-1) slice exists, then we bring in the first one and partition it. Otherwise, we try to create more level-(j-1) slices, provided t is an integer multiple of Δ_{j-2}. Hence, a recursive slice-creation process may be involved. This procedure is made more concrete in Algorithm 4, which calls Algorithm 5, a recursive subroutine.

Fig. 3-3 and 3-4 show a two-level and a three-level nested slice structure, respectively, under the At-Most-α design. In the special but typical case of αj ≥ κ_{j-1}, for j = 2, ..., l, the At-Most-α algorithm can be simplified as follows. For j from l down to 2, if zj ≤ αj - κ_{j-1}, bring in (and remove) the next level-(j-1) slice and partition it into κ_{j-1} level-j slices. This scheme maintains at least αj - κ_{j-1} and at most αj level-j slices for j = 2, ..., l.










Algorithm 4 At-Most-α
1: for j = l down to 2 do
2:   if t is an integer multiple of Δ_{j-1} then
3:     while zj < αj do
4:       wj ← zj
5:       Create-Slices(j)
6:       if zj = wj then
7:         break // New slices cannot be created.
8:       end if
9:     end while
10:  end if
11: end for

Algorithm 5 Create-Slices(j)
1: if z_{j-1} < 1 and j > 2 and t is an integer multiple of Δ_{j-2} then
2:   Create-Slices(j-1)
3: end if
4: if z_{j-1} ≥ 1 then
5:   // The next level-(j-1) slice exists.
6:   Bring in the next level-(j-1) slice and partition it into κ_{j-1} level-j slices.
7:   zj ← zj + κ_{j-1}
8:   z_{j-1} ← z_{j-1} - 1
9: end if
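The slice bookkeeping in Algorithms 4 and 5 only needs the per-level counts zj. The following Python sketch is our own simplification that tracks counts rather than the slices themselves, but mirrors the two algorithms.

def at_most_alpha(t, l, z, alpha, kappa, delta):
    # z[j]: current number of level-j slices (use float('inf') for level 1);
    # alpha[j]: target count for level j >= 2; kappa[j]: number of level-(j+1)
    # slices per level-j slice; delta[j]: duration of a level-j slice.

    def create_slices(j):                           # Algorithm 5, on counts
        if z[j - 1] < 1 and j > 2 and t % delta[j - 2] == 0:
            create_slices(j - 1)
        if z[j - 1] >= 1:                           # the next level-(j-1) slice exists
            z[j - 1] -= 1
            z[j] += kappa[j - 1]

    for j in range(l, 1, -1):                       # Algorithm 4: j from l down to 2
        if t % delta[j - 1] == 0:
            while z[j] < alpha[j]:
                before = z[j]
                create_slices(j)
                if z[j] == before:                  # new slices cannot be created
                    break
    return z

A full implementation would also track the slice boundaries themselves, but the count guarantees stated above refer exactly to these zj values.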


3.6.2 Variant of Nested Slice Structure

When some κ_{j-1} is large, it may be unappealing that the number of level-j slices varies by κ_{j-1} (sometimes by more than κ_{j-1}). To solve this problem, we next introduce another

Figure 3-3. Two-level nested slice structure, At-Most-α design. τ = 4. The anchored slice sets shown are for t = τ, 2τ and 3τ, respectively.


Figure 3-4. Three-level nested slice structure, At-Most-α design. The anchored slice sets shown are for t = τ, 2τ and 3τ, respectively.

variant of the nested slice structure, called the Almost-α Variant, so named because it maintains at least αj and at most αj + 1 level-j slices for j = 2, ..., l.

The Almost-α Variant starts the same way as the nested slice structure at t = 0. As time progresses from (k-1)τ to kτ, for k = 1, 2, ..., the collection of slices anchored at t = kτ, i.e., Θk, is updated from Θ_{k-1} as in Algorithm 6.

Algorithm 6 Almost-α Variant
1: for j = l down to 2 do
2:   if zj < αj then
3:     Bring in (and remove) the next available slice of a larger size and create
       αj - zj additional level-j slices.
4:     zj ← αj
5:     The remaining portion of the removed larger slice forms another slice.
6:   end if
7: end for


The price to pay is that the Almost-α Variant introduces new slice types, different from the pre-defined level-i slices, i = 1, ..., l. Fig. 3-5 shows a three-level Almost-α Variant.

3.7 Evaluation

This section shows the performance results of different variations of our AC/scheduling

algorithm. We also evaluate the required computation time to determine the scalability of

our algorithms.









Figure 3-5. Three-level nested slice structure, Almost-α Variant. τ = 2, Δ1 = 16 and Δ3 = 1; α3 = 8, α2 = 2. The anchored slice sets shown are for t = τ, 2τ and 3τ, respectively. The shaded areas are also slices, but differ in size from any level-j slice, j = 1, 2 or 3.


Most of the experiments are conducted on the Abilene network, which consists of

11 backbone nodes connected by 10 Gbps links. Each backbone node is connected to

a randomly generated stub network. The link speed between each stub network and

the backbone node is 1 Gbps. The entire network has 121 nodes and 490 links. For the

scalability study of the algorithms, we use random networks with nodes ranging from

100 to 1000. We use the commercial CPLEX package for solving linear programs on

Intel-based workstations.

Unless mentioned otherwise, we use the following experimental models and parameters. Job requests arrive following a Poisson process. In order to simulate the file size distribution of Internet traffic, we resort to the widely accepted heavy-tailed Pareto distribution, with the distribution function F(x) = 1 - (x/b)^(-α), where x ≥ b and α > 1. The closer α is to 1, the more heavy-tailed is the distribution, and the more likely it is to generate very large demand sizes. In most of our experiments, the average file size is 50 GB and α = 1.3. By default, each job uses 8 shortest paths. We adopt this approach because our experiments on multi-path scheduling revealed the following significant result: for a network of several hundred nodes, 8 shortest paths are sufficient to achieve near









optimal solutions under a practical execution time. We evaluate our algorithms under 3 traffic loads, namely, light, medium and heavy. By light, medium and heavy traffic loads, we mean that the average inter-arrival time between jobs is 5 minutes, 2 minutes and 30 seconds, respectively. In order to get stable results, we generated jobs under these different traffic loads for a period of 3 days. For example, under the heavy traffic load, roughly 10,000 file transfer requests were generated.

We will compare the uniform time slice structure (US) and the nested slice structure (NS) of the Almost-α Variant type. For US, the time slice and AC/scheduling interval (τ) is 21.17 minutes. This corresponds to 68 slices in every 24-hour period. For NS, we use a two-level NS structure with 48 fine (level-2) slices and 20 coarse (level-1) slices. The fine slice size is Δ2 = 5 minutes, and the coarse slice size is Δ1 = 60 minutes. These parameters are chosen so that the first 24-hour period is divided into 68 fine and coarse slices, the same number as in the US case. The AC/scheduling interval τ is 5 minutes, which is finer than in the US case.
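A quick sanity check of these parameters (plain arithmetic, our own snippet):

us_slice_min = 21.17
print(round(24 * 60 / us_slice_min))          # about 68 uniform slices per 24 hours

fine, coarse = 48, 20                         # NS: 48 five-minute + 20 sixty-minute slices
print(fine + coarse, fine * 5 + coarse * 60)  # 68 slices covering 1440 minutes = 24 hours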

The plots and tables use acronyms to denote the algorithms used in the experiments. Recall that SR stands for Subtract-Resource and RR stands for Reassign-Resource in admission control; LB stands for Load-Balancing as the scheduling objective and QF stands for Quick-Finish.

The performance measures are,

Rejection ratio: This is the ratio between the number of jobs rejected and total
number of job requests. From the system's perspective, it is desirable to admit as
many jobs as possible.

Response time: This is the difference between the completion time of a job and the
time when it is first being transmitted. From an individual job's perspective, it is
desirable to have shorter response time.




5 While configuring the simulation environment, we can ignore the connection setup (path setup) time because the small network size allows us to pre-compute the allowed paths for every possible request.









3.7.1 Comparison of Algorithm Execution Time

Before comparing the performance of the algorithms, we first compare their execution

time. Short execution time is important for the practicality of our centralized network

control strategy. The results on execution time put the performance comparison

(Section 3.7.2) in perspective: better performance often comes with longer execution

time. Table 3-2 shows the execution time of different schemes under two representative

traffic conditions.

Table 3-2. Average admission control/scheduling algorithm execution time (s)
              Heavy Load             Light Load
Algorithm     AC       Scheduling    AC       Scheduling
US+SR+LB      13.13    5.70          0.40     0.61
US+SR+QF      12.03    1.86          0.32     0.23
US+RR+LB      80.89    5.89          1.05     0.65
US+RR+QF      34.36    4.74          0.36     0.21
NS+SR+LB      1.54     4.50          0.14     0.60
NS+SR+QF      1.57     1.60          0.13     0.07
NS+RR+LB      25.16    4.30          1.07     0.61
NS+RR+QF      17.43    3.54          0.17     0.06


SR vs. RR and LB vs. QF. The results show that for admission control, SR can have a much smaller average execution time than RR. This is because, in SR, AC works only on the new jobs, whereas in RR, AC works on all the jobs currently in the system. Hence, for SR, the AC(k, J) feasibility problem has far fewer variables.

When the AC algorithm is fixed, the choice of the scheduling algorithm, LB or QF, also affects the execution time for AC. For instance, the RR+LB combination has a much longer execution time for AC than the RR+QF combination. This is because, in LB, the flow for each job tends to be stretched over time in an effort to reduce the network load on each time slice. This results in more jobs and more active slices (slices in Θ̄k) in the system at any moment, which means more variables for the linear program.










For scheduling, since LB and QF are very different linear programs, it is difficult to

explain their execution times. But, we do observe that LB has longer execution time,

again, possibly due to more variables for the reason stated in the previous paragraph.

US vs. NS. Depending on the number of levels of the nested slice structure, the number of slices at each level and the slice sizes, NS can be configured to achieve different objectives: improving the algorithm performance, reducing the execution time, or doing both simultaneously. Our experimental results in Table 3-2 correspond to the third case. Since the two-level NS structure has Δ1 = 60 minutes and the US has the uniform slice size of 21.17 minutes, NS typically has fewer slices than US. For instance, under heavy load, US+RR+QF uses 150.5 active slices on average for AC, while NS+RR+QF uses 129.6 active slices on average. The number of variables, which directly affects the computation time of the linear programs, is generally proportional to the number of slices.

Part of the performance advantage of NS (this is shown in Section 3.7.2 later) is attributed to the smaller scheduling interval τ. To reduce the scheduling interval for US, we must reduce the slice size, since the slice size equals τ in US. In the next experiment, we set the US slice size to be 5 minutes, which is equal to the size of the finer slice in the NS. Table 3-3 shows the performance and execution time comparison between US and NS. Here, we use RR for admission control and QF for scheduling. US and NS have nearly identical performance in terms of the response time and job rejection ratio. But NS is far superior in execution time for both AC and scheduling. Upon closer inspection (Table 3-4), NS requires far fewer active time slices than US on average.

In summary,

SR is much faster than RR for admission control.
LB tends to be slower than QF for both AC and scheduling.
NS requires a much smaller execution time than US, or achieves better performance, or both.









Table 3-3. Comparison of US and NS (τ = 5 minutes)
        Response      Rejection    Execution Time (s)
        Time (min)    Ratio        AC         Scheduling
LIGHT LOAD
US      6.064         0.000        0.469      0.309
NS      5.821         0.000        0.162      0.062
MEDIUM LOAD
US      9.767         0.006        3.177      2.694
NS      9.354         0.006        0.587      0.387
HEAVY LOAD
US      16.486        0.183        131.958    263.453
NS      17.107        0.173        17.428     3.539

Table 3-4. Average number of slices of US and NS (τ = 5 minutes)
                      Average Number of Slices
                      AC         Scheduling
Light Load    US      299.0      299.9
              NS      68.9       69.0
Medium Load   US      421.63     462.9
              NS      79.1       82.1
Heavy Load    US      975.1      799.8
              NS      129.6      113.4


The advantage of NS can be furthered by increasing the number of slice levels. In practice,

it is likely that US is too time consuming and NS is a must.

3.7.2 Performance Comparison of the Algorithms

In this subsection, the experimental parameters are as stated in the introduction

for Section 3.7. In particular, we fix the number of paths per job (K) to be 8. Table 3-5

shows the response time and rejection ratio of different algorithms.

US vs. NS. In Table 3-5, the algorithms with NS have comparable or much better performance than those with US. Furthermore, it has already been established in Section 3.7.1 that NS has much smaller algorithm execution times.

Best performance. The best performance in terms of both response time and the

rejection ratio is achieved by the RR+QF combination.

Suppose we fix the slice structure and the scheduling algorithm. Then, SR has worse

rejection ratio than RR because SR does not consider flow reassignment for the old jobs










Table 3-5. Performance comparison of different algorithms
             Light Load              Medium Load             Heavy Load
Algorithm    Response    Rejection   Response    Rejection   Response    Rejection
             Time (min)  Ratio       Time (min)  Ratio       Time (min)  Ratio
US+SR+LB     46.55       0.000       42.35       0.056       35.56       0.423
US+SR+QF     21.51       0.014       22.21       0.100       23.56       0.477
US+RR+LB     46.55       0.000       40.73       0.026       35.73       0.313
US+RR+QF     21.55       0.000       23.36       0.021       25.16       0.312
NS+SR+LB     49.60       0.000       43.83       0.021       28.74       0.237
NS+SR+QF     5.73        0.006       7.56        0.052       11.06       0.403
NS+RR+LB     49.60       0.000       43.88       0.011       30.16       0.168
NS+RR+QF     5.82        0.000       9.35        0.006       17.11       0.173


during admission control. Since response time increases with the admitted traffic load, an algorithm that leads to a lower rejection ratio can have a higher response time. This explains why RR often has a higher response time than the corresponding SR algorithm. Note that a lower rejection ratio does not necessarily lead to a higher traffic load, since some algorithms, such as RR, use the network capacity more efficiently.

Suppose we fix the slice structure and the AC algorithm. Then, LB does much worse

than QF in terms of response time, because LB tends to stretch the job until its requested

end time while QF tries to complete a job early. If RR is used for admission control, then

under high load, the different scheduling algorithms have a similar effect on the rejection

ratio of the next admission control operation. However, for medium load we notice that

the work-conserving nature of QF contributes to a lower rejection ratio than LB, which
tends to waste some bandwidth.
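To illustrate the two tendencies (this is a toy sketch, not the linear programs actually solved by our scheduler), consider allocating one job's demand across its admissible slices on a single path: a QF-like policy fills the earliest slices first, while an LB-like policy spreads the demand evenly up to the requested end time.

```python
# Toy illustration only; the actual algorithms solve linear programs over
# all jobs, paths and slices simultaneously.

def qf_like(demand, slice_caps):
    """Fill the earliest slices first, finishing the job as soon as possible."""
    alloc = []
    for cap in slice_caps:
        x = min(cap, demand)
        alloc.append(x)
        demand -= x
    return alloc

def lb_like(demand, slice_caps):
    """Spread the demand evenly over all admissible slices
    (assumes the even share fits within each slice's capacity)."""
    share = demand / len(slice_caps)
    return [min(cap, share) for cap in slice_caps]

caps = [10, 10, 10, 10, 10, 10]          # available capacity per slice (e.g., GB)
print("QF-like:", qf_like(30, caps))     # [10, 10, 10, 0, 0, 0]
print("LB-like:", lb_like(30, caps))     # [5.0, 5.0, 5.0, 5.0, 5.0, 5.0]
```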

Merit of SR and LB. Given the above discussion, one may be tempted to dismiss SR and
LB quickly. But as noted in Section 3.7.1, SR can be considerably faster than RR in
execution speed. Furthermore, SR is a candidate for conducting real-time admission
control at the instant a request is made, which is not possible with RR.

If SR is used, then LB often has smaller rejection ratio than QF. The reason is that

QF tends to highly utilize the network on earlier time slices, making it more likely to

reject small jobs requested for the near future. This is a legitimate concern because, in










practice, it is more likely that small jobs are requested to be completed in the near future

rather than the more distant future.

There is indication that the more heavy-tailed the file size distribution is, the larger
the difference in rejection ratio between LB and QF becomes. Evidence is shown in Fig. 3-6
for the light traffic load. As the Pareto parameter α approaches 1 while the average job size
is held constant, the chance of having a very large file increases.

at full network capacity, as in QF, such a large file can still congest the network for a long

time, causing more future jobs to be rejected. The correct thing to do, if SR is used, is to

spread out the transmission of a large file over its requested time interval.
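This tail behavior is easy to reproduce. The short NumPy sketch below samples file sizes from a Pareto distribution whose mean is held fixed while the shape parameter α varies; the mean of 100 (say, GB) is an arbitrary illustrative value, and for α close to 1 the empirical mean itself fluctuates strongly, which is part of the point.

```python
import numpy as np

# Sample Pareto-distributed file sizes with a fixed mean for several shape
# parameters alpha; the mean of 100 is an arbitrary illustration.
rng = np.random.default_rng(0)
mean_size = 100.0

for alpha in (1.1, 1.3, 1.5, 1.8):
    xm = mean_size * (alpha - 1.0) / alpha           # scale giving the desired mean
    sizes = (rng.pareto(alpha, 100_000) + 1.0) * xm  # classical Pareto(alpha, xm)
    print(f"alpha={alpha}: sample mean={sizes.mean():9.1f}  "
          f"99.9th pct={np.quantile(sizes, 0.999):10.1f}  max={sizes.max():12.1f}")
```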


Figure 3-6. Rejection ratio for different α's under SR.


To summarize the key points,

between the admission control methods, RR is much more efficient in utilizing the
network capacity, which leads to fewer jobs being rejected, while SR is suitable for
fast or real-time admission control,

if SR is used for admission control, then the scheduling method LB is superior to QF
in terms of the rejection ratio.

3.7.3 Single vs Multi-path Scheme

The effect of using multiple paths is shown in Fig. 3-7 for the light, medium and

heavy traffic loads. Here, NS is used along with the admission control scheme RR, and

scheduling objective QF. For every source-destination node pair, the K shortest paths

between them are selected and used by any job between the node pair. We vary K from 1

to 10, and find that multi-path often produces better response time and always produces










a lower rejection ratio. The amount of improvement depends on many factors such as

the traffic load, the version of the algorithm, and the network parameters. For light

load, no job is rejected. As the number of paths per job increases from 1 to 8, we get

a 35% reduction in response time. No further improvement is gained with more than 8

paths. For medium load, the response time is almost halved from 1 path to 10 paths. The

improvement in the rejection ratio is even more impressive, from roughly 13% down to below 1%. For

heavy load, there is no improvement in response time due to the significant reduction in

the rejection ratio; with multiple paths, many more jobs are admitted, resulting in a large

increase of the actual network load.
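A simple way to pre-compute such per-pair path sets is sketched below with NetworkX, whose shortest_simple_paths generator enumerates loopless paths in order of increasing length, in the spirit of Yen's algorithm [39]. The small topology and the choice of K here are made up for illustration and are not the networks or parameters used in our experiments.

```python
import itertools
import networkx as nx

def k_shortest_paths(G, src, dst, k):
    """Up to k loopless paths from src to dst, shortest first (Yen-style)."""
    return list(itertools.islice(
        nx.shortest_simple_paths(G, src, dst, weight="weight"), k))

# Small made-up topology, for illustration only.
G = nx.Graph()
G.add_weighted_edges_from([
    ("a", "b", 1), ("b", "c", 1), ("a", "d", 2), ("d", "c", 2), ("b", "d", 1),
])

# Pre-compute K candidate paths for every ordered node pair.
K = 3
paths = {(s, d): k_shortest_paths(G, s, d, K)
         for s in G for d in G if s != d}
print(paths[("a", "c")])   # e.g. [['a', 'b', 'c'], ['a', 'd', 'c'], ['a', 'b', 'd', 'c']]
```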



Figure 3-7. Single vs. multiple paths under different traffic load. A) Response time; B)
Rejection ratio.



Fig. 3-8 shows the response time (A and B) and the rejection ratio (C) under medium

traffic load for all algorithms. It is observed that the rejection ratio decreases significantly

for all algorithms as K increases. All algorithms that use LB for scheduling experience
an increase in response time due to the reduction in the rejection ratio. But this is not
a disappointing result, because it is not the goal of LB to reduce response time. All the
algorithms using QF for scheduling experience a decrease in response time. In spite of the
increased load, QF is able to pack more jobs into earlier slices by utilizing the
additional paths.












Figure 3-8. Single vs. multiple paths under medium traffic load for different algorithms.
A) Response time for QF; B) Response time for LB; C) Rejection ratio.



3.7.4 Comparison with Typical AC/Scheduling Algorithm

The next experiment compares our AC/scheduling algorithm with the typical incremental
AC algorithm proposed in most QoS architectures, which we will call the simple scheme.

The simple scheme decouples AC from routing, and assumes a single default path given


by the routing protocol. AC is conducted in real time upon the arrival of a request.

The requested resource is compared with the remaining resource in the network on the

default path. If the latter is sufficient, then the job is admitted. The remaining resource is

updated by subtracting from it what is allocated to the new request.
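A minimal sketch of this admission test is given below, assuming the remaining capacity is tracked per (link, slice) and that, for illustration, the new job's demand is spread evenly over its admissible slices. The data layout and names are hypothetical; the key point is that only the default path is examined and no existing reservation is touched.

```python
# Minimal sketch of the "simple scheme": one default path, no rescheduling
# of previously admitted jobs.  remaining[e][j] is the remaining capacity of
# link e on time slice j (a hypothetical bookkeeping structure).

def simple_admit(remaining, path_links, slices, demand):
    """Admit the job on its default path only if an even share of its demand
    fits on every link of the path in every admissible slice."""
    share = demand / len(slices)
    for e in path_links:
        for j in slices:
            if remaining[e][j] < share:
                return False                    # reject; nothing is changed
    for e in path_links:                        # commit the reservation
        for j in slices:
            remaining[e][j] -= share
    return True

remaining = {("a", "b"): [6.0, 6.0, 6.0], ("b", "c"): [4.0, 6.0, 6.0]}
ok = simple_admit(remaining, [("a", "b"), ("b", "c")], slices=[0, 1, 2], demand=12.0)
print(ok, remaining)   # True {('a','b'): [2.0, 2.0, 2.0], ('b','c'): [0.0, 2.0, 2.0]}
```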

Compared to our AC/scheduling algorithm, the simple scheme resembles our SR

admission control algorithm but operates only on one path. For bulk transfer with start

and end time constraints, the simple scheme still requires a scheduling stage, because

bandwidth needs to be allocated to the newly admitted job over the time slices on its

default path. Hence, we can apply the time slice structure and the scheduling objective









of LB or QF to the newly admitted job. However, unlike our scheduling algorithm, the

scheduling of the simple scheme does not reschedule the old jobs, that is, it does not

involve multi-path traffic re-assignment for the old jobs. Table 3-6 shows the rejection

ratio of the simple scheme with different slice structures and scheduling algorithms for

different traffic loads. This should be compared with Table 3-5. The simple scheme leads

to a considerably higher rejection ratio than all of our schemes involving SR, which in turn
have a higher rejection ratio than the corresponding schemes involving RR.

Table 3-6. Rejection ratio of the simple scheme
              Light Load    Medium Load    Heavy Load
US+SR+LB      0.010         0.345          0.781
US+SR+QF      0.031         0.308          0.792
NS+SR+LB      0.000         0.225          0.596
NS+SR+QF      0.026         0.249          0.642


3.7.5 Scalability of AC/Scheduling Algorithm

For this experiment, all the jobs request to start and end at the same time, and

the AC/scheduling algorithm runs only once. The objective is to determine how the

execution time of the algorithm scales with the number of simultaneous jobs in the system,

or the number of time slices used, or the network size. In this case, RR and SR are

indistinguishable. In the following results, we use the US+SR+QF scheme.

Fig. 3-9 shows the execution time of AC and scheduling as a function of the number

of jobs. The interval between the start and end times is partitioned into 24 uniform time

slices. It is observed that the increase in execution time is linear or slightly faster than

linear. Scaling up to thousands of simultaneous jobs appears to be possible.

Fig. 3-10 shows the execution time against the number of time slices for 100 requests.

The increase is linear. With respect to the execution time, the practical limit is several

hundred slices. This is sufficient if NS is used. But with US, the slice size may be too

coarse for practical use if one wishes to cover several months of advance reservation.
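A rough calculation makes the point; the three-month horizon below is chosen only for illustration.

```python
# Rough arithmetic behind the "several hundred slices" limit.
horizon_min = 90 * 24 * 60                                   # ~3 months, in minutes
print("uniform 5-min slices needed:", horizon_min // 5)      # 25920
print("slice size (min) if capped at 300 slices:", horizon_min / 300)  # 432.0 (~7.2 hours)
```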












Fig. 3-11 shows the scalability of the algorithm against the network size. For this, we

generate random networks with 100 to 1000 nodes in 100-node increments. The average

node degree is 5, 5, 7, 9, 9, 10, 10, 11, 11, and 11 respectively, so that the number of edges

also increases. The network link capacity ranges from 0.1 Gbps to 10 Gbps. There are

100 jobs to be admitted and scheduled. It is observed that the execution times increase

slightly faster than linear, indicating acceptable scaling behavior.


Figure 3-9. Scalability of the execution times with the number of jobs.




Figure 3-10. Scalability of the execution times with the number of time slices.










































Figure 3-11. Scalability of the execution times with the network size.










































CHAPTER 4
CONCLUSION

This study aims at contributing to the management and resource allocation of

research networks for data-intensive e-science collaborations. The need for large file

transfers is among the main challenges posed by such applications. The opportunities

lie in the fact that research networks are generally much smaller in size than the public

Internet, and hence, can afford a centralized resource management platform.

In Chapter 2, we formulate two linear programs, the node-arc form and the edge-path
form, for scheduling bulk file transfers with start and end time constraints. Our objective

is to maximize the throughput, subject to the link capacity constraints. The throughput

is a common scaling factor for all demand (file) sizes. This performance objective is

equivalent to finding a transfer schedule that carries all the demands and also minimizes

the worst-case link congestion across all links and time. It has the effect of balancing the

traffic load over the whole network and across time. This feature enables the network to

accept more future file transfer requests and in turn achieve higher long-term resource

utilization.
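Restated in the edge-path notation of Chapter 2 (with the start/end-time and nonnegativity constraints omitted here for brevity), the optimization and its congestion interpretation are:

```latex
% Sketch of the edge-path objective (Chapter 2); start/end-time and
% nonnegativity constraints are omitted here.
\begin{align*}
\max\; Z \quad \text{s.t.}\quad
  & \sum_{j}\sum_{p \in P(s_i,d_i)} f_{ip}(j) \;=\; Z\,D_i  && \forall i,\\
  & \sum_{i}\sum_{p \ni e} f_{ip}(j) \;\le\; C_e(j)\,\mathrm{LEN}(j) && \forall e,\ \forall j.
\end{align*}
% With the substitution $\tilde f = f/Z$, every demand $D_i$ is carried in
% full and the worst-case link utilization
%   $\max_{e,j} \sum_{i}\sum_{p \ni e} \tilde f_{ip}(j) \big/ \bigl(C_e(j)\,\mathrm{LEN}(j)\bigr)$
% is at most $1/Z$; maximizing $Z$ therefore minimizes the worst-case congestion.
```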

An important contribution of this thesis is towards the application of the edge-path

formulation to obtaining close to optimal throughput with a reasonable time complexity.

We have shown that the node-arc formulation, while giving the optimal throughput, is

computationally very expensive. The edge-path formulation can lead to drastic reduction

of the computation time by using a small number of pre-defined paths for each file-transfer

job. We discussed two path selection schemes, the shortest paths (S) and the shortest

disjoint paths (SD). Both schemes are capable of achieving near optimal throughput

with a small number of paths, e.g., 8 or fewer, for each file-transfer request. Both S and
SD perform well in a small network with few disjoint paths, e.g., the Abilene backbone,

while SD performs better than S in larger, well connected networks. In the evaluation

process, we also showed that having multiple paths per job yields much higher throughput










than having one shortest path per job. To handle the start and end time requirement

of advance reservation, we divide time into uniform time slices in our formulations. The

thesis showed that using finer slices leads to significant throughput increase at the expense

of longer execution time. It is therefore important to choose the right slice size that best

balances such a tradeoff.

In Chapter 3, we developed a cohesive framework of admission control and flow
scheduling algorithms with the following novel elements: advance reservation for bulk
transfer and minimum-bandwidth-guaranteed traffic, multi-path routing, and rerouting
and flow reassignment via periodic re-optimization.

In order to handle the advancement of time, we identify a suitable family of discrete

time-slice structures, namely, the congruent slice structures. With such a structure, we

avoid the combinatorial nature of the problem and are able to formulate several linear

programs as the core of our AC and scheduling algorithm. Our main algorithms apply to

all congruent slice structures, which are fairly rich. In particular, we describe the design

of the nested slice structure and its variants. They allow the coverage of a long segment

of time for advance reservation with a small number of slices without compromising

performance. They lead to reduced execution time of the AC/scheduling algorithm,

thereby making it practical. The following inferences were drawn from our experiments.

The algorithm can handle up to several hundred time slices within the time limit
imposed by practicality concerns. If NS is used, this number can cover months, even
years, of advance reservation with sufficient time slice resolution. If US is used,
either the duration of coverage must be significantly shortened or the time slice
must be kept very coarse. Either approach tends to degrade the algorithm's utility or
performance.

We have argued that, between the admission control methods, RR is much more
efficient than SR in utilizing the network capacity, thereby leading to fewer jobs
being rejected. On the other hand, SR is suitable for fast or real-time admission
control. If SR is used for admission control, then the scheduling method LB
is superior to QF in terms of rejection ratio. We also observed that multi-path
routing improves the network utilization dramatically.










*The execution time of our AC/scheduling algorithms exhibits acceptable scaling
behavior, i.e., it grows linearly or slightly faster than linearly with respect to the
network size, the number of simultaneous jobs, and the number of slices. We have high
confidence that they can be practical. The execution time can be further shortened
by using fast approximation algorithms, more powerful computers, and better
decomposition of the algorithms for parallel implementation.

Even in the limited application context of e-science, admission control and scheduling

is a large and complex problem. In this thesis, we have limited our attention to a set

of issues that we think are unique and important. This work can be extended in many

directions. To name just a few, one can develop and evaluate faster approximation

algorithms as in [3, 21, 24, 36]; address additional policy constraints for the network usage;

incorporate the discrete lightpath scheduling problem; develop a price-based bidding
system for making admission requests; or address more carefully the needs of the MBG
traffic class, such as minimizing the end-to-end delay.









REFERENCES


[1] P. Avery. Grid computing in high energy physics. In Proceedings of the International
Beauty 2003 Conference, Pittsburgh, PA, Oct. 2003.

[2] B. Awerbuch and F. T. Leighton. A simple local-control approximation algorithm
for multicommodity flow. In Proceedings of the 34th Annual IEEE Symposium on
Foundations of Computer Science, pages 459-468, 1993.

[3] B. Awerbuch and F. T. Leighton. Improved approximation algorithms for the
multi-commodity flow problem and local competitive routing in dynamic networks. In
Proceedings of the ACM Symposium on Theory of Computing, pages 487-496, 1994.

[4] D. Banerjee and B. Mukherjee. Wavelength-routed optical networks: linear
formulation, resource budgeting tradeoffs, and a reconfiguration study. IEEE/ACM
Transactions on Networking, 8(5):598-607, Oct. 2000.

[5] R. Bhatia, M. Kodialam, and T. V. Lakshman. Fast network re-optimization schemes
for MPLS and optical networks. Computer Networks: The International Journal of
Computer and Telecommunications Networking, 50(3), Feb. 2006.

[6] S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang, and W. Weiss. An architecture
for differentiated services. RFC 2475, IETF, Dec. 1998.

[7] E. Bouillet, J.-F. Labourdette, R. Ramamurthy, and S. Chaudhuri. Lightpath
re-optimization in mesh optical networks. IEEE/ACM Transactions on Networking,
13(2):437-447, 2005.

[8] R. Braden, D. Clark, and S. Shenker. Integrated services in the internet architecture:
An overview. RFC 1633, IETF, June 1994.

[9] A. Brodnik and A. Nilsson. A static data structure for discrete advance
bandwidth reservations on the Internet. Technical Report cs.DS/0308041,
Department of Computer Science and Electrical Engineering, Lulea University of
Technology, Sweden, 2003.

[10] J. Bunn and H. Newman. Data-intensive grids for high-energy physics. In F. Berman,
G. Fox, and T. Hey, editors, Grid Computing: Making the Global Infrastructure a
Reality. John Wiley & Sons, 2003.

[11] L.-O. Burchard. Source routing algorithms for networks with advance reservations.
Technical Report 2003-03, Communications and Operating Systems Group, Technical
University of Berlin, 2003.

[12] L.-O. Burchard. Networks with advance reservations: applications, architecture,
and performance. Journal of Network and Systems Management, 13(4):429-449, Dec.
2005.










[13] L.-O. Burchard and H.-U. Heiss. Performance evaluation of data structures
for admission control in bandwidth brokers. Technical Report TR-KBS-01-02,
Communications and Operating Systems Group, Technical University of Berlin, 2002.

[14] L.-O. Burchard and H.-U. Heiss. Performance issues of bandwidth reservation for
grid computing. In Proceedings of the 15th Symposium on Computer Architecture and
High Performance Computing (SBAC-PAD'03), 2003.

[15] L.-O. Burchard, J. Schneider, and B. Linnert. Rerouting strategies for networks
with advance reservations. In Proceedings of the First IEEE International Conference
on e-Science and Grid Computing (e-Science 2005), Melbourne, Australia, Dec. 2005.

[16] G. de Veciana, G. Kesidis, and J. Walrand. Resource management in wide-area
ATM networks using effective bandwidths. IEEE Journal on Selected Areas in
Communications, 13(6):1081-1090, Aug. 1995.

[17] T. DeFanti, C. de Laat, J. Mambretti, K. Neggers, and B. St. Arnaud. TransLight: A
global-scale LambdaGrid for e-science. Communications of the ACM, 46(11):34-41,
Nov. 2003.

[18] E. Mannie (Ed.). Generalized multi-protocol label switching (GMPLS) architecture.
RFC 3945, IETF, Oct. 2004.

[19] T. Erlebach. Call admission control for advance reservation requests with alternatives.
Technical Report TIK-Report Nr. 142, Computer Engineering and Networks
Laboratory, Swiss Federal Institute of Technology (ETH) Zurich, 2002.

[20] C. Curti et al. On advance reservation of heterogeneous network paths. Future
Generation Computer Systems, 21(4):525-538, Apr. 2005.

[21] L. K. Fleischer. Approximating fractional multicommodity flow independent of the
number of commodities. SIAM Journal on Discrete Mathematics, 13(4):505-520, 2000.

[22] I. Foster and C. Kesselman. The Grid: Blueprint for a New Computing Infrastructure.
Morgan Kaufmann, 1999.

[23] I. Foster, C. Kesselman, C. Lee, R. Lindell, K. Nahrstedt, and A. Roy. A
distributed resource management architecture that supports advance reservations
and co-allocation. In Proceedings of the International Workshop on Quality of Service
(IWQoS '99), 1999.

[24] N. Garg and J. Könemann. Faster and simpler algorithms for multi-commodity flow
and other fractional packing problems. In Proceedings of the 39th Annual Symposium
on Foundations of Computer Science, pages 300-309, November 1998.

[25] R. Guerin and A. Orda. Networks with advance reservations: The routing
perspective. In Proceedings of IEEE INFOCOM '99, 1999.










[26] E. He, X. Wang, and J. Leigh. A flexible advance reservation model for multi-domain
WDM optical networks. In Proceedings of GRIDNETS 2006, San Jose, CA, 2006.

[27] E. He, X. Wang, V. Vishwanath, and J. Leigh. AR-PIN/PDC: Flexible advance
reservation of intradomain and interdomain lightpaths. In Proceedings of IEEE
GLOBECOM 2006, 2006.

[28] F. P. Kelly, P. B. Key, and S. Zachary. Distributed admission control. IEEE
Journal on Selected Areas in Communications, 18(12), Dec. 2000.

[29] T. Lehman, J. Sobieski, and B. Jabbari. DRAGON: A framework for service
provisioning in heterogeneous grid networks. IEEE Communications Magazine, March
2006.

[30] L. Lewin-Eytan, J. Naor, and A. Orda. Routing and admission control in networks
with advance reservation. In Proceedings of the Fifth International Workshop on
Approximation Algorithms for Combinatorial Optimization (APPROX '02), 2002.

[31] L. Marchal, P. Vicat-Blanc Primet, Y. Robert, and J. Zeng. Scheduling network
requests with transmission window. Technical Report 2005-32, LIP, ENS Lyon,
France, 2005.

[32] D. E. McDysan and D. L. Spohn. ATM Theory and Applications. McGraw-Hill, 1998.

[33] H. B. Newman, M. H. Ellisman, and J. A. Orcutt. Data-intensive e-science frontier
research. Communications of the ACM, 46(11):68-77, Nov. 2003.

[34] E. Rosen, A. Viswanathan, and R. Callon. Multiprotocol label switching architecture.
RFC 3031, IETF, Jan. 2001.

[35] O. Schelén, A. Nilsson, J. Norrgård, and S. Pink. Performance of QoS agents
for provisioning network resources. In Proceedings of the IFIP Seventh International
Workshop on Quality of Service (IWQoS '99), London, UK, June 1999.

[36] F. Shahrokhi and D. W. Matula. The maximum concurrent flow problem.
Journal of the Association for Computing Machinery, 37(2):318-334, April 1990.

[37] T. Wang and J. Chen. Bandwidth tree: a data structure for routing in
networks with advanced reservations. In Proceedings of the IEEE International
Performance, Computing and Communications Conference (IPCCC 2002), April
2002.

[38] Q. Xiong, C. Wu, J. Xing, L. Wu, and H. Zhang. A linked-list
data structure for advance reservation admission control. In ICCNMC 2005, 2005.
Lecture Notes in Computer Science, Volume 3619/2005.

[39] J. Y. Yen. Finding the k shortest loopless paths in a network. Management Science,
17(11):712-716, 1971.

[40] J. Zheng and H. T. Mouftah. Routing and wavelength assignment for advance
reservation in wavelength-routed WDM optical networks. In Proceedings of the IEEE
International Conference on Communications (ICC), 2002.









BIOGRAPHICAL SKETCH

Kannan Rajah received his Master of Science in computer engineering from the University
of Florida in 2007. He pursued research in scheduling and optimization algorithms for bulk
file transfers under advisors Dr. Sanjay Ranka and Dr. Ye Xia. He has published a paper
titled Scheduling Bulk File Transfers with Start and End Times in the IEEE Network
Computing and Applications (NCA) 2007 proceedings. Kannan received his Bachelor of
Engineering (Hons.) in computer science and Master of Science (Hons.) in chemistry from
Birla Institute of Technology and Science (BITS)-Pilani, India in 2000.





PAGE 1

1

PAGE 2

2

PAGE 3

3

PAGE 4

IwouldliketoexpressmysinceregratitudetomyadvisorsDr.SanjayRankaandDr.YeXiafortheircontinuoussupportandencouragementthroughoutmyresearchwork.IamthankfultoDr.SartajSahniforbeingavitalmemberofmythesiscommitteeandprovidingvaluablecommentsonmythesis.IwouldalsoliketothankDr.RickCavanaughandDr.PaulAveryfromthePhysicsdepartmentforseveraldiscussionsontheUltralightproject. 4

PAGE 5

page ACKNOWLEDGMENTS ................................. 4 LISTOFTABLES ..................................... 7 LISTOFFIGURES .................................... 8 ABSTRACT ........................................ 10 CHAPTER 1INTRODUCTION .................................. 12 1.1RelatedWork .................................. 16 2CONCURRENTFILETRANSFERPROBLEM .................. 19 2.1ProblemDenition ............................... 19 2.2TheTimeSliceStructure ............................ 20 2.3Node-ArcForm ................................. 22 2.4Edge-PathForm ................................ 25 2.4.1ShortestPaths .............................. 26 2.4.2ShortestDisjointPaths ......................... 29 2.5Evaluation .................................... 30 2.5.1SingleSliceScheduling(SSS) ...................... 31 2.5.1.1Performancecomparisonoftheformulations ........ 32 2.5.1.2Comparisonofalgorithmexecutiontime .......... 33 2.5.1.3Algorithmscalabilitywithnetworksize ........... 35 2.5.1.4Averageresultsoverrandomnetworkinstances ...... 36 2.5.2MultipleSliceScheduling(MSS) .................... 38 2.5.2.1Performancecomparisonofdierentformulations ..... 38 2.5.2.2Comparisonofalgorithmexecutiontime .......... 40 2.5.2.3Optimaltimeslice ...................... 40 3ADMISSIONCONTROLANDSCHEDULINGALGORITHM .......... 42 3.1TheSetup .................................... 42 3.2TheTimeSliceStructure ............................ 43 3.3AdmissionControl ............................... 47 3.4SchedulingAlgorithm .............................. 50 3.5PuttingItTogether:TheACandSchedulingAlgorithm .......... 51 3.6Non-uniformSliceStructure .......................... 52 3.6.1NestedSliceStructure ......................... 53 3.6.2VariantofNestedSliceStructure ................... 56 3.7Evaluation .................................... 57 3.7.1ComparisonofAlgorithmExecutionTime .............. 60 5

PAGE 6

.............. 62 3.7.3SinglevsMulti-pathScheme ...................... 64 3.7.4ComparisonwithTypicalAC/SchedulingAlgorithm ......... 66 3.7.5ScalabilityofAC/SchedulingAlgorithm ................ 67 4CONCLUSION .................................... 70 REFERENCES ....................................... 73 BIOGRAPHICALSKETCH ................................ 77 6

PAGE 7

Table page 3-1Frequentlyusednotationsanddenitions ...................... 43 3-2Averageadmissioncontrol/schedulingalgorithmexecutiontime(s) ....... 60 3-3ComparisonofUSandNS(=5minutes) ..................... 62 3-4AveragenumberofslicesofUSandNS(=5minutes) .............. 62 3-5Performancecomparisonofdierentalgorithms .................. 63 3-6Rejectionratioofthesimplescheme ........................ 67 7

PAGE 8

Figure page 2-1Examplesofstringentrounding.Theunshadedrectanglesaretimeslices.Theshadedrectanglesrepresentjobs.Thetoponesshowtherequestedstartandendtimes.Thebottomonesshowroundedstartandendtimes. ......... 21 2-2Anetworkwith11nodesand13bi-directionallinks,eachofcapacity1GBsharedinbothdirections. .................................. 24 2-3TheAbilenenetworkwith11backbonenodes.AandBarestubnetworks. ... 31 2-4ZfordierentformulationsonAbilenenetworkusingSSS.A)121jobs;B)605jobs;C)1210jobs;D)6050jobs. .......................... 33 2-5Zfordierentformulationsonarandomnetworkwith100nodesusingSSS.A)100jobs;B)500jobs;C)1000jobs;D)5000jobs. ................. 34 2-6ExecutiontimefordierentformulationsontheAbilenenetworkusingSSS.A)121jobs;B)605jobs;C)1210jobs;D)6050jobs. ................. 35 2-7Executiontimefordierentformulationsonarandomnetworkwith100nodesusingSSS.A)100jobs;B)500jobs;C)1000jobs;D)5000jobs. ......... 35 2-8Randomnetworkwithk=8.Executiontimefordierentnetworksizes. .... 36 2-9AverageZfordierentformulationsonarandomnetworkwith100nodesand1000jobsusingSSS.Theresultistheaverageover50instancesoftherandomnetwork. ........................................ 37 2-10Averageexecutiontimefordierentformulationsonarandomnetworkwith100nodesand1000jobsusingSSS.Theresultistheaverageover50instancesoftherandomnetwork. ................................ 37 2-11Averagethroughputratiofordierentformulationsonarandomnetworkwith100nodesand1000jobsusingSSS.Theresultistheaverageover50instancesoftherandomnetwork. ............................... 37 2-12ZfordierentformulationsontheAbilenenetworkwith121jobsusingMSS.A)Timeslice=60min;B)Timeslice=30min;C)Timeslice=15min;D)Timeslice=10min. ................................. 39 2-13Zfordierentalgorithmsona100-noderandomnetworkwith100jobsusingMSS.A)Timeslice=60min;B)Timeslice=30min;C)Timeslice=15min;D)Timeslice=10min. ............................... 39 2-14ExecutiontimefordierentformulationsontheAbilenenetworkwith121jobsusingMSS.A)Timeslice=60min;B)Timeslice=30min;C)Timeslice=15min;D)Timeslice=10min. .......................... 40 8

PAGE 9

...................... 41 2-16TheAbilenenetworkwith121jobsandk=8.A)Zfordierenttimeslices;B)Executiontimefordierenttimeslicesizes. .................... 41 3-1Uniformtimeslicestructure ............................. 44 3-2Tworoundingpolicies.Theunshadedrectanglesaretimeslices.Theshadedrectanglesrepresentjobs.Thetoponesshowtherequestedstartandendtimes.Thebottomonesshowtheroundedstartandendtimes. ............. 46 3-3Two-levelnestedtime-slicestructure.=2,1=4and2=1.Theanchoredslicesetsshownarefort=;2and3,respectively.At-Most-Design.2=8. 56 3-4Three-levelnestedtime-slicestructure.=2,1=16,2=4and3=1.Theanchoredslicesetsshownarefort=;2and8,respectively.At-Most-Design.3=8,2=2. ................................ 57 3-5Three-levelnestedslicestructureAlmost-Variant.=2,1=16,2=4and3=1.Theanchoredslicesetsshownarefort=;2and3,respectively.3=8,2=2.Theshadedareasarealsoslices,butaredierentinsizefromanylevel-jslice,j=1,2or3. ............................ 58 3-6Rejectionratiofordierent'sunderSR. ..................... 64 3-7Singlevs.multiplepathsunderdierenttracload.A)Responsetime;B)Rejectionratio. .......................................... 65 3-8Singlevs.multiplepathsundermediumtracloadfordierentalgorithms.A)ResponsetimeforQF;B)ResponsetimeforLB;C)Rejectionratio. ....... 66 3-9Scalabilityoftheexecutiontimeswiththenumberofjobs. ............ 68 3-10Scalabilityoftheexecutiontimeswiththenumberoftimeslices. ........ 68 3-11Scalabilityoftheexecutiontimeswiththenetworksize. ............. 69 9

PAGE 10

Theadvancementofopticalnetworkingtechnologieshasenablede-scienceapplicationsthatoftenrequiretransportoflargevolumesofscienticdata.Insupportofsuchdata-intensiveapplications,wedevelopandevaluatecontrolplanealgorithmsforadmissioncontrolandschedulingofbulkletransfers.Eachletransferrequestismadeinadvancetothecentralnetworkcontrollerbyspecifyingastarttimeandanendtime.Ifadmitted,thenetworkguaranteestobeginthetransferafterthestarttimeandcompleteitbeforetheendtime.Weformulatetheschedulingproblemasaspecialtypeofthemulti-commodityowproblem.Tocopewiththestartandendtimeconstraintsofthele-transferjobs,wedividetimeintouniformtimeslices.Bandwidthisallocatedtoeachjoboneverytimesliceandisallowedtovaryfromslicetoslice.Thisenablesperiodicaladjustmentofthebandwidthassignmenttothejobssoastoimproveachosenperformanceobjective:throughputoftheconcurrenttransfers.Inthisthesis,westudytheeectivenessofusingmultipletimeslices,theperformancecriterionbeingthetrade-obetweenachievablethroughputandtherequiredcomputationtime.Furthermore,weinvestigateusingmultiplepathsforeachletransfertoimprovethethroughput.Weshowthatusingasmallnumberofpathsperjobisgenerallysucienttoachievenearoptimalthroughputwithapracticalexecutiontime,andthisissignicantlyhigherthanthethroughputofasimpleschemethatusessingleshortestpathforeachjob.Thethesiscombinesthefollowingnovelelementsintoacohesiveframeworkofnetworkresource 10

PAGE 11

11

PAGE 12

Theadvancementofopticalcommunicationandnetworkingtechnologies,togetherwiththecomputingandstoragetechnologies,isdramaticallychangingthewayshowscienticresearchisconducted.Anewterm,e-science,hasemergedtodescribethe\large-scalesciencecarriedoutthroughdistributedglobalcollaborationsenabledbynetworks,requiringaccesstoverylargescaledatacollections,computingresources,andhigh-performancevisualization".Well-quotede-science(andtherelatedgridcomputing[ 22 ])examplesincludehigh-energynuclearphysics[ 10 ],radioastronomy,geoscienceandclimatestudies. Theneedfortransportinglargevolumeofdataine-sciencehasbeenwell-argued[ 1 10 33 ].Forinstance,theHENPdataisexpectedtogrowfromthecurrentpetabytes(PB)(1015)toexabytes(1018)by2012to2015.Similarly,theLargeHadronCollider(LHC)facilityatCERNisexpectedtogeneratepetabytesofexperimentaldataeveryyear,foreachexperiment.Inadditiontothelargevolume,asnotedin[ 17 ],\e-scientistsroutinelyrequestschedulablehigh-bandwidthlow-latencyconnectivitywithknownandknowablecharacteristics".InsteadofrelyingonthepublicInternet,nationalgovernmentsaresponsoringanewgenerationofopticalnetworkstosupporte-science.ExamplesofsuchresearchandeducationnetworksincludetheInternet2relatedNationalLambdaRailandAbilenenetworksintheU.S.,CA*net4inCanada,andSURFnetintheNetherlands. Tomeettheneedofe-science,thisthesisexaminesadmissioncontrolandschedulingofhigh-bandwidthdatatransfersintheresearchnetworks.AdmissioncontrolandnetworkresourceallocationareamongthetoughestclassicalproblemsfortheInternetoranyglobal-scalenetworks(See[ 16 28 ]andtheirreferences.).Therearethreeimportantaspectsthatmotivateustore-examinethisissue,namely,specializedapplications,fewerqualityofservice(QoS)classesandmuchsmallernetworksize.ResearchnetworksaredierentfromthepublicInternetastheytypicallyhavelessthan103corenodesinthe 12

PAGE 13

Theobjectiveofthisthesisistodevelopandevaluatecontrolplanealgorithmsforadmissioncontrol(AC)andschedulingoflargeletransfers(alsoknownasjobs)overopticalnetworks.Weassumethatjobrequestsaremadeinadvancetoacentralnetworkcontroller.Eachrequestspeciesastarttime,anendtimeandthetotalle(demand)size.Sucharequestissatisedaslongasthenetworkbeginsthetransferafterthestarttimeandcompletesitbeforetheendtime.Thereis,however,exibilityinhowsoonthetransfershouldbecompleted.Itcanbecompletedassoonaspossibleor,alternatively,bestretcheduntiltherequestedendtime.Ouralgorithmsallowbothpossibilitiesandwewillexaminetheconsequences. Thenetworkcontrollerdeterminestheadmissibilityofthenewjobsbyaprocessknownasadmissioncontrol(AC).Anyadmittedjobwillbeguaranteedtheperformancelevelinaccordancewithitstracclass.Theuserofarejectedrequestmaysubsequentlymodifyandre-submittherequest.Oncethejobsareadmitted,thenetworkcontrollerhastheexibilityindecidingthemannerinwhichthelesaretransferred,i.e.,howthebandwidthassignmenttoeachjobvariesovertime.Thisdecisionprocessisknownasscheduling.Bulktransferisnotsensitivetothenetworkdelaybutmaybesensitivetothedeliverytime.Itisusefulfordistributinghighvolumesofscienticdata,whichcurrentlyoftenreliesongroundtransportationofthestoragemedia. InChapter 2 ,wefocusontheschedulingproblematasingleschedulinginstanceandcomparedierentvariationsofthealgorithm.Here,allletransferrequestsareknownin 13

PAGE 14

24 36 ].WhileMCFisconcernedwithallocatingbandwidthtopersistentconcurrentows,CFTPhastocopewiththestartandendtimeconstraintsofthejobs.Forthispurpose,ourformulationsforCFTPinvolvedividingtimeintouniformtimeslices(Section 2.2 )andallocatingbandwidthtoeachjoboneverytimeslice.Suchasetupallowsaneasyrepresentationofthestartandendtimeconstraints,bysettingtheallocatedbandwidthofajobtozerobeforethestarttimeandaftertheendtime.Moreimportantly,inbetweenthestartandendtimes,thebandwidthallocatedforeachjobisallowedtovaryfromtimeslicetotimeslice.Thisenablesperiodicaladjustmentofthebandwidthassignmenttothejobssoastoimprovesomeperformanceobjective. MotivatedbytheMCFproblem,thechosenobjectiveisthethroughputoftheconcurrenttransfers.Forxedtracdemand,itiswellknownthatsuchanobjectiveisequivalenttominimizingtheworst-caselinkcongestion,aformofnetworkloadbalancing[ 36 ].Abalancedtracloadenablesthenetworktoacceptmorefuturejobrequests,andhence,achievehigherlong-termresourceutilization.Inadditiontotheproblemformulation,othercontributionsofthisthesisareasfollows.First,inschedulingletransfersovermultipletimeslices,wefocusonthetradeobetweenachievablethroughputandtherequiredcomputationtime.Second,weinvestigateusingmultiplepathsforeachletransfertoimprovethethroughput.Wewillshowthatusingasmallnumberofpathsperjobisgenerallysucienttoachievenearoptimalthroughput,andthisisshowntobesignicantlyhigherthanthethroughputofasimpleschemethatusessingleshortestpath.Inaddition,thecomputationtimefortheformulationwithasmallnumberofpathsisconsiderablyshorterthanthatfortheoptimalscheme,whichutilizesallpossiblepathsforeachjob. 14

PAGE 15

3 ,wedescribeasuiteofalgorithmsforadmissioncontrolandschedulingandcomparetheirperformance.Here,theletransferrequestsarriveatdierenttimes;adecisionneedstobetakenatruntimeonwhichrequeststobeacceptedandscheduled.Again,thekeymethodologyisthediscretizationoftimeintoatimeslicestructuresothattheproblemscanbeputintothelinearprogrammingframework.Ahighlightofourschemeistheintroductionofnon-uniformtimeslices,whichcandramaticallyshortentheexecutiontimeoftheACandschedulingalgorithms,makingthempractical(Section 3.6 ). Oursystemhandlestwoclassesofjobs,bulkdatatransferandthosethatrequireaminimumbandwidthguarantee(MBG).ArequestfortheMBGclassspeciesastarttime,anendtimeandtheminimumbandwidththatthenetworkshouldguaranteethroughoutthedurationfromthestarttotheendtimes.Weassumethat,oncethebandwidthisgranted,theopticalnetworkcanbeconguredtoachievethedesiredlow-latencyfore-science.Suchserviceisusefulforrealtimerenderingorvisualizationoflargevolumesofdata.Inourframework,thealgorithmsforhandlingbulktransfercontainthemainingredientsofthealgorithmsforhandlingtheMBGclass.Forthisreason,wewillonlygivelighttreatmenttotheMBGclass. Thee-sciencesettingprovidesbothnewchallengesandnewpossibilitiesforresourcemanagementthatarenotconsideredintheclassicalsetting.Thenovelfeaturesofourworkareasfollows.First,bulktransferisusuallyregardedaslow-prioritybest-eorttrac,notsubjecttoadmissioncontrolinmostQoS-provisioningframeworkssuchasInterServ[ 8 ],DiServ[ 6 ],theATMnetwork[ 32 ],orMPLS[ 34 ].Thedeadline-basedACandschedulingfortheentiretransfer(noteachpacket)hasgenerallynotbeenconsideredintraditionalQoSframeworks.Second,ourschemeallowseachtransfersessiontotakemultiplepathsratherthanasinglepath.Third,therouteandbandwidthassignmentcanbeperiodicallyre-evaluatedandreassigned.Thisisincontrasttoearlierschemeswheresuchassignmentremainsxedthroughoutthelifetimeofthesession. 15

PAGE 16

Therestofthisthesisisorganizedasfollows.TherelatedworkisshowninSection 1.1 .Therearetwomaintechnicalcontributionsofthisthesis:CFTP,describedinChapter 2 andAdmissionControl/Schedulingalgorithmsdescribedin 3 .Inadditiontotheproposedformulations,wepresentarigorousdiscussionontheirexperimentalresultsinSection 2.5 and 3.7 ,respectively.Finally,theconclusionsaredrawninChapter 4 5 ]alsoadvocateperiodicre-optimizationtodeterminenewroutes 16

PAGE 17

Severalearlierstudies[ 9 11 13 15 35 37 38 ]consideradvancebandwidthreservationwithstartandendtimesatanindividuallinkfortracthatrequiresminimumbandwidthguarantee(MBG).Theconcernistypicallyaboutdesigningecientdatastructuresforkeepingtrackofandqueryingbandwidthusageatthelinkondierenttimeintervals.Newjobsareadmittedoneatatimewithoutchangingthebandwidthassignmentoftheexistingjobsinthesystem.Theadmissionofanewjobisbasedontheavailabilityoftherequestedbandwidthbetweenitsstarttimeandendtime.[ 11 14 19 25 37 ]and[ 15 ]allgobeyondsingle-linkadvancereservationandtacklethemoregeneralpath-ndingproblemfortheMBGtracclass,buttypicallyonlyforthenewrequests,oneatatime.Theroutesandbandwidthoftheexistingjobsareunchanged.[ 12 ]discussesarchitecturalandsignaling-protocolissuesaboutadvancereservationofnetworkresources.[ 30 ]considersanetworkwithknownroutinginwhicheachadmittedjobderivesaprot.Itgivesapproximationalgorithmsforadmittingasubsetofthejobssoastomaximizethetotalprot. [ 14 25 ]touchuponadvancereservationforbulktransfer.[ 14 ]proposesamalleablereservationscheme.Theschemecheckseverypossibleintervalbetweentherequestedstarttimeandendtimeforthejobandtriestondapaththatcanaccommodatetheentirejobonthatinterval.Itfavorsintervalswithearlierdeadlines.[ 25 ]studiesthecomputationcomplexityofarelatedpath-ndingproblemandsuggestsanapproximationalgorithm.[ 31 ]startswithanadvancereservationproblemforbulktransfer.Then,theproblemisconvertedintoabandwidthallocationproblematasingletimeinstancetomaximizethejobacceptancerate.ThisisshowntobeanNP-hardcombinatorialproblem.Heuristic 17

PAGE 18

4 7 40 ].Theyarecomplementarytoourstudy. Inthecontrolplane,[ 27 ]and[ 26 ]presentarchitecturesforadvancereservationofintraandinterdomainlightpaths.TheDRAGONproject[ 29 ]developscontrolplaneprotocolsformulti-domaintracengineeringandresourceallocationonGMPLS-capable[ 18 ]opticalnetworks.GARA[ 23 ],thereservationandallocationarchitectureforthegridcomputingtoolkit,Globus,supportsadvancereservationofnetworkandcomputingresources.[ 20 ]adaptsGARAtosupportadvancereservationoflightpaths,MPLSpathsandDiServpaths. 18

PAGE 19

Inourframework,thenetworkresourceismanagedbyacentralnetworkcontroller.Filetransferrequestsarrivefollowingarandomprocessandaresubmittedtothenetworkcontroller.Thenetworkcontrollerveriesadmissibilityofthejobsthroughaprocessknownasadmissioncontrol(AC).Admittedjobsarethereafterscheduledwithaguaranteeofthestartandendtimeconstraints.Chapter 3 isdevotedtoadiscussiononhowtheACandschedulingalgorithmsworktogether.Inthischapter,wefocusontheschedulingproblematasingleschedulinginstanceandcomparedierentvariationsofthealgorithm.ThereisnoACphase. Morespecically,wehavethefollowingschedulingproblem.Ataschedulinginstancet,wehaveanetworkG=(V;E)andthelinkcapacityvectorC=(Ce)e2E.Thenetworkmayhavesomeon-goingletransfers;itmayalsohavesomejobsthatwereadmittedearlierbutyettobestarted.ThecapacityCisunderstoodastheremainingcapacity,obtainedbyremovingthebandwidthcommittedtoallunnishedjobsadmittedpriorto 19

PAGE 20

Thetimeslicestructureisusefulforbulkletransfers,whereinarequestissatisedaslongasthenetworktransferstheentirelebetweenthestartandendtime.Suchjobsoerahighdegreeofexibilitytothenetworkinmodulatingthebandwidthassignmentacrosstimeslices.Thisisincontrasttoapplicationsthatrequireminimumbandwidthguarantee,forwhichthenetworkmustmaintaintheminimumrequiredbandwidthfromthestarttotheendtime. 20

PAGE 21

^Si=maxft;ETt(It(Si))g:(2.1) Forroundingoftherequestedendtime,wefollowastringentpolicywhereintheendtimeisroundeddown,subjecttotheconstraintthat^Ei>^Si.Thatis,therehastobeatleastone-sliceseparationbetweentheroundedstartandendtime.Otherwise,thereisnowaytoschedulethejob.Morespecically, ^Ei=8>>>>>><>>>>>>:ETt(It(^Si)+1)ifSTt(It(Ei))^SiEielseifETt(It(Ei))=EiSTt(It(Ei))otherwise.(2.2) Fig. 2-1 showsseveralroundingexamples.Inpractice,severalvariationsofthisstrategycanbeadopted.Fromthedenitionofuniformslices,theslicesetanchoredatt,Gt,containsinnitelymanyslices.Ingeneral,onlyanitesubsetofGtisusefultous.LetMtbetheindexoflastsliceinwhichtheroundedendtimeofsomejobfalls.Thatis,Mt=It(maxi2J^Ei).LetLtGtbethecollectionoftimeslicesf1;2;:::;Mtg.ItissucienttoconsiderLtforscheduling. Figure2-1. Examplesofstringentrounding.Theunshadedrectanglesaretimeslices.Theshadedrectanglesrepresentjobs.Thetoponesshowtherequestedstartandendtimes.Thebottomonesshowroundedstartandendtimes. 21

PAGE 22

24 36 ].Weconsiderboththenode-arcformandtheedge-pathformoftheproblem. Condition( 2.4 )istheowconservationequationthatisrequiredtoholdoneverytimeslicej2Lt.Itsaysthat,foreachjobi,ifnodelisneitherthesourcenodeforjobinoritsdestination,thenthetotalowofjobithatentersnodelmustbeequaltothe 22

PAGE 23

2.5 )saysthat,foreachjob,thetotalsupply(or,equivalently,totaldemand),whensummedoveralltimeslices,mustbeequaltoZtimesthejobsize,whereZisthevariabletobemaximized.Condition( 2.6 )saysthatthecapacityconstraintsmustbesatisedforalledgesoneverytimeslice.Notethattheallocatedrateonlink(l;k)forjobionslicejisfi(l;k)(j)=LENt(j),whereLENt(j)isthelengthofslicej.Therateisassumedtobeconstantontheentireslice.Here,C(l;k)(j)isthecapacityoflink(l;k)onslicej.Inalltheexperimentsinthispaper,eachlinkcapacityisassumedtobeaconstantacrossthetimeslices,i.e.,C(l;k)(j)=C(l;k)forallj.But,theformulationallowsthemoregeneraltime-varyinglinkcapacity.( 2.7 )isthestartandendtimeconstraintforeveryjoboneverylink.Theowmustbezerobeforetheroundedstarttimeandaftertheroundedendtime. Thelinearprogramasks,whatisthelargestconstantscalingfactor^Zsuchthat,aftereveryjobsizeisscaledby^Z,thelinkcapacityconstraints,aswellasthestartandendtimeconstraints,arestillsatisedforalltimeslices?Lettheoptimalowvectorforthelinearprogrambedenotedby^f=(^fi(l;k)(j))i;l;k;j.If^Z1,thentheow^Z^fcanstillbehandledbythenetworkwithoutthelinkcapacityconstraintsbeingviolated.If,inpractice,theowvector^Z^fisusedinsteadof^f,theletransfercanbecompletedfaster.If^Z<1,itisnotpossibletosatisfythedeadlineofallthejobs.However,ifthelesizesarereducedbyacommonfactor^ZDiforalli,then,therequestscanallbesatised. Thereexistsadierentperspectivetoouroptimizationobjective.Maximizingthethroughputoftheconcurrentowisequivalenttondingaconcurrentowthatcarriesallthedemandsandalsominimizestheworst-caselinkutilization,i.e.,linkcongestion.Toseethis,wemakethefollowingsubstitution,~f=f=Z.Forourcase,thelargestlinkutilizationoveralllinksandacrossalltimeslicesisminimized.Theresultisthatthetracloadisbalancedoverthewholenetworkandacrossalltimeslices.Thisfeature 23

PAGE 24

TheproblemformulatedhereisrelatedtotheMCFproblem.Thedierenceisthat,intheMCFproblem,thetimedimensiondoesnotexist.OurproblembecomesexactlytheMCFproblemifMt=1(i.e.,thereisonlyonetimeslice)andiftheconstraintsforthestartandendtimesofthejobs,( 2.7 ),areremoved.IntheMCFproblem,thevariableZiscalledthethroughputoftheconcurrentow.TheMCFproblemhasbeenstudiedinasequenceofpapers,e.g.,[ 2 3 21 24 36 ].Severalapproximationalgorithmshavebeenproposed,whichrunfasterthantheusualsimplexorinteriorpointmethods.Forourproblem,wecanreplicatethegraphGintoasequenceoftemporalgraphsrepresentingthenetworkatdierenttimeslicesandusevirtualsourceanddestinationnodestoconnectthem.WethenhaveanMCFproblemonthenewgraphandwecanapplythefastapproximationalgorithmstothisMCFinstance. Anetworkwith11nodesand13bi-directionallinks,eachofcapacity1GBsharedinbothdirections. 2-2 withtwoletransferrequests,J1:(0;1;9;8000;0;60)andJ2:(0;3;6;1000;0;60).Here,wehaveusedour6-tupleconventiontorepresenttherequests.Bothjobsrequestsarriveattime0.Thestartandendtimesarebothatt=0andt=60,respectively.ThejobsizeismeasuredinGBandthetime 24

PAGE 25

Thenumberofvariablesrequiredtosolvethenode-arcmodelis(jEjjLtjjJj),because,foreveryjob,thereisanarcowvariableassociatedwitheverylinkforeverytimeslice.Theresultingproblemiscomputationallyexpensiveevenwiththefastapproximationalgorithms.InSection 2.4 ,wewillconsidertheedge-pathformoftheproblem,whereeveryjobisassociatedwithasetofpath-owvariablescorrespondingtoasmallnumberofpaths,foreverytimeslice. LetPt(si;di)bethesetofallowedpathsforjobi(fromthesourcenodesitothedestinationdi).Letfip(j)bethetotalamountofdatatransferonpathp2Pt(si;di)thatisassignedtojobi2Jonthetimeslicej2Lt.Wewilllooselycallittheowforjobionpathpontimeslicej. 25

PAGE 26

(2.13)fip(j)0;8i2J;8j2Lt;8p2Pt(si;di): Condition( 2.10 )saysthat,foreveryjob,thesumofalltheowsassignedonalltimeslicesforallallowedpathsmustbeequaltoZtimesthejobsize,whereZisthevariabletobemaximized.( 2.11 )saysthatthecapacityconstraintsmustbesatisedforalledgesoneverytimeslice.Notethattheallocatedrateonpathpforjobionslicejisfip(j)=LENt(j),whereLENt(j)isthelengthofslicej.Ce(j)isthecapacityoflinkeonslicej.( 2.13 )isthestartandendtimeconstraintforeveryjoboneveryallowedpath.Theowmustbezerobeforetheroundedstarttimeandaftertheroundedendtime. Theedge-pathformulationallowsanexplicitlydenedcollectionofpathsforeachle-transferjobandowreservationsaredoneonlyonthesepaths.Thenumberofvariablesrequiredtosolvetheedge-pathmodelis(kjLtjjJj),wherekisthemaximumnumberofpathsallowedforeachjob.Wewillexaminetwopossiblecollectionsofpaths,k-shortestpathsandk-shortestdisjointpaths. 39 ]togeneratek-shortestpaths.Thisalgorithmisnotthefastestone,butiseasytoimplement.Also,inSection 2.4.2 ,wewilluseitasa 26

PAGE 27

1. ComputetheshortestpathusingDijkstra'salgorithm.Thispathiscalledtheithshortestpathfori=1.SetB=;. 2. GenerateallpossibledeviationstotheithshortestpathandaddthemtoB.PicktheshortestpathfromBasthe(i+1)thshortestpath. 3. Repeatstep2)untilkpathsaregeneratedortherearenomorepathspossible(i.e.,B=;.). Givenasequenceofpathsp1,p2,...,pkfromnodestod,thedeviationtopkatitsjthnodeisdenedasanewpath,p,whichistheshortestpathunderthefollowingconstraint.First,poverlapswithpkuptothejthnode,butthe(j+1)thnodeofpcannotbethe(j+1)thnodeofpk.Inaddition,ifpalsooverlapswithpluptothejthnode,foranyl=1;2;:::;k1,thenthe(j+1)thnodeofpcannotbethe(j+1)thnodeofpl. 2-2 .Thecaseofk=1correspondstousingthesingleshortestpathforeachjob.Letpijdenotethejthshortestpathforjobi.Theshortestpathsare,p11:111109p21:3276 Flowreservationforeachjobisgivenbyf1p11(1)=3600f2p21(1)=450 Thethroughputis0:45,whichisonlyhalftheoptimalvalueobtainedfromthenode-arcformulation. 27

PAGE 28

ThetotalowforJ1isf1p11(1)+f1p12(1)=3600.ThetotalowforJ2isf2p21(1)+f2p22(1)=450.Thethroughputis0:45. Fromk=1to2,wedonotndanythroughputimprovement.ThisisbecauseforJ1,thesecondpathsharesanedgewiththerst,andhence,thetotalowreachingthedestinationnodeislimitedto3600.Byincreasingthenumberofpathsperjobfrom2to4,wegetthefollowingresults.p11:111109p21:3276p12:12109p22:3456p13:12789p23:32109876p14:111102789p24:32111109876f1p11(1)=3600f2p21(1)=0f1p12(1)=0f2p22(1)=900f1p13(1)=3600f2p23(1)=0f1p14(1)=0f2p24(1)=0 ThetotalowforJ1is7200;thetotalowforJ2is900.Thethroughputis0:9.Thisisequaltotheoptimalvalueachievedbythenode-arcformulation. 28

PAGE 29

Thealgorithmforndingthek-shortestdisjointpathsfromnodestodisstraightforwardifsuchkpathsindeedexist.GiventhedirectedgraphG,intherststepofthealgorithm,wendtheshortestpathfromnodestod,andthenweremovealltheedgesonthepathfromthegraphG.Inthenextstep,wendtheshortestpathintheremaininggraph,andthenremovethoseedgesontheselectedpathtocreateanewremaininggraph.Thealgorithmcontinuesuntilwendkpaths. Whenthenumberofdisjointpathsislessthank,werstndallthedisjointpathsandthenresorttothefollowingheuristicstoselectadditionalpathssothatthetotalnumberofselectedpathsisk.LetSbethelistofselecteddisjointpaths. 1. SetStobeanemptylist.SetB=;. 2. FindallthedisjointpathsbetweenthesourcesanddestinationdandappendthemtoSintheordertheyarefound.LetpbetherstpathinthelistS. 3. GeneratethedeviationsforpandaddthemtoB. 4. SelectthepathinBthathastheleastnumberofoverlappededgeswiththepathsinS,andappendittoS. 5. SetptobethenextpathinthelistS. 6. Repeatfromstep3)untilScontainskpathsortherearenomorepathspossible(i.e.,B=;). Intheabovesteps,thesetBcontainsshortpaths,generatedfromthedeviationsofsomealreadyselecteddisjointpaths.ThenewlyselectedpathfromBhastheleastoverlapwiththealreadyselectedones.Itshouldbenotedthatwhilethisapproachreducestheoverlapbetweenthekpathsofeachjob,itdoesnotguaranteethesameforpathsacrossjobs.Thisisbecause,theaveragepathlengthofk-shortestdisjointpathstendstobe 29

PAGE 30

ThetotalowforJ1isf1p11(1)+f1p12(1)=7200.ThetotalowforJ2isf2p21(1)+f2p22(1)=900.Thethroughputis0:9.Hence,theoptimalthroughputisachievedwithk=2. TheexperimentswereconductedonrandomnetworksandAbilene,anInternet2high-performancebackbonenetwork(Fig. 2-3 ).Therandomnetworkshavebetween100and1000nodeswithavaryingnodedegreeof5to10.OurinstanceoftheAbilenenetworkconsistsofabackbonewith11nodes,inwhicheachnodeisconnectedtoarandomlygeneratedstubnetworkofaveragesize10.Thebackbonelinksareeach10GB.Theentirenetworkhas121nodesand490links.WeusethecommercialCPLEXpackageforsolvinglinearprogramsonIntel-basedworkstations 30

PAGE 31

Whileconguringthesimulationenvironment,wecanignoretheconnectionsetup(pathsetupfortheedge-pathform)timeforthefollowingreasons.First,thesmallnetworksizeallowsustopre-computetheallowedpathsforeverypossiblerequest.Second,intheactualoperation,theschedulingalgorithmrunseveryfewminutesoreverytensofminutes.Thereisplentyoftimetore-congurethecontrolparametersforthepathsinthesmallresearchnetwork. Figure2-3. TheAbilenenetworkwith11backbonenodes.AandBarestubnetworks. 31

PAGE 32

2-4 showsthethroughputimprovementontheAbilenenetworkwithincreasingnumberofpathsfortheshortest(S)andshortestdisjoint(SD)schemes,respectively.Theoptimalthroughputobtainedfromthenode-arc(NA)formisshownasahorizontalline.SimilarplotsareshowninFig. 2-5 forarandomnetworkwith100nodes Insummary,theoptimalthroughputobtainedfromourmulti-pathschemeissignicantlyhigherthanthatofasimplescheme,whichusessingleshortestpathforeveryjob.Throughputimprovementbyanorderofmagnitudecanbeexpectedwithonlyasmallnumberofpaths.Theperformancegainssaturateataround8pathsinmostofoursimulation-theexactnumberingeneraldependsonthetopologyandactualtrac. 2.4 ,thepathsfordierentjobshaveahigherchancetooverlapintheSDcase,potentiallycausingthroughputdegradation.Inawell-connectedrandomnetwork,disjointornearly 2-5 (d)andinseveralsubsequentguresbecausetheproblemsizebecomestoolargetobesolvedonourworkstationswith2to4GBofmemory,mainlyduetothelargememoryrequirement. 32

PAGE 33

Insummary,weexpectSDtobepreferableinlarge,well-connectednetworks.Inasmallnetworkwithfewdisjointpaths,theperformanceofSandSDaregenerallycomparable,withSsometimesbeingbetter.Finally,thedierencebetweenSandSDdisappearsquicklyasthenumberofpathsperjobincrease. A B C D Figure2-4. ZfordierentformulationsonAbilenenetworkusingSSS.A)121jobs;B)605jobs;C)1210jobs;D)6050jobs. 2-6 andFig. 2-7 showtheexecution 33

PAGE 34

B C D Figure2-5. Zfordierentformulationsonarandomnetworkwith100nodesusingSSS.A)100jobs;B)500jobs;C)1000jobs;D)5000jobs. timefortheAbilenenetworkandforarandomnetworkwith100nodes,respectively WeobservethattheexecutiontimeforSorSDincreasesroughlylinearly,whenthenumberofpermittedpathsperjobissmall(upto16pathsinthegures).Withseveralhundredjobsormore,eventhelongestexecutiontime(at16paths)ismuchshorterthanthatforthenode-arccase,byanorderofmagnitude.Weexpectthisdierenceinexecutiontimetoincreasewithmorejobsandlargernetworks. InFig. 2-6 CandD,weseethattheschedulingtimeforthenode-arcformulationapproachesorexceedstheactual60-minutetransfertimeoftheles.Ontheotherhand,theedge-pathformulationwithasmallnumberofallowedpaths,ismuchmorescalablewithtracintensity.Fastapproximationalgorithmsin[ 2 3 21 24 36 ],ifused,should 34

PAGE 35

A B C D Figure2-6. ExecutiontimefordierentformulationsontheAbilenenetworkusingSSS.A)121jobs;B)605jobs;C)1210jobs;D)6050jobs. A B C D Figure2-7. Executiontimefordierentformulationsonarandomnetworkwith100nodesusingSSS.A)100jobs;B)500jobs;C)1000jobs;D)5000jobs. 2-8 showsthevariationofthealgorithmexecutiontimewithnetworksize.Inoursimulations,weschedule100jobsusingSSSforaperiodof60minutes.The 35

PAGE 36

Figure2-8. Randomnetworkwithk=8.Executiontimefordierentnetworksizes. 2.5.1 fora100-noderandomnetworkandplottedthedatapointsaveragedover50networkinstances.Duetospacelimitation,wepresentonlytheresultsfor1000jobsinFig. 2-9 .ThisshouldbecomparedwithFig. 2-5 C,whichisforasinglenetworkinstance.BesidesthefactthatthecurvesinFig. 2-9 aresmoother,thetwoguresshowsimilarcharacteristics.AlltheobservationsthatwehavemadeaboutFig. 2-5 CremainessentiallytrueforFig. 2-9 .Weshouldpointoutthat,inordertoruntheexperimentonmanynetworkinstancesinareasonableamountoftime,thenetworksforFig. 2-9 weregeneratedwithfewerlinksthanthatforFig. 2-5 C.Thisaccountsforthedierenceinthethroughputvaluesbetweenthetwocases.Finally,thecorrespondingaverageexecutiontimeisshowninFig. 2-10 onsemilogscale. WefurtherconrmedthevalidityofourdataandresultsbycomputingthecondenceintervalofthemeanvaluesplottedinFig. 2-9 .Forinstance,themeanandstandard 36

PAGE 37

AverageZfordierentformulationsonarandomnetworkwith100nodesand1000jobsusingSSS.Theresultistheaverageover50instancesoftherandomnetwork. Figure2-10. Averageexecutiontimefordierentformulationsonarandomnetworkwith100nodesand1000jobsusingSSS.Theresultistheaverageover50instancesoftherandomnetwork. deviationofthethroughputfornode-arcformulationis0.1489and0.0807,respectively.The95%condenceintervalforthemeanis0:0188aroundthemean.Thisisagoodindicatoroftheaccuracyofourresults. Inaddition,wealsocomputedtheaverageofthethroughputratioofSandSDschemestothenode-arcformulation.InFig. 2-11 ,bothSandSDschemesachievenearly80%oftheoptimalthroughputbyswitchingfromsinglepathto2paths.Thethroughputreaches99%with8paths.Fork4,SDperformsbetterthanS.TheplotisconsistentwithourearlierresultsshowninFig. 2-9 Figure2-11. Averagethroughputratiofordierentformulationsonarandomnetworkwith100nodesand1000jobsusingSSS.Theresultistheaverageover50instancesoftherandomnetwork. 37

PAGE 38

2-12 showsthethroughputimprovementfortheAbilenenetworkwithincreasingnumberofpathsfortheSandSDschemes,respectively.Thethroughputofthenode-arcformulationisshownasaatline. Foreachxedslicesize,thegeneralbehaviorofthethroughputfollowsthesamepatternastheSSScasediscussedinSection 2.5.1.1 .Inparticular,thethroughputimprovementissignicantasthenumberofpathsperjobdecreases.InFig. 2-12 ,weobservemorethan50%throughputincreasewith4orfewerpathsandnearly30%to50%increasewith8ormorepaths.Whencomparingacrossdierentslicesizes,weseethatsmallerslicesizeshaveathroughputadvantage,becausetheyleadtomoreaccuratequantizationoftime.Havingmoretimeslicesinaxedschedulingintervaloersmoreopportunitiestoadjusttheowassignmenttothejobs.InFig. 2-12 ,thethroughputvaluesat16pathsperjobis9for10-minslicesizeand6for60-minslicesize.Thisshowsthebenetofhavingane-grainedslicesize,sinceinthisexperimentalsetup,16pathsaresucientforSandSDschemestoreachtheoptimalthroughput.Weobservedmoresignicantthroughputimprovementfromusingsmallertimeslicesinothersettings.Forinstance,with603jobs,thethroughputobtainedfrom10-minslicesizeisnearlytwicethethroughputfrom60-minslicesize. Fig. 2-13 showssimilarresultsfora100-noderandomnetworkwith100jobs.Themaximumthroughputat16pathsisnearlythesameforallcases.However,forsituationswithasmallnumberofpathsperjob,smallertimeslicesizeshaveathroughput 38


Figure 2-12. Z for different formulations on the Abilene network with 121 jobs using MSS. A) Time slice = 60 min; B) Time slice = 30 min; C) Time slice = 15 min; D) Time slice = 10 min.

Figure 2-13. Z for different algorithms on a 100-node random network with 100 jobs using MSS. A) Time slice = 60 min; B) Time slice = 30 min; C) Time slice = 15 min; D) Time slice = 10 min.


Fig. 2-14 and Fig. 2-15 show the execution time for the Abilene network with 121 jobs and for a 100-node random network with 100 jobs, respectively. For each fixed time slice size, we continue to observe the linear or faster increase of the execution time as the number of paths increases in the S and SD schemes. Again, the execution time for the node-arc form is much greater than that for the S and SD cases; in most cases, too large to be observed from our experiments. Finally, the throughput advantage of using smaller slice sizes is achieved at the expense of significantly longer execution time.

Figure 2-14. Execution time for different formulations on the Abilene network with 121 jobs using MSS. A) Time slice = 60 min; B) Time slice = 30 min; C) Time slice = 15 min; D) Time slice = 10 min.

Fig. 2-16 helps to identify a suitable time slice size for which the throughput is high and the execution time is acceptable. We observe that the throughput begins to saturate when the time slice size is 15 minutes and the execution time is under half a minute. Note the sharp rise of the execution time as the slice size decreases. It is therefore essential to choose an appropriate slice size.


Figure 2-15. Execution time for different formulations on a 100-node random network with 100 jobs using MSS. A) Time slice = 60 min; B) Time slice = 30 min; C) Time slice = 15 min; D) Time slice = 10 min.

Figure 2-16. The Abilene network with 121 jobs and k = 8. A) Z for different time slice sizes; B) Execution time for different time slice sizes.


Frequently used notations are listed in Table 3-1. The notations for the network and job requests are the same as discussed in Section 2.1. In addition, a request from the MBG class is a 6-tuple (A_i, s_i, d_i, B_i, S_i, E_i), where B_i is the requested minimum bandwidth on the interval [S_i, E_i]. It may optionally specify a maximum bandwidth, but we will ignore this option in the presentation.

The network controller performs admission control (AC) by evaluating the available network capacity to satisfy new job requests. It admits only those jobs whose required performance can be guaranteed by the network and rejects the rest. The network controller also performs file transfer scheduling for all admitted jobs, which determines how each job is transferred over time, i.e., how much bandwidth is allocated to each path of the job at every time instance.

In the basic scheme, AC and scheduling are done periodically after every δ time units, where δ is a positive number. More specifically, at time instances kδ, k = 1, 2, ..., the controller collects all the new requests that arrived on the interval ((k−1)δ, kδ], makes the admission control decision, and schedules the transfer of all admitted jobs. Both AC and scheduling must take into account the old jobs, i.e., those jobs that were admitted earlier but remain unfinished. The value of δ should be small enough so that new job requests can be checked for admission and scheduled as early as possible.
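The two request classes can be captured by a pair of small record types. The following is a minimal sketch in Python; the class and field names (BulkRequest, MBGRequest, demand, min_bandwidth, and so on) are illustrative rather than taken from the thesis, and the bulk 6-tuple is assumed to carry the demand D_i in place of the bandwidth B_i.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class BulkRequest:
        """Bulk transfer request, assumed to be (A_i, s_i, d_i, D_i, S_i, E_i)."""
        arrival_time: float        # A_i, when the request is made
        source: str                # s_i
        destination: str           # d_i
        demand: float              # D_i, total data volume to transfer
        start_time: float          # S_i, earliest allowed start
        end_time: float            # E_i, deadline

    @dataclass
    class MBGRequest:
        """Minimum-bandwidth-guaranteed request (A_i, s_i, d_i, B_i, S_i, E_i)."""
        arrival_time: float
        source: str
        destination: str
        min_bandwidth: float                    # B_i, guaranteed on [S_i, E_i]
        start_time: float
        end_time: float
        max_bandwidth: Optional[float] = None   # optional cap, ignored in the text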


Table 3-1. Frequently used notations and definitions

D_i                Demand (file size) of job i
S_i, Ŝ_i           Requested and rounded start times of job i
E_i, Ê_i           Requested and rounded end times of job i

In the following, assume t = kδ.

M_k                Index of the last slice in which the rounded end time of some job falls
ST_k(i), ET_k(i)   Start and end times of slice i
LEN_k(i)           Length of slice i
I_k(t)             Index of the slice that time t falls in
P_k(s, d)          Allowable paths from node s to d
R_k(i)             Remaining demand of job i
f_i(p, j)          Total flow allocated to job i on path p on slice j
C_e(j)             Remaining capacity of link e on slice j


The uniform slice structure and the nested slice structure are illustrated in Figures 3-1 and 3-3, respectively. For ease of presentation, we use the uniform slices as an example to explain the AC and scheduling algorithm. Discussion of the more sophisticated nested slices is deferred to Section 3.6.

In US, the timeline is divided into equal-sized time slices of duration δ (coinciding with the AC/scheduling interval length). The set of slices anchored at any t = kδ is all the slices after t. Figure 3-1 shows the uniform slice structure at two time instances t = δ and t = 2δ. In this example, δ = 4 time units. The arrows point to the scheduling instances. The two collections of rectangles are the time slices anchored at t = δ and t = 2δ, respectively. It is easy to check the congruent property of this slice structure.

Figure 3-1. Uniform time slice structure.

At any AC/scheduling time t = kδ, let the time slices anchored at t, i.e., those in G_k, be indexed 1, 2, ... in increasing order of time. Let the start and end times of slice i be denoted by ST_k(i) and ET_k(i), respectively, and let its length be LEN_k(i). We say a time instance t′ > t falls into slice i if ST_k(i) < t′ ≤ ET_k(i), and write I_k(t′) = i.
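The slice bookkeeping for US can be written down in a few lines. The following sketch assumes slices of length δ anchored at t = kδ and indexed from 1, matching the definitions above; the function names are illustrative.

    import math

    def ST(k, i, delta):
        """Start time of slice i in the set anchored at t = k*delta."""
        return (k + i - 1) * delta

    def ET(k, i, delta):
        """End time of slice i in the set anchored at t = k*delta."""
        return (k + i) * delta

    def LEN(k, i, delta):
        """Length of slice i (constant for uniform slices)."""
        return delta

    def slice_index(k, t_prime, delta):
        """I_k(t'): index of the slice that t' > k*delta falls into,
        i.e. the i with ST(k, i) < t' <= ET(k, i)."""
        return math.ceil((t_prime - k * delta) / delta)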

    Ŝ_i = max{t, ET_k(I_k(S_i))}.                                        (3.1)

For rounding of the requested end time, we allow two policy choices, the stringent policy and the relaxed policy. In the stringent policy, if the requested end time does not coincide with a slice boundary, it is rounded down, subject to the constraint that Ê_i > Ŝ_i. In Section 3.6, we allow the end time to be re-rounded at different scheduling instances. This way, the rounded end time can become closer to the requested end time, as the slice sizes become finer over time.


    relaxed:  Ê_i = ET_k(I_k(Ŝ_i + T_i)).                                (3.2)

Figure 3-2 shows the effect of the two policies after three jobs are rounded.

Figure 3-2. Two rounding policies. The unshaded rectangles are time slices. The shaded rectangles represent jobs. The top ones show the requested start and end times. The bottom ones show the rounded start and end times.

If job i is an old one, its rounded start time Ŝ_i is replaced by the current time t. The remaining demand is updated by subtracting from it the total amount of data transferred for job i on the previous interval, ((k−1)δ, kδ].

From the definition of uniform slices, the slice set anchored at each t = kδ, G_k, contains an infinite number of slices. In general, only a finite subset of G_k is useful to us. Let M_k be the index of the last slice in which the rounded end time of some job falls. That is, M_k = I_k(max_{i∈J_k} Ê_i). Let L_k ⊆ G_k be the collection of time slices 1, 2, ..., M_k. We call the slices in L_k the active time slices. We will also think of L_k as the scheduling horizon at time t.
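A small sketch of the rounding rules (3.1)–(3.2) follows, built on the uniform-slice helpers above. The stringent rule is taken directly from the text; writing the relaxed rule as a plain round-up to the next slice boundary is an assumption, since (3.2) is stated in terms of ET_k(I_k(·)).

    def round_start(k, delta, S_i):
        """Eq. (3.1): S_hat = max{t, ET_k(I_k(S_i))} with t = k*delta."""
        t = k * delta
        if S_i <= t:
            return t
        return ET(k, slice_index(k, S_i, delta), delta)

    def round_end(k, delta, S_hat, E_i, policy="stringent"):
        """Round the requested end time E_i to a slice boundary."""
        i = slice_index(k, E_i, delta)
        upper = ET(k, i, delta)
        lower = ST(k, i, delta)
        if E_i == upper:                 # already on a slice boundary
            return E_i
        if policy == "stringent":
            # Round down, unless that would violate E_hat > S_hat.
            return lower if lower > S_hat else upper
        return upper                     # relaxed policy (assumed): round up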


At t = kδ, let J ⊆ J_k be a subset of the jobs in the system. Let f_i(p, j) be the total flow (total data transfer) allocated to job i on path p, where p ∈ P_k(s_i, d_i), on time slice j, where j ∈ L_k. As part of the admission control algorithm, the solution to the following feasibility problem, AC(k, J), is used to determine whether the jobs in J can all be admitted.

    Σ_{j ∈ L_k} Σ_{p ∈ P_k(s_i,d_i)} f_i(p, j) = R_k(i),   for all i ∈ J                                    (3.3)

    Σ_{i ∈ J} Σ_{p ∈ P_k(s_i,d_i): e ∈ p} f_i(p, j) / LEN_k(j) ≤ C_e(j),   for all links e, for all j ∈ L_k  (3.4)

    f_i(p, j) = 0 for every slice j outside [Ŝ_i, Ê_i],   for all i ∈ J, for all p ∈ P_k(s_i, d_i)           (3.5)

    f_i(p, j) ≥ 0,   for all i ∈ J, for all j ∈ L_k, for all p ∈ P_k(s_i, d_i).                              (3.6)

(3.3) says that, for every job, the sum of all the flows assigned on all time slices for all paths must be equal to its remaining demand. (3.4) says that the capacity constraints must be satisfied for all edges on every time slice. Note that the allocated rate on path p for job i on slice j is f_i(p, j)/LEN_k(j), where LEN_k(j) is the length of slice j. The rate is assumed to be constant on the entire slice. Here, C_e(j) is the remaining capacity of link e on slice j.
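The feasibility problem AC(k, J) is an ordinary linear program and can be sketched with an off-the-shelf LP modeler. The sketch below uses PuLP and plain dictionaries; the data layout (paths, window, cap, length) is assumed for illustration, and constraints (3.5)–(3.6) are enforced by only creating variables inside each job's rounded window.

    from pulp import LpProblem, LpVariable, LpMinimize, lpSum, LpStatus

    def ac_feasible(jobs, paths, demand, window, slices, cap, length, edges):
        """jobs: ids; paths[i]: list of paths (edge lists); demand[i]: R_k(i);
        window[i]: slices between S_hat_i and E_hat_i; cap[e, j]: C_e(j)."""
        prob = LpProblem("AC_feasibility", LpMinimize)
        f = {(i, p, j): LpVariable(f"f_{i}_{p}_{j}", lowBound=0)
             for i in jobs for p in range(len(paths[i])) for j in window[i]}
        prob += lpSum(f.values())        # any objective will do: pure feasibility
        # (3.3): flows over all paths and slices add up to the remaining demand.
        for i in jobs:
            prob += lpSum(f[i, p, j] for p in range(len(paths[i]))
                          for j in window[i]) == demand[i]
        # (3.4): per-slice rates f/LEN on each edge stay within C_e(j),
        # written as total volume <= C_e(j) * LEN_k(j).
        for e in edges:
            for j in slices:
                prob += lpSum(f[i, p, j] for i in jobs if j in window[i]
                              for p in range(len(paths[i]))
                              if e in paths[i][p]) <= cap[e, j] * length[j]
        prob.solve()
        return LpStatus[prob.status] == "Optimal"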


Constraint (3.5) is the start and end time constraint for every job on every path: the flow must be zero before the rounded start time and after the rounded end time.

Recall that we are assuming every job to be a bulk transfer for simplicity. If job i is of the MBG class, then the remaining demand constraint (3.3) will be replaced by a minimum bandwidth guarantee condition.

The AC/scheduling algorithm is triggered every δ time units, with the AC part before the scheduling part. AC examines the newly arrived jobs and determines their admissibility. In doing so, we need to ensure that the earlier commitments to the old jobs are not broken. This can be achieved by adopting one of the following AC procedures, described next.

1. Subtract-Resource (SR).

2. Reassign-Resource (RR).


In the SR scheme, we list the new jobs, J_k^n, in a sequence, 1, 2, ..., m. The particular order of the sequence is flexible, possibly dependent on some customizable policy. For instance, the order may be arbitrary, or based on the priority of the jobs, or based on increasing order of the request times. We apply a binary search to the sequence to find the last job j, 1 ≤ j ≤ m, in the sequence such that all jobs before and including it are admissible. That is, j is the largest index for which the subset of the new jobs J = {1, 2, ..., j} is feasible for AC(k, J). All the jobs after j are rejected.

In the RR scheme, at time t = kδ, all the jobs are listed in a sequence where the old jobs J_k^o are ahead of the new jobs J_k^n in the sequence. The order among the old jobs is arbitrary. The order among the new jobs is again flexible. Denote this sequence as 1, 2, ..., m, in which jobs 1 through l are the old ones. We then apply a binary search to the sequence of new jobs, l+1, l+2, ..., m, to find the last job j, l < j ≤ m, such that the subset J = {1, 2, ..., j}, containing all the old jobs and the new jobs up to j, is feasible for AC(k, J). All the new jobs after j are rejected.
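Both SR and RR reduce to the same prefix binary search over an ordered job sequence, given a feasibility oracle such as ac_feasible above applied to the appropriate job set. Admissibility of a prefix is monotone (adding jobs only adds constraints), which is what makes the binary search valid. A minimal sketch:

    def largest_admissible_prefix(num_new_jobs, is_feasible):
        """Largest j such that new jobs 1..j are admissible; 0 if none.
        is_feasible(j) must answer whether the first j new jobs (plus, for RR,
        all old jobs) form a feasible set for AC(k, J)."""
        lo, hi, best = 1, num_new_jobs, 0
        while lo <= hi:
            mid = (lo + hi) // 2
            if is_feasible(mid):
                best = mid
                lo = mid + 1
            else:
                hi = mid - 1
        return best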

Two scheduling objectives are formulated as linear programs over the feasible set of AC(k, J). The first, Quick-Finish(k, J), favors completing jobs as early as possible; its objective is optimized

    subject to (3.3)–(3.6).                                              (3.8)

The second, Load-Balancing(k, J), introduces a scaling variable Z and is subject to (3.4)–(3.6) along with a constraint (3.10) on the flows. Constraint (3.10) ensures that the f_i(p, j)/Z's satisfy (3.3). Also, Z ≥ 1 must be true since J is admissible. Hence, the f_i(p, j)/Z's are a feasible solution to the AC(k, J) problem. The Load-Balancing(k, J) problem above can be rewritten in the following equivalent form.


We have Load-Balancing-1(k, J): minimize the worst-case link utilization over all links and all active time slices, subject to (3.3), (3.5) and (3.6).

The scheduling algorithm is to apply J = J_k^a to Quick-Finish(k, J) or Load-Balancing(k, J). This determines an optimal flow assignment to all jobs on all allowed paths and on all time slices. Given the flow assignment f_i(p, j), the allocated rate on each time slice is denoted by x_i(p, j) = f_i(p, j)/LEN_k(j) for all j ∈ L_k. The remaining capacity of each link on each time slice is then reduced by the sum of the allocated rates on that link and slice:

    C_e(j) ← C_e(j) − Σ_{i} Σ_{p ∈ P_k(s_i,d_i): e ∈ p} x_i(p, j).       (3.13)

On the interval ((k−1)δ, kδ], the system keeps track of the new requests arriving on that interval. It also keeps track of the status of the old jobs. If an old job is completed, it is removed from the system. If an old job is serviced on the interval, the amount of data transferred is subtracted from its remaining demand.
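The post-scheduling bookkeeping is mechanical: convert the flow volumes into per-slice rates and subtract them from the remaining link capacities. The following sketch assumes the same dictionary layout as before; the update rule mirrors (3.13) as reconstructed above.

    def update_capacities(flow, length, cap, paths):
        """flow[(i, p, j)]: data volume; returns per-slice rates and updates cap."""
        rate = {}
        for (i, p, j), volume in flow.items():
            x = volume / length[j]           # x_i(p, j) = f_i(p, j) / LEN_k(j)
            rate[(i, p, j)] = x
            for e in paths[i][p]:            # every link on the chosen path
                cap[(e, j)] -= x             # subtract the allocated rate
        return rate, cap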


At the next scheduling instance, the steps in Algorithm 1 are taken:

1. Round the start and end times of the newly arrived jobs using (3.1) and (3.2), depending on whether the stringent or relaxed rounding policy is used. This produces the rounded start and end times, Ŝ_l and Ê_l.

2. Perform admission control as in Algorithm 2. This produces the list of admitted jobs J_k^a.

3. Perform scheduling as in Algorithm 3. This yields the flow amount f_i(p, j) for each admitted job i ∈ J_k^a, over all paths for job i, and all time slices j ∈ L_k.

4. Update the remaining link capacities using (3.13).
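Tying the pieces together, one AC/scheduling run might look like the sketch below. The admission and scheduling steps are passed in as callables (standing for SR/RR and Quick-Finish/Load-Balancing respectively), and the state object bundling paths, slice lengths and capacities is an assumed convenience, not a structure from the thesis.

    def run_instance(k, delta, old_jobs, new_jobs, admit, schedule, state,
                     policy="stringent"):
        # Step 1: round the start and end times of the newly arrived jobs.
        for job in new_jobs:
            job.S_hat = round_start(k, delta, job.start_time)
            job.E_hat = round_end(k, delta, job.S_hat, job.end_time, policy)
        # Step 2: admission control (SR or RR) over the chosen job ordering.
        admitted = admit(k, old_jobs, new_jobs)
        # Step 3: scheduling LP (QF or LB) over all admitted jobs -> f_i(p, j).
        flow = schedule(k, old_jobs + admitted)
        # Step 4: per-slice rates and remaining-capacity update, as in (3.13).
        rates, state.cap = update_capacities(flow, state.length, state.cap,
                                             state.paths)
        return admitted, rates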


Recall that the congruent property means that, if a slice in an earlier anchored slice set overlaps in time with a later anchored slice set, it either remains as a slice, or is partitioned into smaller slices in the later slice set. The definition is motivated by the need for maintaining consistency in bandwidth assignment across time. As an example, suppose at time (k−1)δ, a job is assigned a bandwidth x on a path on the slice j_{k−1}. At the next scheduling instance t = kδ, suppose the slice j_{k−1} is partitioned into two slices. Then, we understand that the bandwidth x has been assigned on both slices. Without the congruent property, it is likely that a slice, say j_k, in the slice set anchored at kδ cuts across several slices in the slice set anchored at (k−1)δ. If the bandwidth assignments at (k−1)δ are different for these latter slices, the bandwidth assignment for slice j_k is not well defined just before the AC/scheduling run at time kδ.


The nested slice structure can be defined by construction. At t = 0, the timeline is partitioned into level-1 slices. The first j_1 level-1 slices, where j_1 ≥ 1, are each partitioned into level-2 slices. This removes j_1 level-1 slices but adds j_1 η_1 level-2 slices, where η_i denotes the number of level-(i+1) slices contained in one level-i slice. Next, the first j_2 level-2 slices, where j_2 ≤ j_1 η_1, are each partitioned into level-3 slices. This removes j_2 level-2 slices but adds j_2 η_2 level-3 slices. This process continues until, in the last step, the first j_{l−1} level-(l−1) slices are partitioned into level-l slices. Then, the first j_{l−1} level-(l−1) slices are removed and j_{l−1} η_{l−1} level-l slices are added at the beginning. In the end, the collection of slices at t = 0 contains Λ_l = j_{l−1} η_{l−1} level-l slices, Λ_{l−1} = j_{l−2} η_{l−2} − j_{l−1} level-(l−1) slices, ..., Λ_2 = j_1 η_1 − j_2 level-2 slices, followed by an infinite number of level-1 slices. The sequence of j_i's must satisfy j_2 ≤ j_1 η_1, j_3 ≤ j_2 η_2, ..., j_{l−1} ≤ j_{l−2} η_{l−2}. This collection of slices is denoted by G_0.

As an example, to cover a maximum 30-day period, we can take Δ_1 = 1 day, Δ_2 = 1 hour, and Δ_3 = 10 minutes. Hence, η_1 = 24 and η_2 = 6. The first two days are first divided into a total of 48 one-hour slices, out of which the first 8 hours are further divided into 48 10-minute slices. The final slice structure has 48 level-3 (10-minute) slices, 40 level-2 (one-hour) slices, and as many level-1 (one-day) slices as needed, in this case, 28. The total number of slices is 116.

In designing the slice structure, sometimes one wishes to begin with specifying the set of Λ_j's. To have a nested slice structure, the Λ_j's should satisfy the following property. First, Λ_l Δ_l, the total time covered by the level-l slices, must be an integer multiple of Δ_{l−1}, and Λ_l Δ_l + Λ_{l−1} Δ_{l−1} must be an integer multiple of Δ_{l−2}. In general, for i from l−1 down to 2, the total time covered by the slices at levels i through l must be an integer multiple of Δ_{i−1}.
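The construction of G_0 is easy to script, and doing so confirms the 30-day example above. The sketch below assumes each slice size divides the next coarser one and takes the split counts j_1, j_2, ... as input; applied with sizes of 1 day, 1 hour and 10 minutes and j = [2, 8], it produces 48 + 40 + 28 = 116 slices.

    def build_nested_slices(horizon, sizes, j):
        """Return (start, end) slices covering [0, horizon).
        sizes: level sizes from coarsest to finest; j[q]: how many slices of
        level q+1 are partitioned into the next finer level."""
        slices = [(t, t + sizes[0]) for t in range(0, horizon, sizes[0])]
        for level in range(1, len(sizes)):
            fine = sizes[level]
            to_split, rest = slices[:j[level - 1]], slices[j[level - 1]:]
            refined = [(t, t + fine)
                       for (s, e) in to_split for t in range(s, e, fine)]
            slices = refined + rest
        return slices

    day, hour, ten_min = 1440, 60, 10            # minutes
    g0 = build_nested_slices(30 * day, [day, hour, ten_min], j=[2, 8])
    assert len(g0) == 116                        # 48 + 40 + 28 slices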


For the subsequent scheduling instances, the objective is to maintain the same number of slices as G_0 at the different levels. But this cannot be done while satisfying the slice congruent property. Hence, we allow the number of level-j slices to deviate from Λ_j, for j = 2, ..., l. This can be done in various ways. Let z_j be the current number of level-j slices at t = kδ, for j = 1, 2, ..., l (there is always an unlimited supply of level-1 slices).

More specifically, at t = kδ, the following is repeated for j from l down to 2. If t is not an integer multiple of Δ_{j−1}, then nothing is done; otherwise, the supply of level-j slices is replenished from the next level-(j−1) slice. When Λ_j ≥ η_{j−1} for j = 2, ..., l, the At-Most-Λ algorithm can be simplified as follows. For j from l down to 2, if z_j ≤ Λ_j − η_{j−1}, bring in (and remove) the next level-(j−1) slice and partition it into η_{j−1} level-j slices. This scheme maintains at least Λ_j − η_{j−1} and at most Λ_j level-j slices for j = 2, ..., l.
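Under the stated condition Λ_j ≥ η_{j−1}, the simplified At-Most-Λ refresh can be sketched as follows. Slices are kept per level in time order; the check that t is an integer multiple of Δ_{j−1} is left to the caller, and the container layout is an assumption for illustration.

    def at_most_refresh(slices, Lam, eta, sizes):
        """slices[j]: current level-j slices (time-ordered (start, end) pairs);
        Lam[j]: target count; eta[j-1]: level-j slices per level-(j-1) slice;
        sizes[j]: the level-j slice length."""
        levels = max(slices)
        for j in range(levels, 1, -1):                    # finest to coarsest
            if len(slices[j]) <= Lam[j] - eta[j - 1] and slices[j - 1]:
                start, end = slices[j - 1].pop(0)         # next coarser slice
                fine = sizes[j]
                slices[j].extend((t, t + fine) for t in range(start, end, fine))
        return slices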


Algorithm 5: Create-Slices(j)

3.6.2 Variant of Nested Slice Structure

Figure 3-3. Two-level nested time-slice structure. δ = 2, Δ_1 = 4 and Δ_2 = 1. The anchored slice sets shown are for t = δ, 2δ and 3δ, respectively. At-Most-Λ design. Λ_2 = 8.


Figure 3-4. Three-level nested time-slice structure. δ = 2, Δ_1 = 16, Δ_2 = 4 and Δ_3 = 1. The anchored slice sets shown are for t = δ, 2δ and 8δ, respectively. At-Most-Λ design. Λ_3 = 8, Λ_2 = 2.

In this subsection, we describe another congruent slice structure related to the nested slice structure. We will call it the Almost-Λ Variant of the nested slice structure, because it maintains at least Λ_j and at most Λ_j + 1 level-j slices for j = 2, ..., l.

The Almost-Λ Variant starts the same way as the nested slice structure at t = 0. As time progresses from (k−1)δ to kδ, for k = 1, 2, ..., the collection of slices anchored at t = kδ, i.e., G_k, is updated from G_{k−1} as in Algorithm 6. Fig. 3-5 shows a three-level Almost-Λ Variant.


Figure 3-5. Three-level nested slice structure, Almost-Λ Variant. δ = 2, Δ_1 = 16, Δ_2 = 4 and Δ_3 = 1. The anchored slice sets shown are for t = δ, 2δ and 3δ, respectively. Λ_3 = 8, Λ_2 = 2. The shaded areas are also slices, but are different in size from any level-j slice, j = 1, 2 or 3.

Most of the experiments are conducted on the Abilene network, which consists of 11 backbone nodes connected by 10 Gbps links. Each backbone node is connected to a randomly generated stub network. The link speed between each stub network and the backbone node is 1 Gbps. The entire network has 121 nodes and 490 links. For the scalability study of the algorithms, we use random networks with nodes ranging from 100 to 1000. We use the commercial CPLEX package for solving linear programs on Intel-based workstations.

Unless mentioned otherwise, we use the following experimental models and parameters. Job requests arrive following a Poisson process. In order to simulate the file size distribution of Internet traffic, we resort to the widely accepted heavy-tailed Pareto distribution, with the distribution function F(x) = 1 − (x/b)^(−α), where x ≥ b and α > 1. The closer α is to 1, the more heavy-tailed is the distribution, and the more likely it is to generate very large demand sizes. In most of our experiments, the average file size is 50 GB and α = 1.3. By default, each job uses 8 shortest paths. We adopt this approach because our experiments on multipath scheduling revealed the following significant result: for a network of size several hundred nodes, 8 shortest paths are sufficient to achieve near-optimal throughput.
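For reproducing the workload, the Pareto file sizes can be drawn by inverse-transform sampling of F(x) = 1 − (x/b)^(−α). With α = 1.3 and a 50 GB mean, the scale works out to b = mean·(α−1)/α ≈ 11.5 GB; the function name is, of course, only illustrative.

    import random

    def pareto_file_size(mean_gb=50.0, alpha=1.3):
        b = mean_gb * (alpha - 1.0) / alpha       # scale so that E[X] = mean_gb
        u = random.random()                        # uniform in [0, 1)
        return b * (1.0 - u) ** (-1.0 / alpha)     # inverse of F(x)

    sizes_gb = [pareto_file_size() for _ in range(1000)]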


We will compare the uniform time slice (US) structure and the nested slice structure (NS) of the Almost-Λ Variant type. For US, the time slice and AC/scheduling interval (δ) is 21.17 minutes. This corresponds to 68 slices in every 24-hour period. For NS, we use a two-level NS structure with 48 fine (level-2) slices and 20 coarse (level-1) slices. The fine slice size is Δ_2 = 5 minutes, and the coarse slice size is Δ_1 = 60 minutes. These parameters are chosen so that the first 24-hour period is divided into 68 fine and coarse slices, the same number as the US case. The AC/scheduling interval is 5 minutes, which is finer than the US case.

The plots and tables use acronyms to denote the algorithms used in the experiments. Recall that SR stands for Subtract-Resource and RR stands for Reassign-Resource in admission control; LB stands for Load-Balancing as the scheduling objective and QF stands for Quick-Finish.

The performance measures are the response time and the job rejection ratio.


To put the performance results (Section 3.7.2) in perspective: better performance often comes with longer execution time. Table 3-2 shows the execution time of different schemes under two representative traffic conditions.

Table 3-2. Average admission control/scheduling algorithm execution time (s)

             Heavy Load             Light Load
Algorithm    AC       Scheduling    AC      Scheduling
US+SR+LB     13.13    5.70          0.40    0.61
US+SR+QF     12.03    1.86          0.32    0.23
US+RR+LB     80.89    5.89          1.05    0.65
US+RR+QF     34.36    4.74          0.36    0.21
NS+SR+LB     1.54     4.50          0.14    0.60
NS+SR+QF     1.57     1.60          0.13    0.07
NS+RR+LB     25.16    4.30          1.07    0.61
NS+RR+QF     17.43    3.54          0.17    0.06

When the AC algorithm is fixed, the choice of the scheduling algorithm, LB or QF, also affects the execution time for AC. For instance, the RR+LB combination has a much longer execution time for AC than the RR+QF combination. This is because, in LB, the flow for each job tends to be stretched over time in an effort to reduce the network load on each time slice. This results in more jobs and more active slices (slices in L_k) in the system at any moment, which means more variables for the linear program.


The numbers in Table 3-2 correspond to the third case. Since the two-level NS structure has Δ_1 = 60 minutes and the US has the uniform slice size Δ = 21.17 minutes, the NS typically has fewer slices than the US. For instance, under heavy load, US+RR+QF uses 150.5 active slices on average for AC, while NS+RR+QF uses 129.6 active slices on average. The number of variables, which directly affects the computation time of the linear programs, is generally proportional to the number of slices.

Part of the performance advantage of NS (shown in Section 3.7.2 later) is attributed to the smaller scheduling interval. To reduce the scheduling interval for US, we must reduce the slice size, since δ = Δ in US. In the next experiment, we set the US slice size to be 5 minutes, which is equal to the size of the finer slice in the NS. Table 3-3 shows the performance and execution time comparison between US and NS. Here, we use RR for admission control and QF for scheduling. The US and NS have nearly identical performance in terms of the response time and job rejection ratio. But NS is far superior in execution times for both AC and scheduling. Upon closer inspection (Table 3-4), the NS requires far fewer active time slices than the US on average.


Table 3-3. Comparison of US and NS (δ = 5 minutes)

                  Response     Rejection   Execution Time (s)
                  Time (min)   Ratio       AC         Scheduling
Light load   US   6.064        0.000       0.469      0.309
             NS   5.821        0.000       0.162      0.062
Medium load  US   9.767        0.006       3.177      2.694
             NS   9.354        0.006       0.587      0.387
Heavy load   US   16.486       0.183       131.958    26.453
             NS   17.107       0.173       17.428     3.539

Table 3-4. Average number of slices of US and NS (δ = 5 minutes)

                  Average Number of Slices
                  AC        Scheduling
Light load   US   299.0     299.9
             NS   68.9      69.0
Medium load  US   421.6     462.9
             NS   79.1      82.1
Heavy load   US   975.1     799.8
             NS   129.6     113.4

The advantage of NS can be furthered by increasing the number of slice levels. In practice, it is likely that US is too time consuming and NS is a must.

The experimental setup is as described in Section 3.7. In particular, we fix the number of paths per job (K) to be 8. Table 3-5 shows the response time and rejection ratio of different algorithms.

In Table 3-5, the algorithms with NS have comparable to much better performance than those with US. Furthermore, it has already been established in Section 3.7.1 that NS has much smaller algorithm execution times.

Suppose we fix the slice structure and the scheduling algorithm. Then, SR has a worse rejection ratio than RR because SR does not consider flow reassignment for the old jobs during admission control.


Table 3-5. Performance comparison of different algorithms

             Light Load                Medium Load               Heavy Load
Algorithm    Resp. (s)   Rej. Ratio    Resp. (s)   Rej. Ratio    Resp. (s)   Rej. Ratio
US+SR+LB     46.55       0.000         42.35       0.056         35.56       0.423
US+SR+QF     21.51       0.014         22.21       0.100         23.56       0.477
US+RR+LB     46.55       0.000         40.73       0.026         35.73       0.313
US+RR+QF     21.55       0.000         23.36       0.021         25.16       0.312
NS+SR+LB     49.60       0.000         43.83       0.021         28.74       0.237
NS+SR+QF     5.73        0.006         7.56        0.052         11.06       0.403
NS+RR+LB     49.60       0.000         43.88       0.011         30.16       0.168
NS+RR+QF     5.82        0.000         9.35        0.006         17.11       0.173

Since response time increases with the admitted traffic load, an algorithm that leads to a lower rejection ratio can have a higher response time. This explains why RR often has a higher response time than the corresponding SR algorithm. Note that a lower rejection ratio does not always lead to a higher traffic load, since some algorithms, such as RR, use the network capacity more efficiently.

Suppose we fix the slice structure and the AC algorithm. Then, LB does much worse than QF in terms of response time, because LB tends to stretch the job until its requested end time while QF tries to complete a job early. If RR is used for admission control, then under high load, the different scheduling algorithms have a similar effect on the rejection ratio of the next admission control operation. However, for medium load we notice that the work-conserving nature of QF contributes to a low rejection ratio as compared to LB, which tends to waste some bandwidth.

As shown in Section 3.7.1, SR can be considerably faster than RR in execution speed. Furthermore, it is a candidate for conducting real-time admission control at the instant a request is made, which is not possible with RR.

If SR is used, then LB often has a smaller rejection ratio than QF. The reason is that QF tends to highly utilize the network on earlier time slices, making it more likely to reject small jobs requested for the near future. This is a legitimate concern.


There is indication that the more heavy-tailed the file size distribution is, the larger the difference in rejection ratio between LB and QF. Evidence is shown in Fig. 3-6 for the light traffic load. As the Pareto parameter α approaches 1 while the average job size is held constant, the chance of having a very large file increases. Even if it is transmitted at full network capacity, as in QF, such a large file can still congest the network for a long time, causing more future jobs to be rejected. The correct thing to do, if SR is used, is to spread out the transmission of a large file over its requested time interval.

Figure 3-6. Rejection ratio for different α's under SR.

We next compare single-path and multi-path scheduling. The results are shown in Fig. 3-7 for the light, medium and heavy traffic loads. Here, NS is used along with the admission control scheme RR and the scheduling objective QF. For every source-destination node pair, the K shortest paths between them are selected and used by any job between the node pair. We vary K from 1 to 10, and find that multi-path often produces better response time and always produces a lower rejection ratio.


Figure 3-7. Single vs. multiple paths under different traffic load. A) Response time; B) Rejection ratio.

Fig. 3-8 shows the response time (A and B) and the rejection ratio (C) under medium traffic load for all algorithms. It is observed that the rejection ratio decreases significantly for all algorithms as K increases. All algorithms that use LB for scheduling experience an increase in response time due to the reduction in the rejection ratio. But this is not a disappointing result, because it is not the goal of LB to reduce response time. All the algorithms using QF for scheduling experience a decrease in response time. In spite of the increased load, QF is able to pack more jobs into earlier slices by utilizing the additional paths.


Figure 3-8. Single vs. multiple paths under medium traffic load for different algorithms. A) Response time for QF; B) Response time for LB; C) Rejection ratio.

Compared to our AC/scheduling algorithm, the simple scheme resembles our SR admission control algorithm but operates on only one path. For bulk transfer with start and end time constraints, the simple scheme still requires a scheduling stage, because bandwidth needs to be allocated to the newly admitted job over the time slices on its default path. Hence, we can apply the time slice structures and the scheduling objectives to the simple scheme as well.


Table 3-6 shows the rejection ratio of the simple scheme with different slice structures and scheduling algorithms for different traffic loads. This should be compared with Table 3-5. The simple scheme leads to a considerably higher rejection ratio than all of our schemes involving SR, which in turn have a higher rejection ratio than the corresponding schemes involving RR.

Table 3-6. Rejection ratio of the simple scheme

             Light Load   Medium Load   Heavy Load
US+SR+LB     0.010        0.345         0.781
US+SR+QF     0.031        0.308         0.792
NS+SR+LB     0.000        0.225         0.596
NS+SR+QF     0.026        0.249         0.642

Fig. 3-9 shows the execution time of AC and scheduling as a function of the number of jobs. The interval between the start and end times is partitioned into 24 uniform time slices. It is observed that the increase in execution time is linear or slightly faster than linear. Scaling up to thousands of simultaneous jobs appears to be possible.

Fig. 3-10 shows the execution time against the number of time slices for 100 requests. The increase is linear. With respect to the execution time, the practical limit is several hundred slices. This is sufficient if NS is used. But with US, the slice size may be too coarse for practical use if one wishes to cover several months of advance reservation.


Fig. 3-11 shows the scalability of the algorithm against the network size. For this, we generate random networks with 100 to 1000 nodes in 100-node increments. The average node degree is 5, 5, 7, 9, 9, 10, 10, 11, 11, and 11, respectively, so that the number of edges also increases. The network link capacity ranges from 0.1 Gbps to 10 Gbps. There are 100 jobs to be admitted and scheduled. It is observed that the execution times increase slightly faster than linearly, indicating acceptable scaling behavior.

Figure 3-9. Scalability of the execution times with the number of jobs.

Figure 3-10. Scalability of the execution times with the number of time slices.


Figure 3-11. Scalability of the execution times with the network size.


This study aims at contributing to the management and resource allocation of research networks for data-intensive e-science collaborations. The need for large file transfers is among the main challenges posed by such applications. The opportunities lie in the fact that research networks are generally much smaller in size than the public Internet, and hence, can afford a centralized resource management platform.

In Chapter 2, we formulate two linear programs, the node-arc form and the edge-path form, for scheduling bulk file transfers with start and end time constraints. Our objective is to maximize the throughput, subject to the link capacity constraints. The throughput is a common scaling factor for all demand (file) sizes. This performance objective is equivalent to finding a transfer schedule that carries all the demands and also minimizes the worst-case link congestion across all links and time. It has the effect of balancing the traffic load over the whole network and across time. This feature enables the network to accept more future file transfer requests and in turn achieve higher long-term resource utilization.

An important contribution of this thesis is towards the application of the edge-path formulation to obtaining close to optimal throughput with a reasonable time complexity. We have shown that the node-arc formulation, while giving the optimal throughput, is computationally very expensive. The edge-path formulation can lead to a drastic reduction of the computation time by using a small number of pre-defined paths for each file-transfer job. We discussed two path selection schemes, the shortest paths (S) and the shortest disjoint paths (SD). Both schemes are capable of achieving near-optimal throughput with a small number of paths, e.g., 8 or fewer, for each file-transfer request. Both S and SD perform well in a small network with few disjoint paths, e.g., the Abilene backbone, while SD performs better than S in larger, well-connected networks. In the evaluation process, we also showed that having multiple paths per job yields much higher throughput than using a single path per job.


In Chapter 3, we developed a cohesive framework of admission control and flow scheduling algorithms with the following novel elements: advance reservation for bulk transfer and minimum-bandwidth-guaranteed traffic, multi-path routing, and rerouting and flow reassignment via periodic re-optimization.

In order to handle the advancement of time, we identify a suitable family of discrete time-slice structures, namely, the congruent slice structures. With such a structure, we avoid the combinatorial nature of the problem and are able to formulate several linear programs as the core of our AC and scheduling algorithm. Our main algorithms apply to all congruent slice structures, which are fairly rich. In particular, we describe the design of the nested slice structure and its variants. They allow the coverage of a long segment of time for advance reservation with a small number of slices without compromising performance. They lead to reduced execution time of the AC/scheduling algorithm, thereby making it practical. The following inferences were drawn from our experiments.


Even in the limited application context of e-science, admission control and scheduling is a large and complex problem. In this thesis, we have limited our attention to a set of issues that we think are unique and important. This work can be extended in many directions. To name just a few, one can develop and evaluate faster approximation algorithms as in [3, 21, 24, 36]; address additional policy constraints for the network usage; incorporate the discrete lightpath scheduling problem; develop a price-based bidding system for making admission requests; or address more carefully the needs of the MBG traffic class, such as minimizing the end-to-end delay.


[1] Paul Avery. Grid computing in high energy physics. In Proceedings of the International Beauty 2003 Conference, Pittsburgh, PA, Oct. 2003.

[2] B. Awerbuch and F. T. Leighton. A simple local-control approximation algorithm for multicommodity flow. In Proceedings of the IEEE Symposium on Theory of Computing, pages 459-468, 1993.

[3] B. Awerbuch and F. T. Leighton. Improved approximation algorithms for the multi-commodity flow problem and local competitive routing in dynamic networks. In Proceedings of the ACM Symposium on Theory of Computing, pages 487-496, 1994.

[4] D. Banerjee and B. Mukherjee. Wavelength-routed optical networks: linear formulation, resource budgeting tradeoffs, and a reconfiguration study. IEEE/ACM Transactions on Networking, 8(5):598-607, Oct. 2000.

[5] R. Bhatia, M. Kodialam, and T. V. Lakshman. Fast network re-optimization schemes for MPLS and optical networks. Computer Networks: The International Journal of Computer and Telecommunications Networking, 50(3), Feb. 2006.

[6] S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang, and W. Weiss. An architecture for differentiated services. RFC 2475, IETF, Dec. 1998.

[7] E. Bouillet, J.-F. Labourdette, R. Ramamurthy, and S. Chaudhuri. Lightpath re-optimization in mesh optical networks. IEEE/ACM Transactions on Networking, 13(2):437-447, 2005.

[8] R. Braden, D. Clark, and S. Shenker. Integrated services in the internet architecture: An overview. RFC 1633, IETF, June 1994.

[9] Andrej Brodnik and Andreas Nilsson. A static data structure for discrete advance bandwidth reservations on the Internet. Technical Report cs.DS/0308041, Department of Computer Science and Electrical Engineering, Lulea University of Technology, Sweden, 2003.

[10] J. Bunn and H. Newman. Data-intensive grids for high-energy physics. In F. Berman, G. Fox, and T. Hey, editors, Grid Computing: Making the Global Infrastructure a Reality. John Wiley & Sons, Inc., 2003.

[11] Lars-O. Burchard. Source routing algorithms for networks with advance reservations. Technical Report 2003-03, Communications and Operating Systems Group, Technical University of Berlin, 2003.

[12] Lars-O. Burchard. Networks with advance reservations: applications, architecture, and performance. Journal of Network and Systems Management, 13(4):429-449, Dec. 2005.


[13] Lars-O. Burchard and Hans-U. Heiss. Performance evaluation of data structures for admission control in bandwidth brokers. Technical Report TR-KBS-01-02, Communications and Operating Systems Group, Technical University of Berlin, 2002.

[14] Lars-O. Burchard and Hans-U. Heiss. Performance issues of bandwidth reservation for grid computing. In Proceedings of the 15th Symposium on Computer Architecture and High Performance Computing (SBAC-PAD'03), 2003.

[15] Lars-O. Burchard, J. Schneider, and B. Linnert. Rerouting strategies for networks with advance reservations. In Proceedings of the First IEEE International Conference on e-Science and Grid Computing (e-Science 2005), Melbourne, Australia, Dec. 2005.

[16] G. de Veciana, G. Kesidis, and J. Walrand. Resource management in wide-area ATM networks using effective bandwidths. IEEE Journal on Selected Areas in Communications, 13(6):1081-1090, Aug. 1995.

[17] T. DeFanti, C. d. Laat, J. Mambretti, K. Neggers, and B. Arnaud. TransLight: A global-scale LambdaGrid for e-science. Communications of the ACM, 46(11):34-41, Nov. 2003.

[18] E. Mannie (Ed.). Generalized multi-protocol label switching (GMPLS) architecture. RFC 3945, IETF, Oct. 2004.

[19] T. Erlebach. Call admission control for advance reservation requests with alternatives. Technical Report TIK-Report Nr. 142, Computer Engineering and Networks Laboratory, Swiss Federal Institute of Technology (ETH) Zurich, 2002.

[20] C. Curti et al. On advance reservation of heterogeneous network paths. Future Generation Computer Systems, 21(4):525-538, Apr. 2005.

[21] L. K. Fleischer. Approximating fractional multicommodity flow independent of the number of commodities. SIAM Journal of Discrete Mathematics, 13(4):505-520, 2000.

[22] I. Foster and C. Kesselman. The Grid: Blueprint for a New Computing Infrastructure. Morgan Kaufmann, 1999.

[23] I. Foster, C. Kesselman, C. Lee, R. Lindell, K. Nahrstedt, and A. Roy. A distributed resource management architecture that supports advance reservations and co-allocation. In Proceedings of the International Workshop on Quality of Service (IWQoS'99), 1999.

[24] N. Garg and J. Koenemann. Faster and simpler algorithms for multi-commodity flow and other fractional packing problems. In Proceedings of the 39th Annual Symposium on Foundations of Computer Science, pages 300-309, November 1998.

[25] R. Guerin and A. Orda. Networks with advance reservations: The routing perspective. In Proceedings of IEEE INFOCOM '99, 1999.


[26] E. He, X. Wang, and J. Leigh. A flexible advance reservation model for multi-domain WDM optical networks. In Proceedings of GRIDNETS 2006, San Jose, CA, 2006.

[27] E. He, X. Wang, V. Vishwanath, and J. Leigh. AR-PIN/PDC: Flexible advance reservation of intradomain and interdomain lightpaths. In Proceedings of the IEEE GLOBECOM 2006, 2006.

[28] F. P. Kelly, P. B. Key, and Stan Zachary. Distributed admission control. IEEE Journal on Selected Areas in Communications, 18(12), Dec. 2000.

[29] T. Lehman, J. Sobieski, and B. Jabbari. DRAGON: A framework for service provisioning in heterogeneous grid networks. IEEE Communications Magazine, March 2006.

[30] L. Lewin-Eytan, J. Naor, and A. Orda. Routing and admission control in networks with advance reservations. In Proceedings of the Fifth International Workshop on Approximation Algorithms for Combinatorial Optimization (APPROX '02), 2002.

[31] L. Marchal, P. Vicat-Blanc Primet, Y. Robert, and J. Zeng. Scheduling network requests with transmission window. Technical Report 2005-32, LIP, ENS Lyon, France, 2005.

[32] D. E. McDysan and D. L. Spohn. ATM Theory and Applications. McGraw-Hill, 1998.

[33] H. B. Newman, M. H. Ellisman, and J. A. Orcutt. Data-intensive e-science frontier research. Communications of the ACM, 46(11):68-77, Nov. 2003.

[34] E. Rosen, A. Viswanathan, and R. Callon. Multiprotocol label switching architecture. RFC 3031, IETF, Jan. 2001.

[35] O. Schelen, A. Nilsson, Joakim Norrgard, and S. Pink. Performance of QoS agents for provisioning network resources. In Proceedings of the IFIP Seventh International Workshop on Quality of Service (IWQoS'99), London, UK, June 1999.

[36] Farhad Shahrokhi and D. W. Matula. The maximum concurrent flow problem. Journal of the Association for Computing Machinery, 37(2):318-334, April 1990.

[37] Tao Wang and Jianer Chen. Bandwidth tree - A data structure for routing in networks with advanced reservations. In Proceedings of the IEEE International Performance, Computing and Communications Conference (IPCCC 2002), April 2002.

[38] Qing Xiong, Chanle Wu, Jianbing Xing, Libing Wu, and Huyin Zhang. A linked-list data structure for advance reservation admission control. In ICCNMC 2005, 2005. Lecture Notes in Computer Science, Volume 3619/2005.

[39] Jin Y. Yen. Finding the k shortest loopless paths in a network. Management Science, 17(11):712-716, 1971.


[40] Jun Zheng and Hussein T. Mouftah. Routing and wavelength assignment for advance reservation in wavelength-routed WDM optical networks. In Proceedings of the IEEE International Conference on Communications (ICC), 2002.


Kannan Rajah received his Master of Science in computer engineering from the University of Florida in 2007. He pursued research in scheduling and optimization algorithms for bulk file transfers under advisors Dr. Sanjay Ranka and Dr. Ye Xia. He has published a paper titled "Scheduling Bulk File Transfers with Start and End Times" in the IEEE Network Computing and Applications (NCA) 2007 proceedings. Kannan received his Bachelor of Engineering (Hons.) in computer science and Master of Science (Hons.) in chemistry from the Birla Institute of Technology and Science (BITS) - Pilani, India in 2000.