
Automated System for Load-Balancing EBGP Peers

AUTOMATED SYSTEM FOR LOAD-BALANCING EBGP PEERS

By

BRIAN T. WALLACE

A THESIS PRESENTED TO THE GRADUATE SCHOOL
OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT
OF THE REQUIREMENTS FOR THE DEGREE OF
MASTER OF ENGINEERING

UNIVERSITY OF FLORIDA

2004


Copyright 2004 by Brian T. Wallace


This document is dedicated to my wife, Robin, and our children, Parker, Taylor, and Peyton.


ACKNOWLEDGMENTS

I would like to acknowledge the contribution that Dr. Joe Wilson has made to my academic career. Dr. Wilson has provided me with guidance and inspiration throughout my course of study, and he has worked with me to ensure my success both while in residence and as a FEEDS student.

I thank my family for their support during my time as a graduate student. Robin, Parker, Taylor, and Peyton have been understanding of the demands of completing an advanced degree and have provided me with the motivation to be successful.

TABLE OF CONTENTS

ACKNOWLEDGMENTS
LIST OF TABLES
LIST OF FIGURES
ABSTRACT

CHAPTER

1 INTRODUCTION
    Internet Routing Protocols
        Classful Network Addressing
        Classless Inter-Domain Routing (CIDR)
    Interior and Exterior Routing Protocols
        Distance Vector Algorithms
        Link-State Routing Algorithms
    BGP Overview
        BGP Attributes
        BGP Best Path Algorithm
        AS Path Pre-pending
        Adjusting Multi-Exit Discriminator (MED) Values
    Network Traffic Collection
        Simple Network Monitoring Protocol (SNMP)
        Promiscuous Mode Packet Capture
        Network Flow Collection

2 NETWORK DATA COLLECTION
    Topology Collection
        Fully Automated Solution
        Database Approach
    Topology Module
        Prefix Tree Generation
        Circuit Representation
    NetFlow Data Collection
        Definition of a Flow
        Capturing NetFlow Data
        Exporting NetFlow Data
        Storage of NetFlow Cache Entries
        Data Collector
    Traffic Assignment

3 DATA ANALYSIS
    Network Data Analysis
        Analysis Methodology Overview
            First pass analysis
            Second pass analysis
        Balanced Traffic Condition
    Configuration Generator
        Benefits of Code Generation
        Implementation Process

4 SYSTEM RESULTS
    Testbed Configuration
    Traffic Generation
    Test Cases
    System Output
        Test Case #1
        Test Case #2
        Test Case #3
        Test Case #4
    Conclusions

5 SUMMARY AND FUTURE WORK
    System Improvement
        Instantaneous Data Rate Load-Balancing
        Low Utilization Prefixes
        Minimizing Prefix Announcements
    Cost Factor in Load Balancing
    Support More Complicated Route-Maps
    Fault Detection
    Infected Host Detection
    Summary

LIST OF REFERENCES

BIOGRAPHICAL SKETCH


LIST OF TABLES

1-1  Bit patterns associated with the original address classes for IPv4
2-1  Fields in a NetFlow header packet
2-2  Fields in a NetFlow flow record
4-1  Subnets that were utilized during lab testing
4-2  Description of test cases used during validation testing
4-3  Per circuit loading results from test case #1
4-4  Per prefix loading results from test case #1
4-5  Per circuit loading results from test case #2
4-6  Per prefix loading results from test case #2
4-7  Per circuit loading results from test case #3
4-8  Per prefix loading results from test case #3
4-9  Per circuit loading results from test case #4
4-10 Per prefix loading results from test case #4


LIST OF FIGURES

1-1  The AS path attribute can be modified to implement BGP routing policy
1-2  The multi-exit discriminator (MED) attribute can be modified to implement BGP routing policy
2-1  Tree structure generated by Prefix.pm
2-2  Enabling NetFlow on a router interface
2-3  Output from router verifying NetFlow configuration
2-4  Configuring NetFlow export on a router interface
2-5  Process for transferring data from NetFlow cache to data collection system
2-6  Process of assigning traffic to a prefix object
3-1  Process by which router applies BGP routing policy via route-maps and prefix-lists
3-2  Basic BGP configuration without routing policy
3-3  IP prefix list configuration to identify groups of prefixes that will have routing policy applied
3-4  Route-maps use prefix-lists to apply routing policy to outbound BGP announcements
4-1  A typical access network configuration
4-2  Lab setup used to test BGP load-balancing tool
5-1  Total volume of traffic is predictable on a day-to-day basis and grows linearly over time
5-2  System generated prefix lists are balanced based on load, but not necessarily balanced on number of addresses


Abstract of Thesis Presented to the Graduate School of the University of Florida in Partial Fulfillment of the Requirements for the Degree of Master of Engineering

AUTOMATED SYSTEM FOR LOAD-BALANCING EBGP PEERS

By

Brian T. Wallace

December 2004

Chair: Joseph N. Wilson
Major Department: Computer and Information Science and Engineering

The goal of this project was to develop a system that could analyze network traffic and topology in order to generate load-balanced router configurations. The motivation for this effort is that the existing process is manual, labor-intensive, and potentially error-prone.

The system collects and stores NetFlow data from BGP routers. Traffic data are assigned to IP prefixes that are valid within an AS. These prefixes are then assigned across the set of egress links for the AS. A best fit decreasing (BFD) approach is used to generate a load-balanced condition. If the load is not balanced, IP prefixes are split along CIDR boundaries and another iteration of the BFD approach is performed.

Test results demonstrate that the system for generating load-balanced BGP configurations works correctly. Regardless of the complications added to each test case, the system was able to achieve the desired result. This study has shown that load-balanced BGP configurations can be developed in an automated fashion by analyzing traffic data and network topology.


CHAPTER 1
INTRODUCTION

As the Internet has experienced explosive growth in recent years, service providers have struggled to build capacity and stay ahead of increasing bandwidth demands. Routing on the Internet backbone is key to maintaining a robust global network communications infrastructure. This thesis presents an automated method for generating load-balanced Border Gateway Protocol (BGP) configurations.

While traffic patterns on the Internet are constantly changing, router configuration among External Border Gateway Protocol (EBGP) peers is basically static. This study will demonstrate an automated tool for analyzing traffic patterns and developing router configuration to provide a load-balanced state. This system is an improvement over the present manual process of determining configuration changes for BGP peers.

Internet Routing Protocols

The Internet has no shortage of protocols. Of interest in this context are the routing protocols that play a role in providing global connectivity via the Internet. An important distinction must be drawn between routed and routing protocols. Routed protocols are the protocols used to exchange information between hosts. Routing protocols are used to convey information among routers and make decisions on path selection across a network (Stallings 2000).


Classful Network Addressing

The original Internet routing scheme included address classes. Most organizations were assigned Class A, B, or C networks (Tannenbaum 1996). Class D address space is reserved for multicast applications. Since multicast addresses are shared by a group of hosts, the number of hosts per network is not applicable. Class E address space is reserved as experimental but has never been used in any significant way.

In classful network addressing, the class of an address can be determined by the first four bits of the address. Table 1-1 illustrates the bit patterns associated with each class. An x indicates that the bit in that location is not relevant.

Table 1-1 Bit patterns associated with the original address classes for IPv4.

Address Class  Bit pattern  # of networks  # of hosts / network
Class A        0xxx         128            16,777,214
Class B        10xx         16,384         65,534
Class C        110x         2,097,152      254
Class D        1110         268,435,456    n/a
Class E        1111         n/a            n/a

Classless Internet Domain Routing (CIDR)

The class boundaries developed originally did not survive the rapid expansion of the Internet, and a new technique was required. Network address classes did not scale with the size of the Internet because this method of allocation could not match the size of the networks being attached. For most organizations, a Class A network was far too large but a Class C network was too small. This led to many Class A networks going unallocated while Class B networks were being rapidly depleted. In addition, small companies were allocated a Class C network and wasted the portion they did not require.

Classless Internet Domain Routing (CIDR) was developed to overcome the rapid exhaustion of IPv4 address space. RFCs 1518 and 1519 describe a technique in which the subnet mask is a critical piece of information included in routing updates and stored in routing tables. CIDR lifts the restriction that networks have to be divided on classful boundaries.

Interior and Exterior Routing Protocols

Routing protocols can be broken into two categories: interior and exterior. Interior gateway routing protocols (IGPs) are used for internal routing. Typically, all routers participating in an IGP are under a single administrative authority. There is a variety of interior gateway routing protocols deployed today. Some examples of IGPs include Open Shortest Path First (OSPF), Enhanced Interior Gateway Routing Protocol (EIGRP), Intermediate System to Intermediate System (IS-IS), and Routing Information Protocol (RIP). While there are a number of IGPs deployed, the dominant exterior gateway routing protocol is the Border Gateway Protocol.

IGPs are classified based on how they calculate routing metrics. The two primary classifications are distance vector and link-state algorithms.

Distance Vector Algorithms

Distance vector algorithms use a simple metric that takes into account spatial locality. The metric most often used is hop count: the number of routers that must be transited to arrive at the destination. It is possible for a path that is longer by hop count to yield network performance that is actually better, due to additional bandwidth and lower-latency interfaces.


RIP is the most common distance vector algorithm deployed. It is easy to configure and easy to use, but it does not take into account network performance or degradation when assigning metrics.

Link-State Routing Algorithms

Link-state algorithms maintain a database of link state information. Whenever a router participating in a link-state algorithm experiences a change in one of its directly connected interfaces, a link state advertisement (LSA) is sent to all neighboring routers participating in the same routing protocol. These updates are propagated through the network so that all routers have the same topology and state information.

Maintaining a database has both advantages and disadvantages. One significant advantage is extremely fast convergence. The disadvantage is that the router must maintain this database entirely separately from the routing table. This increases both the CPU and memory requirements of a router running a link-state routing protocol.

The OSPF routing protocol is by far the most common link-state routing protocol. It is standards-based and performs extremely well. While maintaining a link-state database provides the information necessary for fast convergence times and consistent routing information, this approach does not scale to the level of the Internet. The amount of traffic generated just by routing updates would flood the backbone.

BGP Overview

Four regional Internet registries provide the allocation and registration service for the Internet. The resources assigned by these entities include IP addresses and Autonomous System numbers. Autonomous systems (ASs) in North America, a portion of the Caribbean, and sub-equatorial Africa are assigned by the American Registry for Internet Numbers (ARIN). For Europe, the Middle East, northern Africa, and parts of Asia, allocation and registration services are provided by the RIPE Network Coordination Center (RIPE NCC). The Asia Pacific Network Information Centre (APNIC) is responsible for the majority of Asian countries. Lastly, the Latin American and Caribbean IP address Regional Registry is responsible for South America and the Caribbean islands not covered by ARIN.

BGP Attributes

Attributes are parameters that are used to describe characteristics of a prefix in BGP. A BGP attribute consists of three fields: attribute type, attribute length, and attribute value. Attributes may be sent as part of a BGP UPDATE packet, depending on the type of attribute. These parameters can be used to influence path selection and routing behavior (Halabi 1997).

There are four types of BGP attributes: well-known mandatory, well-known discretionary, optional transitive, and optional non-transitive. Well-known attributes are the set of attributes that must be recognized by all RFC-compliant BGP implementations. All well-known attributes must be forwarded with the UPDATE message. Mandatory attributes must be included in every UPDATE message, while discretionary attributes may or may not appear in each UPDATE message. Optional attributes may or may not be supported by all implementations. Transitive attributes should be forwarded to other neighbors.

BGP Best Path Algorithm

By default, BGP will select the current best path. It then compares the best path with a list of other valid paths. If another valid path meets the selection criteria, it will become the current best path and the remaining valid paths will be evaluated.


The following ordered criteria are used to determine the best path in BGP:

1. Prefer the path with the highest WEIGHT.
2. Prefer the path with the highest LOCAL_PREF.
3. Prefer the path that was locally originated via a network or aggregate BGP subcommand, or through redistribution from an IGP.
4. Prefer the path with the shortest AS_PATH.
5. Prefer the path with the lowest origin type: IGP is lower than EGP, and EGP is lower than INCOMPLETE.
6. Prefer the path with the lowest multi-exit discriminator (MED).
7. Prefer external (eBGP) over internal (iBGP) paths.
8. Prefer the path with the lowest IGP metric to the BGP next hop.
9. Check if multiple paths need to be installed in the routing table for BGP Multipath.
10. When both paths are external, prefer the path that was received first (the oldest one).
11. Prefer the route coming from the BGP router with the lowest router ID.
12. If the originator or router ID is the same for multiple paths, prefer the path with the minimum cluster list length.
13. Prefer the path coming from the lowest neighbor address.

While load balancing is not inherent in BGP, there are two common methods used to create a load-balanced configuration. These two methods are AS path pre-pending and adjustment of Multi-Exit Discriminator (MED) values (Brestoud and Rastogi 2003).

AS Path Pre-pending

AS path pre-pending involves padding the AS Path attribute in BGP announcements to reduce the likelihood of a route being selected. Normally, a BGP speaker will add its AS to the AS Path attribute of an announcement prior to forwarding that announcement on to another peer. Each router that receives the announcement looks at the AS Path attribute to determine the shortest AS Path to a particular prefix. By pre-pending a path with additional AS entries, a prefix will have a lower probability of being selected as the best route to a destination.

In practice, a provider will distribute IP space across multiple egress links. For each egress link, the range of IP addresses that should prefer the path is advertised with the normal AS Path attribute. All other IP space is advertised with an artificially long AS Path attribute. These modified announcements serve to provide redundancy in the event of a failure of an egress link.

Figure 1-1 The AS path attribute can be modified to implement BGP routing policy.

In the example above, there are two paths available to the 192.168.100.0/24 subnet. Both Router C1 and Router C2 are announcing this prefix. Without intervention, the normal BGP best path selection algorithm determines the best path to this subnet. The scenario above shows that Router C1 has pre-pended the AS Path attribute with 1234-1234-1234, while Router C2 has followed the default behavior of pre-pending its AS number only once. The effect is that, from the perspective of Router ISP3, the prefix in question has a longer AS Path length if the path through Router C1 is taken. Therefore, the path through Router C2 is selected as the best path to the prefix 192.168.100.0/24.
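
Criterion 4 above is the lever that pre-pending exploits. As a minimal illustration (not part of the thesis system, and assuming the higher-priority criteria tie), the following Perl fragment reduces the scenario to an AS Path length comparison:

#!/usr/bin/perl
use strict;
use warnings;

# Two candidate paths to 192.168.100.0/24, as seen by Router ISP3.
# Router C1 has pre-pended its AS twice; Router C2 has not.
my %paths = (
    'via Router C1' => [ 1234, 1234, 1234 ],
    'via Router C2' => [ 1234 ],
);

# With WEIGHT, LOCAL_PREF, and origin equal, the shortest AS Path wins.
my ($best) = sort { @{ $paths{$a} } <=> @{ $paths{$b} } } keys %paths;
print "Best path: $best (AS Path length ", scalar @{ $paths{$best} }, ")\n";

Running this prints "via Router C2", matching the outcome described above.
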


Adjusting Multi-Exit Discriminator (MED) Values

The Multi-Exit Discriminator attribute is a BGP attribute that gives a peer an indication of preference for entry into an AS. If two identical prefixes are announced from an AS, the path with the lowest MED value will be selected for the routing table.

Figure 1-2 The multi-exit discriminator (MED) attribute can be modified to implement BGP routing policy.

The example above shows both Router C1 and Router C2 announcing the prefix 192.168.100.0/24. Router C1 is announcing the prefix with a MED value of 100, while Router C2 is announcing the same prefix but with a MED value of 200. From the perspective of Router ISP3, the best path is the path through Router C1 because that path has the lower MED value.

Network Traffic Collection

There are several options available for capturing network utilization information. Each of the available methods has its own strengths and weaknesses depending upon the application for which it is being used.


Simple Network Management Protocol (SNMP)

Simple Network Management Protocol (SNMP) is an extremely common protocol used for monitoring network elements. Fault management systems use SNMP to identify abnormal network conditions based on a pre-determined behavior model. The behavior model specifies what variables to poll and what values indicate an alarm condition. This alarm condition can then be displayed on a screen in the Network Operations Center (NOC), emailed to a mailing list, or sent to an alphanumeric pager for resolution.

Performance monitoring systems also use SNMP to collect network traffic statistics. Network elements are polled at a specified interval to collect interface-specific information. This information can then be presented in graphical format to visualize traffic flows in the network.

The drawback of SNMP monitoring for BGP load-balancing is that it does not provide the level of granularity required to generate a load-balanced configuration. SNMP monitoring can provide interface statistics that indicate whether a particular interface is over-utilized. However, traffic information needs to be collected and correlated on an individual-host basis in order to generate a load-balanced configuration. To be useful, the information gained via SNMP must be correlated with network topology information. In many cases, the network topology will not be definitive in assigning traffic information to logical subnets or hosts.

Promiscuous Mode Packet Capture

Promiscuous mode packet capture involves the deployment of probes at points of interest in the network. While this technique is commonly used to diagnose highly localized network issues, there are several drawbacks that preclude its wide-scale deployment.


One significant drawback is the number of devices that would have to be deployed in order to have a complete view of the network. The amount of processing power and disk space required to collect and analyze data in real time is also significant. Additionally, for any global analysis to take place, the data collected at each probe must be transferred to a centralized collection point in some aggregated fashion.

Today, most networks of any size are switched. The implementation of probes would require that SPAN ports be created to mirror traffic. While this type of configuration does not typically have any significant impact on switch performance, it consumes switch ports. Ports that would normally be assigned to carry network traffic must now be allocated for traffic collection, thereby increasing the price per port of every switch in the network.

As networks continue to grow in speed, the ability of inexpensive probes to process the data rate of large WAN links is reduced. It is not uncommon for egress links to be in the OC-12 (622 Mbps) to OC-48 (2.4 Gbps) range. When these links become heavily utilized, the number of packets per second that must be analyzed can quickly overwhelm a server.

Network Flow Collection

The IP Flow Information Export (ipfix) group is an Internet Engineering Task Force (IETF) Working Group whose purpose is to establish a standard for IP flow information export systems. Though there are a number of flow export systems and mechanisms available today, their design and implementation vary by vendor. The lack of a standard makes it difficult to develop flow analysis tools that are universal. Additionally, having multiple export systems and formats hampers the implementation of back-end systems. The IETF Working Group has identified the following goals:


- Define the notion of a standard IP flow. The flow definition will be a practical one, similar to those currently in use by existing non-standard flow information export protocols, which have attempted to achieve similar goals but have not documented their flow definition.
- Devise data encodings that support analysis of IPv4 and IPv6 unicast and multicast flows traversing a network element at packet header level and other levels of aggregation as requested by the network operator, according to the capabilities of the given router implementation.
- Consider the notion of IP flow information export based upon packet sampling.
- Identify and address any security and privacy concerns affecting flow data.
- Determine technology for securing the flow information export data, e.g., TLS.
- Specify the transport mapping for carrying IP flow information, one which is amenable to router and instrumentation implementers, and to deployment.
- Ensure that the flow export system is reliable, in that it will minimize the likelihood of flow data being lost due to resource constraints in the exporter or receiver, and will accurately report such loss if it occurs.

NetFlow is the IP flow export system available in Cisco routers. There are several packet formats and aggregation schemes available within NetFlow. This gives the engineer flexibility in the amount of data that is collected and stored in a NetFlow-based system. NetFlow is the traffic collection method that will be employed in this study.


CHAPTER 2
NETWORK DATA COLLECTION

In order to effectively decide how best to balance traffic, an analysis system must have complete and accurate information regarding both network topology and network traffic patterns. This chapter will discuss why this information is important, how it is stored, and how it is retrieved.

Topology Collection

There are several important considerations when deciding how to implement a topology information collection and storage system. How often does network topology change? What are the benefits of having a fully automated topology collection solution? Is auto-discovery possible for all network element types?

Fully Automated Solution

While fully automated solutions that can accurately auto-discover new elements are definitely attractive, this type of solution adds a tremendous amount of complexity to a system. It would require that the acquisition program be capable of extracting topology information from various vendor platforms. It also requires that the system be able to identify both new elements in the network and new hardware or capacity added to an existing element.

The BGP load-balancing system described in this paper would typically be deployed at the network edge. Routers that peer with external BGP neighbors do not normally have frequent configuration changes made. The types of configuration changes that accommodate network growth from a user perspective would be done at an access router or switch, not at a core router.

In the future, this type of approach may be utilized to provide more extensive capabilities for configuration generation. Extracting more detailed information from the network elements would allow the system to provide additional configuration and standardize existing configurations.

Database Approach

The approach selected for this project was to use a database solution to store and maintain network topology information. A MySQL database was developed that could store the relevant topology information. Given the low frequency of changes, this type of solution provides the information required with minimal complexity or effort. Though this database and its schema were developed for the purpose of this study, most organizations probably already have an existing Operational Support System (OSS) package that could be adapted to provide the necessary information.

Topology Module

The Topology module has been implemented as a Perl module and defines an interface for retrieving network topology information (Holzner 1999). By having this abstraction, we have removed any direct interaction between the analysis modules and the system for collecting and storing topology information. If a more efficient method of collecting topology information becomes available, or an OSS system can be utilized, the system will not require significant changes to incorporate the new technology.

The only input required for the module is the AS number for the network to be analyzed. Using this information, the module extracts the circuits that provide egress bandwidth from this AS. In addition, all valid prefixes for this AS are also retrieved.
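
The sketch below illustrates how an analysis script might drive this interface. The constructor arguments and accessor names (new, get_circuits, get_prefixes) are assumptions for illustration; the thesis specifies only that the module takes an AS number and returns Circuit and Prefix objects.

use strict;
use warnings;
use Topology;   # the thesis's topology-abstraction module

# Hypothetical calling convention: hand the module an AS number and
# receive the egress circuits and valid prefixes for that AS.
my $topo     = Topology->new( as_number => 65000 );
my @circuits = $topo->get_circuits();   # egress links for the AS
my @prefixes = $topo->get_prefixes();   # valid IP prefixes for the AS

printf "AS65000: %d egress circuits, %d prefixes\n",
    scalar @circuits, scalar @prefixes;
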


The module returns the topology information in the form of Circuit and Prefix objects, as described in the following sections.

Prefix Tree Generation

IP prefixes are encapsulated in a Perl module called Prefix.pm. When a Prefix object is instantiated, a tree structure is built recursively. This tree structure is rooted at a node that represents the IP prefix included in the call to the Prefix constructor. The constructor will recursively build Prefix objects for all subnets contained by the root node that have a subnet mask length of 24 or less. Each non-leaf node in the tree has two children: the CIDR blocks that result from splitting the current node into two nodes with a subnet mask 1 bit longer than the current subnet mask (e.g., a /21 prefix is split into two /22 prefixes). Figure 2-1 illustrates the tree structure that is built for the following call: Prefix->new(192.168.96.0, 255.255.252.0).

Figure 2-1 Tree structure generated by Prefix.pm.

This tree structure has several convenient features (Sklower 1991). As load-balancing decisions are made and prefixes must be split to move traffic, the Prefix object tree can be split into two sub-trees with the appropriate mask length. Also, the sub-trees have the traffic information included (Nilsson and Karlsson 1999). It is not necessary to reassign flow data into the new prefixes.
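
For concreteness, a condensed sketch of that recursive construction follows. It is not the thesis's actual Prefix.pm: the field names are invented, and the network is represented here as a 32-bit integer rather than the dotted-quad string accepted by the real constructor.

package Prefix;    # condensed sketch, not the thesis's actual Prefix.pm
use strict;
use warnings;

# Build a node for $network/$masklen and recursively create the two child
# CIDR blocks one bit longer, stopping so that /24 nodes are the leaves.
sub new {
    my ( $class, $network, $masklen ) = @_;    # $network as 32-bit integer
    my $self = bless { net => $network, len => $masklen, traffic => 0 }, $class;
    if ( $masklen < 24 ) {
        my $childlen = $masklen + 1;
        my $offset   = 2**( 32 - $childlen );  # size of the low child block
        $self->{low}  = Prefix->new( $network,           $childlen );
        $self->{high} = Prefix->new( $network + $offset, $childlen );
    }
    return $self;
}

1;
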
Circuit Representation

A Circuit module was developed to provide the ability to assign information at the circuit level. This module is implemented in Perl and is used to create an object representing each egress link in the AS being analyzed.

The Circuit module contains all information unique to a particular circuit. The circuit name, capacity, and load factor are all contained in member variables. The purpose of the load factor will be discussed in Chapter 3. The Circuit module also provides member functions for managing the Prefix objects assigned to the circuit. Methods are available to add new Prefix objects, to return the largest Prefix object assigned to the Circuit, and to get the current load on the Circuit based on the assigned Prefix objects.
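
A minimal sketch of such a Circuit package is shown below. The method bodies and the comparison key in largest_prefix are assumptions; only the member variables and the three kinds of methods are taken from the description above.

package Circuit;    # condensed sketch of the thesis's Circuit module
use strict;
use warnings;
use List::Util qw(reduce);

sub new {
    my ( $class, %args ) = @_;
    # name, capacity, and load_factor are the member variables named above
    return bless { %args, prefixes => [] }, $class;
}

# Assign another Prefix object to this circuit.
sub add_prefix { push @{ $_[0]{prefixes} }, $_[1]; }

# Largest assigned Prefix object, compared here by traffic (assumed key).
sub largest_prefix {
    my ($self) = @_;
    return reduce { $a->{traffic} >= $b->{traffic} ? $a : $b }
        @{ $self->{prefixes} };
}

# Current load: assigned traffic scaled by the circuit's load factor
# (the normalization discussed in Chapter 3).
sub get_load {
    my ($self) = @_;
    my $traffic = 0;
    $traffic += $_->{traffic} for @{ $self->{prefixes} };
    return $traffic * $self->{load_factor};
}

1;
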
NetFlow Data Collection

A collector for traffic data was implemented as part of this project. In order to allow the collection and storage of NetFlow data to be uncoupled from the analysis components, the traffic data is stored in MySQL. This is the same approach that was used for topology collection.

Definition of a Flow

A flow is any communication that can be described using the following tuple: source IP address, source port, destination IP address, and destination port. For a NetFlow-enabled router there are seven key fields that identify a unique flow:

- Source IP address
- Destination IP address
- Source port number
- Destination port number
- Layer 3 protocol type
- ToS byte
- Input logical interface

If a NetFlow-enabled router receives a packet that is not associated with an existing flow, a new entry is created in the NetFlow cache. This will occur even if the flow differs by just one of the above fields. One exception to this process is that NetFlow is only aware of unicast IP flows. When a router receives a multicast packet, it will be forwarded normally but will not generate a new cache entry.

Capturing NetFlow Data

In order to begin capturing NetFlow data, the router must be configured for NetFlow on each interface. If the NetFlow cache is enabled on an interface that contains sub-interfaces, data will be collected on all sub-interfaces. The figure below shows a configuration example for enabling NetFlow on an Ethernet0 interface.

RouterA#config t
Enter configuration commands, one per line.  End with CNTL/Z.
RouterA(config)#interface Ethernet0
RouterA(config-if)#ip route-cache flow
RouterA(config-if)#end

Figure 2-2 Enabling NetFlow on a router interface.

For the purposes of this application, the configuration need only be done on the egress interfaces. This tool is focused on analyzing inter-AS traffic and does not consider traffic that is internal to the AS. If analysis of total network traffic flow were to be conducted, the remaining interfaces would need to be configured.

Once configured, the router will process the first packet of a flow normally. At this time, a new entry that corresponds to the flow will be created in the NetFlow cache.


There exists an entry in the cache for all active flows. The fields in the NetFlow cache will be used to generate flow records for export and analysis.

To verify that the configuration was successful, the command show ip cache flow can be used. This command will display the current status of the NetFlow cache.

Figure 2-3 Output from router verifying NetFlow configuration.

Another useful piece of information made available by configuring NetFlow is the packet size distribution. Calculations for throughput on router interfaces depend on the packet size distribution that a router will see in a production network. This information can be used to develop lab testing scenarios that are consistent with real-world patterns and contain a realistic traffic mix.


Exporting NetFlow Data

Configuring NetFlow switching at the interface will begin the data collection process, but this only creates entries in the cache on the router. Unless the flow-export configuration has been completed, flows will be discarded when entries are expired from the cache. The flow export configuration includes, at a minimum, the destination IP address and destination port of the NetFlow collector. The example below shows how to configure NetFlow to export NetFlow Version 5 flow records including origin AS information. All UDP packets sent from the router to the collector will use the source IP address of the Loopback0 interface. Since a router has a number of interfaces, specifying the source interface for traffic originating at the router simplifies the process of associating the packet with a specific network element.

RouterA#config t
Enter configuration commands, one per line.  End with CNTL/Z.
RouterA(config)#ip flow-export destination 192.168.0.100 5000
RouterA(config)#ip flow-export version 5 origin-as
RouterA(config)#ip flow-export source loopback 0
RouterA(config)#end

Figure 2-4 Configuring NetFlow export on a router interface.

Storage of NetFlow Cache Entries

A router cannot store significant amounts of flow data. Typically, flash cards in routers are only large enough to store the router's operating system and configuration file. Because of this limited storage capacity, the router must transmit flow data to a central location periodically.


In a default configuration, there are four conditions under which the router will expire flows from the NetFlow cache:

- Transport is completed (TCP FIN or RST).
- The flow cache has become full.
- The inactive timer has expired after 15 seconds of traffic inactivity.
- The active timer has expired after 30 minutes of traffic activity.

Two of the above conditions are configurable. Both the active and inactive timeout values can be configured in the router. The values shown above are their default values.

Figure 2-5 Process for transferring data from NetFlow cache to data collection system.

Data Collector

Once the router has been correctly configured to capture and export NetFlow data, packets will begin to be exported. A data collector was implemented in Perl to receive the NetFlow records, extract the data fields, and store that information into MySQL for later analysis.

The data collector binds to a user-defined port and listens for incoming packets. NetFlow does not have an Internet Assigned Numbers Authority (IANA) specified port number. When a NetFlow datagram arrives, the collector extracts and decodes the header. The header includes a count of the number of flow records included in the packet. This count is important since a packet can contain a variable number of flow records, depending on the number of cache entries that expired at or near the same time.
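
The following fragment sketches what that receive-and-decode loop might look like, assuming the port 5000 used in the export example above. The real collector also inserts the decoded fields into MySQL, which is omitted here; the unpack template follows the header layout in Table 2-1.

#!/usr/bin/perl
use strict;
use warnings;
use IO::Socket::INET;

# Bind a UDP socket on the user-defined collector port.
my $sock = IO::Socket::INET->new(
    LocalPort => 5000,
    Proto     => 'udp',
) or die "bind failed: $!";

while ( $sock->recv( my $datagram, 4096 ) ) {
    # Decode the 24-byte NetFlow v5 header (see Table 2-1).
    my ( $version, $count, $sysuptime, $secs, $nsecs,
         $flow_seq, $engine_type, $engine_id, $sampling )
        = unpack 'n n N N N N C C n', $datagram;
    next unless $version == 5;
    print "v$version datagram with $count flow records\n";
    # each of the $count 48-byte flow records follows the header ...
}
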
The schema for the header table follows the header format shown in Table 2-1.

Table 2-1 Fields in a NetFlow header packet.

Bytes     Content            Description
0 to 1    Version            NetFlow export format version number (in this case, the number is 5).
2 to 3    Count              Number of flows exported in this packet (1 to 30).
4 to 7    SysUptime          Number of milliseconds since the routing device was last booted.
8 to 11   unix_secs          Number of seconds since 0000 UTC 1970.
12 to 15  unix_nsecs         Number of residual nanoseconds since 0000 UTC 1970.
16 to 19  flow_sequence      Sequence counter of total flows seen.
20        engine_type        Type of flow switching engine.
21        engine_id          ID number of the flow switching engine.
22 to 23  sampling_interval  Sampling mode and the sampling interval. The first two bits of this field indicate the sampling mode: 00 = no sampling mode is configured; 01 = "packet interval" sampling mode is configured (one of every x packets is selected and placed in the NetFlow cache).

The information gained from decoding the header can be used to extract the flow records and their associated data for storage. The collector has a second subroutine for collecting and decoding the information stored in each flow record. For each record, the fields are extracted and inserted into a flow table in the database. The schema for the flow table mirrors the definition of the NetFlow flow record.


Table 2-2 Fields in a NetFlow flow record.

Bytes     Content    Description
0 to 3    srcaddr    Source IP address.
4 to 7    dstaddr    Destination IP address.
8 to 11   nexthop    IP address of the next hop routing device.
12 to 13  input      SNMP index of the input interface.
14 to 15  output     SNMP index of the output interface.
16 to 19  dPkts      Packets in the flow.
20 to 23  dOctets    Total number of Layer 3 bytes in the flow's packets.
24 to 27  First      SysUptime at start of flow.
28 to 31  Last       SysUptime at the time the last packet of the flow was received.
32 to 33  srcport    TCP/UDP source port number or equivalent.
34 to 35  dstport    TCP/UDP destination port number or equivalent.
36        pad1       Pad 1 is unused (zero) bytes.
37        tcp_flags  Cumulative OR of TCP flags.
38        prot       IP protocol (for example, 6 = TCP, 17 = UDP).
39        tos        IP ToS.
40 to 41  src_as     AS of the source address, either origin or peer.
42 to 43  dst_as     AS of the destination address, either origin or peer.
44        src_mask   Source address prefix mask bits.
45        dst_mask   Destination address prefix mask bits.
46 to 47  pad2       Pad 2 is unused (zero) bytes.
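
Continuing the earlier collector sketch, one 48-byte flow record can be unpacked with a template that mirrors Table 2-2. Here $datagram and the 24-byte header offset come from the header loop shown above, and $i (the record index) is assumed to run from 0 to count-1:

# Extract and decode the $i-th 48-byte flow record (see Table 2-2).
my $record = substr $datagram, 24 + 48 * $i, 48;
my ( $srcaddr, $dstaddr, $nexthop, $input, $output,
     $dpkts, $doctets, $first, $last, $srcport, $dstport,
     $pad1, $tcp_flags, $prot, $tos, $src_as, $dst_as,
     $src_mask, $dst_mask, $pad2 )
    = unpack 'N3 n2 N4 n2 C4 n2 C2 n', $record;

# Convert the packed addresses to dotted-quad for the database insert.
my $src = join '.', unpack 'C4', pack 'N', $srcaddr;
my $dst = join '.', unpack 'C4', pack 'N', $dstaddr;
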


Traffic Assignment

Traffic data collected via NetFlow has information at the individual host level. Before any network analysis can take place, the traffic data must be aggregated at the prefix level. These prefixes can then be assigned to circuits in a load-balanced fashion.

The tree design of the Prefix objects was discussed earlier in this chapter. During traffic assignment, the aggregate traffic information contained in each flow record is assigned to the Prefix object that contains the host's IP address. The root node of the Prefix tree is the largest subnet that contains the host address. The internal behavior of the Prefix object is as follows:

1. Check if the host belongs to either child of the current Prefix object.
2. If so, assign the aggregate traffic information to the child.
3. If not, assign the aggregate traffic information to the current node.

Figure 2-6 Process of assigning traffic to a prefix object.

Since a Prefix tree is a balanced tree, the host address will always belong to one of the child nodes unless the current Prefix object node is a leaf node. This approach ensures that all the traffic information propagates to and resides in the leaf nodes.
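
A sketch of this three-step procedure as a recursive Prefix method follows; contains() is an assumed helper that tests whether a host address falls inside a node's CIDR block.

# Descend into whichever child contains the host, falling through to the
# current node only at a leaf (steps 1-3 above).
sub add_traffic {
    my ( $self, $host, $octets ) = @_;
    for my $child ( @{$self}{qw(low high)} ) {
        if ( defined $child && $child->contains($host) ) {
            return $child->add_traffic( $host, $octets );   # step 2
        }
    }
    $self->{traffic} += $octets;                            # step 3: leaf
    return $self;
}
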


CHAPTER 3
DATA ANALYSIS

Network data analysis is the key component in developing a system to load-balance EBGP peers. The previous chapters have been concerned with collecting both the topology and traffic information necessary to perform an analysis. This chapter will discuss the analysis methodology employed by this tool.

Network Data Analysis

The goal of the network analysis module is to assign IP prefixes to egress circuits in such a way that inbound traffic is balanced. The ideal balance condition would be an assignment in which the percent utilization on each circuit is within a predefined tolerance. A simple solution to this type of problem would be to break the prefixes down as small as possible to provide more granularity, thereby making it easier to reach a balanced state. However, an additional constraint in the BGP load-balancing problem is that announcing the minimum number of BGP prefixes into the global Internet routing table is considered good routing policy.

The global Internet routing table is a representation of all IP space being advertised across the world. In order to ensure that the size of the table does not grow at the same pace as the Internet itself, network operators need to ensure that they contribute the smallest number of prefixes possible. As the table grows, the amount of routing information being exchanged increases. These increases in both the total size of the table and the frequency of updates impose increasing CPU and memory requirements on Internet routers.


The Network Analysis module performs load-balancing on a set of prefixes and circuits provided as parameters. This module is written in Perl and provides only an analyze method.

Analysis Methodology Overview

When developing an analysis methodology, there are typically two primary considerations: accuracy and computational complexity. The design phase weighs both requirements and develops a solution that represents a balance between the two that is appropriate for the application (Sahni 2000).

For the BGP load-balancing problem, the accuracy requirement is difficult to quantify. Any router configuration developed by the system will have a measure of accuracy associated with the analysis period selected. If another analysis period is used, the accuracy of the configuration will change. Since the traffic characteristics do not experience dramatic changes in magnitude over normal analysis periods (i.e., the change in maximum load over a 24-hour period is reasonably small), a solution that is reasonably accurate should be sufficient.

The problem of assigning traffic to circuits can be considered a form of the bin-packing problem. One distinction between the classical bin-packing problem and the BGP load-balancing problem is that the size of the objects (IP prefixes) being placed in the bins (circuits) can be changed. The constraint is that the splitting of prefixes can only be done along CIDR block boundaries.

The approach used in this system is a two-pass approach. The goal of the first pass is to distribute the traffic across the available circuits. This provides a start state for the second pass. The second pass analysis refines the load-balanced condition and provides a final state that is within the defined tolerance. The final state is used to generate new router configuration.

First pass analysis

The first pass analysis treats the BGP load-balancing problem as if it were simply a bin-packing problem. No modifications to the Prefix objects are considered during this stage.

The bin-packing problem is known to be NP-hard (Horowitz et al. 1998). To approach this type of problem, an approximation algorithm can be applied. There are four common approximation algorithms: First Fit (FF), Best Fit (BF), First Fit Decreasing (FFD), and Best Fit Decreasing (BFD).

The First Fit algorithm considers the objects in the order in which they are presented. The bins are also considered in the order in which they are initially presented. To pack bins, each object is taken in order and placed in the first bin in which it fits. In the case of Best Fit, the initial conditions are the same as for First Fit. Best Fit differs in that each object in turn is packed into the bin that has the least unused capacity. The First Fit Decreasing approach reorders the objects such that they are in decreasing order by size. Once the objects are re-ordered, a FF approach is used to pack objects. Best Fit Decreasing also reorders the objects such that they are in decreasing order by size. After re-ordering, the objects are packed using a BF approach.

The algorithm selected for this application was Best Fit Decreasing (BFD). The first step in implementing a BFD solution to the BGP load-balancing problem is to sort the Prefix objects by the amount of traffic generated by each prefix. This step orders the Prefix objects for analysis and is not repeated. Next, Circuit objects are sorted in increasing order by the amount of load currently assigned to the Circuit. The assignment of Prefix objects is done by iteration: a Prefix is assigned to the Circuit with the lowest load, and after each assignment the Circuits are re-sorted by their currently assigned load. This process continues until all Prefix objects have been assigned to a Circuit.
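
The first pass reduces to a short loop. The sketch below is a re-statement of the procedure just described, not the thesis's actual code; it assumes the Circuit and Prefix interfaces outlined in Chapter 2.

# First-pass BFD: sort Prefix objects by traffic once, then repeatedly
# hand the next-largest prefix to the least-loaded circuit. get_load()
# applies the circuit's load factor, so mixed capacities compare fairly.
sub first_pass {
    my ( $prefixes, $circuits ) = @_;    # array refs of objects
    my @by_size = sort { $b->{traffic} <=> $a->{traffic} } @$prefixes;
    for my $prefix (@by_size) {
        my ($least) = sort { $a->get_load <=> $b->get_load } @$circuits;
        $least->add_prefix($prefix);
    }
}
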
No consideration is given during the first pass as to whether adding a Prefix object to a Circuit will cause the Circuit to become overloaded. It is assumed that the second pass analysis must result in a load-balanced configuration; if this were not true, then bandwidth must have been exhausted prior to the analysis. While it is possible to overload a circuit during the first pass, the second pass can break Prefix objects down to a sufficient level of granularity that a load-balanced configuration is possible.

Second pass analysis

The second pass analysis starts with all traffic assigned to an egress circuit. The challenge in this phase is to determine how best to re-assign some portion of the traffic so that the circuits are closer to the ideal condition of being perfectly load-balanced. Solving this problem requires answering two questions: Which traffic should be moved? Where should the traffic be re-assigned?

One option considered involved identifying the most heavily loaded circuit, removing some fraction of the load, and re-assigning that traffic to the least heavily loaded circuit. The methodology chosen for this implementation is instead to re-utilize the BFD algorithm. The underlying assumption is that by improving the initial conditions of the BFD algorithm, a better solution will be found. Given that the problem set size is relatively small for BGP configuration, performing multiple rounds of the BFD algorithm is reasonable.

In the second pass analysis, the circuit with the highest load is identified. The prefix with the most traffic is then removed from the circuit and split along the next CIDR boundary. This is the next step in granularity for traffic re-distribution, and it increases the number of BGP prefix announcements by only one. Once the prefix has been split, all prefixes are removed from all circuits. The new set of prefixes is now one larger than in the previous BFD run, and the circuits have no prefixes assigned. This creates a new set of initial conditions for the next round of BFD analysis. If the load on each circuit is not within tolerance of the mean load across all circuits, another round of second pass analysis is performed. With each iteration, the number of prefixes that will be announced into BGP increases by one.

No consideration is given to whether splitting the largest prefix from the most heavily loaded circuit will improve the balanced condition. This method is simple to implement and assumes that traffic is fairly well distributed throughout the IP address ranges being evaluated.
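
A sketch of this split-and-repeat loop follows. The split and clear_prefixes method names are assumptions, and first_pass refers to the first-pass sketch shown earlier; the tolerance test against the mean load follows the description above.

# Second pass: while any circuit deviates from the mean load by more than
# the tolerance, split the busiest circuit's largest prefix along the next
# CIDR boundary and re-run BFD over the enlarged prefix set.
sub second_pass {
    my ( $prefixes, $circuits, $tolerance ) = @_;
    while (1) {
        my @loads = map { $_->get_load } @$circuits;
        my $mean  = 0;
        $mean += $_ / @loads for @loads;
        last unless grep { abs( $_ - $mean ) > $tolerance * $mean } @loads;

        my ($busiest) = sort { $b->get_load <=> $a->get_load } @$circuits;
        my $victim    = $busiest->largest_prefix;
        @$prefixes    = grep { $_ != $victim } @$prefixes;
        push @$prefixes, $victim->split;     # the two one-bit-longer children

        $_->clear_prefixes for @$circuits;   # assumed reset method
        first_pass( $prefixes, $circuits );
    }
}
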
Balanced Traffic Condition

The definition of load in this paper is a measure of the aggregate of all traffic associated with a prefix throughout the duration of the analysis period. It does not indicate the maximum utilization experienced by the circuit during the analysis period. The most obvious approach to determining to what degree traffic is balanced across multiple circuits is to compare the percent utilization on each circuit at some point in time (e.g., 60% utilization on Circuit A and 58% utilization on Circuit B = well-balanced).

Unfortunately, this type of comparison requires data at a fairly high (or at least uniform) sampling rate. When NetFlow data is used for analysis, traffic data is not sampled at a particular frequency. Rather, the data is exported as it occurs in the network. The frequency at which data arrives at the collector is irregular, and the period of time represented by a NetFlow record varies. The conditions for flow export were discussed in Chapter 2. Since not all flows have the same lifetime, each NetFlow record represents data from a slightly different analysis period. It is possible to have a large number of small NetFlow records for a short telnet session, or a single large flow record for a long-lived FTP session on a lightly loaded router. Possible techniques for extracting additional information from this type of data source are discussed in Chapter 5.

To overcome the sampling limitation, a load factor is applied to each circuit. This load factor serves to normalize the capacity of each circuit to the capacity of the smallest egress circuit. When load factors are used, the load on each circuit during the analysis period can be used for comparison. This allows circuits of varying capacity (e.g., 2 DS-3s and 1 OC-3) to be load-balanced.

The load factor technique has been implemented in the Circuit module. Raw traffic data is added to a Prefix object by the Traffic module. The Network Analysis module uses a custom sort routine in order to sort circuits for each iteration of the BFD algorithm. The custom sort routine calls the get_load() method on each Circuit. This method returns the amount of traffic assigned to the circuit times the load factor for the Circuit.
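
As a worked example, with one consistent choice of normalization (the thesis does not spell out the arithmetic): with two DS-3s (45 Mbps each) and one OC-3 (155 Mbps), normalizing to the smallest circuit gives the DS-3s a load factor of 1.00 and the OC-3 a load factor of 45/155, or about 0.29. Scaling the OC-3's raw traffic down by this factor means that equal get_load() values correspond to roughly equal percent utilization, so the BFD sort can compare all three circuits directly.
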


Configuration Generator

The Configuration Generator module was developed to provide a solution for implementing the results of the analysis in a network. This module is implemented in Perl and provides an interface to accept the results of the Network Analysis module and generate the configuration files necessary to correctly configure a router.

Benefits of Code Generation

Code generation is a technique in which programs are used to write or develop other programs. In this case, the BGP load-balancing program generates code (configuration files) for a router. There are several benefits to code generation, including the reduction of human errors, standardization, and efficiency.

Though network engineers are both knowledgeable and professional, they are still human. By developing a system that performs accurate and repeatable analysis of network data, the network engineer can focus on other tasks that require human intervention.

Regardless of the size of the network, standardization is a critical element of a successful operation. By creating standard configurations and processes, networks can scale to a very large number of elements managed by a reasonably small staff. One key to the operational scalability of large networks is documenting processes, either in standard written procedures or by developing systems that establish how a particular function should work. The Configuration Generator module encompasses what a standard configuration should look like. Any configuration generated by this system will be in the correct, standard format.

A goal of any system should be to improve the efficiency of the task it implements. By automating the analysis of network traffic, BGP load-balancing can be done more accurately and efficiently.

It is now possible to schedule the analysis to occur at regular intervals and store the results. This allows an engineer to review the results and choose the best solution to implement. Analyzing the network data by hand would be an arduous task for an engineer to perform at any reasonable frequency. Additionally, without the traffic information being available, the typical solution would involve only an educated guess by an engineer familiar with the network.

Implementation Process

Once the analysis has been completed, an engineer must implement the configuration. While it would be possible to extend the system to implement the configuration in a live network automatically, that functionality is beyond the scope of this work.

The first requirement for announcing prefixes into BGP is to configure network statements that include all address space. To avoid issues with IGP synchronization, null routes are also configured for each network statement. Without additional routing policy, the network statements and null routes would generate BGP updates for the entire address space, and all updates announced would have the same metric.

The addition of routing policy to effect load balancing is accomplished via route-maps. Route-maps use an if-then-else type construct to allow modifications to be made to the attributes of a BGP announcement. In the case of load balancing, the route-map has a term that matches all IP space within a prefix. For each match, the MED value is changed to prefer the circuit or not, depending on which prefix list is matched. If the IP space falls in the prefix list for the circuit, the MED value is set to 50 and traffic from that range will prefer the circuit. For all other space, the MED value is set to 200.

By using a default MED value, any IP prefix that does not have routing policy applied will still be advertised. In the case of the failure of an egress link, the prefixes that were preferred on the link will continue to route across other links at a higher MED value. Without this catchall, the stricter routing policy would create an outage for the preferred blocks when an egress link goes down.

Figure 3-1 Process by which router applies BGP routing policy via route-maps and prefix-lists.


router bgp 65000
 no synchronization
 bgp log-neighbor-changes
 network 10.0.0.0 mask 255.255.248.0
 network 10.0.8.0 mask 255.255.248.0
 network 10.0.16.0 mask 255.255.248.0
 network 10.0.24.0 mask 255.255.248.0
 network 192.168.80.0 mask 255.255.248.0
 network 192.168.96.0 mask 255.255.248.0
 network 192.168.128.0 mask 255.255.248.0
 network 192.168.160.0 mask 255.255.248.0
 neighbor 192.168.1.50 remote-as 1234
 neighbor 192.168.1.50 description RouterA
 neighbor 192.168.1.50 route-map RM-RouterA out
 neighbor 192.168.1.100 remote-as 5678
 neighbor 192.168.1.100 description RouterB
 neighbor 192.168.1.100 route-map RM-RouterB out

Figure 3-2 Basic BGP configuration without routing policy.

ip prefix-list IP-ALL seq 5 permit 10.0.0.0/21
ip prefix-list IP-ALL seq 10 permit 10.0.8.0/21
ip prefix-list IP-ALL seq 15 permit 10.0.16.0/21
ip prefix-list IP-ALL seq 20 permit 10.0.24.0/21
ip prefix-list IP-ALL seq 25 permit 192.168.80.0/21
ip prefix-list IP-ALL seq 30 permit 192.168.96.0/21
ip prefix-list IP-ALL seq 35 permit 192.168.128.0/21
ip prefix-list IP-ALL seq 40 permit 192.168.160.0/21
ip prefix-list RouterA seq 5 permit 10.0.0.0/21
ip prefix-list RouterA seq 10 permit 10.0.8.0/21
ip prefix-list RouterA seq 15 permit 10.0.16.0/21
ip prefix-list RouterA seq 20 permit 10.0.24.0/21
ip prefix-list RouterB seq 5 permit 192.168.80.0/21
ip prefix-list RouterB seq 10 permit 192.168.96.0/21
ip prefix-list RouterB seq 15 permit 192.168.128.0/21
ip prefix-list RouterB seq 20 permit 192.168.160.0/21

Figure 3-3 IP prefix list configuration to identify groups of prefixes that will have routing policy applied.


route-map RM-RouterA permit 10
 match ip address prefix-list RouterA
 set metric 50
route-map RM-RouterA permit 20
 match ip address prefix-list IP-ALL
 set metric 200
route-map RM-RouterB permit 10
 match ip address prefix-list RouterB
 set metric 50
route-map RM-RouterB permit 20
 match ip address prefix-list IP-ALL
 set metric 200

Figure 3-4 Route-maps use prefix-lists to apply routing policy to outbound BGP announcements.

The system generates a route-map and prefix-list for each BGP neighbor. Another prefix-list, called IP-ALL, is also generated. This prefix-list includes all valid address space. It is used to ensure that all address space is advertised out of every circuit.


CHAPTER 4
SYSTEM RESULTS

This chapter discusses the testing that was conducted to validate the BGP load-balancing system. The test setup and procedures are presented. Results and observations from the various test cases are included. Finally, several topics for further investigation are suggested.

Testbed Configuration

The testbed used to evaluate the system was built to mimic a typical access network. In order to understand the lab configuration, it is important to understand how a typical access network is configured. Figure 4-1 shows a typical access network configuration.

In a typical network, end users are connected via an access router. This access router could be a PPP aggregator in a DSL network, a Cable Modem Termination System (CMTS) in a cable modem network, or an access point in a wireless network. This layer is where per-subscriber configuration is done. This configuration can include subscriber authentication, rate-limiting, and IP address assignment.

In order to simulate end users in the lab setup, IP pool interfaces were configured in the access router. One interface was configured for each /24 subnet used in the testing. These interfaces would be the default gateway that end users would be assigned in a production network.


35 Access RouterCore Router End Users Internet Figure 4-1 A typical access network configuration. Eight subnets were utilized for the testing. The subnets are contained in Table 4-1. These subnets are initially configured as IP prefixes with a 21 bit subnet mask. The subnets are only split if the Network Analysis module identifies the subnet as a large portion of the traffic. Traffic Generation During testing of the traffic collection module, the test setup shown in Figure 4-2 was used. The ping command was used to send ICMP traffic into the network from a Unix workstation. Command line options allow a user to specify both the number of


Table 4-1 Subnets that were utilized during lab testing.

IP prefix      Subnet mask
10.0.0.0       255.255.248.0
10.0.8.0       255.255.248.0
10.0.16.0      255.255.248.0
10.0.24.0      255.255.248.0
192.168.80.0   255.255.248.0
192.168.96.0   255.255.248.0
192.168.128.0  255.255.248.0
192.168.160.0  255.255.248.0

Figure 4-2 Lab setup used to test the BGP load-balancing tool. (The equipment shown includes two Cisco 7206 routers, a Cisco 7500-series router, a Redback SMS 1800, and a traffic generator.)


This method was used to test the traffic collection module and its ability to decode and store NetFlow records. Additional testing was conducted by generating SQL code to populate the database with traffic information directly, which allowed test cases to be created and executed without utilizing the traffic collection module (a sketch of this seeding approach appears after Table 4-2). There was no consideration for the type or distribution of traffic in this study: the analysis module balances based on aggregate load values, so the specific type or duration of each flow has little meaning in this approach.

Test Cases

Several test cases were developed to test the ability of this system to generate load-balanced BGP configurations. The cases considered include both well-balanced and unbalanced traffic conditions, exercising the system under normal network conditions (nearly balanced) as well as worst-case conditions (significantly unbalanced). Test case #4 also included load balancing across circuits with different capacities. The cases considered in this study are shown in Table 4-2, and each is explained in further detail in the following sections.

Table 4-2 Description of test cases used during validation testing.

Case Id  Test Case
1        Even distribution of traffic across all prefixes
2        Even traffic in 2 /24 prefixes that fall within the same /21 prefix
3        Random distribution of traffic across all prefixes
4        Random distribution of traffic across all prefixes with unequal size circuits
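As referenced above, direct seeding can be done with a few lines of Perl DBI. The table and column names below are assumptions for illustration, not the schema actually used by the system:

use strict;
use warnings;
use DBI;

# Connection parameters and schema here are placeholders.
my $dbh = DBI->connect('DBI:mysql:database=netflow;host=localhost',
                       'user', 'password', { RaiseError => 1 });
my $sth = $dbh->prepare(
    'INSERT INTO flows (src_addr, dst_addr, bytes) VALUES (?, ?, ?)');

# Test-case-#1 style: the same byte count attributed to several prefixes.
for my $net (qw(10.0.0 10.0.8 10.0.16 10.0.24)) {
    $sth->execute("$net.10", '172.16.1.1', 400_000);
}
$dbh->disconnect;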


System Output

The system generates both summary output from the Network Analysis module and the router configuration needed to implement the results.

Test Case #1

This test case was used to provide a baseline analysis. Traffic was uniformly distributed across all subnets. Since there was an even number of subnets and the traffic was uniformly distributed, the algorithm should be able to develop a perfectly balanced solution. The results shown in Table 4-3 indicate that a perfectly balanced condition was generated by the system.

Table 4-3 Per circuit loading results from test case #1.

Circuit ID  Traffic (KB)  Capacity  Load Factor  Load
PL-Lab-01   1600          2         1.00         1600.00
PL-Lab-02   1600          2         1.00         1600.00

Table 4-4 Per prefix loading results from test case #1.

Prefix            Traffic (KB)
10.0.0.0/21       400
10.0.8.0/21       400
10.0.16.0/21      400
10.0.24.0/21      400
192.168.80.0/21   400
192.168.96.0/21   400
192.168.128.0/21  400
192.168.160.0/21  400


Test Case #2

The purpose of this test case was to evaluate how well the system performed with a highly skewed traffic distribution. The case was constructed so that two prefixes carried identical amounts of traffic while all other prefixes were idle. These two subnets were chosen such that they fell within the same /23 supernet. With this arrangement, a perfectly balanced configuration was possible but required several iterations of the algorithm to achieve. Table 4-5 shows that the ideal balanced condition was generated, using the 11 prefixes listed in Table 4-6; this indicates that there were 4 iterations of the algorithm. The skewed distribution of traffic is visible in Table 4-6.

Table 4-5 Per circuit loading results from test case #2.

Circuit ID  Traffic (KB)  Capacity  Load Factor  Load
PL-Lab-01   50            2         1.00         50.00
PL-Lab-02   50            2         1.00         50.00

Table 4-6 Per prefix loading results from test case #2.

Prefix            Traffic (KB)
10.0.0.0/24       50
10.0.1.0/24       50
10.0.2.0/23       0
10.0.4.0/22       0
10.0.8.0/21       0
10.0.16.0/21      0
10.0.24.0/21      0
192.168.80.0/21   0
192.168.96.0/21   0
192.168.128.0/21  0
192.168.160.0/21  0
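The splitting that produces this prefix set follows CIDR boundaries, halving one prefix per iteration. A standalone sketch of the split operation (independent of the Prefix module described in Chapter 2, whose tree structure provides this for free) is:

use strict;
use warnings;

# Split a prefix into its two CIDR children, e.g. 10.0.0.0/21 into
# 10.0.0.0/22 and 10.0.4.0/22.
sub split_prefix {
    my ($net, $len) = @_;
    my $addr = unpack 'N', pack 'C4', split /\./, $net;
    my $half = 2 ** (31 - $len);    # addresses in each child block
    return map { join('.', unpack 'C4', pack 'N', $_) . '/' . ($len + 1) }
               $addr, $addr + $half;
}

print join(' ', split_prefix('10.0.0.0', 21)), "\n";  # 10.0.0.0/22 10.0.4.0/22

Applying it three times down one branch (/21 to /22, /22 to /23, /23 to /24) turns the original eight /21s into exactly the eleven-prefix set of Table 4-6.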


Test Case #3

The traffic distributions in the previous test cases were not consistent with normal traffic patterns. The intent of test case #3 was to closely mirror a traffic distribution that might be seen in a live network. Traffic data was generated and distributed across all prefixes, so no idle prefixes are contained in this test case. The tolerance used in this test case was 10%. Table 4-7 shows the resulting circuit loads after analyzing the data: the algorithm was able to achieve a load-balanced condition without splitting prefixes.

Table 4-7 Per circuit loading results from test case #3.

Circuit ID  Traffic (KB)  Capacity  Load Factor  Load
PL-Lab-01   1411.823      2         1.00         1411.82
PL-Lab-02   1354.287      2         1.00         1354.29

Table 4-8 Per prefix loading results from test case #3.

Prefix            Traffic (KB)
10.0.0.0/21       390.566
10.0.8.0/21       360.717
10.0.16.0/21      267.394
10.0.24.0/21      289.327
192.168.80.0/21   180.033
192.168.96.0/21   364.333
192.168.128.0/21  422.971
192.168.160.0/21  490.769
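The tolerance test itself can be expressed compactly. The sketch below assumes the balance condition compares the spread of circuit loads against their mean; the exact comparison used by the Network Analysis module may differ:

use strict;
use warnings;

# True when the largest and smallest circuit loads differ by no more
# than the given fraction of the mean load.
sub balanced {
    my ($tolerance, @loads) = @_;
    my ($min, $max) = (sort { $a <=> $b } @loads)[0, -1];
    my $mean = 0;
    $mean += $_ / @loads for @loads;
    return ($max - $min) / $mean <= $tolerance;
}

print balanced(0.10, 1411.82, 1354.29) ? "balanced\n" : "not balanced\n";

With the Table 4-7 loads, the spread is about 4% of the mean, comfortably inside the 10% tolerance.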


Test Case #4

A slightly more complicated scenario is presented in test case #4, which used randomly distributed traffic across all subnets. The tolerance for determining the balanced condition was lowered from 10% to 5%. Additionally, circuit #2 has twice the capacity of circuit #1. This validates that balancing on load, rather than on raw traffic volume, works correctly. Note in Table 4-10 that the 192.168.80.0/21 prefix was split into two /22 prefixes to reach balance.

Table 4-9 Per circuit loading results from test case #4.

Circuit ID  Traffic (KB)  Capacity  Load Factor  Load
PL-Lab-01   1077.613      2         1.00         1077.61
PL-Lab-02   2091.995      4         0.50         1046.00

Table 4-10 Per prefix loading results from test case #4.

Prefix            Traffic (KB)
10.0.0.0/21       419.265
10.0.8.0/21       260.42
10.0.16.0/21      383.099
10.0.24.0/21      397.928
192.168.80.0/22   296.812
192.168.84.0/22   197.234
192.168.96.0/21   404.345
192.168.128.0/21  358.856
192.168.160.0/21  451.649
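As a worked check of the capacity normalization (the load-factor formula is inferred from Tables 4-3 through 4-9): PL-Lab-02 has twice the capacity of PL-Lab-01, so its load factor is halved to 0.50 and its load is 2091.995 KB x 0.50 = 1046.00. That load differs from PL-Lab-01's 1077.61 by roughly 3%, inside the 5% tolerance, even though PL-Lab-02 carries nearly twice the raw traffic.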


Conclusions

The test results discussed in this section demonstrate that the system for generating load-balanced BGP configurations works correctly. Scenarios that required prefix splitting were included to test the algorithm's ability to generate a new set of initial conditions that could be used in the next iteration to develop a better solution. The test cases that used random data spread across all prefixes are a more accurate representation of real-world traffic, and in these cases the system was able to achieve a load-balanced condition under both 5% and 10% tolerance settings. Given the variations in traffic levels in live networks, these thresholds are reasonable. Regardless of the complications added to each test case, the system was able to achieve the desired result: balanced BGP configurations can be developed in an automated fashion by analyzing traffic data and network topology.


CHAPTER 5
SUMMARY AND FUTURE WORK

The system developed in this study is a proof-of-concept implementation showing that load-balanced configurations can be developed through network analysis. This chapter discusses improvements to the system, as well as opportunities for enhancement that could be realized by implementing this type of system.

System Improvement

During the development of this system, several issues were discovered that might cause it to provide less-than-ideal results. The following sections outline these issues and propose solutions to the underlying problems.

Instantaneous Data Rate Load-Balancing

The load-balancing done in this system is based on aggregate load during an analysis period. The goal is not to ensure that the instantaneous data rate on each circuit is balanced at some peak time; rather, the system ensures that the total flow of traffic out of each circuit during the entire analysis period is equal. The calculation is a balance of volume rather than of rate. A volumetric method is appropriate because network traffic patterns are regular on a day-to-day basis: peak traffic levels tend to increase in a linear fashion over time, and the patterns are somewhat predictable. These patterns are illustrated in Figure 5-1.


While the total volume of traffic is fairly predictable, which subnets will originate that traffic is not. Because end users are assigned IP addresses out of one of several pools, it is common for dynamic users to change IP addresses.

Figure 5-1 Total volume of traffic is predictable on a day-to-day basis and grows linearly over time.

In order to balance traffic on an instantaneous data rate basis, the NetFlow data must be discretized, with data sampled on a small interval. Each flow record contains an aggregate amount of data transferred during some period of time, and the period represented by each record is different. Each datum from a flow record is assigned to the timeslot during which it was generated. Since the information in a flow record spans many timeslots, some assumptions must be made about the distribution of data during the flow record's interval. In order to make this type of assignment, several challenges must be overcome:

* Flow distribution models must be developed for each protocol present in the network.
* The protocol contained in the flow must be determined from the information contained in the NetFlow record (i.e., non-standard ports would skew the results).
* The sampling interval must be smaller than most of the flow records.
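As a baseline for comparison, the simplest possible assignment just assumes a uniform byte distribution across each flow's interval; the per-protocol models listed above would replace that uniform assumption. A minimal sketch, with an assumed 60-second sampling interval:

use strict;
use warnings;

my $slot_len = 60;    # seconds per timeslot (an assumed sampling interval)

# Spread a flow's byte count evenly over every timeslot it touches.
sub assign_flow {
    my ($usage, $start, $end, $bytes) = @_;    # times in epoch seconds
    my ($first, $last) = (int($start / $slot_len), int($end / $slot_len));
    my $share = $bytes / ($last - $first + 1);
    $usage->{$_} += $share for $first .. $last;
}

my %usage;
assign_flow(\%usage, 0, 179, 3_000);           # 3 KB flow spanning 3 slots
printf "slot %d: %.0f bytes\n", $_, $usage{$_}
    for sort { $a <=> $b } keys %usage;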


Although not impossible to solve, these problems are beyond the scope of this thesis. Considering the traffic characteristics presented earlier in this section and the difficulty of extracting additional information from the flow records, the system described in this paper is a good trade-off between accuracy and complexity.

Low Utilization Prefixes

Because the load-balancing system makes prefix assignments based on load, the presence of low- or zero-utilization prefixes has little impact on these assignments. This can lead to router configurations where the load is perfectly balanced but the number of IP addresses assigned to each egress link is drastically different. This was exhibited in the configurations generated by test case #2, shown in Figure 5-2: the prefix lists are balanced based on current load, but the number of addresses on each circuit is not. If left unchecked, this issue could leave the configuration significantly out of balance as these subnets become utilized. To alleviate this problem, another analysis pass could be performed using the load-balanced configuration as the initial condition. This pass would look for low- or zero-utilization prefixes and balance them across the links based on the number of IP addresses in each prefix. Afterward, the configuration would represent a current load-balanced state while also balancing the number of addresses, thereby extending the period of time over which the configuration maintains load balance.
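A sketch of that second pass follows. The greedy, largest-first assignment mirrors the best-fit-decreasing flavor of the main algorithm, but the routine itself and its plain-hash circuit representation are illustrative assumptions, not the system's Circuit and Prefix objects:

use strict;
use warnings;

# Assign idle prefixes to circuits by address count: biggest prefix
# first, always onto the circuit holding the fewest addresses so far.
sub assign_idle {
    my ($circuits, @idle) = @_;               # idle: [ cidr, mask_length ]
    for my $p (sort { $a->[1] <=> $b->[1] } @idle) {   # shortest mask first
        my ($least) = sort { $a->{addrs} <=> $b->{addrs} } @$circuits;
        push @{ $least->{prefixes} }, $p->[0];
        $least->{addrs} += 2 ** (32 - $p->[1]);
    }
}

my @circuits = map { { name => $_, addrs => 0, prefixes => [] } }
               qw(PL-Lab-01 PL-Lab-02);
assign_idle(\@circuits, ['10.0.8.0/21', 21], ['10.0.16.0/21', 21],
            ['10.0.4.0/22', 22], ['10.0.2.0/23', 23]);
printf "%-9s %6d addresses\n", $_->{name}, $_->{addrs} for @circuits;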


ip prefix-list IP-PL-Lab-01 description Networks preferred on Circuit ID PL-Lab-01
ip prefix-list IP-PL-Lab-01 seq 5 permit 10.0.1.0/24
ip prefix-list IP-PL-Lab-01 seq 10 permit 10.0.8.0/21
ip prefix-list IP-PL-Lab-01 seq 15 permit 10.0.16.0/21
ip prefix-list IP-PL-Lab-01 seq 20 permit 10.0.24.0/21
ip prefix-list IP-PL-Lab-01 seq 25 permit 192.168.80.0/21
ip prefix-list IP-PL-Lab-01 seq 30 permit 192.168.96.0/21
ip prefix-list IP-PL-Lab-01 seq 35 permit 192.168.128.0/21
ip prefix-list IP-PL-Lab-01 seq 40 permit 192.168.160.0/21
ip prefix-list IP-PL-Lab-01 seq 45 permit 10.0.4.0/22
ip prefix-list IP-PL-Lab-01 seq 50 permit 10.0.2.0/23

ip prefix-list IP-PL-Lab-02 description Networks preferred on Circuit ID PL-Lab-02
ip prefix-list IP-PL-Lab-02 seq 5 permit 10.0.0.0/24

Figure 5-2 System-generated prefix lists are balanced based on load, but not necessarily balanced on number of addresses.

Minimizing Prefix Announcements

Another issue with the current tool is that the number of prefixes is expanded until a load-balanced condition is reached. After several iterations, this can produce a configuration in which two subnets belonging to the same supernet are assigned to the same circuit. In that case, the two subnets could be collapsed into a single prefix announcement for the supernet. Scanning each circuit after the analysis and looking for adjacent prefixes would resolve this issue. The technique would be simple to implement and would have O(n) complexity; a sketch follows.
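The sketch below performs a single left-to-right scan over a circuit's sorted prefixes, which is the O(n) pass noted above; a repeated pass would be needed to collapse newly formed supernets further. The helper routines and data layout are assumptions for illustration:

use strict;
use warnings;

sub ip2int { unpack 'N', pack 'C4', split /\./, shift }
sub int2ip { join '.', unpack 'C4', pack 'N', shift }

# One O(n) pass: merge sibling prefixes (same length, contiguous, and
# aligned on the supernet boundary) into their common supernet.
sub collapse {
    my @p = sort { $a->[0] <=> $b->[0] } @_;   # [ integer address, length ]
    my @out;
    for my $cur (@p) {
        if (@out) {
            my $prev = $out[-1];
            my $size = 2 ** (32 - $prev->[1]);
            if ($prev->[1] == $cur->[1]
                && $prev->[0] + $size == $cur->[0]
                && $prev->[0] % (2 * $size) == 0) {
                $out[-1] = [ $prev->[0], $prev->[1] - 1 ];
                next;
            }
        }
        push @out, $cur;
    }
    return map { int2ip($_->[0]) . '/' . $_->[1] } @out;
}

print join(' ', collapse([ip2int('10.0.0.0'), 24],
                         [ip2int('10.0.1.0'), 24])), "\n";  # 10.0.0.0/23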


Cost Factor in Load Balancing

Including the cost of bandwidth in the load-balancing calculation is a potential area for exploration. This study had no preference as to which egress circuit should be utilized first. Dual-homed configurations increase the redundancy and reliability of BGP peering: a network operator obtains egress links from multiple providers to ensure protection against a failure in any one service provider's network. To reduce the cost of this type of configuration, the backup link can be placed on usage-based or burstable billing. In usage-based billing, the customer pays according to how much bandwidth they use. In a burstable billing scenario, the customer pays for a committed amount of bandwidth and pays a higher rate for any capacity used beyond that commitment. These factors could be included in the algorithm by modifying how the circuits are sorted during the iterations of BFD. Rather than sorting on load alone, the sort method would consider both the load on the circuit and the cost of exceeding the threshold for burstable circuits. Usage-based circuits could be handled by the same method with a zero threshold.
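One way to fold this into the circuit sort is sketched below. The flat-commitment cost model and its parameters are assumptions, since the study did not define billing inputs:

use strict;
use warnings;

# Marginal cost of pushing $extra more traffic onto a circuit: zero
# while under the billing commitment, a per-unit rate above it. A
# usage-based circuit is the commit == 0 special case.
sub marginal_cost {
    my ($c, $extra) = @_;
    my $base  = $c->{load} > $c->{commit} ? $c->{load} : $c->{commit};
    my $after = $c->{load} + $extra;
    return $after > $base ? ($after - $base) * $c->{rate} : 0;
}

my @circuits = (
    { name => 'PL-Lab-01', load => 900, commit => 1000, rate => 5 },
    { name => 'PL-Lab-02', load => 700, commit => 0,    rate => 2 },
);
my $extra = 200;

# Sort candidates by marginal cost first, falling back to load as the
# current BFD sort does.
my @order = sort {
    marginal_cost($a, $extra) <=> marginal_cost($b, $extra)
        or $a->{load} <=> $b->{load}
} @circuits;
print "cheapest circuit for the next prefix: $order[0]{name}\n";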

Support More Complicated Route-Maps

Route-maps are utilized in this system to apply the load-balanced routing policy to BGP announcements, and this is the only type of route-map term the system supports. Other types of routing policy are also implemented through route-maps. The system could be adapted to carry over other routing policy terms present in the route-map prior to analysis, integrating the existing routing policy with the changes required to maintain load balance. This type of change would require parsing the existing configuration file, or additional changes to the database to include routing policy beyond the scope of load-balancing.

Fault Detection

The data available in this system would allow for the development of additional fault detection capabilities. By mapping traffic data to egress links, over-utilization conditions on egress circuits can be identified. This type of information can typically be obtained by other means (e.g., SNMP), but the ability to break down the traffic information from an overloaded circuit to a more granular level is an enhanced capability. Once an issue has been identified, the traffic data will indicate what types of services are consuming the link and which hosts are the source of that traffic.

Infected Host Detection

One specific example of fault detection is infected host detection. The data gathered for load balancing will include signs of virus presence or propagation. Not all viruses and worms could be detected, but many have signatures that can be identified by looking at flow characteristics. Host enumeration would also be detectable in this way; enumeration might involve ICMP and port scanning to identify hosts to which a worm can propagate.

Summary

The goal of this project was to develop a system that could analyze network traffic and generate load-balanced router configurations. The motivation for this effort is that the existing process is manual, labor-intensive, and potentially error-prone. Today, an engineer must evaluate load conditions on each egress link within an AS. Aggregate traffic levels on a per-circuit basis and the number of IP addresses preferred on each link are the only information the engineer has available.


From that, the engineer estimates how much traffic needs to be moved; this step is based on estimation and can lead to errors. This system instead analyzes data and assigns traffic at the IP prefix level, allowing an accurate determination of how much traffic will migrate when the preference on a particular IP prefix is adjusted. This eliminates the guesswork and estimation in today's process. Rather than simply identifying a prefix to move, the system analyzes the entire network and develops a new set of prefix-to-circuit assignments that achieves a well-balanced state, based on data across all IP prefixes. The system has been shown to be effective across a range of cases that test both normal traffic conditions and irregular traffic that presents challenges for the algorithm. The enhancements presented in this chapter are by no means complete; there are certainly additional features that could be developed to allow this system to play an integral role in a network management suite.


LIST OF REFERENCES

Agarwal S, Chuah C, Bhattacharyya S, and Diot C, 2004, "The Impact of BGP Dynamics on Intra-Domain Traffic," Proceedings of the Joint International Conference on Measurement and Modeling of Computer Systems, ACM Press, pp. 319-330.

Bressoud T, and Rastogi R, 2003, "Optimal Configuration for BGP Route Selection," INFOCOM 2003: Twenty-Second Annual Joint Conference of the IEEE Computer and Communications Societies, Vol. 2, pp. 916-926.

Christiansen T, and Torkington N, 1999, Perl Cookbook, O'Reilly & Associates, Inc., Sebastopol, CA.

Halabi B, 1997, Internet Routing Architectures, Cisco Press, Indianapolis, IN.

Holzner S, 1999, Perl Core Language, Coriolis Technology Press, Scottsdale, AZ.

Horowitz E, Sahni S, and Rajasekaran S, 1998, Computer Algorithms, Computer Science Press, New York, NY.

Nilsson S, and Karlsson G, 1999, "IP-Address Lookup Using LC-Tries," IEEE Journal on Selected Areas in Communication, Vol. 17, No. 6, pp. 1083-1092.

Qiu L, Zhang Y, and Keshav S, 2001, "Understanding the Performance of Many TCP Flows," Computer Networks (formerly Computer Networks and ISDN Systems), Vol. 37, pp. 277-306.

Sahni S, 2000, Data Structures, Algorithms, and Applications in Java, McGraw-Hill, Boston, MA.

Savelsbergh M, 1997, "A Branch-and-Price Algorithm for the Generalized Assignment Problem," Operations Research, Vol. 45, No. 6, pp. 831-841.

Sklower K, 1991, "A Tree-Based Packet Routing Table for Berkeley Unix," Proceedings of the 1991 Winter USENIX Technical Conference, Dallas, TX, pp. 93-99.

Stallings W, 2000, Data and Computer Communications, Sixth Edition, Prentice Hall, Upper Saddle River, NJ.

Tanenbaum A, 1996, Computer Networks, Third Edition, Prentice Hall, Upper Saddle River, NJ.


BIOGRAPHICAL SKETCH

I received a Bachelor of Science degree in civil engineering from Florida State University in 1997. During my undergraduate studies, I was employed by Post, Buckley, Schuh, and Jernigan (PBS&J) and at the Florida Department of Transportation Structural Research Center. This practical, hands-on engineering experience proved beneficial during my graduate studies.

I began my graduate school career under Dr. Peter Ifju in the Aerospace Engineering, Mechanics and Engineering Science (AEMES) Department at the University of Florida in 1997. I studied the post-buckling response of composite sandwich structures and published a thesis on the topic.

My studies in the Computer and Information Science and Engineering (CISE) Department at the University of Florida began while I was writing my thesis in aerospace engineering. During my tenure in computer engineering, I have focused on network communications and security. Both my academic progress and network-related research are represented by this thesis, which was defended on October 6, 2004.


Full Text












AUTOMATED SYSTEM FOR LOAD-BALANCING EBGP PEERS


By

BRIAN T. WALLACE













A THESIS PRESENTED TO THE GRADUATE SCHOOL
OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT
OF THE REQUIREMENTS FOR THE DEGREE OF
MASTER OF ENGINEERING

UNIVERSITY OF FLORIDA


2004

































Copyright 2004

by

Brian T. Wallace


































This document is dedicated to my wife, Robin, and our children, Parker, Taylor, and
Peyton.















ACKNOWLEDGMENTS

I would like to acknowledge the contribution that Dr. Joe Wilson has made to my

academic career. Dr. Wilson has provided me with guidance and inspiration throughout

my course of study. He has worked with me to ensure my success both while in-

residence and as a FEEDS student.

I thank my family for their support during my time as a graduate student. Robin,

Parker, Taylor, and Peyton have been understanding of the requirements of completing an

advanced degree and have provided me motivation to be successful.
















TABLE OF CONTENTS

page

A C K N O W L E D G M E N T S ................................................................................................. iv

LIST OF TABLES ................................................... vii

LIST OF FIGURES ............................... ........... ............................ viii

A B S T R A C T ........................................................................................................ ........ .. 9

CHAPTER

1 IN T R O D U C T IO N .................................................................. .. ... .... ............... 1

Internet R outing P rotocols ................................................................... ...............1...
C lassful N etw ork A dressing ........................................................... ...............2...
Classless Internet D om ain Routing (CIDR)..................................... ...............2...
Interior and Exterior R outing Protocols .................................................. ...............3...
D instance V ector A lgorithm s ..................................... ..................... ...............3...
L ink-State R outing A lgorithm s ........................................................ ...............4...
B G P O v erview ............................................................................. .. . ... ...............4
BGP Attributes ......................... .. .......... .............................5
B GP B est Path A lgorithm .......................................................5......
A S Path Pre-pending .............................................................. .............. ................ .6
Adjusting Multi-Exit Discriminator (MED) Values ................... ..................... 8
N etw ork T traffic C collection ............................................... ....................... .............. .8...
Simple Network Monitoring Protocol (SNMP) ..............................................9...
Prom iscuous M ode Packet Capture.................................................. ...............9...
N etw ork F low C collection ...................................... ...................... ............... 10

2 NETWORK DATA COLLECTION .....................................................................12

Topology Collection ............................. .......... ........ ............... 12
Fully Autom ated Solution ....................................................... 12
D database Approach ...................................................................... ............ 13
T opology M odule .............. ........... ...................................................... .... .... 13
Prefix Tree G generation ................. ........................................................... 14
Circuit R presentation ... .. ................ ............................................... 15
N etFlow D ata C collection ................................................................... ............... 15
D definition of a Flow ................ .............. ........................................... 15


v









C apturing N etF low D ata....................................... ....................... ............... 16
E xporting N etF low data ....................................... ....................... ............... 18
Storage of N etFlow Cache Entries ................................................... 18
D ata C collector ................................................................................................ 19
Traffic Assignment .......... ............. .. ........... .....................................21

3 D A T A A N A L Y SIS .................................................. ............................................ 23

N etw ork D ata A nalysis......................... ........................................................... 23
A analysis M ethodology O verview ................................................... ................ 24
F irst pass analy sis ............................................... ..................... ... ......... 25
Second pass analysis ..................................... .. .......... .......... .. ........ .... 26
Balanced Traffic Condition ........................................................ 27
C configuration G enerator .................................................................. ................ 29
B benefits of Code G generation ....................... ............................................... 29
Im plem entation P rocess........................................ ....................... ................ 30

4 SY STEM RESULTS .................................... ............................... 34

Testbed C configuration .............. .................. ................................................ 34
Traffic Generation ..................................... ............................. 35
T e st C a se s .......................................................................................... ..................... 3 7
S y ste m O u tp u t ............................................................................................................3 8
T est C ase # 1 .................................................................................................. 38
T est C ase #2 .................................................................... .................. . ......39
T est C ase #3 .................................................................................................. 40
T est C ase #4 .................................................................................................. 4 1
C conclusions .................................................................................. ....................... 4 1

5 SUMMARY AND FUTURE WORK ..................... ...................................43

System Im provem ent..............................................................................................43
Instantaneous Data Rate Load-Balancing ........................................................43
Low U tilization Prefixes ...................................................................................45
M inimizing Prefix Announcements .................................................................46
Cost Factor in Load Balancing ...............................................................46
Support More Complicated Route-Maps................... ....................................47
Fault D election ............................................................................................. . 48
Infected H ost D election ...........................................................................................48
Summary ............................................ .................................. 48

LIST O F R EFEREN CE S ................................................................................................50

BIO GRAPH ICAL SK ETCH ..........................................................................................51















LIST OF TABLES


Table page

1-1 Bit patterns associated with the original address classes for IPv4 ..........................2...

2-1 Fields in a N etFlow header packet ...................................................... ................ 20

2-2 Fields in a N etFlow flow record.......................................................... ................ 21

4-1 Subnets that were utilized during lab testing.......................................................36

4-2 Description of test cases used during validation testing. ...................................37

4-3 Per circuit loading results from test case #1 .................................. ..................... 38

4-4 Per prefix loading results from test case #1. .......................................................38

4-5 Per circuit loading results from test case #2........................................................39

4-6 Per circuit loading results from test case #2....................................................... 39

4-7 Per circuit loading results from test case #3........................................................40

4-8 Per prefix loading results from test case #3. .......................................................40

4-9 Per circuit loading results from test case #4....................................................... 41

4-10 Per prefix loading results from test case #4 ........................................................41
















LIST OF FIGURES


Figure page

1-1 The AS path attribute can be modified to implement BGP routing policy .............7...

1-2 The multi-exit discriminator (MED) attribute can be modified to implement BGP
ro u tin g p o licy ..................................................................................................... 8

2-1 Tree structure generated by Prefix.pm ................................................ ................ 14

2-2 Enabling N etFlow on a router interface .............................................. ................ 16

2-3 Output from router verifying NetFlow configuration. .......................................17

2-4 Configuring NetFlow export on a router interface..............................................18

2-5 Process for transferring data from NetFlow cache to data collection system ......... 19

2-6 Process of assigning traffic to a prefix object ....................................................22

3-1 Process by which router applies BGP routing policy via route-maps and prefix-lists.31

3-3 IP prefix list configuration to identify groups of prefix that will have routing policy
a p p lie d .................................................................. ................................................ ... 3 2

3-4 Route-maps use prefix-lists to apply routing policy to outbound BGP
an n ou n cem en ts ......................................................................................................... 3 3

4-1 A typical access netw ork configuration .............................................. ................ 35

4-2 Lab setup used to test BGP load-balancing tool. ................................................36

5-1 Total volume of traffic is predictable on a day-to-day basis and grows linearly over
tim e ....................................................................................................... . ....... .. 4 4

5-2 System generated prefix lists are balanced based on load, but not necessarily
balanced on num ber of addresses ........................................................ ............... 46














Abstract of Thesis Presented to the Graduate School
of the University of Florida in Partial Fulfillment of the
Requirements for the Degree of Master of Engineering

AUTOMATED SYSTEM FOR LOAD-BALANCING EBGP PEERS

By

Brian T. Wallace

December 2004

Chair: Joseph N. Wilson
Major Department: Computer and Information Science and Engineering

The goal of this project was to develop a system that could analyze network traffic

and topology in order to generate load-balanced router configurations. The motivation

for this effort is that the existing process today is manual, intensive, and potentially error

prone.

The system collects and stores NetFlow data from BGP routers. Traffic data are

assigned to IP prefixes that are valid with an AS. These prefixes are then assigned across

the set of egress links for the AS. A best fit decreasing (BFD) approach is used to

generate a load-balanced condition. If the load is not balanced, IP prefixes are split along

CIDR boundaries and another iteration of the BFD approach is performed.

Test results demonstrate that the system for generating load-balanced BGP

configurations works correctly. Regardless of the complications added to each test case,

the system was able to achieve the desired result.

This study has shown that load-balanced BGP configurations can be developed in

an automated fashion by analyzing traffic data and network topology.














CHAPTER 1
INTRODUCTION

As the Internet has experienced explosive growth in recent years, service providers

have struggled to build capacity and stay ahead of increasing bandwidth demands.

Routing on the Internet backbone is key to maintaining robust global network

communications infrastructure.

This thesis presents an automated method for generating load-balanced Border

Gateway Protocol (BGP) configurations. While traffic patterns on the Internet are

constantly changing, router configuration among External Border Gateway Protocol

(EBGP) peers is basically static. This study will demonstrate an automated tool for

analyzing traffic patterns and developing router configuration to provide a load-balanced

state. This system will be an improvement of the present manual process of determining

configuration changes for BGP peers.

Internet Routing Protocols

The Internet has no shortage of protocols. Of interest in this context is the routing

protocols that are supported and play a role in provide global connectivity via the

Internet. An important distinction must be drawn between routed and routing protocols.

Routed protocols are the protocols used to exchange information between hosts. Routing

protocols are used to convey information among routers and make decisions on path

selection across a network (Stallings 2000).









Classful Network Addressing

The original Internet routing scheme included addresses classes. Most

organizations were assigned Class A, B, or C networks (Tannenbaum 1996). Class D

address space is reserved for multicast applications. Since multicast addresses are shared

by a group of hosts, the number of hosts per network is not applicable. Class E address

space is reserved as experimental but has never been used in any significant way. In

classful network addressing, the class of an address can be determined by the first four

bits of the address. Table 1-1 illustrates the bit patterns associated with each class. An x

indicates that the bit in that location is not relevant.

Table 1-1 shows the bit patterns associated with the original address classes for IPv4.

Address Class Bit pattern # of networks # of hosts / network

Class A Oxxx 128 16,777,214

Class B 10xx 16,384 65,534

Class C 110x 2,097,152 253

Class D 1110 268,435,456 n/a

Class E 1111 n/a n/a



Classless Internet Domain Routing (CIDR)

These class boundaries developed originally did not survive the rapid expansion of

the Internet and a new technique was required. Network address classes did not scale

with the size of the Internet because this method of allocation could not match the size of

networks being attached to the Internet. For most organizations, a Class A network was

far too large but a Class C networks was too small. This led to many Class A networks

going unallocated while Class B networks were being rapidly depleted. In addition,









small companies were allocated a Class C network and wasted the portion they did not

require.

Classless Internet Domain Routing (CIDR) was developed to overcome the rapid

exhaustion of IPv4 address space. RFCs 1518 and 1519 describe a technique where the

subnet mask is a critical piece of information included in routing updates and stored in

routing tables. CIDR lifts the restriction that networks have to be divided on classful

boundaries.

Interior and Exterior Routing Protocols

Routing protocols can be broken into two categories: interior and exterior. Interior

gateway routing protocols (IGPs) are used for internal routing. Typically, all routers

participating in an IGP are under a single administrative authority.

There is a variety of interior gateway routing protocols deployed today. Some

examples of IGPs include Open Shortest Path First (OSPF), Enhanced Interior Gateway

Routing Protocol (EIGRP), Intermediate System Intermediate System (IS-IS), and

Routing Information Protocol (RIP). While there are a number of IGPs deployed, the

dominant exterior gateway routing protocol is the Border Gateway Protocol.

IGPs are classified based on how they calculate routing metrics. The two primary

classifications are: distance vector and link-state algorithms.

Distance Vector Algorithms

Distance vector algorithms use a simple metric that takes into account spatial

locality. The metric most often used is hop count. Hop count is the number of routers

that must be transited to arrive at the destination. It is possible for a path that is longer

via hop count to yield network performance that is actually better due to additional

bandwidth and lower latency interfaces.









RIP is the most common distance vector algorithm deployed. It is easy to

configure and easy to use but does not take into account network performance or

degradation when assigning metrics.

Link-State Routing Algorithms

Link-state algorithms maintain a database of link state information. Whenever a

router participating in a link-state algorithm experiences a change in one of it's directly

connected interfaces, a link state advertisement (LSA) is sent to all neighboring routers

participating in the same routing protocol. These updates are propagated through the

networks so that all routers have the same topology and state information.

Maintaining a database has both advantages and disadvantages. One significant

advantage is extremely fast convergence. The disadvantage is that the router must

maintain this database entirely separately from the routing table. This increases both the

CPU and memory requirements of a router running a link state routing protocol.

The OSPF routing protocol is by far the most common link state routing protocol.

It is standards-based and performs extremely well.

While maintaining a link state database provides the information necessary for fast

convergence times and consistent routing information, it also does not scale to the level

of the Internet. The amount of traffic generated just by routing updates would flood the

backbone.

BGP Overview

Four regional Internet registries provide the allocation and registration service for

the Internet. The resources assigned by these entities include IP addresses and

Autonomous System numbers. Autonomous systems (ASs) in North America, a portion

of the Caribbean, and sub-equatorial Africa are assigned by the American Registry for









Internet Numbers (ARIN). For Europe, the Middle East, northern Africa, and parts of

Asia, allocation and registration services are provided by RIPE Network Coordination

Center (RIPE NCC). The Asia Pacific Network Information Centre (APNIC) is

responsible for the majority of Asian countries. Lastly, The Latin American and

Caribbean IP address Regional Registry is responsible for South America and the

Caribbean islands not covered by ARIN.

BGP Attributes

Attributes are parameters that are used to describe characteristics of a prefix in

BGP. A BGP attribute consists of three fields: attribute type, attribute length, and

attribute value. Attributes may be sent as part of a BGP UPDATE packet, depending on

the type of attribute. These parameters can be used to influence path selection and

routing behavior (Halabi 1997).

There are four types of BGP attributes: well-known mandatory, well-known

discretionary, optional transitive, and optional non-transitive. Well-known attributes are

the set of attributes that must be recognized by all RFC compliant BGP implementations.

All well-known attributes must be forwarded with the UPDATE message. Mandatory

attributes must be included in every UPDATE message, while discretionary attributes

may or may not appear in each UPDATE message. Optional attributes may or may not

be supported by all implementations. Transitive attributes should be forwarded to other

neighbors.

BGP Best Path Algorithm

By default, BGP will select the current best path. It then compares the best path

with a list of other valid paths. If another valid path meets the selection criteria, it will

become the current best path and the remaining valid paths will be evaluated.









The following ordered criteria are used to determine the best path in BGP:

1. Prefer the path with the highest WEIGHT.
2. Prefer the path with the highest LOCAL_PREF.
3. Prefer the path that was locally originated via a network or aggregate BGP
subcommand, or through redistribution from an IGP.
4. Prefer the path with the shortest AS_PATH.
5. Prefer the path with the lowest origin type: IGP is lower than EGP, and EGP is
lower than INCOMPLETE.
6. Prefer the path with the lowest multi-exit discriminator (MED).
7. Prefer external (eBGP) over internal (iBGP) paths.
8. Prefer the path with the lowest IGP metric to the BGP next hop.
9. Check if multiple paths need to be installed in the routing table for BGP Multipath.
10. When both paths are external, prefer the path that was received first (the oldest
one).
11. Prefer the route coming from the BGP router with the lowest router ID.
12. If the originator or router ID is the same for multiple paths, prefer the path with the
minimum cluster list length.
13. Prefer the path coming from the lowest neighbor address.


While load balancing is not inherent in BGP, there are two common methods used

to create a load-balanced configuration. These two methods are AS path pre-pending and

adjustments of Multi Exit Discriminator (MED) values (Brestoud and Rastogi 2003).

AS Path Pre-pending

AS path pre-pending involves padding the AS Path attribute in BGP

announcements to reduce the likelihood of a route being selected. Normally, a BGP

speaker will add it's AS to the AS Path attribute of an announcement prior to forwarding

that announcement onto another peer. Each router that receives the announcement looks

at the AS Path attribute to determine the shortest AS Path to a particular prefix. By pre-

pending a path with additional AS entries, a prefix will have a lower probability of being

selected as the best route to a destination.

In practice, a provider will distribute IP space across multiple egress links. For

each egress link, the range of IP addresses that should prefer the path would be advertised










with the normal AS Path attribute. All other IP space is advertised with an artificially

long AS Path attribute. These modified announcements serve to provide redundancy in

the event of a failure of an egress link.




AS Path = 1234 -1234 -1234
192 168 100 0/24

Router C1 Router ISP1

-- -Router ISP3
Corporate Tier I "" S
Network T I
Networ1234 Service Provider


RouterC2 Router ISP2

AS Path = 1234



Figure 1-1 The AS path attribute can be modified to implement BGP routing policy.

In the example above, there are two paths available to the 192.168.100.0/24 subnet.

Both Router Cl and Router C2 are announcing this prefix. Without intervention, the

normal BGP best path selection algorithm determines the best path selected to this

subnet.

The scenario above shows that Router Cl has pre-pended the AS Path attribute

with 1234-1234-1234, while Router C2 has followed the default behavior of pre-pending

its AS number only once. The effect this has is that from the perspective of Router ISP3,

the prefix in question has a longer AS Path length if the path through Router Cl is taken.

Therefore, the path through Router C2 is selected as the best path to the prefix

192.168.100.0/24.










Adjusting Multi-Exit Discriminator (MED) Values

The Multi-Exit Discriminator attribute is a BGP attribute that gives a peer an

indication of preference for entry into an AS. If two identical prefixes are announced

from an AS, the path with the lowest MED value will be selected for the routing table.





192 168 100 0/24
MED= 100
Router C1 Router ISP1

Router ISP3
Corporate Tier I
Network Service Provider _


MED= 200
Router C2 Router ISP2



Figure 1-2 The multi-exit discriminator (MED) attribute can be modified to implement
BGP routing policy.

The example above shows both Router Cl and Router C2 are announcing the prefix

192.168.100.0/24. Router Cl is announcing the prefix with a MED value of 100, while

Router C2 is announcing the same prefix but with a MED value of 200. From the

perspective of Router ISP3, the best path is the path through Router Cl because that path

has the lower MED value.

Network Traffic Collection

There are several options available for capturing network utilization information.

Each of the methods available has their own strengths and weaknesses depending on

upon the application for which they are being used.









Simple Network Monitoring Protocol (SNMP)

Simple Network Monitoring Protocol (SNMP) is an extremely common protocol

used for monitoring network elements. Fault management systems use SNMP to identify

abnormal network conditions based on a pre-determined behavior model. The behavior

model specifies what variables to poll and what values indicate an alarm condition. This

alarm condition can then be displayed on a screen in the Network Operations Center

(NOC), emailed to a mailing-list, or sent to an alpha-numeric pager for resolution.

Performance monitoring systems also use SNMP to collect network traffic

statistics. Network elements are polled at a specified interval to collect interface specific

information. This information can then be presented in graphical format to visualize

traffic flows in the network.

The drawback of SNMP monitoring for BGP load-balancing is that it does not have

the level of granularity required to generate load-balanced configuration. SNMP

monitoring can provide interface statistics that indicate whether a particular interface is

over-utilized. However, traffic information needs to be collected and correlated at the

individual host basis in order to be able to generate a load-balanced configuration. To be

useful, the information gained via SNMP must be correlated with network topology

information. In many cases, the network topology will not be definitive in assigning

traffic information to logical subnets or hosts.

Promiscuous Mode Packet Capture

Promiscuous mode packet capture involves the deployment of probes at points of

interest in the network. While this technique is commonly used to diagnose highly

localized network issues, there are several drawbacks that preclude its wide scale

deployment.









One significant drawback is the number of devices that would have to be deployed

in order to have a complete view of the network. The amount of processing power and

disk space required to collect and analyze data in real-time is also significant.

Additionally, for any global analysis to take place the data collected at each probe must

be transferred to a centralized collection point in some aggregated fashion.

Today, most networks of any size are switched. The implementation of probes

would require that SPAN ports be created to mirror traffic. While this type of

configuration does not typically have any significant impact on switch performance, it

consumes switch ports. Ports that would normally be assigned to carry network traffic

must now be allocated for traffic collection, thereby increasing the price per port of every

switch in the network.

As networks continue to grow in speed, the ability of inexpensive probes to process

the data rate of large WAN links is reduced. It is not uncommon for egress links to be in

the OC-12 (622 Mbps) to OC-48 (2.4 Gbps) range. When these links become fairly

heavily utilized, the number of packets per second that must be analyzed can quickly

overwhelm a server.

Network Flow Collection

The IP Flow Information Export (ipfix) is an Internet Engineering Task Force

(IETF) Working Group whose purpose is to establish a standard for IP flow information

export systems. Though there are a number of flow export systems and mechanisms

available today, their design and implementation vary by vendor. The lack of a standard

makes it difficult to develop flow analysis tools that are universal. Additionally, having

multiple export systems and formats hampers the implementation of back-end systems.

The IETF Working Group has identified the following goals:









* Define the notion of a standard lPflow. The flow definition will be a practical
one, similar to those currently in use by existing non-standard flow information
export protocols which have attempted to achieve similar goals but have not
documented their flow definition.

* Devise data encodings that support analysis of IPv4 and IPv6 unicast and multicast
flows traversing a network element at packet header level and other levels of
aggregation as requested by the network operator according to the capabilities of
the given router implementation.

* Consider the notion of IP flow information export based upon packet sampling.

* Identify and address any security privacy concerns affecting flow data. Determine
technology for securing the flow information export data, e.g., TLS.

* Specify the transport mapping for carrying IP flow information, one which is
amenable to router and instrumentation implementers, and to deployment.

* Ensure that the flow export system is reliable in that it will minimize the likelihood
of flow data being lost due to resource constraints in the exporter or receiver and to
accurately report such loss if it occurs.

NetFlow is the IP flow export system available in Cisco routers. There are several

packet formats and aggregation schemes available within NetFlow. This gives the

engineer flexibility in the amount of data that is collected and stored in a NetFlow-based

system. NetFlow is the traffic collection method that will be employed in this study.














CHAPTER 2
NETWORK DATA COLLECTION

In order to effectively decide how best to balance traffic, an analysis system must

have complete and accurate information regarding both network topology and network

traffic patterns. This chapter will discuss why this information is important, how it is

stored, and how it is retrieved.

Topology Collection

There are several important considerations when deciding how to implement a

topology information collection and storage system. How often does network topology

change? What are the benefits of having a fully automated topology collection solution?

Is auto-discovery possible for all network element types?

Fully Automated Solution

While fully automated solutions that can accurately auto-discover new elements are

definitely an attractive solution, this type of solution adds a tremendous amount of

complexity to a system. It would require that the acquisition program be capable of

extracting topology information from various vendor platforms. It also requires that the

system be able to identify both new elements in the network and new hardware or

capacity added to an existing element.

The BGP load-balancing system described in this paper would typically be

deployed at the network edge. Routers that BGP peer externally do not normally have

frequent configuration changes made. The types of configuration changes to









accommodate network growth from a user perspective would be done at an access router

or switch and not a core router.

In the future, this type of approach may be utilized to provide more extensive

capabilities for configuration generation. Extracting more detailed information from the

network elements would allow the system to provide additional configuration and

standardize existing configurations.

Database Approach

The approach selected for this project was to use a database solution to store and

maintain network topology information. A MySQL database was developed that could

store the relevant topology information. Given the low frequency of changes, this type of

solution seems to provide the information required with minimal complexity or effort.

Though this database and its schema were developed for the purpose of this study, most

organizations probably already have an existing Operational Support System (OSS)

package that could be adapted to provide the necessary information.

Topology Module

The Topology module has been implemented as a Perl module and defines an

interface for retrieving network topology information (Holzner 1999). By having this

abstraction, we have removed any direct interaction between the analysis modules and

the system for collecting and storing topology information. If another more efficient

method of collecting topology information is available or an OSS system can be utilized,

the system will not required significant changes to incorporate the new technology.

The only input required for the module is the AS number for the network to be

analyzed. Using this information, the module extracts the circuits that provide egress

bandwidth from this AS. In addition, all valid prefixes for this AS are also retrieved.









The module returns the topology information in the form of Circuit and Prefix

objects as described in the following sections.

Prefix Tree Generation

IP prefixes are encapsulated in a Perl module called Prefix.pm. When a Prefix

object is instantiated, a tree structure is built recursively. This tree structure is rooted at a

node that represents the IP prefix included in the call to the Prefix constructor. The

constructor will recursively build Prefix objects for all subnets contained by the root node

that have a subnet mask length of 24 or less. Each non-leaf node in the tree will have two

children. These children will be the CIDR blocks that result from splitting the current

node into two nodes with a subnet mask 1 bit longer than the current subnet mask (e.g. a

/21 prefix is split into two /22 prefixes).

Figure 2-1 illustrates the tree structure that is built for the following command:

Prefix-new(" 192.168.96.0","255.255.252.0").














Figure 2-1 Tree structure generated by Prefix.pm.

This tree structure has several convenient features (Sklower 1991). As load-

balancing decisions are made and prefixes must be split to move traffic, the Prefix object

tree can be split into two sub-trees with the appropriate mask length. Also, the sub-trees









have the traffic information included (Nilsson and Karlsson 1999). It is not necessary to

reassign flow data into the new prefixes.
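
Given that property, a split can be sketched as simply promoting the two children,
each of which already carries its accumulated traffic; the method name split_prefix and
the internal layout are assumptions carried over from the constructor sketch above.

# Return the two sub-trees with a mask one bit longer; their traffic
# counters are already populated, so no flow data is reassigned.
sub split_prefix {
    my ($self) = @_;
    return @{ $self->{children} };
}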

Circuit Representation

A Circuit module was developed to provide the ability to assign information at the

circuit level. This module is implemented in Perl and is used to create an object

representing each egress link in the AS being analyzed.

The Circuit module contains all information unique to a particular circuit. The

circuit name, capacity, and load factor are all contained in member variables. The

purpose of the load factor will be discussed in Chapter 3.

The Circuit module also provides member functions for managing Prefix objects

assigned to the circuit. Methods are available to add new Prefix objects, to return the

largest Prefix object assigned to the Circuit, and to get the current load on the Circuit

based on the assigned Prefix objects.
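
A minimal sketch of that interface follows, with field and method names assumed for
illustration (the thesis module may differ); get_load() applies the load factor in the
manner described in Chapter 3.

package Circuit;
use strict;
use warnings;

sub new {
    my ( $class, %args ) = @_;
    return bless {
        name        => $args{name},
        capacity    => $args{capacity},
        load_factor => exists $args{load_factor} ? $args{load_factor} : 1.0,
        prefixes    => [],
    }, $class;
}

# Assign another Prefix object to this circuit.
sub add_prefix {
    my ( $self, $prefix ) = @_;
    push @{ $self->{prefixes} }, $prefix;
}

# Return the assigned Prefix object carrying the most traffic.
sub largest_prefix {
    my ($self) = @_;
    my ($largest) = sort { $b->{traffic} <=> $a->{traffic} } @{ $self->{prefixes} };
    return $largest;
}

# Normalized load: assigned traffic scaled by the circuit's load factor.
sub get_load {
    my ($self) = @_;
    my $total = 0;
    $total += $_->{traffic} for @{ $self->{prefixes} };
    return $total * $self->{load_factor};
}

1;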

NetFlow Data Collection

A collector for traffic data was implemented as part of this project. In order to

allow the collection and storage of NetFlow data to be uncoupled from the analysis

components, the traffic data is stored in MySQL. This is the same approach that was

used for topology collection.

Definition of a Flow

A flow is any communication that can be described using the following tuple:

source IP address, source port, destination IP address and destination port. For a

NetFlow-enabled router, there are seven key fields that identify a unique flow (a sketch of a cache key built from these fields follows the list):

Source IP address
Destination IP address
Source port number









Destination port number
Layer 3 protocol type
ToS byte
Input logical interface
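
For illustration, a collector or cache could key flows on the concatenation of these
seven fields; the variable names here are hypothetical.

# Build a cache key from the seven key fields; two packets that differ in
# any one field hash to different flows.
my $flow_key = join '|',
    $src_ip, $dst_ip, $src_port, $dst_port, $protocol, $tos, $input_if;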

If a NetFlow-enabled router receives a packet that is not associated with an existing

flow, a new entry is created in the NetFlow cache. This will occur even if the flow

differs by just one of the above fields. One exception to this process is that NetFlow is

only aware of unicast IP flows. When a router receives a multicast packet, it will be

forwarded normally but will not generate a new cache entry.

Capturing NetFlow Data

In order to begin capturing NetFlow data, the router must be configured for

NetFlow on each interface. If the NetFlow cache is enabled on an interface that contains

sub-interfaces, data will be collected on all sub-interfaces. The figure below shows a

configuration example for enabling NetFlow on an Ethernet0 interface.


RouterA#config t
Enter configuration commands, one per line. End with CNTL/Z.
RouterA(config)#interface Ethernet0
RouterA(config-if)#ip route-cache flow
RouterA(config-if)#end


Figure 2-2 Enabling NetFlow on a router interface.

For the purposes of this application, the configuration need only be done on the

egress interfaces. This tool is focused on analyzing inter-AS traffic and does not consider

traffic that is internal to the AS. If analysis of total network traffic flow were to be

conducted, the remaining interfaces would need to be configured.

Once configured, the router will process the first packet of a flow normally. At this

time, a new entry in the NetFlow cache will be created that corresponds to the flow.









There exists an entry in the cache for all active flows. The fields in the NetFlow cache

will be used to generate flow records for export and analysis.

To verify that the configuration was successful, the command 'show ip cache flow'

can be used. This command will display the current status of the NetFlow cache.


Router#show ip cache flow
IP packet size distribution (784 total packets)

IP Flow Switching Cache, 278544 bytes
  360 ager polls, 0 flow alloc failures
  Active flows timeout in 30 minutes
  Inactive flows timeout in 15 seconds
  last clearing of statistics never

Protocol    Total  Flows  Packets  Bytes  Packets  Active(Sec)  Idle(Sec)
            Flows   /Sec    /Flow   /Pkt     /Sec        /Flow      /Flow
TCP-Telnet     10    0.0       70     41      0.3         14.0       11.3
UDP-other       4    0.0        2    125      0.0          0.0       15.5
ICMP            2    0.0        5    100      0.0          0.0       15.5
Total:         16    0.0       45     42      0.3          8.8       12.8


Figure 2-3 Output from router verifying NetFlow configuration.

Another bit of useful information that is available in the router by configuring

NetFlow is packet size distribution. Calculations for throughput on router interfaces are

dependent on the packet size distribution that a router will see in a production network.

This information can be used to develop accurate lab testing scenarios that are consistent

with real world patterns and contain a realistic traffic mix.









Exporting NetFlow data

Configuring NetFlow switching at the interface will begin the data collection

process. This will only create entries in the cache on the router. Unless the flow-export

configuration has been completed, flows will be discarded when entries are expired from

the cache.

The flow export configuration includes at a minimum a destination IP address and

destination port of the NetFlow collector. The example in Figure 2-4 shows how to configure NetFlow

to export NetFlow Version 5 flow records including origin AS information. All UDP

packets sent from the router to the collector will use the source IP address of the

LoopbackO interface. Since a router has a number of interfaces, specifying the source

interface for traffic originating at the router simplifies the process of associating the

packet with a specific network element.


RouterA#config t
Enter configuration commands, one per line. End with CNTL/Z.
RouterA(config)#ip flow-export destination 192.168.0.100 5000
RouterA(config)#ip flow-export version 5 origin-as
RouterA(config)#ip flow-export source loopback 0
RouterA(config)#end


Figure 2-4 Configuring NetFlow export on a router interface.

Storage of NetFlow Cache Entries

A router cannot store significant amounts of flow data. Typically, flash cards in

routers are only large enough to store the router's operating system and configuration file.

Because of this limited storage capacity, the router must transmit flow data to a central

location periodically.










In a default configuration, there are four conditions when the router will expire

flows from the NetFlow cache:

Transport is completed (TCP FIN or RST).
The flow cache has become full.
The inactive timer has expired after 15 seconds of traffic inactivity.
The active timer has expired after 30 minutes of traffic activity.

Two of the above conditions are configurable. Both the active and inactive timeout

values can be configured in the router. The values shown above are their default values.




Figure 2-5 Process for transferring data from NetFlow cache to data collection system.

Data Collector

Once the router has been correctly configured to capture and export NetFlow data,

packets will begin to be exported. A data collector was implemented in Perl to receive

the NetFlow records, extract the data fields, and store that information into MySQL for

later analysis.

The data collector binds to a user defined port and listens for incoming packets.

NetFlow does not have an Internet Assigned Numbers Authority (IANA) specified port

number. When a NetFlow datagram arrives, the collector extracts and decodes the

header. The header includes a count of the number of flow records included in the

packet. This count is important since a packet can contain a variable number of flow









records, depending on the number of cache entries that expired at or near the same time.
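
A minimal sketch of the collector's receive loop follows; the port number 5000 is an
assumption chosen to match the flow-export destination configured in Figure 2-4.

use strict;
use warnings;
use IO::Socket::INET;

# Bind to the user-defined collector port and wait for export datagrams.
my $socket = IO::Socket::INET->new(
    LocalPort => 5000,
    Proto     => 'udp',
) or die "Cannot bind collector port: $!";

while (1) {
    $socket->recv( my $packet, 4096 );          # one NetFlow export datagram
    my ( $version, $count ) = unpack 'n n', $packet;
    next unless $version == 5;                  # only v5 records are expected
    # ...decode the remaining header fields and $count flow records,
    # then insert them into MySQL (see the sketches that follow)...
}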

The schema for the header table follows the header format shown in Table 2-1.


Table 2-1 Fields in a NetFlow header packet.

Bytes     Content             Description
0 to 1    version             NetFlow export format version number (in this case, 5).
2 to 3    count               Number of flows exported in this packet (1 to 30).
4 to 7    SysUptime           Number of milliseconds since the routing device was last booted.
8 to 11   unix_secs           Number of seconds since 0000 UTC 1970.
12 to 15  unix_nsecs          Number of residual nanoseconds since 0000 UTC 1970.
16 to 19  flow_sequence       Sequence counter of total flows seen.
20        engine_type         Type of flow switching engine.
21        engine_id           ID number of the flow switching engine.
22 to 23  sampling_interval   Sampling mode and sampling interval. The first two bits
                              indicate the sampling mode: 00 = no sampling configured;
                              01 = 'packet interval' sampling configured (one of every
                              x packets is selected and placed in the NetFlow cache).
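
Assuming the 24-byte layout of Table 2-1, the header can be decoded in a single
unpack call (big-endian shorts 'n', longs 'N', and single bytes 'C'), continuing the
collector sketch above.

# Decode the 24-byte v5 header; variable names mirror Table 2-1.
my ( $version, $count, $sys_uptime, $unix_secs, $unix_nsecs,
     $flow_sequence, $engine_type, $engine_id, $sampling_interval )
    = unpack 'n n N N N N C C n', substr( $packet, 0, 24 );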


The information gained from decoding the header can be used to extract the flow

records and their associated data for storage. The collector has a second subroutine for

collecting and decoding the information stored in each flow record. For each record, the

fields are extracted and inserted into a flow table in the database. The schema for the

flow table mirrors the definition of the NetFlow flow record.














Table 2-2 Fields in a NetFlow flow record.


Bytes     Content      Description
0 to 3    srcaddr      Source IP address.
4 to 7    dstaddr      Destination IP address.
8 to 11   nexthop      IP address of the next hop routing device.
12 to 13  input        SNMP index of the input interface.
14 to 15  output       SNMP index of the output interface.
16 to 19  dPkts        Packets in the flow.
20 to 23  dOctets      Total number of Layer 3 bytes in the flow's packets.
24 to 27  First        SysUptime at start of flow.
28 to 31  Last         SysUptime at the time the last packet of the flow was received.
32 to 33  srcport      TCP/UDP source port number or equivalent.
34 to 35  dstport      TCP/UDP destination port number or equivalent.
36        pad1         Pad 1 is unused (zero) bytes.
37        tcp_flags    Cumulative OR of TCP flags.
38        prot         IP protocol (for example, 6 = TCP, 17 = UDP).
39        tos          IP ToS.
40 to 41  src_as       AS of the source address, either origin or peer.
42 to 43  dst_as       AS of the destination address, either origin or peer.
44        src_mask     Source address prefix mask bits.
45        dst_mask     Destination address prefix mask bits.
46 to 47  pad2         Pad 2 is unused (zero) bytes.
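
Each 48-byte record after the header can be decoded the same way. The sketch below
assumes a connected DBI handle $dbh; the table and column names in the INSERT are
assumptions about a schema that simply mirrors Table 2-2.

use DBI;   # assumes $dbh is already connected to the MySQL flow database

for my $i ( 0 .. $count - 1 ) {
    my $record = substr( $packet, 24 + 48 * $i, 48 );
    my ( $srcaddr, $dstaddr, $nexthop, $input, $output, $d_pkts, $d_octets,
         $first, $last, $srcport, $dstport, $pad1, $tcp_flags, $prot, $tos,
         $src_as, $dst_as, $src_mask, $dst_mask, $pad2 )
        = unpack 'N N N n n N N N N n n C C C C n n C C n', $record;

    # Store the fields of interest; hypothetical table and column names.
    $dbh->do(
        'INSERT INTO flow (srcaddr, dstaddr, srcport, dstport, prot,
                           dOctets, src_as, dst_as, flow_first, flow_last)
         VALUES (?,?,?,?,?,?,?,?,?,?)',
        undef,
        $srcaddr, $dstaddr, $srcport, $dstport, $prot,
        $d_octets, $src_as, $dst_as, $first, $last,
    );
}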


Traffic Assignment

Traffic data collected via NetFlow has information at the individual host level.

Before any network analysis can take place, the traffic data must be aggregated at the

prefix level. These prefixes can then be assigned to circuits in a load-balanced fashion.









The tree design of the Prefix objects was discussed earlier in this chapter. During

traffic assignment, the aggregate traffic information contained in each flow record is

assigned to the Prefix object that contains the host's IP address. The root node of the

Prefix tree is the largest subnet that contains the host address. The internal behavior of

the Prefix object is as follows (a short code sketch follows the list):

1. Check if the host belongs to either child of the current Prefix object

2. If so, assign the aggregate traffic information to the child.

3. If not, assign the aggregate traffic information to the current node.
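
These steps can be sketched as a recursive method on the Prefix object; contains()
and the field names are assumptions for illustration.

# Push traffic for one host address down the tree until a leaf absorbs it.
sub add_traffic {
    my ( $self, $host_ip, $bytes ) = @_;
    for my $child ( @{ $self->{children} } ) {
        if ( $child->contains($host_ip) ) {                   # step 1
            return $child->add_traffic( $host_ip, $bytes );   # step 2
        }
    }
    $self->{traffic} += $bytes;                               # step 3 (leaf node)
}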

Figure 2-6 Process of assigning traffic to a prefix object.

Since a Prefix tree is a balanced tree, unless the current Prefix object node is a leaf

node the host address will always belong to one of the child nodes. This approach

ensures that all the traffic information propagates to and resides in the leaf nodes.














CHAPTER 3
DATA ANALYSIS

Network data analysis is the key component in developing a system to load balance

EBGP peers. The previous chapters have been concerned with collecting both topology

and traffic information necessary to perform an analysis. This chapter will discuss the

analysis methodology employed by this tool.

Network Data Analysis

The goal of the network analysis module is to assign IP prefixes to egress circuits

in such a way that inbound traffic is balanced. The ideal balance condition would be an

assignment in which the percent utilization on each circuit is within a predefined

tolerance. A simple solution to this type of problem would be to break down the prefixes

as small as possible to provide more granularity, thereby making it easier to reach a

balanced state. However, an additional constraint in the BGP load-balancing problem is

that announcing the minimum number of BGP prefixes in the global Internet routing

table is considered good routing policy.

The global Internet routing table is a representation of all IP space being advertised

across the world. In order to ensure that the size of the table does not grow at the same

pace as the Internet itself, network operators need to ensure that they contribute the

smallest number of prefixes possible. As the table grows, the amount of routing

information being exchanged increases. These increases in both total size of the table

and frequency of updates imposes increasing CPU and memory requirements on Internet

routers.









The Network Analysis module performs load-balancing on a set of prefixes

and circuits provided as parameters. This module is written in Perl and provides only an

analyze method.

Analysis Methodology Overview

When developing an analysis methodology, there are typically two primary

considerations: accuracy and computational complexity. The design phase weighs both

requirements and develops a solution that represents a balance between the two that is

appropriate for the application (Sahni 2000).

For the BGP load-balancing problem, the accuracy requirement is difficult to

quantify. Any router configuration developed by the system will have a measure of

accuracy associated with the analysis period selected. If another analysis period is used,

the accuracy of the configuration will change. Since the traffic characteristics do not

experience dramatic changes in magnitude over normal analysis periods (i.e. the change

in maximum load over a 24 hour period is reasonably small), a solution that is reasonably

accurate should be sufficient.

The problem of assigning traffic to circuits can be considered a form of the bin-

packing problem. One distinction between the classical bin-packing problem and the

BGP load-balancing problem is that the size of the objects (IP prefixes) being placed in

the bins (circuits) can be changed. The constraint is that the splitting of prefixes can only

be done along CIDR block boundaries.

The approach used in this system is a two-pass approach. The goal of the first pass

is to distribute the traffic across the available circuits. This will provide a start state for

the second pass analysis. The second pass analysis will refine the load-balanced









condition and provide a final state that is within the defined tolerance. The final state

will be used to generate new router configuration.

First pass analysis

The first pass analysis treats the BGP load-balancing problem as if it were simply a

bin-packing problem. No modifications to the Prefix objects are considered during this

stage.

The bin-packing problem is known to be NP-hard (Horowitz et al. 1998). To

approach this type of problem, an approximation algorithm can be applied. There are

four common approximation algorithms: First Fit (FF), Best Fit (BF), First Fit Decreasing

(FFD), and Best Fit Decreasing (BFD).

The First Fit algorithm considers the objects in the order in which they are

presented. The bins are also considered in the order in which they are initially presented.

To pack bins, each object is taken in order and placed in the first bin in which it fits.

In the case of Best Fit, the initial conditions are the same as for First Fit. Best Fit

differs in that each object in turn is packed into the bin that has the least unused capacity.

The First Fit Decreasing approach reorders the objects such that they are in

decreasing order by size. Once the objects are re-ordered, a FF approach is used to pack

objects.

Best Fit Decreasing also reorders the objects such that they are in decreasing order

by size. After re-ordering, the objects are packed using a BF approach.

The algorithm selected for this application was Best Fit Decreasing (BFD). The

first step in implementing a BFD solution to the BGP load-balancing problems is to sort

the Prefix objects by the amount of traffic generated by that prefix. This step is done to

order the Prefix objects for analysis and is not repeated. Next, Circuit objects are sorted









in increasing order by the amount of load currently assigned to the Circuit. The

assignment of Prefix objects is done by iteration. A Prefix is assigned to the Circuit with

the lowest load. After each assignment, the Circuits are sorted by the amount of load

currently assigned to the Circuit. This process continues until all Prefix objects have

been assigned to a Circuit.
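
A sketch of this first pass, using the Circuit and Prefix interfaces sketched in
Chapter 2, follows; @prefixes and @circuits are assumed to come from the Topology module.

# First pass: Best Fit Decreasing. Prefixes are sorted once by traffic
# (decreasing); circuits are re-sorted by normalized load before each
# assignment so the least-loaded circuit receives the next prefix.
my @ordered = sort { $b->{traffic} <=> $a->{traffic} } @prefixes;

for my $prefix (@ordered) {
    my ($least_loaded) = sort { $a->get_load <=> $b->get_load } @circuits;
    $least_loaded->add_prefix($prefix);
}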

There is no consideration given during the first pass analysis as to whether adding a

Prefix object to a Circuit will cause the Circuit to become overloaded. It is assumed that

the second pass analysis can always reach a load-balanced configuration. If this were not

true, then the total egress bandwidth must have been exhausted prior to the analysis. While it is possible

to overload a circuit during the first pass, the second pass can break Prefix objects down

to a sufficient level of granularity that a load-balanced configuration is possible.

Second pass analysis

The second pass analysis starts off such that all traffic has been assigned to an

egress circuit. The challenge in this phase is to determine how to best re-assign some

portion of the traffic so that the circuits are closer to the ideal condition of being perfectly

load-balanced. Solving this problem requires answering two questions:

Which traffic should be moved?

Where should the traffic be re-assigned?

One option considered involved identifying the most heavily loaded circuit,

removing some fraction of the load, and re-assigning that traffic to the least heavily

loaded circuit.

The methodology chosen for this implementation is to re-utilize the BFD

algorithm. The underlying assumption is that by improving the initial conditions of the

BFD algorithm, a better solution will be found. Given that the problem set size is









relatively small for BGP configuration, performing multiple rounds of the BFD algorithm

is reasonable.

In the second pass analysis, the circuit with the highest load is identified. The

prefix with the most traffic is then removed from the circuit and split along the next

CIDR boundary. This is the next step in granularity for traffic re-distribution and

increases the number of BGP prefix announcements by only one.

Once the prefix has been split, all prefixes are removed from all circuits. The new

set of prefixes is now one larger than the previous BFD run and the circuits have no

prefixes assigned. This creates a new set of initial conditions for the next round of BFD

analysis.

If the load on each circuit is not within tolerance of the mean load across all

circuits, another round of second pass analysis is performed. With each iteration, the

number of prefixes that will be announced into BGP increases by one.
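
The loop can be sketched as follows; within_tolerance(), clear_prefixes(), and
run_bfd() are assumed helpers, and split_prefix() is the Chapter 2 sketch that returns
the two halves of a prefix.

# Second pass: split the busiest prefix on the most loaded circuit, clear
# all assignments, and re-run BFD until the tolerance is met. Each pass
# grows the announcement set by exactly one prefix.
until ( within_tolerance( \@circuits, $tolerance ) ) {
    my ($busiest) = sort { $b->get_load <=> $a->get_load } @circuits;
    my $victim    = $busiest->largest_prefix;

    # Replace the victim with its two halves along the next CIDR boundary.
    @prefixes = ( ( grep { $_ != $victim } @prefixes ), $victim->split_prefix );

    $_->clear_prefixes for @circuits;     # fresh initial conditions
    run_bfd( \@prefixes, \@circuits );    # the first pass routine above
}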

No consideration is given to whether splitting the largest prefix from the most

heavily loaded circuit will improve the balanced condition. This method is simple to

implement and assumes that traffic is fairly well distributed throughout the IP address

ranges being evaluated.

Balanced Traffic Condition

The definition of load in this paper is a measure of the aggregate of all traffic

associated with a prefix throughout the duration of the analysis period. It does not

indicate the maximum utilization experienced by the circuit during the analysis period.

The most obvious approach to determining to what degree traffic is balanced across

multiple circuits is to compare the percent utilization on each circuit at some point in time

(e.g. 60% utilization on Circuit A and 58% utilization on Circuit B = well-balanced).









Unfortunately, this type of comparison requires data at a fairly high sampling rate (or at

least uniform sampling rate).

In the case of using NetFlow data for analysis, traffic data is not being sampled at a

particular frequency. Rather, the data is exported as it occurs in the network. The

frequency at which data arrives at the collector is irregular and the period of time

represented by a NetFlow record varies. The conditions for flow export were discussed

in Chapter 2. Since not all flows have the same lifetime, each NetFlow record represents

data from a slightly different analysis period. It is possible to have a large number of

small NetFlow records for a short telnet session or a single large flow record for a long-

lived FTP session on a lightly loaded router. Possible techniques for extracting additional

information from this type of data source are discussed in Chapter 5.

To overcome the sampling limitation, the application of a load factor to each circuit

was utilized. This load factor serves to normalize the capacity of each circuit to the

capacity of the smallest egress circuit. When load factors are used, the load on each

circuit during the analysis period can be used for comparison. This allows circuits of

varying capacity (e.g. 2 DS-3s and 1 OC-3) to be load-balanced.
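
The normalization itself is one line per circuit, as sketched below; consistent with
test case #4 in Chapter 4, a circuit of capacity 2 receives factor 1.00 and a circuit of
capacity 4 receives factor 0.50.

# Normalize every circuit to the smallest egress capacity.
my ($smallest) = sort { $a <=> $b } map { $_->{capacity} } @circuits;
$_->{load_factor} = $smallest / $_->{capacity} for @circuits;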

This load factor technique has been implemented in the Circuit module. Raw

traffic data is added to a Prefix object by the Traffic module. The Network Analysis

module uses a custom sort routine in order to sort circuits for each iteration of the BFD

algorithm. The custom sort routine calls the get_load() method on each Circuit. This

method returns the amount of traffic assigned to the circuit times the load factor for the

Circuit.









Configuration Generator

The Configuration Generator module was developed to provide a solution for

implementing the results of the analysis in a network. This module is implemented in

Perl and provides an interface to accept the results of the Network Analysis module and

generate the configuration files necessary to correctly configure a router.

Benefits of Code Generation

Code generation is a technique in which programs are used to write or develop

other programs. In this case, the BGP load-balancing program generates code (or

configuration files) for a router. There are several benefits to code generation including

the reduction of human errors, standardization, and efficiency.

Though network engineers are both knowledgeable and professional, they are still

human. By developing a system that performs accurate and repeatable analysis of

network data, the network engineer can focus on other tasks that require human

intervention.

Regardless of the size of the network, standardization is a critical element of a

successful operation. By creating standard configurations and processes, networks can

scale to a very large number of elements being managed by a reasonably small staff. One

key in the operational scalability of large networks is documenting processes either in

standard, written procedures or by developing systems that establish how a particular

function should work. The Configuration Generator module encodes what a

standard configuration should look like. Any configuration generated by this system will

be in the correct, standard format.

A goal of any system should be to improve the efficiency of the task it implements.

By automating the analysis of network traffic, BGP load-balancing can be done more









accurately and efficiently. It is now possible to schedule the analysis to occur at regular

intervals and store the results. This will allow an engineer to review the results and

choose the best solution to implement. Accomplishing this analysis of network data

manually would be an arduous task for an engineer at any reasonable frequency.

Additionally, without the traffic information being available the typical solution would

involve only an educated guess by an engineer familiar with the network.

Implementation Process

Once the analysis has been completed, an engineer must implement the

configuration. While it would be possible to extend the system to implement the

configuration in a live network automatically, that functionality is beyond the scope of

this work.

The first requirement for announcing prefixes into BGP is to configure network

statements that include all address space. To avoid issues with IGP synchronization, null

routes are also configured for each network statement. Without additional routing policy,

the network statements and null routes would generate BGP updates for the entire address

space.

With only network statements and no routing policy, all updates announced would

have the same metric. The addition of routing policy to affect load balancing is

accomplished via route-maps. Route-maps use an if-then-else type construct to allow

modifications to be made to attributes of a BGP announcement. In the case of load

balancing, the route-map has a term that matches all IP space within a prefix. For each

match, the MED value is changed to prefer the circuit or not depending on which prefix

list is matched. If the IP space falls in the prefix list for the circuit, the MED value is set

to 50 and traffic from that range will prefer the circuit. For all other space, the MED











value is set to 200. By using a default MED value, any IP prefix that does not have

routing policy applied will still be advertised. In the case of the failure of an egress link,

the prefixes that were preferred on the link will continue to route across other links at a

higher MED value. Without this catchall, the stricter routing policy would create an

outage for the preferred blocks when an egress link goes down.











Figure 3-1 Process by which router applies BGP routing policy via route-maps and
prefix-lists.









router bgp 65000
no synchronization
bgp log-neighbor-changes
network 10.0.0.0 mask 255.255.248.0
network 10.0.8.0 mask 255.255.248.0
network 10.0.16.0 mask 255.255.248.0
network 10.0.24.0 mask 255.255.248.0
network 192.168.80.0 mask 255.255.248.0
network 192.168.96.0 mask 255.255.248.0
network 192.168.128.0 mask 255.255.248.0
network 192.168.160.0 mask 255.255.248.0
neighbor 192.168.1.50 remote-as 1234
neighbor 192.168.1.50 description RouterA
neighbor 192.168.1.50 route-map RM-RouterA out
neighbor 192.168.1.100 remote-as 5678
neighbor 192.168.1.100 description RouterB
neighbor 192.168.1.100 route-map RM-RouterB out


Figure 3-2 Basic BGP configuration without routing policy.


ip prefix-list IP-ALL seq 5 permit 10.0.0.0/21
ip prefix-list IP-ALL seq 10 permit 10.0.8.0/21
ip prefix-list IP-ALL seq 15 permit 10.0.16.0/21
ip prefix-list IP-ALL seq 20 permit 10.0.24.0/21
ip prefix-list IP-ALL seq 25 permit 192.168.80.0/21
ip prefix-list IP-ALL seq 30 permit 192.168.96.0/21
ip prefix-list IP-ALL seq 35 permit 192.168.128.0/21
ip prefix-list IP-ALL seq 40 permit 192.168.160.0/21

ip prefix-list RouterA seq 5 permit 10.0.0.0/21
ip prefix-list RouterA seq 10 permit 10.0.8.0/21
ip prefix-list RouterA seq 15 permit 10.0.16.0/21
ip prefix-list RouterA seq 20 permit 10.0.24.0/21

ip prefix-list RouterB seq 5 permit 192.168.80.0/21
ip prefix-list RouterB seq 10 permit 192.168.96.0/21
ip prefix-list RouterB seq 15 permit 192.168.128.0/21
ip prefix-list RouterB seq 20 permit 192.168.160.0/21


Figure 3-3 IP prefix-list configuration to identify groups of prefixes that will have routing
policy applied.













route-map RM-RouterA permit 10
match ip address prefix-list RouterA
set metric 50

route-map RM-RouterA permit 20
match ip address prefix-list IP-ALL
set metric 200

route-map RM-RouterB permit 10
match ip address prefix-list RouterB
set metric 50

route-map RM-RouterB permit 20
match ip address prefix-list IP-ALL
set metric 200

Figure 3-4 Route-maps use prefix-lists to apply routing policy to outbound BGP
announcements.

The system generates a route-map and prefix list for each BGP neighbor. Another

prefix-list called IP-ALL is also generated. This prefix-list includes all valid address space.

It is used to ensure that all address space is advertised out of every circuit.
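
A sketch of emitting this policy in the format of Figures 3-3 and 3-4 follows; it
assumes each Circuit object knows its name and assigned prefixes, and that Prefix
objects can render themselves in CIDR notation via a hypothetical cidr() method.

# Emit a prefix-list and a two-term route-map per circuit, using the MED
# values from Figure 3-4 (50 preferred, 200 default).
for my $circuit (@circuits) {
    my $name = $circuit->{name};
    my $seq  = 5;
    for my $prefix ( @{ $circuit->{prefixes} } ) {
        printf "ip prefix-list %s seq %d permit %s\n", $name, $seq, $prefix->cidr;
        $seq += 5;
    }
    print "!\n";
    print "route-map RM-$name permit 10\n";
    print " match ip address prefix-list $name\n";
    print " set metric 50\n";
    print "route-map RM-$name permit 20\n";
    print " match ip address prefix-list IP-ALL\n";
    print " set metric 200\n";
}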














CHAPTER 4
SYSTEM RESULTS

This chapter discusses testing that was conducted to validate the BGP load-

balancing system. The test setup and procedures are presented. Results and observations

from the various test cases are included. Finally, several topics for further investigation

are suggested.

Testbed Configuration

The testbed used to evaluate the system was built to mimic what a typical access

network might look like. In order to understand the lab configuration, it is important to

understand how a typical access network is configured. Figure 4-1 shows a typical

access network configuration.

In a typical network, end users are connected via an access router. This access

router could be a PPP aggregator in a DSL network, a Cable Modem Termination System

(CMTS) in a cable modem network, or an access point in a wireless network. This layer is

where per-subscriber configuration is done. This configuration can include subscriber

authentication, rate-limiting, and IP address assignment.

In order to simulate end users in the lab setup, IP pool interfaces were configured in

the access router. One interface for each /24 subnet used in the testing was configured.

These interfaces would be the default gateway that end users would be assigned in a

production network.












Internet







Core Router






Access Router





End Users E N




Figure 4-1 A typical access network configuration.

Eight subnets were utilized for the testing. The subnets are contained in Table 4-1.

These subnets are initially configured as IP prefixes with a 21 bit subnet mask. The

subnets are only split if the Network Analysis module identifies the subnet as a large

portion of the traffic.

Traffic Generation

During testing of the traffic collection module, the test setup shown in Figure 4-2

was used. The ping command was used to send ICMP traffic into the network from a

Unix workstation. Command line options allow a user to specify both the number of









packets as well as packet size. This allows for a user-defined amount of ICMP traffic to

be sent to a single IP address.

Table 4-1 Subnets that were utilized during lab testing.


IP prefix Subnet mask
10.0.0.0 255.255.248.0
10.0.8.0 255.255.248.0
10.0.16.0 255.255.248.0
10.0.24.0 255.255.248.0
192.168.80.0 255.255.248.0
192.168.96.0 255.255.248.0
192.168.128.0 255.255.248.0
192.168.160.0 255.255.248.0


Figure 4-2 Lab setup used to test the BGP load-balancing tool: a traffic generator, a
Redback SMS 1800 access router, and two Cisco 7206 routers.









This method was used to test the traffic collection module and its ability to decode

and store NetFlow records. Additional testing was conducted by generating SQL code to

populate the database with traffic information directly. This allowed for the creation and

execution of test cases without utilizing the traffic collection module.

There was no consideration for type or distribution of traffic in this study. The

analysis module balances based on aggregate load values. The specific type or duration

of each flow has little meaning in this approach.

Test Cases

Several test cases were developed to test the ability of this system to generate load-

balanced BGP configurations. The cases considered include both well-balanced and

unbalanced traffic conditions. This tested the performance of the system under normal

network conditions (nearly balanced) as well as worst-case conditions (significantly

unbalanced). Test case #4 also included load balancing across circuits with different

capacities.

The cases considered in this study are shown in Table 4-2. Each case is explained

in further detail in the following sections.

Table 4-2 Description of test cases used during validation testing.


Case Id Test Case
1 Even distribution of traffic across all prefixes
2 Even traffic in 2 /24 prefixes that fall within the same /21 prefix
3 Random distribution of traffic across all prefixes
4 Random distribution of traffic across all prefixes
with unequal size circuits









System Output

The system generates both summary output for the Network Analysis module and

router configuration to implement the results.

Test Case #1

This test case was used to provide a baseline analysis. Traffic was uniformly

distributed across all subnets. Since there were an even number of subnets and the traffic

is uniformly distributed, the algorithm should be able to develop a perfectly balanced

solution.

The results shown in Table 4-3 indicate that a perfectly balanced condition was

generated by the system.

Table 4-3 Per circuit loading results from test case #1.

Circuit ID   Traffic (KB)   Capacity   Load Factor   Load
PL-Lab-01    1600           2          1.00          1600.00
PL-Lab-02    1600           2          1.00          1600.00


Table 4-4 Per prefix loading results from test case #1.

Prefix Traffic (KB)
10.0.0.0/21 400
10.0.8.0/21 400
10.0.16.0/21 400
10.0.24.0/21 400
192.168.80.0/21 400
192.168.96.0/21 400
192.168.128.0/21 400
192.168.160.0/21 400









Test Case #2

The purpose of this test case was to evaluate how well the system performed with a

highly skewed traffic distribution. The case was derived so that two prefixes had an

identical amount of traffic. These two subnets were chosen such that they fell within the

same /23 supernet. With this arrangement, a perfectly balanced configuration was

possible but would require several iterations of the algorithm to achieve.

Table 4-5 shows that the ideal balanced condition was generated using 11 prefixes.

This indicates that there were 4 iterations of the algorithm. The skewed distribution of

traffic is visible in Table 4-6.

Table 4-5 Per circuit loading results from test case #2.

Circuit ID   Traffic (KB)   Capacity   Load Factor   Load
PL-Lab-01    50             2          1.00          50.00
PL-Lab-02    50             2          1.00          50.00


Table 4-6 Per prefix loading results from test case #2.

Prefix Traffic (KB)
10.0.0.0/24 50
10.0.1.0/24 50
10.0.2.0/23 0
10.0.4.0/22 0
10.0.8.0/21 0
10.0.16.0/21 0
10.0.24.0/21 0
192.168.80.0/21 0
192.168.96.0/21 0
192.168.128.0/21 0
192.168.160.0/21 0










Test Case #3

The traffic distributions in the previous test cases were not consistent with normal

traffic patterns. The intent of test case #3 is to closely mirror a traffic distribution that

might be seen in a live network. Traffic data was generated and distributed across all

prefixes. No idle prefixes are contained in this test case.

The tolerance used in this test case was 10%. Table 4-7 shows the resulting circuit

loads after analyzing the data. The algorithm is able to achieve a load-balanced condition

without splitting prefixes.

Table 4-7 Per circuit loading results from test case #3.

Circuit ID   Traffic (KB)   Capacity   Load Factor   Load
PL-Lab-01    1411.823       2          1.00          1411.82
PL-Lab-02    1354.287       2          1.00          1354.29


Table 4-8 Per prefix loading results from test case #3.


Prefix Traffic (KB)
10.0.0.0/21 390.566
10.0.8.0/21 360.717
10.0.16.0/21 267.394
10.0.24.0/21 289.327
192.168.80.0/21 180.033
192.168.96.0/21 364.333
192.168.128.0/21 422.971
192.168.160.0/21 490.769









Test Case #4

A slightly more complicated scenario is presented in test case #4. This case used

randomly distributed traffic across all subnets. The tolerance for determining balance

condition was lowered from 10% to 5%. Additionally, circuit #2 has twice the capacity

of circuit #1. This validates that balancing circuits on normalized load rather than

raw traffic volume works correctly.

Table 4-9 Per circuit loading results from test case #4.

Circuit ID   Traffic (KB)   Capacity   Load Factor   Load
PL-Lab-01    1077.613       2          1.00          1077.61
PL-Lab-02    2091.995       4          0.50          1046.00


Table 4-10 Per prefix loading results from test case #4.

Prefix Traffic (KB)
10.0.0.0/21 419.265
10.0.8.0/21 260.42
10.0.16.0/21 383.099
10.0.24.0/21 397.928
192.168.80.0/22 296.812
192.168.84.0/22 197.234
192.168.96.0/21 404.345
192.168.128.0/21 358.856
192.168.160.0/21 451.649


Conclusions

The test results discussed in this section demonstrate that the system for generating

load-balanced BGP configurations works correctly. Scenarios that required prefix









splitting were included to test the algorithm's ability to generate a new set of initial

conditions that could be used in the next iteration to develop a better solution.

The test cases that used random data spread across all prefixes are a more accurate

representation of real world traffic. In these test cases, the system was able to achieve a

load-balanced condition under both 5% and 10% tolerance settings. With the variations

in traffic levels in live networks, these thresholds are reasonable.

Regardless of the complications added to each test case, the system was able to

achieve the desired result. Balanced BGP configurations can be developed in an

automated fashion by analyzing traffic data and network topology.














CHAPTER 5
SUMMARY AND FUTURE WORK

The system developed in this study is a proof of concept implementation to show

that load-balanced configurations can be developed through network analysis. This

chapter discusses some improvements to the system as well as some opportunities for

enhancement that can be realized by implementing this type of system.

System Improvement

During the development of this system, several issues were discovered that might

cause the system to provide less than ideal results. The following sections outline these

issues and propose solutions to the underlying problems.

Instantaneous Data Rate Load-Balancing

The load-balancing done in this system is based on aggregate load during an

analysis period. The goal is not to ensure that the instantaneous data rate on each circuit

is balanced at some peak time. Rather, the system ensures that the total flow of traffic

out each circuit during the entire analysis period is equal. The calculation is more a

balance of volume than of rate.

The reason that using a volumetric method is appropriate is that network traffic

patterns are regular on a day-to-day basis. Peak traffic levels tend to increase in a linear

fashion over time. The traffic patterns are somewhat predictable. These patterns are

illustrated in Figure 5-1.










While the total volume of traffic is fairly predictable, what is not predictable is

which subnets will originate the traffic. Because end users are assigned IP addresses out

of one of several pools, it is common for dynamic users to change IP addresses.







Figure 5-1 Total volume of traffic is predictable on a day-to-day basis and grows linearly
over time.

In order to balance traffic on an instantaneous data rate basis, the NetFlow data

must be discretized. Data must be sampled on a small interval. The flow records contain

an aggregate amount of data during a period of time. The period represented by each

flow record is different. Each datum from a flow record is assigned to the timeslot during

which it was generated.

Since the information in a flow record spans many timeslots, some assumptions

must be made about the distribution of data during the flow record's interval. In order to

make this type of assignment, several challenges must be overcome (a simple sketch follows the list):

* The flow distribution models must be developed for each protocol present in the
network.
* The protocol contained in the flow must be determined from the information
contained in the NetFlow record. (i.e., non-standard ports would skew the results)
* The sampling interval must be smaller than most of the flow records.
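
Under the simplest possible assumption, a uniform distribution of bytes across a
flow's lifetime, the discretization could be sketched as below; real traffic would need
the per-protocol models noted above, and the timestamps are assumed to have been
converted to seconds.

# Spread one flow record's bytes uniformly across fixed-size timeslots
# between its first and last timestamps (uniformity is an assumption).
my $slot_len = 60;                                  # seconds per timeslot
$last = $first + 1 if $last <= $first;              # guard zero-length flows
my $rate = $d_octets / ( $last - $first );          # bytes per second

my %bytes_in_slot;
for ( my $t = $first; $t < $last; ) {
    my $slot_end = ( int( $t / $slot_len ) + 1 ) * $slot_len;
    my $end      = $slot_end < $last ? $slot_end : $last;
    $bytes_in_slot{ int( $t / $slot_len ) } += $rate * ( $end - $t );
    $t = $end;
}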










Although not impossible to solve, these problems are beyond the scope of this

thesis. When considering the traffic characteristics presented earlier in this section and

the difficulty in extracting additional information from the flow records, the system

described in this paper is a good trade-off between accuracy and complexity.

Low Utilization Prefixes

Because the load-balancing system makes prefix assignments based on load, the

presence of low or zero utilization prefixes has little impact on these assignments. This

can lead to router configurations where the load is perfectly balanced but the number of

IP addresses assigned to each egress link is drastically different. This was exhibited in

the configurations generated by test case #2 shown in Figure 5-2. The prefix lists are

balanced based on current load, but the number of addresses on each circuit are not

balanced.

If left unchecked, this issue could result in the configuration being significantly out

of balance as these subnets become utilized. In order to alleviate this problem, another

analysis pass could be performed that would use the load-balanced configuration as the

initial condition. This algorithm would look for low or zero utilization prefixes and

balance them across the links based on the number of IP addresses in the prefix. After

this pass, the configuration would represent a current load-balanced state as well as

balancing the number of addresses thereby extending the period of time that this

configuration will maintain load-balance.





























ip prefix-list IP-PL-Lab-01 description Networks preferred on Circuit ID PL-Lab-01
ip prefix-list IP-PL-Lab-01 seq 5 permit 10.0.1.0/24
ip prefix-list IP-PL-Lab-01 seq 10 permit 10.0.8.0/21
ip prefix-list IP-PL-Lab-01 seq 15 permit 10.0.16.0/21
ip prefix-list IP-PL-Lab-01 seq 20 permit 10.0.24.0/21
ip prefix-list IP-PL-Lab-01 seq 25 permit 192.168.80.0/21
ip prefix-list IP-PL-Lab-01 seq 30 permit 192.168.96.0/21
ip prefix-list IP-PL-Lab-01 seq 35 permit 192.168.128.0/21
ip prefix-list IP-PL-Lab-01 seq 40 permit 192.168.160.0/21
ip prefix-list IP-PL-Lab-01 seq 45 permit 10.0.4.0/22
ip prefix-list IP-PL-Lab-01 seq 50 permit 10.0.2.0/23

ip prefix-list IP-PL-Lab-02 description Networks preferred on Circuit ID PL-Lab-02
ip prefix-list IP-PL-Lab-02 seq 5 permit 10.0.0.0/24

Figure 5-2 System generated prefix lists are balanced based on load, but not necessarily
balanced on number of addresses.

Minimizing Prefix Announcements

Another issue that exists in the current tool is that the number of prefixes is

expanded so that a load-balanced condition can be reached. This can lead to a

configuration whereby after several iterations, two subnets that belong to the same

supernet are assigned to the same circuit. In this case, the two subnets could be collapsed

into a single prefix announcement for the supernet.

Scanning each circuit after the analysis and looking for adjacent prefixes could

resolve this issue. This technique would be simple to implement and would have O(n)

complexity.
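
A sketch of this post-analysis merge pass follows; it assumes each Prefix object keeps
a reference to its parent supernet, and remove_prefix() is an assumed Circuit helper.

# Collapse sibling prefixes that landed on the same circuit back into
# their parent supernet, reducing the announcement count by one per merge.
for my $circuit (@circuits) {
    my ( %halves, %parent_obj );
    for my $prefix ( @{ $circuit->{prefixes} } ) {
        my $parent = $prefix->{parent} or next;
        push @{ $halves{"$parent"} }, $prefix;
        $parent_obj{"$parent"} = $parent;
    }
    for my $key ( keys %halves ) {
        next unless @{ $halves{$key} } == 2;        # both halves present
        $circuit->remove_prefix($_) for @{ $halves{$key} };
        $circuit->add_prefix( $parent_obj{$key} );
    }
}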

Cost Factor in Load Balancing

Including the cost of bandwidth in the load-balancing calculation is a potential area

for exploration. This study had no preference as to which egress circuit should be

utilized first. Dual-homed configurations increase redundancy and reliability of BGP











peering. In a dual-homed configuration, a network operator obtains egress links from

multiple providers to ensure that they are protected from a failure in the service provider

network. To reduce the cost of this type of configuration, the backup link can be on

usage-based billing or burstable billing. In usage-based billing, the customer pays based

on how much bandwidth they utilize. In a burstable billing scenario, the customer pays

for a certain amount of bandwidth. If they exceed the allocated bandwidth, they pay a

higher rate for the additional capacity.

These factors could be included in the algorithm by modifying how the circuits are

sorted during the iterations of BFD. Rather than sorting just on load, the sort method

would include both the load on the circuit and the cost of exceeding the threshold for

burstable circuits. Usage-based circuits could be handled by the same method with a zero

threshold.
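
One way to sketch such a sort key follows; the threshold and cost_rate fields are
assumptions, and a usage-based circuit is simply one whose threshold is zero.

# Load plus a penalty for the portion expected to exceed the billing
# threshold; circuits are sorted on this adjusted value instead of raw
# normalized load.
sub cost_adjusted_load {
    my ($circuit) = @_;
    my $load = $circuit->get_load;
    my $over = $load > $circuit->{threshold} ? $load - $circuit->{threshold} : 0;
    return $load + $over * $circuit->{cost_rate};
}

my @by_cost = sort { cost_adjusted_load($a) <=> cost_adjusted_load($b) } @circuits;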

Support More Complicated Route-Maps

Route-maps are utilized in this system to apply the load-balanced routing policy to

BGP announcements. A single term type, a prefix-list match that sets the MED, is the only type of route-map term supported.

Other types of routing policy are also implemented through route-maps. This

system could be adapted to include other routing policy terms present in the route-map

prior to analysis. This would integrate the existing routing policy with the changes

required to maintain load-balance.

This type of change would require parsing of the existing configuration file or

additional changes to the database to include routing policy beyond the scope of load-

balancing.









Fault Detection

The data available in this system would allow for the development of some

additional fault detection capabilities. By mapping traffic data to egress links, over-

utilization conditions on egress circuits can be identified. This type of information can

typically be obtained by other means (e.g., SNMP).

The ability to break down the traffic information from an overloaded circuit to a

more granular level is an enhanced capability. Once an issue has been identified, the

traffic data will indicate what type of services are consuming the link and what hosts are

the source of that traffic.

Infected Host Detection

One specific example of fault detection is infected host detection. The data

gathered for load balancing will include signs of virus presence or propagation. Not all

viruses and worms could be detected, but many have signatures that can be identified by

looking at flow characteristics. Host enumeration activity could also be detected.

Enumeration might involve ICMP and port scanning to identify hosts where a worm can

propagate.

Summary

The goal of this project was to develop a system that could analyze network traffic

and generate load-balanced router configurations. The motivation for this effort is that

the existing process today is manual, labor-intensive, and potentially error prone.

Today, an engineer must evaluate load conditions on each egress link within an AS.

Aggregate traffic levels on a per-circuit basis and the number of IP addresses preferred on

each link are the only information that the engineer has available. The engineer









determines an estimate of how much traffic needs to be moved. This step is based on

estimation and can lead to errors.

This system analyzes data and assigns traffic at an IP prefix level. This allows for

accurate determination of how much traffic will migrate when the preference for a

particular IP prefix is adjusted. This eliminates the guesswork and estimation in today's

process.

Rather than simply identifying a prefix to move, the system analyzes the entire

network and develops a new set of prefix-to-circuit assignments that achieves a well-

balanced state. This analysis is based on data across all IP prefixes. The system has been

shown to be effective across a range of cases that test both normal traffic conditions as

well as irregular traffic that represents a challenge for the algorithm.

The enhancements presented in this chapter are by no means complete. There are

certainly additional features that could be developed that would allow this system to play

an integral role in a network management suite.















LIST OF REFERENCES


Agarwal S, Chuah C, Bhattacharyya S, and Diot C, 2004, "The Impact of BGP Dynamics
on Intra-Domain Traffic," Proceedings of the Joint International Conference on
Measurement and Modeling of Computer Systems, ACM Press, pp 319-330.

Bressoud T, and Rastogi R, 2003, "Optimal Configuration for BGP Route Selection,"
INFOCOM 2003: Twenty-Second Annual Joint Conference of the IEEE Computer
and Communications Societies, Vol. 2, pp 916-926.

Christiansen T, and Torkington N, 1999, Perl Cookbook, O'Reilly & Associates, Inc.,
Sebastopol, CA.

Halabi B, 1997, Internet Routing Architectures, Cisco Press, Indianapolis, IN.

Holzner S, 1999, Perl Core Language, Coriolis Technology Press, Scottsdale, AZ.

Horowitz E, Sahni S, and Rajasekaran S, 1998, Computer Algorithms, Computer Science
Press, New York, NY.

Nilsson S, and Karlsson G, 1999, "IP-Address Lookup Using LC-Tries," IEEE Journal on
Selected Areas in Communication, Vol. 17, No. 6, pp 1083-1092.

Qiu L, Zhang Y, and Keshav S, 2001, "Understanding the Performance of Many TCP
Flows," Computer Networks, Vol. 37, pp 277-306.

Sahni S, 2000, Data Structures, Algorithms, and Applications in Java, McGraw-Hill,
Boston, MA.

Savelsbergh M, 1997, "A Branch-and-Price Algorithm for the Generalized Assignment
Problem," Operations Research, Vol. 45, No. 6, pp 831-841.

Sklower K, 1991, "A Tree-Based Packet Routing Table for Berkeley Unix," Proceedings
of the 1991 Winter USENIX Technical Conference, Dallas, TX, pp 93-99.

Stallings W, 2000, Data and Computer Communications, Sixth Edition, Prentice Hall,
Upper Saddle River, NJ.

Tanenbaum A, 1996, Computer Networks, Third Edition, Prentice Hall, Upper Saddle
River, NJ.















BIOGRAPHICAL SKETCH

I received a Bachelor of Science degree in civil engineering from Florida State

University in 1997. During my undergraduate studies, I was employed by Post, Buckley,

Schuh, and Jernigan (PBS&J) and at the Florida Department of Transportation Structural

Research Center. This practical, hands-on engineering experience proved beneficial

during my graduate studies.

I began my graduate school career under Dr. Peter Ifju in the Aerospace

Engineering, Mechanics and Engineering Science (AEMES) Department at the

University of Florida in 1997. I studied the post-buckling response of composite

sandwich structures and published a thesis on the topic.

My studies in the Computer and Information Science and Engineering (CISE)

Department at the University of Florida began while writing my thesis in aerospace

engineering. During my tenure in computer engineering, I have focused on network

communications and security. Both my academic progress and network-related research

are represented by this thesis. This thesis was defended on October 6th, 2004.