
Process Forensics: The Crossroads of Checkpointing and Intrusion Detection

(University of Florida Digital Collections; package UFE0008063, ingested 2011-01-14)
acfecd6d4b3d9cfa2b3205d66a5dcca3
0e42c4faf7d588b69770f1ef9fc54333d82117f5
79028 F20110114_AABSSG foster_m_Page_12.QC.jpg
f4d01068ee6d80b70a40797d13469f99
25478b33dff450d0facb1c2f3c60b7a3badb4734
109716 F20110114_AABSRR foster_m_Page_35.jp2
50014c22b8a712da9bd2a1ca51e64974
9d4c28ff05c567d6ac2b4ba837d887fcc34ebe9e
1691 F20110114_AABSSH foster_m_Page_46.txt
90bda8db940f7fbe55958e8f3e6e39b6
94e52afbd05a6e276aade64729c4a2bb3ce71286
1982 F20110114_AABSRS foster_m_Page_65.txt
db45d6c9f27c62270d68a242a2e04c45
b4c8f1f014b1efb86a4a9aaa959fe1a9b4a025fc
F20110114_AABSSI foster_m_Page_80.tif
42c4e299cf96b2ecdd47afc9fe2b728b
32a88c9fa0e8934adaa07f09c6d983f680c0339b
48172 F20110114_AABSRT foster_m_Page_15.jp2
462a193a5fa89a98a25a5e6a15c792fa
7731b7485b98162ca78257c990f12b287691b5a9
F20110114_AABSRU foster_m_Page_59.tif
bce9b8130229b03af7e5f474f4091fe9
45806f51fd159790cde69a1f3c49c2cae7267621
1475 F20110114_AABSSJ foster_m_Page_54.txt
cbf698324f1ab1552c0a4c30f35ac9d5
fe464a209c896afe1575d799fc23d686e7cd144e
66959 F20110114_AABSRV foster_m_Page_95.pro
9253f26fa7328d1eee5582a50e9b2569
e2fbd331e85d4737254bc91c633cc09ffdf03646
192320 F20110114_AABSRW foster_m_Page_38.jpg
4a51e9e3f5eae204373d091c2cefaa9f
4b4e74ad91ee912d5d4f034be92a04664754a86a
F20110114_AABSSK foster_m_Page_27.tif
3ecce3373c2c2daef7e6b4f3f34d9c07
28cb70ad7fee19aba2330c406a0f8e536458aa3a
153572 F20110114_AABSRX foster_m_Page_72.jpg
d74fcabd7cba8f512d3822617094edba
fe944f5de115b5658a084af27f51ceea488dad3f
83061 F20110114_AABSTA foster_m_Page_58.QC.jpg
c7f138af0439906a1a96d104acd235b9
39040cb8c6724fd68b288d85003fd16e8e1d1b0b
72098 F20110114_AABSSL foster_m_Page_09.jpg
974b95a99013e601912866cf32bc7bf3
6c04b1ae0aa4a675cebd15c8f6fa38a9121b3a83
25689 F20110114_AABSRY foster_m_Page_83thm.jpg
98eed7a38f992e4ae57de89fef8889dc
d828c65266a0f55916a60c5659935a9abec39fc9
F20110114_AABSTB foster_m_Page_97.tif
893f4f652ca09a2bb16eb3370aa270eb
6ca32f05f561299a66eaba667bee166130aa1def
97210 F20110114_AABSSM foster_m_Page_38.jp2
72185aeafea29a545fe5660a1c772176
4421d875f9719cf0d9c3f6da14bc293a4f649b5a
25278 F20110114_AABSRZ foster_m_Page_36thm.jpg
c010f4f58f38c7d530f97fdf18afefac
c3732fddad88b74c4a305c3a930bb6b160cc547c
49395 F20110114_AABSTC foster_m_Page_23.pro
36fecf932c7342c81c6b58a5b1f3cd77
a8e2d83ca40e18c1cd2ca78117c8acb42c0ca77e
28731 F20110114_AABSSN foster_m_Page_47.pro
4708dd6b6278688c0d40194773b234fb
2cc508e6742513b493f19519aa1ea6b9143d9b38
424 F20110114_AABSTD foster_m_Page_01.txt
08da9b2b27dcc149e8042b106f0ae70f
e1c4c3d254cbc5ba28325599e82e04e6994d147c
F20110114_AABSSO foster_m_Page_51.txt
3429128833bb277129d16d6a3e718466
9556b5e1e7d13f8ecb6b837db0cb76c61a48a867
60113 F20110114_AABSTE foster_m_Page_73.QC.jpg
18ed623e5573b95e58c1c3c7f86ab7bc
3a2766e2d4a71628f9c43b46bcabb0b74cf01229
111019 F20110114_AABSSP foster_m_Page_36.jp2
454446c4adee2e8883e06d596b67bc99
14267f7480a56a02937cd19fb9d0a27c1dc8eb73
673904 F20110114_AABSTF foster_m_Page_47.jp2
836d3412d20577362f11eafbfe01c010
1955bb6e56d4da691d7a9c44eaf7c3ce184bcc2c
1823 F20110114_AABSSQ foster_m_Page_50.txt
10db453b0e2b51d960bb6af8e866bb0a
5e543aeb1a2387d7a3d4d172f6d012a164aaa017
1500 F20110114_AABSTG foster_m_Page_56.txt
c2b97d2f330bf584b0806f75f9369b9e
11d6cb979332fec5edc256b725da39e011d6e3c0
22013 F20110114_AABSSR foster_m_Page_94thm.jpg
7623f3b4a17a63a0ee49011dd56f0dd9
9e735ddc730c93e241ad01e83eed739e85975e90
23149 F20110114_AABSTH foster_m_Page_66thm.jpg
d6e793449502e0b2ff308f0c0b4e347a
36433934a6e3d62412506c6b38342e36819414ba
15741 F20110114_AABSSS foster_m_Page_09.pro
65baf0fadaa8e67f9e771f83954097b6
722081e3655d412d203e942729113f159662df30
103699 F20110114_AABSTI foster_m_Page_65.jp2
0fd7d15f4f99f696e8ce13780d8fd822
17ae4d1f7d44a30ead6ee79d7fa0b223c30f219d
21105 F20110114_AABSST foster_m_Page_74thm.jpg
b08babdcbbe70ad894aa06a99b91b166
e3e86b07bedbed8949a01f9de6248f396751aa6f
48702 F20110114_AABSTJ foster_m_Page_11.pro
33bdf05d663a80bb06ad8a7d73669522
b0a55df413d85834575fbe27ce6b6c0248ec9f61
25271604 F20110114_AABSSU foster_m_Page_47.tif
ee322359f6e7defb4bca3debf5cb2813
64052d62032ef8e98076c2f0ebc0c552daa131f7
2896 F20110114_AABSTK foster_m_Page_04.txt
e8bc609111d74949cec8c40f66954ba6
2eb52e6c535e794926d0fc85701515e53f850dfa
1051962 F20110114_AABSSV foster_m_Page_04.jp2
ab86a03836572f65003eba10b1b5663d
f3940a4b9272a4d07787d1304a8b5de17b61e7b9
124122 F20110114_AABSSW foster_m_Page_96.jp2
1e554046a6f5cedeb798ee79deb9f2e6
2090af2daabd18105eb68140cf882f9b3de6d7f7
166064 F20110114_AABSUA foster_m_Page_08.jpg
56ad00c28c4f2ac5f0e23f9bb8756db1
b09fc80a371a6b103d83af4e5ab6e92e51068f7d
23331 F20110114_AABSTL foster_m_Page_03.pro
915e8f4797a8e2141fe0c4f0e8cf4919
e331a6c6ba77bf7ff6aa0f083b2dd331713138e2
39804 F20110114_AABSSX foster_m_Page_16.pro
0fda8636405ea63a054da5b03b754503
1402ca558ba15de37793b87ff5a2f479e4a19b10
183213 F20110114_AABSUB foster_m_Page_10.jpg
fefeec13467743dd524a40cf6bb6476e
1f1d01b99f560f391b8f47d51759e2a3641c566d
221220 F20110114_AABSTM foster_m_Page_94.jpg
d808b3f2e28efcb1344e9a02d7d1dd93
23ce2eb3a8814f701da9c52bd2730b7783ea559c
3024 F20110114_AABSSY foster_m_Page_02thm.jpg
67290d7731647c2ae8838d5124e97b93
65e9f995813b005b2457a3bb067c8442b469a480
201031 F20110114_AABSUC foster_m_Page_11.jpg
f8e22f113617ef4b2d35c442b162a99c
2d5f5aeacfe63c9229114886055432c8f58ca75a
1037254 F20110114_AABSTN foster_m_Page_75.jp2
7fec6ed4fa7429561eef00aa2bca5e20
5886754f3a62274259556060e659d7bda70a2ea8
4083 F20110114_AABSSZ foster_m_Page_88thm.jpg
ae1013fb8f21379e4563c635ba80fb39
8dc20ec4ac455d1727d57ff874082f985eef4859
210815 F20110114_AABSUD foster_m_Page_12.jpg
7dd724fd8e5f1f504eadf15c2f16173a
afafa518700446bebba76a346c12a9a94ed0946a
75370 F20110114_AABSTO foster_m_Page_77.QC.jpg
305c1855241c14ae9780adda6636530e
6fed86ed9c004b2a86769081c440b69556d29b8d
212260 F20110114_AABSUE foster_m_Page_13.jpg
41c7f28357bacefe31aecf8be4c9b862
9dd53dc7476a62ffaaa046c3a4fe8f8bae4245f6
924973 F20110114_AABSTP foster_m_Page_43.jp2
6416949627396f63a4421e0026e5de23
0f557d96716ea44f10f0d7b8a4f9ac616199cf78
203167 F20110114_AABSUF foster_m_Page_14.jpg
1f17dcb99eb81fd863a9ddd49b4c11c4
1007ca085158624a6ff669044fdf36591eaa52d1
21204 F20110114_AABSTQ foster_m_Page_45thm.jpg
a70eee057f7e3012e3dac3c57fd72fd6
9080835fdd9fc064deb4e73db3664e61626b42cf
91007 F20110114_AABSUG foster_m_Page_15.jpg
9d76b2590d5324a072059117dd35f31d
f032e0ca35e410e0ed93d5823fd533c739885ce6
F20110114_AABSTR foster_m_Page_70.tif
a4349f2734792558eb8469e4c59fae4a
df954d39fc635e5b2bfe747bb1ad34750a5334fb
104869 F20110114_AABTAA foster_m_Page_86.jp2
a1722ec20a220c839e4453b6404d6440
6e3a1ab1b2c2eb06458774b59d4350654329d47a
170253 F20110114_AABSUH foster_m_Page_16.jpg
9b90c1623f05d0d03bf23eab686f790c
a62766759c0041d22454dab4f3b61e330ca87db8
112672 F20110114_AABSTS UFE0008063_00001.mets
0837aed4e4cb84171e6afb2eda0dd341
cbb68551c32a067fa2199ab701df2b9288bdddbc
105081 F20110114_AABTAB foster_m_Page_87.jp2
9f4dc91bf8b35b292bb4d54dcab8211c
0bed1e32a7c808a55b46b0c01edec2409773f649
210875 F20110114_AABSUI foster_m_Page_17.jpg
5942098ddf350e63732a189db6bdc9aa
97785069a96608a85322bf03e706b281eec0c9ab
109657 F20110114_AABTAC foster_m_Page_90.jp2
5e657486d08579c79b048cc2194ff2d1
2e59cc08537fecd45b7f8e9650678530f9a1f6dd
215145 F20110114_AABSUJ foster_m_Page_18.jpg
615e10c22546ef6b4da3e9ace30dd3f2
a9b101c0bc69d7238a9b386b62e2255f1ebbe909
103835 F20110114_AABTAD foster_m_Page_91.jp2
c294f58d6177b37a30daa05054ca7076
6daed3ba1fd3383dce20a0c8f7bb618f04d1031f
202396 F20110114_AABSUK foster_m_Page_19.jpg
4718897aeeca60bc7e50a06f91173f3d
56677694b12fae98b8e43aecbb2482eee3f5f64e
53982 F20110114_AABSTV foster_m_Page_01.jpg
e4e3ac4283af4e876afe78e9013e3508
c347d701f048e440d61afa01e3f7f9f15e147eaa
113042 F20110114_AABTAE foster_m_Page_92.jp2
0390c7ff343c37df5369b776bc5e9dab
224a6e1a73d14e3a717dbeb5bac9f3c8a118d6f8
194008 F20110114_AABSUL foster_m_Page_20.jpg
7bef7ed8c8b5c73ba3066d9b5144c7a1
d1f40400305e4b8707893a9299a030d7eda8c6be
14193 F20110114_AABSTW foster_m_Page_02.jpg
e428fd0e72cd67b0b291f0f7a5fcdd2c
8ba0822532c0cc890a52617d8470a10ca28e25d4
44453 F20110114_AABTAF foster_m_Page_93.jp2
c908b645e94d240ed871b393e09bc214
a42b3bc04ed5b92f945a62cc32abc25249c7250c
217091 F20110114_AABSTX foster_m_Page_04.jpg
2d986a949fff8e97c03a2703ab1b1959
832be0f0b61ba8cc4c2221f07fb7b1e3e667ca52
111341 F20110114_AABTAG foster_m_Page_94.jp2
0d75d2b99f8f2fb35b95aeab4f48658b
8e585c48ee1ab3cb621daefc751510b3c69a1d0f
213835 F20110114_AABSVA foster_m_Page_36.jpg
923854b3876cc3b75f4ba79c77ef0685
f9124a3d6ee5688f06908a39683a9637b61823dc
214362 F20110114_AABSUM foster_m_Page_21.jpg
c26b9422eddf9ca09e10b7bcfdc7e583
661f93695b11b13dffdaf47393b413f04393c72d
102094 F20110114_AABSTY foster_m_Page_06.jpg
8f5bb0fbedbec4b542cf8141e4b79d7c
abbe856d73b11e962d3e3e2f3a355906b4f13605
32172 F20110114_AABTAH foster_m_Page_97.jp2
7575869e03fc377f8cf45f820620264d
d1ec9758cdbf48b36522d1a271992afb3998b3ec
139284 F20110114_AABSVB foster_m_Page_37.jpg
e2d0fe0fc71076cc5ecd4bfd73b5ea3d
c82f2279ae2cb3b7bcd5c45367e1f4054116a595
210981 F20110114_AABSUN foster_m_Page_22.jpg
b37b51ae1c48c79b770b1547de94c34b
a857952f5e40ba2484559ae8dea6d249cb92e69b
161123 F20110114_AABSTZ foster_m_Page_07.jpg
53253ee821a1ec02adf39a90bc217b78
d9d7dc0ce98dafdf8d4c20cec16d392c51753253
F20110114_AABTAI foster_m_Page_01.tif
078f4bb5b97c0f057ddbd7d638352fcb
65041ff3ffeeb562250a540073847013d61c1022
197286 F20110114_AABSVC foster_m_Page_39.jpg
c74811c53d56f03efb7237b539265a44
c41ba740d9b31bcdd059decef89b06d1bd2d1a44
211673 F20110114_AABSUO foster_m_Page_23.jpg
c4e30791c1b0762781dd6b4c42e93ae8
069f19b769d248081c5e6ed3b203c9c01c71fa87
F20110114_AABTAJ foster_m_Page_03.tif
f47ef14a7e85bf4b735ae447f8415715
8c19fb027edd9d8005698e57b1fed17c110e49f4
216911 F20110114_AABSVD foster_m_Page_40.jpg
e791ea27b092bea559f48e82390a588a
c908bb656f8d935caa72f8e5b839e6e6cf775d8b
218916 F20110114_AABSUP foster_m_Page_24.jpg
7fbaf0ff4dae6bc89139c336e7986378
d9a2fd90f552c851d99f62bc1bc32bfcca89ef3d
F20110114_AABTAK foster_m_Page_04.tif
dfb134b405d8f4caebc96212ae46efed
2eadf24365ee6e55c5c41f7e08fe713b01bfc10f
176485 F20110114_AABSVE foster_m_Page_41.jpg
5b396ad02e73df3adce90b8caf872f35
3ac122acf86235eeb17ed5870250020cfbc29a7b
201454 F20110114_AABSUQ foster_m_Page_25.jpg
6cefd2662a8c71ef352403449b430e05
5f81b0707d921c64ee34934b4752aa335a90afde
F20110114_AABTAL foster_m_Page_05.tif
6a9515c49749fc9de598563300b29c8e
3afddc75f272a036bddbbe75b14acddcf095af9d
174004 F20110114_AABSVF foster_m_Page_42.jpg
7bccc776a945218ff64aaa78ed741c76
f65a24cd863b377192ad01d08dccc62a047fa6d8
200353 F20110114_AABSUR foster_m_Page_26.jpg
9877e10c19a6499158c5cbc3f812dafe
ca1e4d880310cf017d0810ab5b30081fe493cd54
F20110114_AABTBA foster_m_Page_20.tif
c31327ea23b521d94646524d3d931511
c5ddf82595e7364af9931b21a3e7d37f6e0de2db
F20110114_AABTAM foster_m_Page_06.tif
0c32f58b031dee6f2799a2f35ada2e94
dbfa23f95810aa37ec1d44fd2f4e18bbac9bc17d
196818 F20110114_AABSVG foster_m_Page_43.jpg
a1fa8a6c195fffbea9b22ea1f9d4706e
1c7662f3e6b9279f9ff249f36ad75bd07961ea6f
202958 F20110114_AABSUS foster_m_Page_27.jpg
bf536d7c23ce15a621b5e61b6e2ed9ea
c738879268e9e6a63c561640795d5fb0cf0a810f
F20110114_AABTBB foster_m_Page_22.tif
77854c510ca59059a2c314546cf8fb45
c0f640a5a0c1ab369f3358be6902da76dd135746
F20110114_AABTAN foster_m_Page_07.tif
3853bc43239488b8fc997b719337ff56
e56a77c02e895ca1903313cb173765a648eb7566
213193 F20110114_AABSVH foster_m_Page_44.jpg
85f183e0062076eb56d45cc2ce929b6a
5a42999a98d416da8c7789c9e7b2a03fbe88d206
212390 F20110114_AABSUT foster_m_Page_28.jpg
e33634126a7c1148f9232bbcf0d2ad3e
0a99fdc0419f09ce7e886c338f92d4fbf826eb8c
F20110114_AABTBC foster_m_Page_23.tif
a219e00d2daf7d7e5e8370de3d0c209d
e49c16191391be7a169421a6049c5edfd83261ef
F20110114_AABTAO foster_m_Page_08.tif
5278bff3469d063aebc6936a52932f06
3f9072e974af0be34459f7dec66d60057797914a
175405 F20110114_AABSVI foster_m_Page_45.jpg
c41a633026654e3a30b8cb95e73c3434
1c98d2a5b2afc7381904a934c37815bbe81d64b9
189581 F20110114_AABSUU foster_m_Page_29.jpg
9693a5e174b079613dcfe5ebea25b997
b3e9756ad4c116720f31c0ddc653bb20f7642fcb
F20110114_AABTBD foster_m_Page_24.tif
58197fb738dfcba93875ab7624d39592
d220f3256338b7be35606123c90aaf52e50216e5
F20110114_AABTAP foster_m_Page_09.tif
fcd496de2fdf3f2cd77cd988134091b8
58d51fb3b3b9fbc300a863953ffb561ee9f776b6
201502 F20110114_AABSVJ foster_m_Page_46.jpg
3921f9b174535aaf9f2fa46f17149cd0
826bc7ab6941c20420d5dca2cb7ac6cf7f857974
201880 F20110114_AABSUV foster_m_Page_30.jpg
89a5d7d7b41d37030387872e87012d34
12a19db1127232693fe0b225e2f07444b9418426
F20110114_AABTBE foster_m_Page_25.tif
fdb63be29f5aaa2662f689dc67f391d0
bb7ef01e5f169b7ea2086626834cb99d9618d76f
F20110114_AABTAQ foster_m_Page_10.tif
6b46299a359ace90f7ade29dfb9ed545
9039b4d34fa1137b5fc628b0b9aec97e8cef2eff
153649 F20110114_AABSVK foster_m_Page_47.jpg
052c54bb3066527492ba536aa33951d7
44750a34ac5677ecc94cb0fc6a0cc9d7c56f449c
203581 F20110114_AABSUW foster_m_Page_32.jpg
9ee3940a51aa5ea1c92a1b34ab9dd70e
a663357112a1e4e318a13c0bab295ed0a24c1ef3
F20110114_AABTBF foster_m_Page_26.tif
82bea555e2a5ddec7c4416342dc0fec2
cb4f958f844bfeb65c1ec26afea43ff43dc18d61
F20110114_AABTAR foster_m_Page_11.tif
cfcf80c0fde576723d677853510523bd
d68fd245f94586a0dcd643789c8e314fe1e595e1
187907 F20110114_AABSVL foster_m_Page_48.jpg
1cce545b616ee5fde65bcbc90fde06c8
6794cd37c3b127e26e4580c2162237be2c5f5de5
207496 F20110114_AABSUX foster_m_Page_33.jpg
e41d33680b6088c824409c138980a4f3
c39f442ccecf853bc3aa465f313bce058baae491
F20110114_AABTBG foster_m_Page_28.tif
a5183ce0f121c940918fc6d99c76850d
159d229e69ed11cab45f3d8084c4d7ca416060d8
194198 F20110114_AABSWA foster_m_Page_65.jpg
69fa59b66381c27bdf2e7da5b6576101
42b9b6465084c808ca8652a3b20f1d1f5b904443
F20110114_AABTAS foster_m_Page_12.tif
05575c5877eaf42fdec57f3ed217365e
d3574cd54ce4db2fec7ebbf26ce1b57419528e04
209833 F20110114_AABSVM foster_m_Page_49.jpg
4b65dd7129e48912df19288bb0f0f890
b7ccd5e35b643c7741ff1b1db68b59b3dbab0849
188231 F20110114_AABSUY foster_m_Page_34.jpg
8836913ca408597b0e4847af65e18bb4
13139fabf9e19159805e8443e8b21ae561b3ce3e
F20110114_AABTBH foster_m_Page_29.tif
2ba6b550119fff859a51e5b7cf42811e
711407c7ad41dd865cb4fede943e32fbea1accbb
194015 F20110114_AABSWB foster_m_Page_66.jpg
431be5f302526e02f646c780d4b81e2b
83c511d2ee88c77ce092c75eb51016eadf7422a7
F20110114_AABTAT foster_m_Page_13.tif
7daaf249800a5b34331bc86dce0c7229
b3b09a6a70d313974a8d20312be12e3f550893e5
209574 F20110114_AABSUZ foster_m_Page_35.jpg
1905040ba3847e3e9cc9112a10457489
7390a75d8033572fe37f4ed2d109ce62b8829757
F20110114_AABTBI foster_m_Page_30.tif
8c41492f080fdc87d054bfb71f7a3931
2faaf1920b2f03de7e96a2d8e8f015715f8923b7
213902 F20110114_AABSWC foster_m_Page_67.jpg
6d3c780b503584d886411d3460cab24b
cf2ab959cf1a52f119af5c877fabc2808c8f3ee0
F20110114_AABTAU foster_m_Page_14.tif
a638820a658641d759b26e75ebc6f2e9
4871be0fce896a7c4fdec1bed3a886d2f27e0c6e
191432 F20110114_AABSVN foster_m_Page_50.jpg
1c5d71364842c3bd09f2fa87f99ea9b2
0b088885fb46392caa153330c63597d452ce6b22
F20110114_AABTBJ foster_m_Page_31.tif
1a8955abf5466b8841fbce8e83cb40b1
698df7ad673e4974a8af96811f2935ed508528d2
201332 F20110114_AABSWD foster_m_Page_68.jpg
172a758800dcd73c4d06c2b050f7bdb9
46616e99c9cbcffbcf82de4f47ad395e47972949
F20110114_AABTAV foster_m_Page_15.tif
de04cf312de547baef192fe95ea866e5
6c6072dcb9fab827afd88e0fd446b2014c55fd65
207483 F20110114_AABSVO foster_m_Page_51.jpg
246a0ec21cad5bdec8f55a328fa0b241
b602f3cce5f54e051cebae0642c2be22887fa27e
F20110114_AABTBK foster_m_Page_32.tif
ec28a9a810bcf107f675d6223be6273e
6348a66c0958e327f9d4c6aa4eb565d4801fd58c
173968 F20110114_AABSWE foster_m_Page_69.jpg
3de04be1362aca9cdf99ddc3065ae806
4b5b2e072bca09a8f09a0cae6397eac513ffb512
F20110114_AABTAW foster_m_Page_16.tif
1204f18b45a75b845615ba749c475ab7
ae5d1f2847d3e878ee0a41a33c4944378b45509d
207121 F20110114_AABSVP foster_m_Page_52.jpg
dc679762ba8a6a21036242e0fb73b506
4c31c9d15fb004dba8f2feafa600570710d05bb1
F20110114_AABTBL foster_m_Page_33.tif
9d62f62219ddbc2e706566ff76da36ee
1268724e17be2eca87eef6eddb1843abf98b0df7
157388 F20110114_AABSWF foster_m_Page_70.jpg
140c2f5eaa253d04ae735c087cd0ae84
fefcd449197c5b6ab2d2e2ceb2e2541ef4597f02
196463 F20110114_AABSVQ foster_m_Page_54.jpg
f3a033e099856aacad89072fc29063b6
4dcfd63ee2e06ccac504a21a32af39527288e3dd
F20110114_AABTBM foster_m_Page_34.tif
b4d4f6388e2d36fd26a08dc8b00e378f
acb6713102e3331acd953eff706a5dbc40afdcd8
144276 F20110114_AABSWG foster_m_Page_71.jpg
297abdc3d7fecc2390ea07583482ef81
0e68bd440b908be2cd3c320040d305e028e53687
F20110114_AABTAX foster_m_Page_17.tif
98ec8c607d9c2395d598f8b72070741b
c4b367fa3878ec7d19a92f21f275c5c0b8e5e5c2
202534 F20110114_AABSVR foster_m_Page_55.jpg
fb3ce2153fb67dc823903007123003c3
8ba8edbee81c7b6753fa5af7ff27b218abb8cf07
F20110114_AABTCA foster_m_Page_50.tif
88e7caf5502780da36039446dfe50ffe
2672282b3ece6b34173cecd587addbd40efcdbb5
F20110114_AABTBN foster_m_Page_35.tif
287bf81bdef885ebdb0a8d56e5e21dd3
f302bcaa89ff2699691a3d57da922b95f5890f0e
158594 F20110114_AABSWH foster_m_Page_73.jpg
96a3f275fca5427f326d49c85c2214cd
dd1ede232679d6274c0409e5263463b91f22333f
F20110114_AABTAY foster_m_Page_18.tif
22995c829c8b7f476193486bed1ddc69
4240ab23703a678ce20684a3c0160e85a2bd01b1
201237 F20110114_AABSVS foster_m_Page_56.jpg
db051f497610b07e609bf7c5d5be172c
1d3d1da5c198605ee86dcae9a4941e72bdde11fd
F20110114_AABTCB foster_m_Page_51.tif
f35935d0dca742b701195bae9d5d43e1
d6708c7f0526412a837a564b64721b1b00f09e43
F20110114_AABTBO foster_m_Page_36.tif
5ade30ace638facc2332402dcc720c15
7afbb48e0a8917ef55e30d01fd3b21613891e341
179789 F20110114_AABSWI foster_m_Page_74.jpg
787c9410d6ad01fd6d1682e1959f4960
d1d1d3e9158e47d5ff9761c2be6d4be89c02eeb4
F20110114_AABTAZ foster_m_Page_19.tif
ab46588a6cc463e2260912feb35b4940
81bb78dbdc76c83dcdf9929e846a409bb83fc60d
204857 F20110114_AABSVT foster_m_Page_57.jpg
b348bfb17c88e3785aab2e423ba12f16
ad2adefa0b0d7240dc5a7a1692f3a4768c1638b6
F20110114_AABTCC foster_m_Page_52.tif
4d965ce14e09db7f0db3c8cb770dab07
1cbed1b9c3f05a6836084d084d504e59c2ed5e77
F20110114_AABTBP foster_m_Page_37.tif
af08db5c660b7aeefb54b0a08690162d
631f292c7b17806cd54c18fab4651a84b05e9d9b
205387 F20110114_AABSWJ foster_m_Page_75.jpg
683000ce41e51b23c58710c18b4dfdf9
3c1a117d57ba1fff53b205d767a8593621dd57c7
224194 F20110114_AABSVU foster_m_Page_58.jpg
2730bd5a9fb2494c866c9c980bfded75
9bf85701a68ac29078e46cd30936097b16bc58e2
F20110114_AABTCD foster_m_Page_53.tif
48ebfc8eed71e3d96fb4ab61365f28e0
d56a06e1353721efbc0b7f40ecfad6dac3a694a0
F20110114_AABTBQ foster_m_Page_38.tif
bd65946aa60c9a7813d1660921315edb
4907a49c372acf9d2fc2c584330eecd59710b70a
218725 F20110114_AABSWK foster_m_Page_76.jpg
4e3147d68bb91456437af7722f171f18
2b9c89c34f304d4cdf5b87f64e5acd93b6edd039
218059 F20110114_AABSVV foster_m_Page_59.jpg
362db0dfe946f147a560a0556453e6ff
33ba144f78e00632da66d6f6cc7fb256f1f3f0d4
F20110114_AABTCE foster_m_Page_54.tif
dc063fa3748e86388691a372d97c3a3d
fa5ad7f476a7aea037d044b14f2db56e7d6f7797
F20110114_AABTBR foster_m_Page_39.tif
785e447c75f6aeb3ebf7e945fc5c9a9a
97ed13897e54d53cd4d4b65b2e8ba25d6ef1f666
201421 F20110114_AABSWL foster_m_Page_77.jpg
9cfd14ffe3069738e4276521c59434cb
bf52c8ee977cd45895a7a0db5b53ade31e4aab2f
206292 F20110114_AABSVW foster_m_Page_60.jpg
97fff71c596fe0247e88200b0d9858a7
a34977325cccfbe115569b52d1d63328cdd75ab1
F20110114_AABTCF foster_m_Page_55.tif
132dccab447ccdf3eb023248704e7030
7e057dc926c8b45ffa8f3423d65a44706bd0ceec
84343 F20110114_AABSXA foster_m_Page_93.jpg
a9d8f6696badc517c2859bbd7d9963e1
85d59e439af440caf853d50cf19f175ced84b211
F20110114_AABTBS foster_m_Page_41.tif
be10f90ca9f89beb879214c635004b89
bc16f417c5e88ea16d0c2dfcffa8a78b9217f4cb
196530 F20110114_AABSWM foster_m_Page_78.jpg
5d2a92d01b7be944968123f9182a7c80
f934fc9e37acf4015c9973a97608f1b7d701fc98
74727 F20110114_AABSVX foster_m_Page_61.jpg
65117d519da9cad5d3cc90dcf89e81ce
114ff59c9df82635181c9371bdd3d4581af6b284
F20110114_AABTCG foster_m_Page_56.tif
05ccbdff98b7ed1e2aecce9c3ec53df0
7ae9a0e4530cfe98ae8e196feac95a13965802c1
286851 F20110114_AABSXB foster_m_Page_95.jpg
3621d32f3755bac4467a30cd8eef05f0
7b9488230067959a07d28ecef9567ba0c6178857
F20110114_AABTBT foster_m_Page_42.tif
ad385ab00f9a372b150779fa4509be01
ad8775f0a87c93f4eb2de37f8a830278f726fc03
192514 F20110114_AABSWN foster_m_Page_79.jpg
3974016a8c59518db1890aa5a1cd5e03
5666b01dd15ddd118aab5276fd05551f18cae997
189979 F20110114_AABSVY foster_m_Page_62.jpg
f328d259f16bbdbced9f81a7d17c010c
d12a00991472e3f8e98b6573c7ec30da6aac8c93
F20110114_AABTCH foster_m_Page_57.tif
a0bc6ceee6168dd5a2f68276b7119943
523fcc78677cfe55d1d15683dd553e6aee071fef
253616 F20110114_AABSXC foster_m_Page_96.jpg
6bdc9627ad42f87cfb663dc57bf8efed
ace3041f169086dd013c749593124c4f20fe6a76
F20110114_AABTBU foster_m_Page_43.tif
d940919e45a2f8241221af6539487233
82e24038590b314b609fa100203f1f3c4f255a0e
202362 F20110114_AABSVZ foster_m_Page_63.jpg
8ea7bf5af7030d40382f7d7a07f8f300
71f4832d636ff5b26d6793f590fab385f5cc8ae7
F20110114_AABTCI foster_m_Page_58.tif
278d3af098c79806be06123cde7d119e
17d9d124e1cf1906f3441cb552174f9a8bfc5e8f
65540 F20110114_AABSXD foster_m_Page_97.jpg
812c0db28deb10d2dde0aa762872b7b5
84c276cf7f36caf46e460e19e898dd47a77786f6
F20110114_AABTBV foster_m_Page_44.tif
b767befd2ab29c677f7c2f72b6246402
1e53cc2a46e1103c47425a3a0bb28db83a661f2c
215167 F20110114_AABSWO foster_m_Page_80.jpg
298d14608002babf5bd9e2ee1d0f7777
0ad6dc605112a0b181b833a513826ca2026f360d
F20110114_AABTCJ foster_m_Page_60.tif
0c7ffa870042e83a14565bb2ffe52b21
ca322f6020c67c72627215ab95015a15e46fb376
23705 F20110114_AABSXE foster_m_Page_01.jp2
a324223b97319cd43e5ce4562496eb14
f941383c17827d07fa5168eb9ff2823ce8f752d5
F20110114_AABTBW foster_m_Page_45.tif
a2159202020b6352dfb5ec52bd1135ad
c7e8a39635ed1228fe8822095a82055ab996bef3
214654 F20110114_AABSWP foster_m_Page_81.jpg
594c75b0bdc5b8834c2c69fe1c858248
d8114736375da2067153be1546e843ffabab5f19
F20110114_AABTCK foster_m_Page_61.tif
3d1dd5822b5992749e3029b7dedfc3cd
5db75819a362318adb8a05b29e5bd883124ba047
5369 F20110114_AABSXF foster_m_Page_02.jp2
a74ff9707f0cd4685d171d558c5c2705
87eaf07cd387673cb23a426d798054b96d2c8e70
F20110114_AABTBX foster_m_Page_46.tif
d04c5ef776cfa4bc737e4bc47c5d883a
c80cbf67c303458397b8305506cb48822782f7f3
201295 F20110114_AABSWQ foster_m_Page_82.jpg
1406c4b5f0f723c3174b30c343326e66
49e4ac58e04a903bb847e38bf9ef45601111122a
F20110114_AABTCL foster_m_Page_62.tif
aa1c77ae6a1c55d41273b2926d8aff62
a74bdcedc2e80a8e357d4e31a7fdd7bf0f45557e
212015 F20110114_AABSWR foster_m_Page_83.jpg
739f0e5ca17f059dc6cc768c0ca2580a
83378538393f6bd4add167e78ebee1478e0e0d4a
F20110114_AABTDA foster_m_Page_79.tif
7f89bf34e29a79f8f152fa766e2d30b3
98268d3953145659ef70045809b25a89c9aa82a6
F20110114_AABTCM foster_m_Page_63.tif
29fc6070384e1e11dd63bb98fe4a7ede
5e6f36f348624fccd11e2099841c6cff916216c7
53620 F20110114_AABSXG foster_m_Page_03.jp2
fdff4a84dbce5645a477ac66b8a9b4ef
09eea09dff2c7d7d3fa0232569655597918aab9b
F20110114_AABTBY foster_m_Page_48.tif
cf7bda7e79bb9c3c811d34fb9dc181b1
dd9208a1a0e21e0c6bc0b32e488696e5203ea431
207828 F20110114_AABSWS foster_m_Page_84.jpg
be8a510b16599acced15bcfd065e0067
dbba9720b23d3af4753af4a5fbc1047a83c66f6a
F20110114_AABTDB foster_m_Page_81.tif
9607ec3681c5d3bee01dd45db6a8add4
315dbc678d213e3eaf2d891fb6565d5db4362772
F20110114_AABTCN foster_m_Page_65.tif
3470fa2e59f826875fa48025eaf96d68
074d040c10205dc1b4f2db040aed8aa01885c3ff
1051972 F20110114_AABSXH foster_m_Page_05.jp2
92085572cde3d4c5a44c668e39960ce2
370066b7ade3d8f0bf1b66c70e9d582bd154b2bd
F20110114_AABTBZ foster_m_Page_49.tif
4e30e9f71ee98bc7e6c3a15a94924f25
8af14fb38c75bd06794495890095099e2035c927
208322 F20110114_AABSWT foster_m_Page_85.jpg
52c9f6d67305162c3cef57b5655e8467
27500577a57e50240f989283067e3c7288ad5db3
F20110114_AABTDC foster_m_Page_82.tif
e03a0ccb6c2a3d72ae5b8ca980431be0
283e6e36ed2f8da5dae742c86369f180397f8d78
F20110114_AABTCO foster_m_Page_66.tif
6db0373b1d7f2429660bf4b9f6544427
75abc1d5dfed8cb94709e56c6bf4938bef096036
570426 F20110114_AABSXI foster_m_Page_06.jp2
8e8fc2ca951e675bd5fe167a2117b3ab
d1de0ccb7dd075748bf1dd363a94ce8d56cb2950
203387 F20110114_AABSWU foster_m_Page_86.jpg
46ff925dcb4c2f0b3f723844c3082e58
214240f48ff91a37e450c587874834b17edb26fd
F20110114_AABTDD foster_m_Page_83.tif
efd5e5f6ed7a3b9e533ff0d672c5dca6
e29d0a9e5e74abde39a739a005a2998196a19717
F20110114_AABTCP foster_m_Page_67.tif
548f716f2a7f7c7694d3f0ba778e7d81
14c3ff88911fb6aeb472e78f4e5c9303d6794d7c
1051951 F20110114_AABSXJ foster_m_Page_07.jp2
2c89a3e59748a5248bcde02d44adca16
265be39021cbabee1800179c28bad29ab74c600e
203430 F20110114_AABSWV foster_m_Page_87.jpg
ff744b4b09b5a667cc5da09f26e02f64
159c4321467d89c88ea1bb70765d372e55ef6c06
F20110114_AABTDE foster_m_Page_84.tif
19b12efad648dd8f9784bfb08528af49
a98f56df7ec84dca21d41096e74077750ccae745
F20110114_AABTCQ foster_m_Page_68.tif
f80b43e35edc2d49cdbcdcfdd35f7256
bdf142fe4053082728c04b464eb075c43f045796
83641 F20110114_AABSXK foster_m_Page_08.jp2
266a2a5f02f0899452f79d9f041340a9
22a5f66e062371ceba90895b9ca822837e8859fa
20222 F20110114_AABSWW foster_m_Page_88.jpg
015975844a7d5a70c40b0bda0e52b38e
99be59bfadeba04b0cbcc87b93ffab12fe35a5fb
F20110114_AABTDF foster_m_Page_85.tif
2470882b7782c9cdd6bca1fd71d0eb46
ba314c29ef0173b5eebe8dd2da252c3895c248c4
F20110114_AABTCR foster_m_Page_69.tif
accf515a696e2e73908fb93ce8ef5a66
772086e7c9e80b2b8325bd697f14a5c864dd68f6
38588 F20110114_AABSXL foster_m_Page_09.jp2
38df30291325d44a3709e907c2dbbfbf
29c4d49fac397bb20f2d4e2920bbfe462dd1ff32
212161 F20110114_AABSWX foster_m_Page_90.jpg
138b6e6277daac058eb41f7033c4e388
b64a65cd5c91485c238b961388e43044472b7c2d
F20110114_AABTDG foster_m_Page_86.tif
a061f31b93a942c4f4fca56cc0ed118d
4aa87a7fe457ed0c229afe62584d2883afe43d42
105683 F20110114_AABSYA foster_m_Page_25.jp2
aa58a196955a34d1c99ef1496bf18095
f1d7bf1278feb10a9bff194aaec093da61c648ad
F20110114_AABTCS foster_m_Page_71.tif
73c95c30328af2c0dcfb3eab117bace2
f55827b099029e3e6f8b6d49335bc65a14d214c6
95338 F20110114_AABSXM foster_m_Page_10.jp2
8210ebb8b46243e381a5b6ab3474a82e
60adf35acc173837d290639e07928cb9c6a12e50
198850 F20110114_AABSWY foster_m_Page_91.jpg
858eef9382a1ba9f56c1f1758d442109
63f73787dd1846038cdf9cf3b7aae8210a3acef1
F20110114_AABTDH foster_m_Page_87.tif
c07e089ab145260de5c9b1f1c1c3799a
b3206c660b0f503e43ea560c55ba076c4e933ad3
104488 F20110114_AABSYB foster_m_Page_26.jp2
be1e3dc47ea1f2699ca8f3d1b605152e
6c4041260b1f1c5f27a92428a02515d984425aa1
F20110114_AABTCT foster_m_Page_72.tif
937786a7c6927cfd544149446ef156ea
f85a04526d0988d154e4270eb2b96d4ef031daaa
106012 F20110114_AABSXN foster_m_Page_11.jp2
5cfb64c891001dd7a687037195211e14
c8a50099fe963d24e2b76822ccfbde6f49c0f3b6
215934 F20110114_AABSWZ foster_m_Page_92.jpg
db92fec96027507636c0f92ce455e77a
633713ae8a327efbf2ac6d824e351d700f39f5f8
F20110114_AABTDI foster_m_Page_89.tif
dacf078374d629489de31b553e9138ec
f500d419d467824182f588cd2fabab64546bd8f6
104656 F20110114_AABSYC foster_m_Page_27.jp2
8395c89271f5cc9a39b4503ce22f7ddb
538876295d1d434765232bcfa244174f0fd9f076
F20110114_AABTCU foster_m_Page_73.tif
2adfe413f72fc7dccbe3c51d39366cf5
8c64c1979a6dcd4c4268b1a23a827742e6506395
108316 F20110114_AABSXO foster_m_Page_12.jp2
b1f997f435b9f648b80c353ccb4938c8
7bf44b9b01d5323697f3a5c7180abba5cd882ecf
F20110114_AABTDJ foster_m_Page_90.tif
f677328939980d9518d52eb7867ec633
37dfd7a1f502c2919d8a42d271e35be6bcaa9b35
110613 F20110114_AABSYD foster_m_Page_28.jp2
9ed4fd27984246ce28f1b10c83d28a3d
9c85bc499f9c5f551d9cf155ce5dc03979151561
F20110114_AABTCV foster_m_Page_74.tif
76fc2a3fd11796875dec00efe63c264b
00494941b055678ed41478101451c2d99c803b08
F20110114_AABTDK foster_m_Page_91.tif
31c96c4fb6ef8de5730cd1718572efec
2ae400ca02be1e6084d92be5cf724472506b9ec4
98299 F20110114_AABSYE foster_m_Page_29.jp2
ea64de5d948c66aec4f93187341c178f
d28a478ecb81c4081f71cd84af25fbbeaa77d98e
F20110114_AABTCW foster_m_Page_75.tif
1a0afb310f24cbf0e2486f09d501fb1f
def5d67b8281dbb26e37ba1cdbe82d99f097ded5
110068 F20110114_AABSXP foster_m_Page_13.jp2
1b684f6a8a86d46b32bd4f3a1cd93d3f
5baa0bdd91410a868cd6e3f4fbed4f8ec6e72012
F20110114_AABTDL foster_m_Page_92.tif
c73e8a4557518e71caa915f343084c9d
5f276c37d94d350a4bbba8fa936b5cd572fc6c08
105150 F20110114_AABSYF foster_m_Page_30.jp2
4e99be01dcbc342c7b274ee160ee5802
PAGE 1

PROCESS FORENSICS: THE CROSSROADS OF CHECKPOINTING AND INTRUSION DETECTION

By

MARK FOSTER

A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY

UNIVERSITY OF FLORIDA

2004

PAGE 2

Copyright 2004 by Mark Foster

PAGE 3

ACKNOWLEDGMENTS

I would like to thank my supervisory committee chair, Joseph N. Wilson. Without his patience and support this work would not have been possible. Many thanks are also extended to the supervisor of my teaching assistantship, Rory DeSimone. She has created a sense of home and family I will never be able to forget. Without this support I would not have continued my graduate work beyond the master's level. I would also like to thank my parents for their support. They have provided me with an amazingly reliable safety net that allowed me to focus solely on school and avoid many headaches. Lastly, and most of all, I would like to thank my wife. Her love, support, and patience have not wavered in this lengthy process. She has undoubtedly been the single most integral component of my success. I am proud to say that her days of being the breadwinner are numbered.

PAGE 4

TABLE OF CONTENTS

ACKNOWLEDGMENTS ..................................................... iii
LIST OF TABLES ........................................................ vi
LIST OF FIGURES ...................................................... vii
ABSTRACT ............................................................ viii

CHAPTER

1 INTRODUCTION ........................................................ 1
    Checkpointing ...................................................... 2
    Buffer Overflow Attacks ............................................ 4

2 BACKGROUND AND RELATED WORK ........................................ 7
    Checkpointing ...................................................... 7
        Background ..................................................... 7
        Related Work .................................................. 11
            User-level checkpointing .................................. 11
            System-level checkpointing ................................ 17
    Buffer Overflow Attacks ........................................... 22
        Background .................................................... 22
        Related Work .................................................. 23
            Stackguard ................................................ 23
            Libsafe and Libverify ..................................... 24
            VtPath .................................................... 25
            RAD ....................................................... 25
            Static analysis ........................................... 26
    Computer Forensics ................................................ 26

3 CHECKPOINTING ...................................................... 29
    A Robust Checkpointing System .................................... 29
    UCLiK Overview ................................................... 33

PAGE 5

    UCLiK's Comprehensive Functionality .............................. 36
        Opened Files .................................................. 36
            Restoring the file pointer ................................ 37
            File contents ............................................. 41
            Deleted and modified files ................................ 41
        Restoring PID ................................................. 42
        Pipes ......................................................... 42
        Parallel Processes ............................................ 43
            Restoring PIDs of parallel processes ...................... 48
            Restoring pipes between parallel processes ................ 48
            Restoring pipe buffers between parallel processes ......... 49
        Sockets ....................................................... 50
        Terminal Selection ............................................ 52

4 DETECTING STACK-SMASHING ATTACKS .................................. 53
    Overview of Proposed Technique ................................... 53
        Constructing the Graph ........................................ 54
        Graph Construction Explained .................................. 57
        Proof by Induction ............................................ 59
        Recursion ..................................................... 63
    Implementation and Testing ....................................... 65
    Limitations ...................................................... 66
    Benefits ......................................................... 68

5 PROCESS FORENSICS .................................................. 70
    Proposed Process Forensics ....................................... 70
    Possible Evidence in a Checkpoint ................................ 71
    Opportunities for Checkpointing .................................. 73
    Additional Enhancements .......................................... 76

6 CONCLUSIONS ........................................................ 80

LIST OF REFERENCES ................................................... 85
BIOGRAPHICAL SKETCH .................................................. 88

PAGE 6

LIST OF TABLES

3-1. AP table for the different checkpointing systems ................ 32
3-2. Example values for children and childstart ...................... 46
4-1. Example program we use to demonstrate graph construction ........ 54
4-2. Return address/invoked address pairs ............................ 55
4-3. Variables for the induction proof ............................... 61
4-4. Publicly available exploits used to test Edossa ................. 66

PAGE 7

LIST OF FIGURES

2-1. Typical runtime program layout .................................. 22
3-1. Performing a checkpoint with UCLiK .............................. 34
3-2. File size relative to page size ................................. 37
3-3. Glibc executes in user-space .................................... 38
3-4. Glibc variables during a read ................................... 38
3-5. The second page of a file loaded into memory .................... 39
3-6. Process descriptor fields p_osptr and p_cptr .................... 44
3-7. Building a linear list of processes ............................. 45
4-1. Example program's addresses divided into islands ................ 56
4-2. Edges leading from invoked address node 0x08048418 .............. 57
4-3. Abstract graph once the (n+1)st call is made .................... 61
4-4. Abstract graph with recursion when the (n+1)st call is made ..... 65

PAGE 8

Abstract of Dissertation Presented to the Graduate School of the University of Florida in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy

PROCESS FORENSICS: THE CROSSROADS OF CHECKPOINTING AND INTRUSION DETECTION

By

Mark Foster

December 2004

Chair: Joseph N. Wilson
Major Department: Computer and Information Sciences and Engineering

The goal of our study was to introduce a new area of computer forensics we call process forensics. Process forensics involves extracting information from a process address space for the purpose of finding digital evidence pertaining to a computer crime. The challenge of this subfield is that the address space of a given process is usually lost long before the forensic investigator begins analyzing the hard disk and file system of a computer. Our study began with an in-depth look at checkpointing techniques. After surveying the literature and developing our own checkpointing tool, we believe checkpointing technology is the most appropriate method for extracting information from a process's address space. We make the case that an accurate and reliable checkpointing tool provides a new source of evidence for the forensic investigator. We also thoroughly examined the literature and methods for detecting buffer overflow attacks. In addition, we have developed a new method for detecting the most common form of a buffer

PAGE 9

overflow attack, namely, stack-smashing attacks. We believe that the boundary where these two areas meet (specifically, incorporating checkpointing with intrusion detection) can readily provide process forensics. Checkpointing technology is nothing new in the contexts of process migration, fault tolerance, and load balancing. Furthermore, a plethora of research has already focused on methods for detecting buffer overflow attacks. However, with respect to computer forensics, the gains from incorporating checkpointing with intrusion detection systems have yet to be explored.

PAGE 10

CHAPTER 1
INTRODUCTION

In recent years, computers and the Internet have become an integral part of our society. Computers are used in the workplace, home, and school, and in some cases in public areas such as shopping malls and airports. The National Telecommunications and Information Administration (NTIA) released a report showing that Internet growth in the United States is estimated at 2 million new users each month [1]. The downside to this trend of pervasive computing is that the amount of computer-based crime is also on the rise. Statistics published by the CERT Coordination Center [2] show that the number of security-related incidents has increased every year since 1998. From 2001 to 2003, the number of security-related incidents more than doubled. As computer crime increases, so do the demands placed on computer security specialists and law enforcement.

To many computer security specialists, intrusion prevention is more important than intrusion detection. However, as long as intruders continue to be successful, the need for reliable intrusion detection systems is apparent. In addition, to prevent repeated or similar intrusive attacks, we need reliable computer forensics to help us learn why an attack occurred in the first place. Thus, computer forensics is an integral part of intrusion prevention. Our purpose is to introduce a new area of computer forensics, called process forensics. Process forensics involves extracting information from the process address space of a given program. We discuss how the information extracted from a process address space could be a source of evidence after a computer crime. However, to collect

PAGE 11

digital evidence in the form of process forensics, one must address two issues. First, some tool must exist that can extract information from a process address space. Second, one must know when to extract such information. We propose that checkpointing technology be used for the extraction process. Checkpointing research is already aimed at storing key information about a process. We believe that the proper checkpointing tool can also meet the needs of the evidence collector. Furthermore, we propose that intrusion detection systems be used to help indicate when to extract information from a process address space. Intrusion detection systems already aim to detect malicious activity, and malicious activity sometimes results in the need for evidence in a computer crime.

Checkpointing

Individuals familiar with UNIX systems are probably also familiar with the act of stopping and restarting a running process. This is usually achieved by sending signals to the process; the signals SIGSTOP and SIGCONT serve this purpose. While stopping and restarting processes is a useful capability, it is important to note that during the time a process is stopped, it still consumes system resources. In particular, a stopped process consumes the system memory associated with saving its state. Without saving the process's state, we cannot restart the process. Checkpointing is a way of saving a process's state such that the process may be restarted from the point at which it was checkpointed without requiring active operating system information. Checkpoints are made at regular or chosen time intervals. Checkpointing plays a vital role in fault tolerance and rollback recovery schemes. After a system crash or failure, processes that were checkpointed can be restarted from the point of their last checkpoint.

We introduce a new system for checkpointing called UCLiK (Unconstrained Checkpointing in the LInux Kernel). UCLiK has been implemented as a kernel module for the Linux operating system. UCLiK differs from most other checkpointing systems in that it requires no additional programming by the application programmer; furthermore, it requires no special compiler or run-time library. UCLiK operates at the system level, and therefore has direct access to all of a process's state. When UCLiK checkpoints a process, it takes a snapshot of the process's state and saves that information to a file. By saving this state to a file, we no longer use system resources, except for whatever stable storage is necessary for storing the file. We refer to the file containing the saved state as the image file. The image file can be stored indefinitely, or moved to another system where the process could be restarted to achieve process migration.

Some well-known benefits of checkpointing include process migration, fault tolerance, and rollback recovery. The benefits of our system are even more widespread. System administrators can use such a system in place of killing seemingly objectionable processes. Killing what seems to be a problem process, but in actuality is not, can result in a drastic loss of computation for the user and the system: the user is upset at having to start the process over again, and the system repeats computation it has already performed. With UCLiK, such a process could be checkpointed to a file and later restarted if the administrator determines that running the process presents no problem. Essentially, UCLiK can serve as an undo option for the kill system call. Brown and Patterson [3] made a thorough case for the importance of a system-level undo mechanism. Furthermore, they showed that human errors are inevitable and should be
considered when designing highly-available and highly-dependable systems [4]. A common human error when issuing the kill system call, experienced by many (including this author), is supplying the wrong process identifier (PID). Obviously this results in killing the wrong process. With UCLiK, one can undo such a mistake.

With UCLiK, a system administrator can be more trigger-happy when killing suspicious user processes. Rather than waiting until a user process is blatantly degrading the system's performance, system administrators can take a more preemptive approach and checkpoint a user's process at the first sign of trouble. UCLiK could also be used by system administrators when doing preventive maintenance. Maintenance often requires taking the system down and thus killing a number of user processes. These processes could be checkpointed instead of killed; once maintenance is complete, they could be restarted.

By eliminating the need for a special compiler, a run-time library, or additional work by the application programmer, we come one step closer to the ideal checkpointing system. The ideal checkpointing system would allow us to checkpoint any process, anytime, anywhere. Furthermore, one should be able to restart a checkpointed process anytime and anywhere. Ultimately, the ideal operating system would have the ability to checkpoint and restart running processes.

Buffer Overflow Attacks

The term buffer overflow refers to copying more data into a buffer than the buffer was designed to hold. A buffer overflow attack occurs when a malicious individual purposely overflows a buffer to alter a program's intended behavior. In most common forms of this attack, the attacker intentionally overflows a buffer on the stack so that the excess data overwrites the return address just below the buffer on the stack. When the
current function returns, control flow is transferred to an address chosen by the attacker. Commonly, this address is a location on the stack, inside the buffer, where the attacker has injected malicious code. This type of buffer overflow attack is also referred to as a stack-smashing attack, since the buffer resides on the stack. Stack-smashing attacks are among the most common buffer overflow attacks because of their simplicity of implementation.

Buffer overflow attacks have been a major security issue for years. Wagner et al. [5], extracting statistics from CERT advisories, found that between 1988 and 1999, buffer overflows accounted for up to 50% of the vulnerabilities reported by CERT. Other statistics showed that buffer overflows caused at least 23% of the vulnerabilities in different databases. More recent National Institute of Standards and Technology (NIST) ICAT statistics show that a significant number of the common vulnerabilities and exposures (CVE) and CVE candidate vulnerabilities were due to buffer overflows. For the years 2001, 2002, and 2003, buffer overflows accounted for 21%, 22%, and 23% of the vulnerabilities, respectively [6]. From April 2001 to March 2002, buffer overflows caused 20% of the vulnerabilities reported by SecurityTracker [7]. These more recent statistics reinforce the case made by Wagner et al. [5]: buffer overflows are a significant issue for system security.

We introduce a new method for detecting stack-smashing and buffer overflow attacks. While much work has been focused on detecting stack-smashing attacks, few approaches use the program call stack to detect such attacks. Our new method of detecting stack-smashing attacks relies solely on intercepting system calls and on information that can be extracted from the program call stack and process image. Upon
intercepting a system call, our method traces the program call stack to extract return addresses. These return addresses are used to extract what we refer to as invoked addresses. In the process image, return addresses are preceded by call instructions; these call instructions are what placed the return addresses on the stack and then transferred control flow to another location. An address that was invoked by a call instruction is referred to as an invoked address. We use the return and invoked addresses to create a weighted directed graph. We found that the graph constructed from an uncompromised process always contains a Greedy Hamiltonian Call Path (GHCP). This allows us to use the lack of a GHCP to indicate the presence of a buffer overflow or stack-smashing attack.
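The overflow mechanics underlying this discussion can be modelled in miniature. The sketch below is illustrative only (it is not the GHCP detector, and the frame layout is a simplified assumption): a "frame" is ten buffer bytes followed by an 8-byte return address, and an unbounded copy of attacker-controlled input overwrites that address just as an overflowed stack buffer overwrites the real one:

```python
# Toy model of stack-smashing mechanics (not the detection method of
# this work).  The frame layout here is an assumption for illustration:
# BUF_LEN buffer bytes immediately followed by an 8-byte little-endian
# return address.
import struct

BUF_LEN = 10

def make_frame(ret_addr):
    """Build a fake frame: zeroed buffer, then the return address."""
    return bytearray(BUF_LEN) + bytearray(struct.pack("<Q", ret_addr))

def unbounded_copy(frame, data):
    """Like strcpy: copies with no bounds check against the buffer."""
    frame[:len(data)] = data

def return_address(frame):
    """Read back the 8 bytes sitting just past the buffer."""
    return struct.unpack("<Q", frame[BUF_LEN:BUF_LEN + 8])[0]
```

Filling the buffer and appending eight more bytes replaces the stored return address, which is precisely the condition the GHCP method is designed to detect on the real stack.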

CHAPTER 2
BACKGROUND AND RELATED WORK

Checkpointing Background

Checkpointing is the technique of storing a running process's state in such a way that the process can be restarted from the point at which the checkpoint, or stored copy of the process state, was created. Checkpointing plays an important role in fault tolerance, rollback recovery, and process migration. After a system crash, processes that have been checkpointed will be able to restart from their most recent checkpoints, while uncheckpointed processes have lost their work and will be forced to start over from the beginning. Process migration is the act of moving a running process from one host to another. Process migration can be achieved by checkpointing a running process on one host, moving the checkpoint to another host, and then restarting the process on the new host.

The amount of stable storage required to save a checkpoint is called the checkpoint size. The amount of time it takes to make a checkpoint is referred to as the checkpoint time. The additional time it takes to run an application when that application is being checkpointed, as opposed to when it is not, is referred to as the checkpoint overhead [8].

Most checkpointing techniques can be classified into two major categories: user-level and system-level. User-level checkpointing refers to checkpointing systems whose code executes in user space.
Checkpointing at the user level usually involves an enhanced language and compiler, a run-time library with checkpointing support, or modifications to the source code of the process to be checkpointed. System-level checkpointing refers to checkpointing systems whose code executes as part of the operating system, in kernel space; these systems are also referred to as kernel-level checkpointing systems, and usually involve modifications to an existing operating system. Debate is ongoing as to whether checkpointing should take place at the user level or the system level.

The ultimate goals of a checkpointing system are transparency, portability, and flexibility. The ideal checkpointing system is completely transparent to the application programmer. Ideally, the application programmer can write code however he wants, using any language and any compiler, and still be able to have his application checkpointed. Furthermore, the checkpointing system should be portable to any type of system, and the checkpoint of a process should be portable to any type of system for a restart. All along, the user should have the flexibility to minimize the checkpoint size, time, and overhead, since these aspects of checkpointing consume storage and computation.

A common method for checkpointing is to suspend the execution of a process, write the process's state information to stable storage, and then continue with the execution of the process. This method is called sequential checkpointing [8]. Another method is forked checkpointing [8], in which the process to be checkpointed is forked; the parent process continues with execution, while the child process is used for making the checkpoint. For long-running processes, multiple checkpoints may be taken. Incremental checkpointing
refers to storing only what has changed in the process's state since the previous checkpoint [8]; all other information can be extracted from the previous checkpoints.

Checkpoints can be made synchronously or asynchronously [8]. When the application programmer specifies points in the code at which to perform checkpoints, these checkpoints are said to be synchronous. If a checkpoint is taken at regular time intervals (such as every hour for a long-running process), it is said to be asynchronous.

One common approach to reducing checkpoint size is memory exclusion [8]. The idea behind memory exclusion is that clean and dead areas of memory need not be included in a checkpoint. Clean areas of memory are portions of memory that have not changed since the previous checkpoint; clean memory is also called read-only memory [9]. Dead memory includes those portions of memory that will not be read before being written after the current checkpoint, and thus need not be included in the checkpoint.

Often, one may need to checkpoint a group of parallel applications or distributed processes. Usually this type of computation takes place on a network or a system of clustered computers. This scenario is a bit more complex than the sequential checkpointing of a single process; the challenge here is interprocess communication. If one node fails, and a process running on that node must roll back to its last checkpoint, any messages that the process has sent since its last checkpoint are sent again. Therefore, any other process in the system that has received a message from that process since its last checkpoint must also roll back. The rollback of one process might invoke the rollback of another, and in turn invoke the rollback of each process in the system. This concept (and the idea that rollbacks could propagate through all the processes in the system and return the system to its initial state) is called the domino effect [10]. To avoid
the domino effect, checkpointing of parallel applications and distributed processes usually involves some sort of coordinated checkpointing. Coordinated checkpointing refers to a group of processes coordinating their checkpoints to achieve a globally consistent state [10]. A globally consistent state refers to a point in the execution of a group of processes where, for any process whose state includes having received a message, there is another process whose state includes having sent that message [10]. Uncoordinated checkpointing can lead to inconsistent states, and thus to the domino effect.

Another technique that assists in rollback recovery of parallel applications and distributed processes is logging. Some checkpointing systems log pertinent information and events to assist in the rollback recovery of a process or group of processes. One common approach is to model message receipts as nondeterministic events [10]. Each nondeterministic event then has its own corresponding deterministic time interval: a nondeterministic event corresponds to the deterministic time interval after it and before the next nondeterministic event. The main objective is to avoid orphan processes, which are processes that depend on an event that cannot be generated during recovery [10].

Logging approaches fall into three categories: pessimistic logging, optimistic logging, and causal logging. With pessimistic logging, the determinant of each nondeterministic event is logged to stable storage before any other processing is done. Pessimistic logging does not allow any process to depend on an event before that event has been logged [10]. Pessimistic logging never creates any orphan processes, but imposes more overhead because of its blocking nature. With optimistic logging, the
determinant of each nondeterministic event is immediately logged to volatile storage, but may not be logged to stable storage until sometime later, depending on the protocol [10]. If a failure occurs before the volatile storage is transferred to stable storage, then orphan processes may exist during a recovery. In that case, each orphan process is forced to roll back until it is no longer dependent on an event that cannot be generated. The benefit of this approach is a reduction in overhead. Causal logging requires logging all determinants corresponding to the nondeterministic events that are causally related to the process's given state [10]. Causal logging applies Lamport's happened-before relationship to the determinants of the nondeterministic events. This approach does not allow orphan processes and, unlike pessimistic logging, does not suffer from increased overhead.

Related Work

User-level checkpointing

Application programmer-defined checkpointing. Over the years, there have been a number of different checkpointing schemes. One straightforward scheme has been to leave the responsibility for checkpoints to the application programmer. After all, the application programmer should know more about the code than anyone else: exactly what information is pertinent and which locations are best for checkpoints. However, without any support, the task of writing fault-tolerant applications can be challenging. Some studies show that fault-tolerant routines can take up to 50% of the source code [11]. Fortunately, other approaches to checkpointing have been developed to relieve the application programmer of such a burden.
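A minimal sketch of this programmer-defined approach (all names here are illustrative, not from any cited system): the programmer decides which variables constitute the pertinent state, saves them at chosen points, and reloads them on restart:

```python
# Illustrative sketch of application programmer-defined checkpointing.
# The programmer has decided that the loop counter and accumulator are
# the only state worth saving; everything else is recomputed.
import json
import os
import tempfile

def save_checkpoint(path, state):
    with open(path, "w") as f:
        json.dump(state, f)

def load_checkpoint(path):
    if not os.path.exists(path):
        return None
    with open(path) as f:
        return json.load(f)

def long_computation(path, stop_at=None):
    """Sum 0..999, checkpointing every 100 iterations.
    If stop_at is given, simulate a crash at that iteration."""
    state = load_checkpoint(path) or {"i": 0, "total": 0}
    for i in range(state["i"], 1000):
        if stop_at is not None and i == stop_at:
            return state["total"]            # simulated crash
        state = {"i": i + 1, "total": state["total"] + i}
        if i % 100 == 99:
            save_checkpoint(path, state)     # programmer-chosen point
    return state["total"]

def demo():
    path = os.path.join(tempfile.mkdtemp(), "ckpt.json")
    long_computation(path, stop_at=500)      # "crash" mid-run
    return long_computation(path)            # restart from checkpoint
```

The restarted run resumes from the last saved iteration rather than from zero, which is exactly the saving such hand-written routines buy at the cost of the programming effort noted above.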

Checkpointing with library support. To alleviate much of the checkpointing burden placed on the application programmer, some programmers use a run-time library with support for checkpointing. One benefit of this approach is that we still exploit the application programmer's knowledge of the code: the application programmer still decides what data is checkpointed and where the checkpoints should be placed. This is referred to as user-defined checkpointing [11]. One major advantage of user-defined checkpointing is its ability to greatly reduce the checkpoint size. Much of the process state that is saved in a system-level checkpointing scheme can be eliminated from the checkpoint, since the application programmer is familiar with the data involved in the process and can decide what data is truly necessary for checkpointing it.

CHK-LIB is one such run-time library [11]. This run-time library provides three fault-tolerant primitives: one with which the application programmer specifies which data should be saved in a checkpoint, one with which the programmer specifies where checkpoints should be made, and one that determines whether a process is new or has been restarted.

Another system that uses library support for checkpointing is Condor [12]. Condor is a distributed processing system whose main goal is to maximize resource utilization by scheduling processes on idle workstations. Checkpointing and process migration play a vital role in Condor. If the owner of a workstation being used by the Condor system needs to use that workstation, Condor must be able to checkpoint any process on that workstation and migrate those processes elsewhere. Processes to be run in the Condor system are relinked with the Condor checkpointing library. This checkpointing library
contains system call wrappers that can log information such as the names of opened files, file descriptor numbers, and file access modes. Checkpoints in the Condor system are invoked by a signal. A checkpoint is created by copying the process's state information to a file; restart is achieved by having the restarting process copy the original process's state information from that checkpoint file into its own process address space. Routines for handling this signal, and for writing a process's state information to a file, are provided in the Condor checkpointing library. Process migration is achieved when a checkpoint file is moved to another location and then restarted.

Condor has a number of benefits. Condor was designed to work on UNIX systems, but does not require any modifications to the UNIX kernel. Furthermore, since it is a user-level checkpointing system, it may be more portable than a system-level checkpointing system. Condor has been used in conjunction with other systems such as CoCheck [13]. CoCheck is a system for checkpointing parallel applications on networks of workstations. CoCheck is concerned primarily with finding a globally consistent state for all of the nodes in a network. Once this consistent state is achieved, Condor aids in checkpointing the parallel applications. Once checkpointed, these applications can be migrated to achieve a more balanced load across the network.

Condor does suffer from a number of drawbacks. Condor makes no attempt to deal with pipes, sockets, or any other form of interprocess communication. Condor also assumes that files opened by a process being checkpointed remain unchanged when the restart takes place. Furthermore, Condor relies on a stub process running in the location of the original process to facilitate file access at the original process's location.
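The system-call-wrapper idea used by Condor's checkpointing library can be sketched as follows. This is a toy illustration, not Condor's code: a wrapper around open() records each pathname and access mode, so that a restart could later reopen the same files:

```python
# Toy illustration (not Condor's code) of a system-call wrapper that
# logs the pathname and access mode of every file a process opens, so
# a restart could reopen them.
import os
import tempfile

open_log = []          # (pathname, mode) pairs, in open order
_real_open = open      # keep a handle to the real built-in

def logged_open(path, mode="r", *args, **kwargs):
    """Log the open, then delegate to the real open()."""
    open_log.append((path, mode))
    return _real_open(path, mode, *args, **kwargs)

def demo():
    path = os.path.join(tempfile.mkdtemp(), "log.txt")
    with logged_open(path, "w") as f:
        f.write("checkpoint me")
    return open_log[-1] == (path, "w")
```

In Condor the wrapping happens at link time against the C library, which is why processes must be relinked before they can be checkpointed.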
Checkpointing with Condor also requires that one know whether a process may be checkpointed before starting that process: a process that is running but was not relinked with the checkpointing library cannot be checkpointed.

Libckpt [8] is another checkpointing library designed for UNIX systems. Somewhat similar to Condor, checkpointing with Libckpt takes place at the user level. However, Libckpt differs from Condor in that processes to be checkpointed with Libckpt must be recompiled, not just relinked, with the Libckpt checkpointing library. Furthermore, Libckpt requires that the main() function of a program to be checkpointed be renamed to ckpt_target(). Libckpt also distinguishes itself from Condor by its additional performance optimizations. Libckpt supports incremental checkpointing and forked checkpointing, and can perform memory exclusion via user-directives; memory exclusion with these user-directives is called user-directed checkpointing. The Libckpt library supplies two procedure calls for memory exclusion, include_bytes() and exclude_bytes(), which tell the system which portions of memory to include in or exclude from the next checkpoint. In some cases, the checkpoint size was reduced by as much as 90% when using the user-directives to do memory exclusion [8]. The Libckpt library also supports the procedure call checkpoint_here(), which enables the programmer to request a synchronous checkpoint at any point in the code. By default, Libckpt checkpoints a process every 10 minutes. The time interval between checkpoints, whether to use incremental or forked checkpointing, and a number of other options are specified in the .ckptrc file.

Libckpt does have a number of disadvantages. As with Condor, one must know before compilation that a process needs to be checkpointed. Transparency is lost by
requiring the programmer to rename the main() function, and further transparency is lost in achieving the performance optimizations of memory exclusion and synchronous checkpoints, which require the programmer to use the user-directives. Pipes and sockets were not addressed in the literature discussing Libckpt.

Compiler-defined checkpointing. Another common approach to checkpointing at the user level is referred to as compiler-defined checkpointing. This approach uses the compiler's knowledge of the program to select the most suitable locations for inserting checkpoints. One advantage of this approach is that the application programmer is not responsible for checkpoints, resulting in a checkpointing scheme that is transparent to the application programmer. Another advantage is that, compared with system-level checkpointing, compiler-defined checkpoints are typically much smaller. System-level checkpointing usually requires saving most of a process's address space, also referred to as its memory footprint, which usually results in a large checkpoint file. However, with the recent reduction in costs for stable storage, one could argue that larger checkpoint files are often acceptable; for example, a system administrator might rather use a few megabytes of storage than deal with a disgruntled user whose process was killed. Furthermore, some proponents of compiler-based checkpointing think that for processes with small memory footprints, system-level checkpointing is still best [14], while compiler-defined checkpointing is better for applications on large cluster computer systems [14].

One approach to checkpointing on large cluster computer systems was to have the ZPL language and compiler perform automatic checkpointing [14]. A major challenge of checkpointing processes across clustered computer systems is the difficulty of identifying
a globally consistent state. This approach used the compiler's knowledge of the code to identify ranges of code that were free of communication; these ranges could be checkpointed without losing messages in the network. The compiler further exploited its knowledge of the code to insert checkpoints where the fewest live array variables existed. In some cases, these compiler techniques resulted in as much as a 73% reduction in the checkpoint size [14].

Additional compiler-based memory-exclusion techniques have used Libckpt [15]. The previously mentioned Libckpt [8] library used user-directives such as include_bytes(), exclude_bytes(), and checkpoint_here() to perform the checkpointing. Libckpt was later expanded to include additional directives such as EXCLUDE_HERE and CHECKPOINT_HERE. These directives tell the compiler when to invoke the checkpoint_here() or memory-exclusion procedure calls. Having the compiler invoke these procedure calls allows the compiler to guarantee a correct checkpoint: the compiler determines what portions of memory to exclude based on a set of data flow equations, solved using an iterative method. This differs from a careless programmer, who might exclude portions of memory that are essential to recovery. However, the programmer is still responsible for inserting these new directives, which results in a continued loss of transparency. This technique is called Compiler Assisted Memory Exclusion (CAME) [15].

Compiler-defined checkpointing schemes do suffer from a number of disadvantages. One obvious disadvantage is transparency: only programs that are compiled with compilers possessing support for checkpointing can be checkpointed. Furthermore, the compiler will have no knowledge of any message
passing that takes place during execution. To work around this disadvantage, one could log the activity across communication channels, but this obviously results in additional work and overhead. As we have discussed, some compiler-defined checkpointing schemes simply limit the location of checkpoints to specific ranges in the program's execution [14].

Exportable kernel state. Another interesting approach to checkpointing involves exporting the kernel state. This approach was explored using the Fluke microkernel [16]. The Fluke microkernel was designed in such a way that the kernel state is exportable to the user level; it also allows the kernel state to be set, or imported, from the user level. This allows a user-level checkpointing system to access the necessary kernel objects that pertain to a given process. Most of the checkpointing systems we have discussed so far must infer the kernel state by logging information collected by system call wrappers. One of the major drawbacks of this approach is that a process to be checkpointed must be in the child environment of the checkpointing application; a process with no checkpointing application as an ancestor apparently cannot be checkpointed. In addition, portability is limited for a checkpointing system that must run on the Fluke microkernel.

System-level checkpointing

Many of the system-level checkpointing systems in existence today are focused on checkpointing parallel applications running on distributed operating systems or clusters of computers. MOSIX [17] is one such system. MOSIX is a distributed operating system with checkpointing and process migration support designed for load balancing. MOSIX was designed for scalable clusters of PCs. While MOSIX does well in
achieving its goal of resource sharing, it does not provide much support for the system administrator wanting to checkpoint a user's process before killing it. Simply put, MOSIX does not provide support for storing a checkpoint image in a file.

A similar system is CKPM [18]. CKPM was designed for checkpointing parallel applications running in a network-based computing environment. CKPM was also designed at the system level, but requires additional libraries for wrapping the Parallel Virtual Machine (PVM) libraries. CKPM's checkpointing strategy uses pessimistic log-based protocols. While this system is efficient in achieving a globally consistent state when checkpointing parallel applications, it does not provide the ability to store a checkpoint image in a file. Furthermore, checkpointing is restricted to the parallel applications of the PVM.

One of the more notable and recent attempts to perform system-level checkpointing, and one that also provides the functionality we are looking for, is a tool called epckpt [19]. Epckpt was designed with a focus on checkpointing parallel applications, specifically those resulting from a parent process calling fork(). Epckpt comes as a patch for the Linux kernel. Since epckpt is part of the kernel, it has a number of advantages over the checkpointing systems we have discussed so far. Processes to be checkpointed with epckpt need not be recompiled or even relinked with any special libraries. Furthermore, epckpt can handle a broader range of applications, since it is not dependent on any special language or compiler. Being part of the kernel, epckpt has direct access to a process's address space. Furthermore, epckpt can write the checkpoint image to a number of file descriptor abstractions. Epckpt
can write the checkpoint image to a file, pipe, or socket. This enables epckpt to migrate a process at the time of the checkpoint.

One of the main disadvantages of system-level checkpointing is that it usually results in a rather large checkpoint image. Some refer to this approach as being a core dump, which suggests a checkpoint image with much unnecessary information. Epckpt has directly addressed the issue of checkpoint size: it allows the user to omit shared libraries and binary files from the checkpoint image. In cases where the user knows that these files will still be available when the process is restarted, the size of the checkpoint can be greatly reduced. In some cases, the size of the checkpoint image was reduced by more than half when the shared libraries and binary files were omitted.

Epckpt does still suffer from a few disadvantages. Like a number of the checkpointing systems we have previously discussed, epckpt requires that the user know a process may need to be checkpointed before starting that process. Epckpt provides a new system call, collect_data(), that notifies the kernel to start recording information about a process; this information consists of file names and libraries. Epckpt provides a tool called spawn that can invoke this system call when starting a process, but the limitation still exists. In addition, epckpt is designed as a patch for the Linux kernel, so to use it one must recompile the entire kernel.

CRAK. A number of ideas used in the design of epckpt have been extended in the work on CRAK [20]. CRAK is a Linux checkpointing system that has been designed as a kernel module. As a kernel module, CRAK enjoys a number of the same benefits as epckpt by functioning at the system level. CRAK has access to a process's address space in addition to a number of kernel objects. Furthermore, CRAK requires no special
libraries, compilers, or any modifications to the user code. In addition, CRAK requires no special logging by the kernel or by any user-level application. The main limitation of CRAK is that it requires an operating system to provide module support; without module support, CRAK cannot be loaded. However, once loaded, CRAK can checkpoint and restart a wide range of applications.

CRAK [20] provides two essential user-level tools: ck and restart. To perform a checkpoint with CRAK, one invokes ck, which accepts two command line arguments: the process ID of the process to be checkpointed, and the filename of the file where the checkpoint image should be stored. This user-level application then invokes the module functions that perform the necessary actions to create a checkpoint image. These module functions begin by stopping the execution of the specified process. Once the process is stopped, the following process information is copied into the checkpoint image file:

- Address space
- Register set
- Opened files
- Pipes
- Sockets
- Current working directory
- Signal handler
- Termios information

Just as with epckpt, the size of this checkpoint file can be greatly reduced by omitting the shared libraries and binary code; CRAK offers the same flags for omitting these items when possible. Once checkpointed, the checkpoint image files can be stored for a later restart or moved to another node to achieve process migration.

To restart a process, one simply invokes restart, providing the filename of the file containing the checkpoint image on the command line. Restart works much like the
system call execve(). The address space data in the checkpoint image file is copied into the address space of the restart program. This, in conjunction with restoring the other items from the list above, essentially restores the process.

Undoubtedly, the work on CRAK has laid the framework for an excellent checkpointing system. To the knowledge of this author, no other checkpointing system has attempted to handle items such as networked sockets the way CRAK has. However, CRAK has left a number of areas open for continued work. For example, support for items such as opened files, pipes, and sockets exists at the user level; we believe support for these items should exist at the system level. Furthermore, there are a number of issues not supported by CRAK:

- Restoring of the PID, and a PID reservation system for checkpointed processes
- Storing an opened file's contents (CRAK only saves opened files' pathnames), and handling opened files that have been deleted or modified
- Restoring the file pointer when restarting a process
- UDP sockets (only TCP sockets are supported) and the loopback address
- Restoring an established TCP connection that is the result of a call to accept(); such connections are multiplexed on the same port as a listening socket, which is created by a call to listen()
- Restarting a process in a terminal window other than that of the restart program

As discussed in later chapters, the author has expanded the work on CRAK to include support for the items in the above list. In addition, the author has moved support for items such as opened files, pipes, and sockets to the system level.
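As a concrete illustration of one item on this list, restoring the file pointer amounts to recording each descriptor's offset at checkpoint time and reapplying it after the file is reopened on restart. The sketch below is an illustration using Python's POSIX bindings (os.lseek), not CRAK code:

```python
# Illustration (not CRAK code) of restoring a file pointer on restart:
# record the descriptor's offset at checkpoint time with lseek(), then
# reapply it to the reopened descriptor.
import os
import tempfile

def save_offset(fd):
    """Record the current file offset: lseek(fd, 0, SEEK_CUR)."""
    return os.lseek(fd, 0, os.SEEK_CUR)

def restore_offset(fd, offset):
    """Reposition a reopened descriptor at the saved offset."""
    os.lseek(fd, offset, os.SEEK_SET)

def demo():
    fd, path = tempfile.mkstemp()
    os.write(fd, b"abcdef")
    os.lseek(fd, 2, os.SEEK_SET)       # reader is 2 bytes in
    saved = save_offset(fd)
    os.close(fd)                       # "checkpoint", then process dies
    fd = os.open(path, os.O_RDONLY)    # "restart": reopen the file
    restore_offset(fd, saved)
    data = os.read(fd, 2)              # resumes exactly where it left off
    os.close(fd)
    os.unlink(path)
    return data
```

Without this step, a restarted reader would silently begin again at offset zero, which is the behavior the list above identifies as a gap in CRAK.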

PAGE 31

Buffer Overflow Attacks

Background

A buffer overflow takes place when a larger amount of data is copied into a buffer than that buffer was designed to hold. Functions such as strcpy, strcat, sprintf, and gets do not perform bounds checking and thus allow programmers to write code that overflows. A buffer overflow attack is usually the result of a malicious individual purposely overflowing a buffer with the goal of altering a program's intended behavior.

Figure 2-1. Typical runtime program layout (the stack frames of f1, f2, and f3, where f3 declares the local array char buf[10] adjacent to f3's saved frame pointer and return address).

Recall that a function's return address resides on the stack just below the function's local variables. The only item between a function's return address and local variables is the saved frame pointer (SFP). This concept is shown in Figure 2-1. In the case where an array is declared as a function's local variable, such as buf in Figure 2-1, space for the array is allocated on the stack. If this array, or buffer, is overflowed with more data than was allocated for it, the excess data overwrites other items on the stack. For example, a skilled attacker may overflow buf, and thus overwrite f3's SFP and RA. If this address is

PAGE 32

overwritten properly, when f3 returns, control flow of the program is sent to a location of the attacker's choosing. Commonly, the attacker sends control flow to a location on the stack where the attacker has injected his/her own malicious code. Usually the malicious code is injected inside the same array or buffer that is being overflowed. The result of this buffer overflow is that the attacker is able to execute his/her malicious code with the privileges of the original program. If the compromised program has root privileges, then the code executed by the attacker also has root privileges. One of the most common goals of an attacker launching a buffer overflow attack is to spawn a root shell. The code to spawn a shell is short and can be injected into a buffer rather easily. The impact of a malicious user gaining access to a root shell is beyond the scope of this paper, but clearly undesirable for any system administrator. A buffer overflow attack that takes place on the stack is often referred to as a stack-smashing attack. Other types of buffer overflow attacks can take place in the heap, bss, or data segments. A buffer overflow that takes place in the heap is also referred to as a heap-smashing attack. However, the stack-smashing attack is the most popular since it allows the attacker to inject code and alter control flow in one step. Other forms of stack-smashing and buffer overflow attacks involve redirecting control flow to other preexisting functions or even library functions.

Related Work

StackGuard

One of the most notable approaches to detecting and preventing buffer overflow attacks is referred to as StackGuard. Cowan et al. [21] created a compiler technique that involves placing a canary word on the stack next to the return address. This canary word acts as a border between a function's return address and local variables. It is very

PAGE 33

difficult for an attacker to overwrite a return address without overwriting the canary word. When a function is returning, StackGuard checks to make sure the canary word has not been modified. If the canary word is unmodified, then that implies the return address is also unmodified. One of the advantages of StackGuard is that it does not require any changes to the program source code or the existing operating system. StackGuard does suffer a performance penalty; however, that penalty is minor. The main downfall of StackGuard is that programs are only protected if they have been recompiled with a specially enhanced compiler.

Libsafe and Libverify

Baratloo et al. [22] proposed two new methods referred to as Libsafe and Libverify. Both methods were designed as dynamically loadable libraries and were implemented on Linux. Libsafe uses saved frame pointers on the stack to act as upper bounds when writing to a buffer. Libsafe intercepts library calls to functions such as strcpy() or scanf() that are known to be exploitable. It then executes its own version of these functions that provides the user the same functionality but also supplies bounds checking based on the upper bounds set by the saved frame pointers. Libverify uses an approach similar to StackGuard in that a return address is verified before a function is allowed to return. Libverify does this by copying each function into the heap and overwriting the original beginning and end of each function with a call to a wrapper function. The wrapper function called at the beginning of a function stores the return address, allowing the wrapper function called at the end of the function to verify the return address. One downfall of this method is that the amount of memory required for each function is double what the process would require if not using Libverify.
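The canary mechanism can be illustrated with a small, self-contained simulation. This is an illustrative sketch, not StackGuard itself: the "frame" here is an ordinary struct rather than a real stack frame, and the canary value and layout are assumptions made for the demonstration.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define CANARY 0xDEADBEEFu
#define BUF_SZ 16

/* A simulated stack frame: a local buffer, then the canary word,
 * then the "return address" the canary protects. */
struct frame {
    char buf[BUF_SZ];
    uint32_t canary;
    uintptr_t ret_addr;
};

/* Copy len bytes starting at the buffer with no bounds check,
 * mimicking an unchecked strcpy/memcpy into a local array. */
static void unchecked_copy(struct frame *f, const char *src, size_t len)
{
    memcpy((char *)f, src, len);
}

/* StackGuard-style check at function return: the canary must be intact.
 * An overflow long enough to reach the return address must first
 * have clobbered the canary. */
static int canary_intact(const struct frame *f)
{
    return f->canary == CANARY;
}
```

A write that fits in buf leaves the canary untouched; a write of BUF_SZ + 4 bytes of attacker data overwrites it, and the check fails before the corrupted return address could be used.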

PAGE 34

VtPath

One approach proposed by Feng et al. [23] is VtPath. VtPath is designed to detect anomalous behavior but would also work well in detecting buffer overflow attacks. VtPath is unique in that it uses information from a program's call stack to perform anomaly detection. VtPath intercepts system calls. At each system call it takes a snapshot of the return addresses on the stack. The sequence of return addresses found between two system calls creates what is referred to as a virtual path. During training, VtPath learns the normal virtual paths that a program executes. When online, VtPath detects any virtual paths that were not experienced in training. When such a path occurs, it is likely that an anomaly has occurred.

RAD

Another proposed approach to defend against buffer overflow attacks was introduced by Prasad et al. [24]. This approach involves rewriting binary executables to include a return address defense (RAD) mechanism. This approach is rather complex since it requires accurate disassembly in order to distinguish function boundaries. Once function boundaries are located, they can be rewritten to include the RAD code. Upon entering a function, the RAD code stores a second copy of the return address that is later used to verify the return address on the stack when the function returns. Unfortunately, this approach is limited by the challenges faced in disassembly. As Prasad points out, distinguishing between code and data in the code region can be an undecidable problem. Furthermore, we suspect this approach could lead to significant runtime overhead as the ratio of lines of code to the number of functions decreases.
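RAD's core idea, keeping a second copy of the return address and comparing it at function exit, can be sketched in C. This is a hand-instrumented illustration under stated assumptions (real RAD rewrites the binary; the shadow stack and function names here are invented for the sketch, and __builtin_return_address is a GCC/Clang extension):

```c
#include <assert.h>
#include <stddef.h>

/* A tiny shadow stack holding second copies of return addresses. */
#define SHADOW_MAX 64
static const void *shadow[SHADOW_MAX];
static size_t shadow_top;

static void rad_enter(const void *ra) { shadow[shadow_top++] = ra; }

/* Returns 1 if the return address about to be used matches the copy
 * saved at function entry. */
static int rad_leave(const void *ra) { return shadow[--shadow_top] == ra; }

/* An "instrumented" function: saves its return address on entry and
 * verifies it just before returning, as RAD-rewritten code would. */
static int instrumented(int x)
{
    rad_enter(__builtin_return_address(0));
    int result = x * 2;                              /* the function body */
    assert(rad_leave(__builtin_return_address(0)));  /* RAD check */
    return result;
}
```

In an uncorrupted run the saved and current return addresses match; a stack-smashing overwrite between entry and exit would make the comparison fail.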

PAGE 35

Static analysis

Other approaches aimed at the broader issue of anomaly detection include the call graph and abstract stack models proposed by Wagner et al. [25]. These methods use static analysis of program source code to model the program's execution as a nondeterministic finite automaton (NDFA) or nondeterministic pushdown automaton (NDPDA). These methods monitor a program's execution while using these models to determine if the sequences of system calls generated by the program are consistent with the program's source code. The downfall of these approaches is that they require access to a program's source code. When dealing with legacy applications or commercial software, access to program source is often unavailable.

Computer Forensics

Stephenson defines computer forensics as the field of extracting hidden or deleted information from the disks of a computer [26]. Carrier [27] refers to computer forensics as the acquisition of hard disks and analysis of file systems. Simply put, computer forensics is the art of extracting digital evidence from a computer system, usually associated with a crime. Though not relevant to this discussion, computer forensics does at times include rescuing data from a damaged or corrupted computer system. Commonly, a computer forensic investigation takes place on the computer system that has suffered an attack from another computer, or on the apprehended computer of a suspected criminal. In the case of the computer attack, the forensic investigator is usually attempting to find evidence that can answer questions such as where the attack originated, what vulnerability made the attack possible, and what files were compromised as a result of the attack. In the case of the suspected criminal's apprehended computer, the forensic investigator is usually looking for evidence of the suspected criminal's recent

PAGE 36

behavior, motives, or planning of future crimes. In either case, the forensic investigator has a number of tactics for collecting such evidence. The investigation may involve anything from searching the file system for incriminating text files to analyzing log files for evidence of the attack. Typically, a computer forensic investigation involves using special forensic tools to analyze items such as slack space, unallocated space, or swap files. Slack space is the leftover space in a block or cluster allocated to a file but not used by the file. Unallocated space is space that is currently not used by any file. Both of these items may contain bytes from old files that were deleted but have yet to be fully overwritten, which allows a digital forensic investigator using forensic tools to extract this data. Swap files can be thought of as scratch paper for an application or the operating system. These files may hold traces of data that allow the digital forensic investigator to piece together what actions have previously taken place on the given computer. Another example of data often used in a forensic investigation, one that does not require special tools to extract, is log files. During a forensic investigation, log files on the victimized or suspected computer are of the utmost importance. A survey of the literature on computer forensics reveals the direct correlation between logging and a successful forensic investigation [26,28,29]. Stephenson refers to the lack of logs as the single biggest barrier to a successful investigation of an intrusion. Upon completion of a forensic investigation, all of the extracted evidence is preserved and stored in a secure facility. A chain-of-custody is maintained to assure that no one tampers with the collected evidence. A chain-of-custody is simply a system of recording who is responsible for the evidence at any point in time, from the moment it was collected until the moment it is used in a courtroom.

PAGE 37

Slack space, unallocated space, swap files, log files, and most other items analyzed by the forensic investigator share an important similarity: each of these items exists as nonvolatile data. Nonvolatile data is data that has been saved to disk or resides on some form of stable storage. The opposite of nonvolatile is obviously volatile. Volatile data is data that resides in main memory, such as a process's address space. Once a computer is unplugged from its power source, all volatile data is lost, but nonvolatile data remains intact. Due to this inherent nature of digital data, computer forensics is largely restricted to the analysis of nonvolatile data. We believe one of the major keys to improving and enhancing computer forensics is to increase the amount of relevant nonvolatile data available to the forensic investigator. Later in this paper we discuss the idea of using checkpointing technology to create additional nonvolatile data from one of the most common forms of volatile data, namely, processes. This would add checkpoint image files to the collection of items the forensic investigator can analyze for evidence. As we know from previous sections of this chapter, a checkpoint image file contains a plethora of information.

PAGE 38

CHAPTER 3
CHECKPOINTING

A Robust Checkpointing System

The ideal checkpointing system should be able to handle a wide range of issues. It should not only meet the needs of process migration in a distributed computing environment, but also provide checkpointing support for the system administrator who wishes to checkpoint a user's seemingly runaway process rather than kill it. Developing a checkpointing system that can support both these issues and the wide range of issues in between is not a simple task. To aid us in this task we have isolated the Three APs to Checkpointing; namely, the ideal checkpointing system should support checkpointing for Any Process on Any Platform at Any Point in time [30]. The Three APs to Checkpointing serve as our guide in working towards the ideal checkpointing system. More specific requirements of the ideal checkpointing system can be classified into three categories: transparency, flexibility, and portability. One of the most common forms of transparency sought after in a checkpointing system is transparency for the application programmer. The ideal checkpointing system would not require any modifications to the application programmer's code to support checkpointing. In addition, the application programmer would not be restricted to any special language or compiler. Furthermore, the application would not have to be recompiled or even relinked with any special checkpointing libraries. The ideal checkpointing system would also be transparent to the operating system code. In other words, it would not require any modifications to the operating system code. In addition, this system would not require

PAGE 39

any logging of information. Furthermore, no system call wrappers would be required. Flexibility would be achieved by having the system support all possible aspects of a process, including the PID, opened files, pipes, and sockets. Further flexibility could be achieved by allowing the user to include or exclude items from the checkpoint image. This would allow the user to reduce the checkpoint size when necessary. Examples of items that should be able to be included in or excluded from a checkpoint are shared libraries, binary files, and the contents of opened files. The system should also allow the user to restart a process in whatever terminal or pseudo-terminal the user wants. Portability is achieved when the checkpointing and/or restarting can take place on a number of different types of systems. To date, no checkpointing system has been able to satisfy all the requirements mentioned in the previous paragraph. This would obviously imply that no system has ever supported all three APs. While developing a checkpointing system that supports all three APs and satisfies all the previously mentioned requirements is quite difficult, it is in fact our long-term goal. However, for our more immediate goal, and for the purposes of this paper, we are focusing on two of the three APs: Any Process at Any Point in time. The checkpointing system developed here operates at the system-level and is designed as a kernel module. This system satisfies these two APs and meets the vast majority of the requirements listed in the previous paragraph. Our system expands the work of CRAK [20] to work on more recent and stable versions of the kernel and to include additional functionality. Our system is called UCLiK (Unconstrained Checkpointing in the Linux Kernel).

PAGE 40

We believe that such a system should operate at the system-level for a number of reasons. One of the most compelling is that it gives us access to all the kernel and process state information. Without this information, other systems have had to rely on system call wrappers to perform logging of process information. In addition, checkpointing at the system-level aids our quest for transparency. It completely alleviates the application programmer from any responsibility. In addition, no special compiler or checkpointing libraries are needed. Furthermore, by having direct access to kernel functions and data structures, we are more efficient than if we were executing at the user-level, incurring additional overhead from user code and library routines. Our only disadvantage is portability. Since our system functions at the system-level, it is not very portable to other types of operating systems. Hence, the second AP, Any Platform, is not satisfied at this time. However, it is important to point out that the major benefit of the second AP, Any Platform, would be cross-platform migration. At this time, cross-platform migration is not one of our goals. Furthermore, this type of migration would be extremely difficult considering how differently checkpoints would be created on different platforms. It is very likely that any single system able to support cross-platform migration would suffer greatly in areas such as transparency and flexibility, and development of a completely portable system may ultimately be impossible. Table 3-1 shows how the different checkpointing systems addressed in the previous chapter measure up to the APs of Checkpointing. We can see in this table that no system is successful in addressing the AP, Any Platform. We can also see that our system, UCLiK, is more successful at satisfying the other two APs, Any Process and Any Point in time, than any other existing checkpointing system.

PAGE 41

Table 3-1. AP table for the different checkpointing systems.

CHK-LIB
  Any Process: No. Processes must be linked with the run-time library.
  Any Point in time: No. Checkpoints are only made at the points specified by the programmer.
  Any Platform: No.

Condor
  Any Process: No. Processes must be linked with the run-time library and run within the Condor system.
  Any Point in time: Yes.
  Any Platform: No.

Libckpt
  Any Process: No. Processes must be recompiled with the checkpointing library and main() must be renamed to ckpt_target().
  Any Point in time: No. Checkpoints are created at prespecified time intervals.
  Any Platform: No.

ZPL
  Any Process: No. Processes must be written in the ZPL language and compiled with the ZPL compiler.
  Any Point in time: No. Checkpoints can only be created during certain ranges of code.
  Any Platform: No.

MOSIX
  Any Process: No. Processes must be part of the MOSIX cluster system.
  Any Point in time: Yes and No. A globally consistent state must be achieved; a single process cannot be targeted.
  Any Platform: No.

CKPM
  Any Process: No. Requires libraries for wrapping the Parallel Virtual Machine (PVM) libraries.
  Any Point in time: Yes and No. A globally consistent state must be achieved; a single process cannot be targeted.
  Any Platform: No.

Epckpt
  Any Process: No. The new system call collect_data() must be invoked to collect data about a process to be checkpointed.
  Any Point in time: Yes.
  Any Platform: No.

CRAK
  Any Process: Almost. CRAK has the freedom to checkpoint any process but does not provide support for items such as the PID, opened files' contents, file pointers, opened files that have been deleted or modified, UDP sockets, and loopback addresses.
  Any Point in time: Yes.
  Any Platform: No.

UCLiK
  Any Process: Yes. UCLiK has the freedom to checkpoint any process and provides support for those items CRAK does not.
  Any Point in time: Yes.
  Any Platform: No.

PAGE 42

In conclusion, we believe checkpointing at the system-level brings us closer to achieving the ideal checkpointing system than any other approach. Checkpointing at the system-level provides transparency that is unmatched by user-level checkpointing systems. It is also important to note that additional transparency is achieved by developing the system as a kernel module. Since a module can be inserted into and removed from the kernel, modifications to the running kernel's code are unnecessary. Furthermore, we can provide more flexibility with our checkpointing system than has been supported by other systems. Additional issues of portability and the potential standardization of checkpoint image files are addressed in a later chapter.

UCLiK Overview

To understand how UCLiK works, it is best to first understand the files involved. The UCLiK system is primarily composed of the following four files:
- ukill.C
- ucliklib.c
- uclik.c
- uclik.h
The file ukill.C is the source code for our user-space tool. This tool is what a user would invoke to perform a checkpoint, or to restart a checkpointed process. The file ucliklib.c contains helper functions that assist the user-space tool in communicating with the functions in the kernel module. The kernel module source code is located in the third file, uclik.c. The fourth file, uclik.h, is a header file shared by all three of the previously mentioned files. The user-space tool communicates with the module through a device file. The name of this device file is /dev/ckpt. This device file is created when the module is

PAGE 43

loaded into the kernel. The module registers itself as the owner of this device file. For simplicity, we refer to the user-space tool as being able to call functions in the kernel module. However, to be precise, the user-space tool actually calls functions in ucliklib.c. These functions then make ioctl() calls to the device file, /dev/ckpt, which is owned by the kernel module. The kernel module receives these ioctl() calls and directs them to the corresponding kernel module functions. Figure 3-1 illustrates this concept more clearly.

Figure 3-1. Performing a checkpoint with UCLiK.

A checkpoint is created when a user invokes the user-space tool ukill. The ukill tool receives a minimum of one command line argument: the PID of the process to be checkpointed. By default, the process's checkpoint will be saved in a file named with the process's PID. The user can optionally include a filename

PAGE 44

as an additional command line argument to specify where the checkpoint image should be stored. When checkpointing, the main purpose of ukill is to open the file where the checkpoint should be stored and pass its file descriptor to the checkpoint() function in the kernel module. Ukill also passes the PID number and any additional flags sent on the command line to the checkpoint() function. The checkpoint() function uses the PID number to locate the given process's process descriptor (represented with a task_struct structure). The checkpoint() function immediately sends the SIGSTOP signal to the process being checkpointed. If we are checkpointing a family of processes, the SIGSTOP signal is sent to each process in that family. Once the process is stopped, the real checkpointing begins. UCLiK begins by storing crucial fields of the process descriptor such as pid, uid, gid, and so forth. Following this, the process address space is stored. This includes fields from the memory descriptor (represented with an mm_struct structure) such as the initial and final addresses of the executable code, initialized data, heap, command-line arguments, and environment variables. The initial address of the stack is also saved. Next, UCLiK loops through each of the memory regions (represented with vm_area_struct structures). For each memory region, the linear address boundaries, access permissions, and flags of that region are saved. The contents of each memory region are also saved. Following this, UCLiK iterates through each opened file descriptor, saving the necessary information for each file abstraction. UCLiK provides support for opened files, pipes, and sockets. Lastly, UCLiK stores information about items such as the current working directory and the signal handlers. Once all of this information is written to a file, UCLiK can optionally kill the process and then exit.
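The first step of the checkpoint, freezing the target with SIGSTOP before its state is copied, can be observed from user-space with ordinary POSIX calls. This is a minimal sketch, not UCLiK code: the demo child and its cleanup are invented for the illustration.

```c
#include <signal.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Stop a child with SIGSTOP (as UCLiK does before copying its state),
 * confirm it is stopped, then resume it with SIGCONT. */
static int stop_and_resume(void)
{
    pid_t pid = fork();
    if (pid == 0) {                  /* child: idle until signaled */
        for (;;) pause();
        _exit(0);
    }
    kill(pid, SIGSTOP);              /* freeze the process */
    int status;
    waitpid(pid, &status, WUNTRACED);
    int stopped = WIFSTOPPED(status);    /* 1: the child is now stopped */
    kill(pid, SIGCONT);              /* resume; a real restart would
                                        instead rebuild the saved state */
    kill(pid, SIGKILL);              /* clean up the demo child */
    waitpid(pid, &status, 0);
    return stopped;
}
```

While the child is stopped its address space and descriptor state cannot change, which is what makes the copied image consistent.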

PAGE 45

A process is restarted by invoking the user-space tool ukill and activating the undo switch. When the undo switch is activated, the command line argument following the undo switch should be the name of the file containing the image of the checkpointed process. Very often, this will simply be the process's original PID (i.e., ukill -undo PID). Ukill reads this file to find out process family information. If it is a single process, then ukill calls the restart() function of the kernel module. If it is a family of processes, then ukill forks the appropriate number of processes, in the appropriate order, and allows each new process to call the restart() function separately. The restart() function of the kernel module subsequently reads in all the process information from the checkpoint image file, copying the original process's information over the ukill process. Once the kernel module completes this task, it allows the process to run. A family of processes is handled slightly differently and is addressed later.

UCLiK's Comprehensive Functionality

Opened Files

CRAK's handling of opened files was very straightforward. During a checkpoint, the following information is stored as part of the checkpoint file: the file pathname, file descriptor number, file pointer, access flags, and access mode. During a restart, CRAK opens the file using the file pathname, access flags, and access mode. This open takes place at the user-level. If opening the file is successful, then the descriptor is duped if necessary to assure it has the same file descriptor number as

PAGE 46

before the checkpoint. This approach worked well but left a number of issues open for continued work. One of our objectives is to provide transparent checkpointing at the system-level. During a restart, CRAK would reopen files at the user-level. With UCLiK, during a restart, files are opened at the system-level. One major advantage of reopening a file at the system-level is that it allows us to also restore the file pointer. The following subsections address this and a number of other issues with opened files, all of which are dealt with at the system-level.

Restoring the file pointer

UCLiK, unlike its predecessor, has the ability to restore the file pointer. To understand the importance of a file object's file pointer, we must first understand how a read() system call made in user-space is handled by the kernel. As an example, suppose we have a file that is 14,361 bytes in length. Suppose we also have a process that is going to read this entire file line by line.

Figure 3-2. File size relative to page size.

We assume the user-space process has already opened the file. When the user-space process first begins to read the file, the kernel loads the first page of the file into the address space of the user-space process. The file object pertaining to the file being read has its file pointer (i.e., the f_pos field) assigned the value 4096. This seems strange since the file is being read line by line, but the buffering of line-by-line

PAGE 47

reading is actually handled by glibc. The glibc portion of execution takes place in user-space but is hidden from the programmer. Figure 3-3 illustrates the relationship of the process, glibc, and the kernel.

Figure 3-3. Glibc executes in user-space.

Inside the glibc layer there are three variables of great importance: _IO_read_base, _IO_read_ptr, and _IO_read_end. When the page is loaded into memory, the variable _IO_read_base points to the first byte of the page. The variable _IO_read_end points to the first byte just after the last byte of the page. Meanwhile, while the file is being read line by line, the variable _IO_read_ptr points to the next unread byte in the page. Essentially, _IO_read_ptr is our real file pointer.

Figure 3-4. Glibc variables during a read.

_IO_read_ptr is modified with each incremental read. Once the value of _IO_read_ptr equals the value of _IO_read_end, the kernel removes this page from the

PAGE 48

process's address space and loads the next page (Page 2) into the process's address space. At that point, the f_pos field is assigned the value 8,192. This process continues until the end of the file is reached. For the purposes of process checkpointing and restarting, it is essential that the value of f_pos be saved and restored when the process is restarted. Let us continue our above example of a process reading a file of size 14,361 bytes, only this time we checkpoint the process and restart it without restoring the f_pos field. We assume the process was checkpointed when 4,363 bytes of the file had been read. This would imply that the second page of the file had been loaded into memory. Figure 3-5 illustrates our current state.

Figure 3-5. The second page of a file loaded into memory.

If the process is checkpointed at this moment, then when the process is restarted, the f_pos field must be assigned a value of 8,192. If it is not, then it is assigned a default value of 0. Meanwhile, our glibc values, _IO_read_base, _IO_read_ptr, and _IO_read_end, will be restored to their original values. Since the glibc values are part of the process address space, when the process address space is restored, the glibc values are also restored. When reading is continued, the _IO_read_ptr value causes the reading to pick

PAGE 49

up where it left off. However, when the value of _IO_read_ptr equals that of _IO_read_end, the kernel then uses the f_pos value to determine which page to load into memory next. Since the value of f_pos is 0, the kernel loads the first page of the file into the process's address space. From here, the entire file is read. To summarize, when a process is restarted, if the f_pos value is allowed to default to 0, then the rest of whatever page was already loaded into the process address space is read, followed by the entire file being read again. In testing, because the f_pos value was allowed to default to 0, the resulting number of bytes read after restarting the process was equal to the number of bytes in the file plus the number of unread bytes in the page that was loaded into memory at the time the process was checkpointed. For example, the file being read was 14,361 bytes in length. The process was checkpointed when 4,363 bytes had been read. This meant that the second page was the page currently loaded into the process's address space. This also meant that there were 3,829 bytes remaining in that second page that had never been read. When the process was restarted, the number of bytes read after restarting was 18,190: the sum of 14,361 and 3,829. Furthermore, it is important to note that the f_pos value must be restored in kernel space. One could attempt to restore the f_pos value from user-space through the use of functions like lseek(). However, if this is done from user-space, the _IO_read_ptr value of the glibc layer is also modified. As we have seen above, the f_pos value is usually greater than the _IO_read_ptr value. The f_pos value points to the end of the page currently loaded in the process's address space. If the _IO_read_ptr value is restored from user-space, the only value available is that extracted from f_pos. If

PAGE 50

_IO_read_ptr is restored to the value of f_pos, we will, more than likely, skip some portion of the file being read.

File contents

If we consider the list of items recorded in a CRAK checkpoint, we immediately see another important missing file attribute: the actual contents of an opened file were not included in the checkpoint image. In UCLiK we have added support for packaging the contents of opened files in the checkpoint image. Since this can drastically affect the checkpoint size, we leave this as an option for the user. When invoking ukill, the user can specify a flag, -p, that notifies the system to package the contents of opened files with the checkpoint image. Later, when invoking ukill -undo, the user can specify the same flag, -p, to unpack the files that have been packaged with the checkpoint image. Unpacking the files simply creates a copy of them. If the original files are not available, the user can specify another flag that tells the system to use the new copies of these files.

Deleted and modified files

Another issue left open by CRAK was how to handle deleted and modified files. Having already added the functionality to package the contents of opened files with a checkpoint image, this issue is resolved rather easily. During a restart with UCLiK, if a file is found to be missing or modified since the time of the checkpoint, the restart alerts the user and cancels itself. At this point, the user can restart the process again using the flag that tells the system to force the restart. This restarts the process with the missing or modified file. Of course, the user still has the option to unpack and use any packaged copies of files that were included in the checkpoint image file.
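The file-pointer behavior discussed in the subsection above can be reproduced from user-space. In this sketch (the scratch file name is invented for the demo), the kernel-side offset read with lseek() on the raw descriptor runs ahead of the stdio position driven by glibc's _IO_read_ptr, and a second helper reproduces the 18,190-byte over-read arithmetic from the earlier example:

```c
#include <stdio.h>
#include <unistd.h>

/* The kernel's f_pos (seen via lseek on the raw fd) runs ahead of the
 * stdio position while a file is read line by line through glibc. */
static int fpos_runs_ahead(void)
{
    const char *path = "fpos_demo.txt";      /* hypothetical scratch file */
    FILE *out = fopen(path, "w");
    for (int i = 0; i < 500; i++)            /* 500 * 10 bytes > one page */
        fprintf(out, "line %04d\n", i);
    fclose(out);

    FILE *in = fopen(path, "r");
    char line[64];
    fgets(line, sizeof line, in);            /* read a single 10-byte line */
    long user_pos = ftell(in);               /* logical position: 10 */
    long kernel_pos = lseek(fileno(in), 0, SEEK_CUR); /* whole buffered chunk */
    fclose(in);
    remove(path);
    return kernel_pos > user_pos;
}

/* Bytes read after a restart that lets f_pos default to 0: the unread
 * remainder of the currently loaded page, then the whole file again. */
static long bytes_read_after_bad_restart(long file_size, long read_so_far,
                                         long page)
{
    long page_end = ((read_so_far / page) + 1) * page;
    if (page_end > file_size)
        page_end = file_size;
    return (page_end - read_so_far) + file_size;
}
```

With the chapter's numbers (a 14,361-byte file checkpointed after 4,363 bytes, 4,096-byte pages), the helper yields 18,190, matching the observed over-read.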


Restoring PID

CRAK makes no attempt to restore the PID. When CRAK restarts a process, the PID assigned to the restart process is inherited by the restarted process. We have added the functionality of restoring a process's PID to UCLiK. We use the find_task_by_pid() function to determine if the PID is available. If it is available, then the restarted process is assigned the original PID. However, if the PID is not available, then the restarted process runs with the new PID.

Pipes

CRAK's handling of pipes was rather similar to its handling of opened files. During a checkpoint, information such as the inode number and whether a pipe end was a reader or a writer was included in the checkpoint image. During a restart, a new pipe was created and duplicated, if necessary, to the correct file descriptor. The creation of this new pipe during restart took place at the user level. We have moved this functionality to the system level: UCLiK recreates pipes at the system level. However, creating pipes at the system level incurs some additional complexity. Pipes are created with two ends, one for reading and one for writing. When a user-level application creates a pipe, the call is usually followed by a fork. The parent process can then close one end of the pipe, and the child process closes the other end. This allows the two separate processes to share a single pipe, which then acts as a one-way channel of communication. By the very nature of our restart mechanism, this is hard to replicate. Our restart mechanism consists of the user-level ukill program, whose address space is copied over with the address space of the checkpointed application. When we are restarting multiple processes, ukill forks and calls the restart function of our kernel module once for each process being restarted. Imagine that there are two processes being restarted, and there


exists a pipe between them. When the first process invokes the restart function in the kernel module, the pipe is created. When the second process invokes the restart function of the kernel module, it needs the same pipe; if it creates a pipe, it gets a new pipe. We handle this scenario by passing an additional parameter, called family_count, to the restart function of the kernel module. The family_count tells the module how many parallel processes are being restarted. The module then knows to maintain pointers to any created pipes, so that when other processes need to access those same pipes, identified by their original inode numbers, the module still has access to them. This has enabled us to restore pipes from kernel space. Additional detail concerning pipes between processes in larger groups of parallel processes is addressed in the next subsection.

Parallel Processes

One of UCLiK's most beneficial features is its ability to checkpoint parallel processes. While some checkpointing systems do not support checkpointing parallel processes at all, those that do are often constrained by the types of parallel processes they support. Some such systems only checkpoint parallel processes consisting of a single parent and its immediate children. UCLiK is not constrained by the type or number of parallel processes it can checkpoint. Once again, since UCLiK operates at the system level, it has access to process descriptor fields such as p_cptr and p_osptr. The p_cptr field of the process descriptor is a pointer to the process descriptor of the process's youngest child, that is, its most recently spawned child. The p_osptr field points to the process descriptor of a process's older sibling, the process spawned by the same parent just before the spawning of the given process (Figure 3-6).


Figure 3-6. Process descriptor fields p_osptr and p_cptr.

UCLiK uses these fields to navigate through a tree of processes while creating a linear list of the processes. This linear list is stored as part of the process family's checkpoint. Upon restart, this linear list of processes is recursively scanned through to fork a process for each process in the original process tree. The first process entered into the linear process list is always the highest parent process. Using the process tree from Figure 3-6, P1 would be the first process entered into the list. At this point, P1 becomes our main list item. UCLiK would then follow P1's p_cptr field to find P4's process descriptor. Using the p_osptr fields, P4, P3, and P2 would subsequently be added to the list in that order. Now that P1's immediate children have been added to the list, the item in the list following P1, namely P4, becomes the main list item. UCLiK then follows the same procedure for adding P4's children to the list. After P4 has been the main list item, the next item in the list, P3, becomes the main list item (Figure 3-7). The dotted lines in Figure 3-7 surround the processes that have recently been added to the list. The arrow indicates which process in the list is currently the main list item.
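The traversal just described is effectively a breadth-first walk in which p_cptr yields the youngest child first and p_osptr then yields progressively older siblings. A sketch, using the tree of Figure 3-6 expressed as a parent-to-children mapping (children listed oldest to youngest, in spawn order; the process names are those of the figure):

```python
# Tree of Figure 3-6: children listed oldest-to-youngest, in spawn order.
tree = {
    "P1": ["P2", "P3", "P4"],
    "P2": ["P5", "P6"],
    "P4": ["P7", "P8", "P9"],
    "P9": ["P10"],
}

def build_linear_list(root, tree):
    # Start with the highest parent; following p_cptr (youngest child) and
    # then p_osptr (older siblings) appends children youngest-first.
    order = [root]
    i = 0
    while i < len(order):             # the "main list item" walks down the list
        kids = tree.get(order[i], [])
        order.extend(reversed(kids))  # youngest child first, then older siblings
        i += 1
    return order

linear = build_linear_list("P1", tree)
print(linear)
# ['P1', 'P4', 'P3', 'P2', 'P9', 'P8', 'P7', 'P6', 'P5', 'P10']
```

The resulting ordering matches the process list UCLiK builds for this tree.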


Figure 3-7. Building a linear list of processes.

We see in Figure 3-7 that when the arrow points to P3, no processes are added to the list. However, when the arrow points to P2, the processes P5 and P6 are added. This procedure continues until the arrow reaches the end of the list. Once UCLiK has created a linear process list, it is easy for UCLiK to signal each process in the list to stop execution. This is done with the SIGSTOP signal. With each process in the group of parallel processes stopped, UCLiK can incrementally checkpoint each process to a file. Once all the processes are checkpointed, the execution of this family of processes can continue or be stopped.

Upon restarting a group of parallel processes, our user-space ukill tool is responsible for forking a process for each process in the original process tree. Recall that all of these processes cannot be forked from a single process. We must have each process fork the same number of processes as its corresponding original process had forked. To facilitate this procedure, we maintain two additional values for each process in


the list. These two values are referred to as children and childstart. The children value is simply the number of children a given process has spawned. The childstart value tells us what position in the list holds a given process's oldest child. Table 3-1 contains the values for the process tree in Figure 3-6.

Table 3-1. Example values for children and childstart.
Position in list   0    1    2    3    4    5    6    7    8    9
Process            P1   P4   P3   P2   P9   P8   P7   P6   P5   P10
Children           3    3    0    2    1    0    0    0    0    0
Childstart         3    6    0    8    9    0    0    0    0    0

At first glance, the childstart values may not be immediately obvious. However, if we start at the beginning of the list with P1 and move through the list summing the children values, we quickly see where the childstart values come from. For example, adding P1's children value of 3 to P4's children value of 3 gives P4's childstart value of 6. Since P3 has zero children, its childstart value is also zero. However, adding P2's children value of 2 to P4's childstart value of 6 (which is the current sum of children values) gives P2's childstart value of 8. We continue this process to fill in the rest of the table. Once these values are determined, UCLiK uses this list to fork the appropriate number of children, thus recreating the original process tree. This procedure is done with a combination of iteration and recursion. A call to our restore_process_tree() function starts with P1 at the beginning of the list. This function uses P1's children and childstart values to iterate through P1's children, forking a process for each one. This iteration moves in right-to-left order relative to the process list shown at the top of Figure 3-8. For P1, the function forks a process for P2, then P3, and then P4. Each new


forked process then recursively calls the restore_process_tree() function on itself. This causes each forked process to have its own children iterated through in the same way as P1's. Subsequently, when restore_process_tree() is called for process P2, it forks a process for P5 and P6. When this function is called for process P4, it forks a process for P7, P8, and P9. Figure 3-8 illustrates this procedure. The arrows in Figure 3-8 indicate the process for which the restore_process_tree() function was called. The dotted lines represent the processes being forked.

Figure 3-8. Building a process tree from a process list.

While the restore_process_tree() function is rebuilding our original process tree, it is also invoking functions in the kernel module to restore each of the original processes. It makes a separate call to the kernel module for each process that must be restored. During this, UCLiK must keep track of how many processes are in a group of processes. For all the processes except the last one, UCLiK sends them the SIGSTOP signal after restoring their process address space and kernel state. When UCLiK restores the last


process in a group, it then sends all the rest of the processes the SIGCONT signal. From here, the group of processes can continue their execution.

Restoring PIDs of parallel processes

During a checkpoint, the original PID of each process is stored as part of the checkpoint. During a restart, if a process's original PID value is not in use, then UCLiK has the ability to restore it. When restarting a group of parallel processes, at the point at which UCLiK is restoring the first process of the group, UCLiK must also check whether any of the group's original PID values are in use. If none of the original PID values are in use, then UCLiK can restore these PID values for the restarted group of parallel processes. To check the availability of the group's original PID values, UCLiK makes use of the kernel function find_task_by_pid(). This function is called for each of the original PID values. If a particular PID value is in use, the function returns a pointer to the process descriptor of the corresponding process; if it is not in use, the function returns NULL.

Restoring pipes between parallel processes

When checkpointing with UCLiK, we save pertinent information about pipes between parallel processes. Upon restart, UCLiK restores these pipes from kernel space. Recall that for a group of parallel processes, the ukill tool makes an individual call to the kernel module for each process being restarted. Considering that two different processes often share a single pipe, a pipe created for one process must be accessible during subsequent calls to the kernel module. To handle this situation, we created a pipe cache. When a given process needs one end of a pipe, the pipe cache is always scanned first. If the pipe is not in the cache, we create the pipe and add it to the cache.
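The pipe cache can be pictured as a map from original inode number to an already-created pipe, consulted before any new pipe is made. A user-space sketch of the bookkeeping, with a dictionary standing in for UCLiK's kernel structures and os.pipe() standing in for kernel-side pipe creation:

```python
import os

pipe_cache = {}  # original inode number -> (read_fd, write_fd)

def get_pipe(orig_inode):
    # Scan the cache first; create (and cache) a pipe only on a miss.
    if orig_inode not in pipe_cache:
        pipe_cache[orig_inode] = os.pipe()
    return pipe_cache[orig_inode]

# Two processes being restarted both refer to the pipe whose original
# inode number was, say, 1234: the second lookup must yield the same
# pipe rather than creating a fresh one.
first = get_pipe(1234)
second = get_pipe(1234)
print(first == second)  # True

# When the last process of the group has been restored, the cache is
# cleared so entries never carry over to another group.
r, w = pipe_cache.pop(1234)
os.close(r)
os.close(w)
pipe_cache.clear()
```

The inode number 1234 is an invented illustration; the clearing step corresponds to the family_count-driven cleanup described below.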


UCLiK makes use of the kernel's do_pipe() function to create pipes in kernel space. The do_pipe() function receives as input a pointer to an array of two integers. When do_pipe() returns, these two integer values correspond to the file descriptors of the file objects that represent the two ends of the pipe. These two file objects will have been installed in their corresponding file descriptor positions for the current process. A pointer to the inode shared by these two file objects, along with a pointer to each of the file objects, is stored as an item in UCLiK's pipe cache. Items in the pipe cache are identified by the original pipe inode number; we refer to the original pipe inode as the inode of the pipe that existed before the group of processes was checkpointed. In the case of two processes that share a single pipe, the first process to be restarted causes UCLiK to create a new pipe. After creating this pipe, UCLiK installs the appropriate end of the pipe in the first process and places a corresponding item in the pipe cache. When restarting the second process, UCLiK identifies the cache item by the original pipe's inode number and installs the pipe from that item.

A final note on our pipe cache is that entries in the cache should not carry over from one group of parallel processes to another. To prevent this from happening, every call to the kernel module includes an additional value referred to as family_count. The family_count value represents the number of processes in a group of processes. This allows UCLiK to determine when the first and last processes of a given group are being restarted. When UCLiK determines that it is restarting the last process in a group, it can then clear the pipe cache.

Restoring pipe buffers between parallel processes

UCLiK also has the ability to restore the pipe buffers between parallel processes. UCLiK makes use of the kernel's preprocessor macros PIPE_START, PIPE_BASE, and


PIPE_LEN. PIPE_START points to the read position in the pipe's kernel buffer. PIPE_BASE points to the address of the base of the kernel buffer. PIPE_LEN represents the number of bytes that have been written into the kernel buffer but have not yet been read. During a checkpoint, UCLiK utilizes these macros to store a copy of the buffer along with the rest of the checkpoint. Later, during a restart, these same macros are used again to refill the buffer with the same contents it had before the checkpoint.

Sockets

CRAK stands out from most checkpointing systems by its ability to checkpoint and even migrate some sockets. However, much like opened files and pipes, when restarting a process with sockets, these sockets were created and even bound at the user level. We have moved support for sockets from the user level to the system level. One issue left open by CRAK was that of loopback addresses: CRAK did not support loopback addresses, while UCLiK does.

TCP sockets use a client/server relationship. A typical TCP socket connection is established by the following procedure. On the server side, a socket is created and then bound to a local address and port number. This is done with the socket() and bind() C library functions. The server can then begin listening, using the listen() function, on the port to which it is bound. On the client side, a socket is created and optionally bound. When the client invokes the connect() function, the corresponding server accepts this connection request with a call to the accept() function. If the client does not call bind() before connect(), then the client's socket is automatically bound to a random port. Once this procedure is complete, an established connection exists between the client and the server. The interesting aspect of this procedure is that the server now has two sockets: one socket that was created by the server and set to listen on the bound port, and a


second socket, which has an established connection with the client. When the server calls the accept() function, a second socket is created. This socket is multiplexed on the same port as the listening socket. Usually, different sockets at the same IP address must have unique port numbers, but in this case the port number is shared. This creates an issue when checkpointing and restarting processes that contain sockets.

Inside the Linux kernel, a socket can be in any one of twelve different states. At this point, we only need to be concerned with three of them: TCP_ESTABLISHED, TCP_CLOSE, and TCP_LISTEN. The other states are transitional states. A socket only exists in a transitional state during kernel-mode execution that results from a system call. Before and after any of the system calls mentioned in the previous paragraph, a socket will always be in one of the three previously mentioned states.

Sockets that are part of an established connection will obviously be in the TCP_ESTABLISHED state. When restoring a socket in this state, the naive approach would be to bind the socket back to the port to which it was previously bound. However, this does not work, since the listening socket has already been bound to that port. CRAK handles this issue by allowing the established socket to be bound to another port. This works, but we have now restored the process with a socket that is not exactly like the socket it had before the checkpoint. Furthermore, this means the client of the socket connection must be notified that the server port for the connection has changed. That is functionality we do want to support, but we believe it should be reserved for times when it is absolutely necessary, such as process migration. In UCLiK, we utilize the function tcp_inherit_port(), which allows us to remultiplex the


established socket onto the same port on which the listening socket is listening. We believe this is a better method.

Terminal Selection

We have also developed a tool that allows the user to restart a checkpointed process in the terminal or pseudo-terminal of their choice. This tool makes use of the ioctl() system call to run a command in a different terminal window. The ioctl request TIOCSTI makes it possible to write text to a different terminal window. This is very helpful for a system administrator who wishes to restart a user's process in the user's terminal window rather than his or her own.
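Returning to sockets for a moment, the port sharing described above is easy to observe from user space. In this sketch, the socket returned by accept() reports the same local port as the listening socket, while the client, which never called bind(), is bound automatically to a different port:

```python
import socket

# Server side: create, bind, listen.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))   # port 0: let the kernel pick a free port
srv.listen(1)
server_port = srv.getsockname()[1]

# Client side: connect without ever calling bind().
cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(("127.0.0.1", server_port))

# The server now has two sockets: the listener and the accepted socket.
conn, _ = srv.accept()

shares_port = conn.getsockname()[1] == server_port  # multiplexed on one port
auto_bound = cli.getsockname()[1] != server_port    # client got a random port
print(shares_port, auto_bound)  # True True

conn.close()
cli.close()
srv.close()
```

This two-sockets-one-port situation is exactly what makes the naive rebind during restart fail, as discussed above.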


CHAPTER 4
DETECTING STACK-SMASHING ATTACKS

The ICAT statistics over the past few years have shown that at least one out of every five CVE and CVE-candidate vulnerabilities has been due to buffer overflows. This constitutes a significant portion of today's computer-related security concerns. In this chapter we introduce a novel method for detecting stack-smashing and buffer overflow attacks. Our runtime method extracts return addresses from the program call stack and uses these return addresses to extract their corresponding invoked addresses from memory. We demonstrate how these return and invoked addresses can be represented as a weighted directed graph and used in the detection of stack-smashing attacks. We introduce the concept of a Greedy Hamiltonian Call Path and show how the lack of such a path can be used to detect stack-smashing attacks.

Overview of Proposed Technique

We propose a new method of detecting stack-smashing attacks that checks the integrity of the program call stack. The proposed method operates at the kernel level. It intercepts system calls and checks the integrity of the program call stack before allowing those system calls to continue. To check the integrity of a program's call stack, we extract the return address and invoked address of each function that has a frame on the stack. Using the list of return addresses and invoked addresses, we can create a weighted directed graph. We have found that a properly constructed weighted directed graph of a legitimate process always has the unique characteristic of a Greedy Hamiltonian Call Path (GHCP). We refer to this as a call path since it corresponds to the


sequence of function calls that leads us from the entry point of a given program to the current system call. This call path is greedy because, when searching for it within our weighted directed graph, we always choose the minimum weight edge when leaving a vertex. Furthermore, this path is Hamiltonian because every vertex must be included exactly once. Most significantly, we have found that the lack of such a path can be used to indicate that there has been a stack-smashing or buffer overflow attack.

Constructing the Graph

The task of constructing a weighted directed graph from the program call stack involves five major steps. We demonstrate these five steps on an example program. The functions of the example program, with their source code and their starting and ending addresses in memory, are shown in Table 4-1.

Table 4-1. Example program we use to demonstrate graph construction.
Function Name   Starting Address in Memory   Ending Address in Memory   Function's Code
f3()            0x08048400                   0x0804842a                 execve();
f2()            0x0804842c                   0x08048439                 f3();
f1()            0x0804843c                   0x08048449                 f2();
main()          0x0804844c                   0x0804845f                 f1(); return 0;

The five major steps are as follows.

Step 1: Collect return addresses. Using the existing frame pointer, trace through the program call stack to extract the return address from each stack frame.

Step 2: Collect invoked addresses. For each return address extracted from the stack, find the call instruction that immediately precedes it in memory, and extract the invoked address from that call instruction. At this point we can create a table of return address/invoked address pairs. For the program shown in Table 4-1, the return and invoked addresses in Table 4-2 would be extracted.
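Steps 1 and 2 amount to walking the stack one frame at a time and recording where each frame came from. As a loose, high-level analogue (Python frame objects stand in for frame pointers and saved return addresses, which are only reachable from C or assembly):

```python
import inspect

def walk_stack():
    # Analogue of Step 1: follow the chain of frames outward from the
    # current one, recording each caller and where it will resume.
    frames = []
    f = inspect.currentframe().f_back   # skip walk_stack's own frame
    while f is not None:
        frames.append((f.f_code.co_name, f.f_lineno))
        f = f.f_back
    return frames

# A call chain mirroring Table 4-1: main() -> f1() -> f2() -> f3().
def f3(): return walk_stack()
def f2(): return f3()
def f1(): return f2()
def main(): return f1()

chain = [name for name, _ in main()]
print(chain[:4])  # ['f3', 'f2', 'f1', 'main']
```

The innermost frame appears first, exactly as the innermost return address is extracted first when tracing the real call stack.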


Table 4-2. Return address/invoked address pairs.
Return Address   Invoked Address
0x08048321       0x42017404
0x42017499       0x0804844c
0x08048457       0x0804843c
0x08048447       0x0804842c
0x08048437       0x08048400
0x08048425       0x420b4c34
0x420b4c6a       0xc78b1dc8

In Table 4-2 it is easy to see how the addresses starting with 0x0804 correlate to the addresses in Table 4-1. The addresses starting with 0x420 are the addresses of C library functions used by our program. The last address, 0xc78b1dc8, is the kernel address of the system call function execve(). Addresses such as 0x420b4c34 and 0x420b4c6a correspond to the system call wrapper in our C library. The additional addresses at the beginning of the table (i.e., 0x08048321, 0x42017404, and 0x42017499) correspond to _start and __libc_start_main. The purpose of these functions is not pertinent to this chapter.

Step 3: Divide addresses into islands. Once the values in Table 4-2 have been obtained, we can begin construction of our weighted directed graph. Our final graph contains a node for each of the addresses in Table 4-2. However, before we can make each address into a node, we must first categorize our addresses into what we refer to as islands. Our addresses are divided into islands based on their locations in memory. For example, addresses that begin with 0x0804 are part of a different island from addresses that begin with 0x420. Addresses are further divided by whether they are a return address or an invoked address. In this example we have four islands, shown in Figure 4-1.


0x0804 return addresses: 0x08048321, 0x08048457, 0x08048447, 0x08048437, 0x08048425
0x0804 invoked addresses: 0x0804844c, 0x08048300, 0x0804843c, 0x0804842c, 0x08048400
0x420 invoked addresses: 0x42017404, 0x420b4c34
0x420 return addresses: 0x42017499, 0x420b4c6a

Figure 4-1. Example program's addresses divided into islands.

Note that the address 0xc78b1dc8 was not placed into an island. This is because this address represents the first instruction of our system call; it represents a unique node later on, which we add when we begin adding edges. The address 0x08048300, the first instruction in the _start procedure, was added even though it is not in Table 4-2. This address is part of the ELF header and is loaded into memory, so it can be extracted at runtime. Every program must have an entry point, which can therefore be part of our graph.

Step 4: Add edges. Recall the return address/invoked address pairs listed in Table 4-2. Each of these pairs is connected with a zero weight edge leading from the return address to the invoked address. At this time we can add a node for the address 0xc78b1dc8 and, subsequently, a zero weight edge to it from its corresponding return address. All of the edges added thus far are part of our final GHCP. To complete this step, we attempt to give each invoked address node an edge to every return address node in the same memory region. These edges are weighted with the distance in memory between the two nodes. For example, the node with address 0x0804842c has a directed edge with weight 0x1b leading to the node with address


0x08048447. In addition, the node with address 0x0804842c also has a directed edge leading to 0x08048457 with a weight of 0x2b. The edges leading from invoked address node 0x0804842c to every return address node of the same memory region are shown in Figure 4-2.

Figure 4-2. Edges leading from invoked address node 0x0804842c.

Note that the node 0x0804842c was not given an edge to nodes 0x08048321 and 0x08048425. This is because those edges would have had negative weights, which we do not allow in the graph. This concept is explained more thoroughly in the next subsection. Once all of the appropriate edges have been added, the graph is complete. We have omitted a drawing of the completed graph due to its crowded nature, even for such a simple program.

Graph Construction Explained

Inspection of Table 4-2 reveals a relationship between a value in the ith row of column one and the (i-1)th row of column two. For example, we know that 0x0804842c is the address of the first instruction in our f2() function. Let the row with this address be the (i-1)th row. This means that the return address in the ith row is 0x08048437. Since we also know that the last instruction in our f2() function is at


address 0x08048439, we know that this return address is inside of f2(). Furthermore, we can see in Figure 4-2 that when a minimum weight edge leaving the invoked address node 0x0804842c is chosen, it leads to the return address node 0x08048437. Stated more formally, if we let return addresses be denoted with ρ and invoked addresses be denoted with ι, a given invoked address ι_i should have a minimum weight edge leading to return address ρ_{i+1}. This leads to the idea that every graph's GHCP is no different from the Actual Call Path (ACP) of the program. It turns out that this is exactly what we need. All programs that have not fallen victim to a stack-smashing or buffer overflow attack possess this ACP. We can find this ACP by searching for a GHCP. Our method must be greedy to ensure that we choose the minimum weight edge when leaving a given node. In addition, since our path must include each vertex exactly once, our path is Hamiltonian. If we are unable to find such a GHCP, then we know that our ACP has been disrupted. This implies the likely occurrence of a stack-smashing or buffer overflow attack.

To demonstrate why this works, suppose the function f2() were vulnerable to a buffer overflow attack, and suppose the attack overwrites the return address of f2() with the address 0x0804844a. The edge from Figure 4-2 that was labeled 0xb is now labeled 0x1e. Thus, when a minimum weight edge leaving the invoked address node 0x0804842c is chosen, it no longer leads to the proper node; it leads to the node 0x08048447, whose edge is labeled 0x1b. This same node is also the result of choosing a minimum weight edge when leaving the invoked address node 0x0804843c. Having two greedy edges that both lead to the same node disrupts our GHCP: there no longer exists a path that is both greedy and Hamiltonian. When the lack of a


GHCP is detected, we know that a stack-smashing or buffer overflow attack has occurred.

One assumption we make is that two functions in memory never overlap and that the invoked address of a function is always its initial instruction. We realize that some programs written in assembly may not abide by this assumption. However, all compiled programs and most assembly programs satisfy this constraint.

To summarize, our ACP represents the expected GHCP. However, we provide multiple edges leaving a given invoked address to give that invoked address a choice when determining our GHCP. By providing a choice, we allow the other return addresses to act as upper bounds. The upper bounds created by other return addresses limit the potential range of addresses to which an attacker can overwrite a given return address. There already exists an inherent lower bound, since we do not include negative weight edges. Recall that invoked addresses are likely the address of the first instruction of a given function. Thus it makes sense that unaltered execution flow of a given function should never lead to an instruction that resides at a lower memory address than the first instruction of that function.

Proof by Induction

In order for us to rely on the nonexistence of a GHCP to indicate the presence of a stack-smashing attack, we must first prove that a GHCP exists for all uncompromised programs. In this section, we consider the case in which there are no recursive function calls. Knowing that our graph has two types of edges, those leaving return addresses and those leaving invoked addresses, we can simplify the proof. Since there is always exactly one edge leaving a given return address, we know this edge is always part of our GHCP. We can exploit this feature of our graph to simplify our proof. With this feature,


we now only need to prove that in the ACP each invoked address always has a minimum weight edge leading to its corresponding return address. We prove, using induction, that this holds true for all unobjectionable programs. An unobjectionable program is defined as a program whose call stack represents a possible actual call path. Our formal inductive hypothesis is as follows:

Theorem 4.1: For all unobjectionable programs in which n different functions have been called, where n ≥ 1, every invoked address ι_i, for i < n, has a minimum weight edge leading to the return address ρ_{i+1}.

With this stated, we must first prove our base case.

Base case. In this case there is one active function and no other calls have been made. Our assertion that ι_i, for i < n, has a minimum weight edge leading to return address ρ_{i+1} is vacuously true. Alternatively, we can say that the GHCP corresponds to the ACP, because they are both null.

Inductive case. For our inductive case we must prove that if the GHCP corresponds to an ACP for n calls, it corresponds to an ACP when the (n+1)st call is made. Stated more formally, we assume the following to be true:

GHCP_n = ACP_n = ι_1, ρ_2, ι_2, ρ_3, ι_3, ρ_4, …, ρ_n, ι_n

Thus we must prove the following to be true:

GHCP_{n+1} = ACP_{n+1} = ι_1, ρ_2, ι_2, ρ_3, ι_3, ρ_4, …, ρ_n, ι_n, ρ_{n+1}, ι_{n+1}

The (n+1)st call results in adding the two additional nodes, ρ_{n+1} and ι_{n+1}, to our graph. This also results in the additional edges, (ι_i, ρ_{n+1}) and (ι_n, ρ_{i+1}), being added to our graph. Since we know that GHCP_n = ACP_n, as long as every invoked address ι_i has a


minimum weight edge leading to its corresponding return address ρ_{i+1}, we must prove the following proposition.

Proposition 1: For each i, where i < n, weight(ι_i, ρ_{i+1}) < weight(ι_i, ρ_{n+1}).

Before proceeding any further, we must define some variables.

Table 4-3. Variables for the induction proof.
Variable   Definition
L_i        Length in bytes of the ith function.
ι_i        Address of the first byte of the ith function.
r_i        Offset to the return address inside the ith function (r_i = ρ_{i+1} − ι_i).

We also assume that two separate functions loaded into memory never overlap. Therefore, we must prove our proposition for two different scenarios, namely ι_i < ι_n and ι_n < ι_i. We can construct an abstract version of our graph as it would exist the moment our (n+1)st call is made. This version of our graph, Figure 4-3, illustrates the relationship between the function that made the (n+1)st call and any other invoked/return address pair. A solid line represents an existing edge; a dotted line represents a new edge.

Figure 4-3. Abstract graph once the (n+1)st call is made. (Its edges carry the weights 0, r_i, r_n, Z1, and Z2.)

Now we prove our proposition holds for both scenarios. For the scenario ι_i < ι_n, we know the following must also be true: ρ_{i+1} < ρ_{n+1}.


Therefore we can conclude:

weight(ι_i, ρ_{i+1}) = r_i = ρ_{i+1} − ι_i < ρ_{n+1} − ι_i = Z1 = weight(ι_i, ρ_{n+1})

Thus our proposition holds true for our first scenario. Given the second scenario, ι_n < ι_i, we know the following must also be true (since the functions do not overlap): ρ_{n+1} < ι_i. Therefore we can conclude that weight(ι_i, ρ_{n+1}) = ρ_{n+1} − ι_i < 0, and since our graph does not contain negative weight edges, our proposition still holds true.

It might seem logical to conclude that we also need to prove a second proposition, stated below.

Proposition 2: For each i, where i < n, weight(ι_n, ρ_{n+1}) < weight(ι_n, ρ_{i+1}).

Proving this proposition for the first scenario, we find weight(ι_n, ρ_{i+1}) < 0. Once again, since our graph does not contain negative weight edges, our proposition still holds true. With the second scenario we find:

weight(ι_n, ρ_{n+1}) = r_n = ρ_{n+1} − ι_n < ρ_{i+1} − ι_n = Z2 = weight(ι_n, ρ_{i+1})

Thus our second proposition also holds for both scenarios. However, if Proposition 1 holds, then this second proposition is unnecessary. When we arrive at the point where we must choose an edge leaving ι_n, since we are searching for a GHCP, our only feasible choice is the edge of weight r_n leading to ρ_{n+1}. If the first proposition holds, every ρ_{i+1}, for i < n, has already been visited. Thus the only choice that maintains a Hamiltonian path is ρ_{n+1}.
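The property just proven suggests a compact runtime check: for each invoked address on the stack, the greedy (minimum nonnegative weight) edge must land on the return address extracted from the next frame. The sketch below condenses the graph search to that test; the addresses are invented for illustration, and a single memory island with no recursion is assumed:

```python
def has_ghcp(pairs):
    # pairs: (return address, invoked address) per stack frame, ordered
    # outermost frame first, as extracted from the program call stack.
    rets = [r for r, _ in pairs]
    for i in range(len(pairs) - 1):
        inv = pairs[i][1]
        # Edges from an invoked address lead only to return addresses at
        # higher addresses (no negative weights); greedy picks the nearest.
        candidates = [r for r in rets if r >= inv]
        if not candidates or min(candidates) != pairs[i + 1][0]:
            return False  # the greedy edge leaves the actual call path
    return True

# main() at 0x190 called f1() at 0x160, which called f2() at 0x130,
# which called f3() at 0x100; each return address lies inside its caller.
good = [(0x1B0, 0x190), (0x1A0, 0x160), (0x170, 0x130), (0x140, 0x100)]
print(has_ghcp(good))  # True

# A stack smash overwrites f2()'s return address (0x140 -> 0x180):
bad = [(0x1B0, 0x190), (0x1A0, 0x160), (0x170, 0x130), (0x180, 0x100)]
print(has_ghcp(bad))   # False
```

In the tampered stack, the greedy edge leaving 0x130 still lands on 0x170, not on the overwritten return address 0x180, so the path is no longer Hamiltonian and the attack is flagged.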


In conclusion, we have proven that when the (n+1)st call is made, every invoked address αi still has a minimum weight edge leading to its corresponding return address ρi+1. This in turn proves that if the GHCP corresponds to the ACP for n calls, it corresponds to the ACP when the (n+1)st call is made. Therefore we know that the lack of a GHCP demonstrates that some form of stack-smashing or buffer overflow attack has occurred.

Recursion

Recursion is the case where αi = αn for some i < n. When this is the case, we have two different scenarios that may create a problem:

1. ρi+1 = ρn+1
2. ρi+1 > ρn+1

The first scenario creates a problem because αi has two equal weight edges leading to ρi+1 and ρn+1. Subsequently, these two equal weight edges are also the minimum weight edges leaving αi; when searching for a GHCP, we won't know which edge to choose. The second scenario creates a problem because αi then has a minimum weight edge leading to ρn+1 rather than to its own return address. To address these scenarios we add a new graph construction rule.

Rule 1: If αi = αn for some i, where i < n, we don't allow the edge (αi, ρn+1) in our graph.

With this rule stated, we must now prove that our GHCP corresponds to our ACP under this condition even when recursion is present. We now revisit each case of the induction proof in the previous section, dropping the requirement that all active functions are different from each other.
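Rule 1 can be sketched as a filter on the candidate edge set. The representation below is an assumption made for illustration only — the real graphs are built from live stack data, not Python tuples — but it shows the effect of the rule: when a function is re-invoked recursively, the edge from the earlier invocation to the newest return address is simply never added.

```python
# Sketch of graph Rule 1: for a recursive re-invocation (alpha_i == alpha_n),
# forbid the candidate edge (alpha_i, rho_{n+1}) so the minimum-weight edge
# out of alpha_i remains the one to its own return address rho_{i+1}.

def allowed_edges(calls):
    """calls: list of (alpha, rho) pairs in call order; returns permitted edges."""
    n = len(calls) - 1                      # index of the newest call
    alpha_new, rho_new = calls[n]
    edges = []
    for i, (alpha_i, rho_i) in enumerate(calls):
        edges.append((alpha_i, rho_i))      # the (alpha_i, rho_{i+1}) edge, always kept
        if i < n and alpha_i == alpha_new:
            continue                        # Rule 1: drop (alpha_i, rho_{n+1})
        if i < n and rho_new > alpha_i:
            edges.append((alpha_i, rho_new))  # ordinary (non-negative) cross edge
    return edges

# f() at 0x1000 calls itself from the same call site: both invocations share
# alpha = 0x1000 and the same return offset, the ambiguous case above.
calls = [(0x1000, 0x1020), (0x1000, 0x1020)]
e = allowed_edges(calls)
assert e.count((0x1000, 0x1020)) == 2   # one own-return edge per invocation,
                                        # and no ambiguous third edge from Rule 1
```

Without the `continue`, the earlier invocation would gain a second, equal-weight edge to the newest return address, which is exactly the ambiguity the rule removes.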


It is important to note that we are not concerned about the scenario where ρi+1 < ρn+1, for the same reasons we were not concerned about Proposition 2 in Theorem 4.1. We now prove the following theorem.

Theorem 4.2: For all unobjectionable programs in which n functions have been called, where n ≥ 1, every invoked address αi, for i < n, has a minimum weight edge leading to the return address ρi+1.

Base case (n = 1). Since this case has only one active function, the new condition has no effect on it. Once again, the GHCP corresponds to the ACP, because they are both null.

Inductive case (n > 1). For our inductive case we must prove that if the GHCP corresponds to the ACP for n calls, it corresponds to the ACP when the (n+1)st call is made even when our new condition is applied. We know that GHCPn still corresponds to ACPn: before the (n+1)st call is made, αn is the last node in ACPn, so ρn+1 does not exist yet and neither of our scenarios creates a problem. Once the (n+1)st call is made, we must still prove that when our additional condition is followed, GHCPn+1 corresponds to ACPn+1. Fortunately, we know the following:

If i < n, then i ≠ n. Thus, (αi, ρn+1) ≠ (αi, ρi+1).

Since the edge (αi, ρi+1) is never the same edge as (αi, ρn+1), we can safely remove (αi, ρn+1) from our graph and our GHCP is not affected. Since (αi, ρi+1) is always left


unmodified, we know that our GHCP still exists. Figure 4-4 shows our abstract graph when the (n+1)st call is made, both when αi ≠ αn and when αi = αn. It illustrates that, regardless of which case holds, GHCPn+1 still corresponds to ACPn+1: our new condition never alters the (αi, ρi+1) edges. In Figure 4-4, the left side represents αi ≠ αn and the right side represents αi = αn.

Figure 4-4. Abstract graph with recursion when the (n+1)st call is made.

To summarize, we use other functions' return addresses to perform bounds checking on a specific function, say f(). We do not allow multiple invocations of f() to create bounds criteria for themselves.

Implementation and Testing

We have implemented our method of detecting stack-smashing attacks in a prototype called Edossa (Efficient Detection Of Stack-smashing Attacks). Edossa was designed as a kernel module on the Linux 2.4.19 kernel. It was implemented on a system running RedHat Linux 7.3 with a 1.3 GHz AMD Duron processor and 128 MB of RAM. The source code for Edossa is freely available under the GPL at http://www.cise.ufl.edu/~mfoster/research/edossa.html.

Edossa operates by checking the integrity of the program call stack at the point when a system call is made. As an initial proof-of-concept version, the current Edossa is


only intercepting the execve() system call. While there are numerous system calls that could be used maliciously, the vast majority of buffer overflow and stack-smashing attacks involve the attacker attempting to spawn a root shell, typically by passing the string /bin/sh to the execve() system call. Interception of additional system calls could be added to Edossa with minimal effort.

To test Edossa we collected a number of publicly available exploits. These exploits covered a wide range of applications known to have some form of buffer overflow vulnerability in their source code. The results of our testing are shown in Table 4-4.

Table 4-4. Publicly available exploits used to test Edossa.
Exploit     Date Posted   Result            Overhead (μs)   Reference
efstool     Dec. 02       Attack Detected    94             [9], [10]
finger      Dec. 02       Attack Detected   104             [9]
gawk        April 02      Attack Detected   113             [9]
gnuan       July 03       Attack Detected   126             [9]
gnuchess    July 03       Attack Detected   103             [9]
ifenslave   April 03      Attack Detected    94             [10], [11]
joe         Aug. 03       Attack Detected    90             [9]
nullhttpd   Sept. 02      Attack Detected   103             [9]
pwck        Sept. 02      Attack Detected   212             [9]
rsync       Feb. 04       Attack Detected   109             [9]

As Table 4-4 shows, Edossa successfully detected all of these known exploits. In addition, the overhead for detecting each exploit never exceeded 212 microseconds of CPU time.

Limitations

One limitation of our GHCP analysis is that it depends on the existence of a valid frame pointer. In most cases when the return address is overwritten, the frame pointer is also overwritten. Without a valid frame pointer, there is no way to trace through the stack to extract return addresses. However, in the case where there is no valid frame


pointer, we already know that some form of buffer overflow or stack-smashing attack is underway. A system that implements our proposed method, such as Edossa, detects that no valid frame pointer exists even before generating a graph. A system that only tests for the ability to trace up a stack is too easily evaded by an attacker to warrant a stand-alone buffer overflow detection system. However, due to the prevalence of attacks that could be detected with such a test, we believe it should be incorporated.

There are two methods with which an attacker might be able to evade detection by a system using GHCP analysis. The first is to perform a buffer overflow attack that overwrites a return address with a new return address but leaves the frame pointer unmodified. The second is to perform a buffer overflow attack that overwrites the return address and frame pointer and also injects code onto the stack. The first few instructions of this injected code must restore the return address and frame pointer to their original values; the injected code would then jump to preexisting code the attacker wants to execute.

Both of these methods could work, but they face a major limitation: the new return address, or the preexisting code jumped to by the injected code, must reside in the same function as the original return address. Recall that each invoked address must have a minimum weight edge to its corresponding return address. If the new return address is inside another function, the attacker risks destroying the minimum weight edge between the invoked address and the original return address. Likewise for the second method: when a system call is made, a return address is placed on the stack, so if the injected code has jumped to a piece of code in another function, the attacker risks this return address not having a minimum weight edge from its corresponding invoked


address. While both methods are possible, the challenges facing the attacker are far more rigorous than without GHCP analysis.

Another limitation of our method is its handling of function pointers. Currently, we use return addresses to trace through memory and find a corresponding invoked address. These invoked addresses are part of a call instruction in memory. The bytes in memory representing a call instruction include an address, or an offset to an address; in either case we can extract the address invoked by the call instruction. In the case of function pointers, however, the call instruction is often calling an address held in a register, and we have no way to determine what address was in that register when the call instruction executed. We note, though, that one can easily modify a compiler to store invoked addresses on the stack. We have done this in the form of a patch for the GCC compiler to verify the technique. This gives us the ability to always determine a return address's corresponding invoked address. When using programs compiled with the patched GCC compiler, function pointers are no longer a limitation.

Benefits

A system designed for buffer overflow detection using GHCP analysis has a number of benefits. First, our system does not require access to a program's source code. Second, it does not require that a program be compiled with a specially enhanced compiler, or even that the executable binary file be rewritten, unless one wants to verify calls through pointers to functions. In addition, our system does not require linking with any special libraries, nor does it place any additional burden on the application programmer. Many intrusion detection systems also rely on a training phase with a program to learn its normal behavior; after the training phase, the system monitors the program to ensure that


it doesn't deviate from the behavior observed during training. Our system does not require any training phase.

Our method is similar to a nonexecutable stack because it makes it extremely difficult for an attacker to execute malicious code on the stack. However, our method provides a number of benefits a nonexecutable stack does not. For example, in addition to stack-smashing attacks, our method can also detect heap smashing attacks, and it is likely to detect similar attacks that use the bss or data segment. Our method would also detect most attempts to rewrite a return address to another location in preexisting code; a nonexecutable stack would not detect such an attack. Lastly, our method allows code with a legitimate stack trace to execute code on the stack. In cases where an uncompromised process needs to execute code on the stack, a nonexecutable stack would not allow such a process to proceed.

Our method also provides the framework for an even more precise buffer overflow detection system. Currently, one limitation of our method is that we rely on other functions in the call path for the bounds checking criteria of a given function. A compiler could easily be modified to inject a dummy function between every pair of functions in a given program. The code for the ith dummy function would consist of only the code required to call the (i+1)st dummy function. By calling the sequence of dummy functions before starting main(), we would place on the stack the bounds checking criteria needed for any function in our program. The cost of this is in the compilation and start-up times of the program. In addition, computation of the GHCP would only require time proportional to the number of active functions.
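The extraction of an invoked address from the bytes of a call instruction, described under Limitations above, can be sketched for the common x86 near-relative CALL encoding (opcode 0xE8 followed by a signed 32-bit displacement). This is a model of the idea, not Edossa's actual decoder, which may handle further encodings; the addresses below are invented.

```python
# Sketch: given the bytes of a near-relative CALL and its address, recover
# the invoked address. On the stack, the return address is call_site + 5,
# so tracing from a return address ra leads back to the call bytes at ra - 5.
import struct

def invoked_address(code, call_site):
    """code: the 5 bytes of the CALL instruction; call_site: its address."""
    assert code[0] == 0xE8, "not a near-relative call"
    (disp,) = struct.unpack_from("<i", code, 1)   # little-endian signed rel32
    return call_site + 5 + disp                   # target = next instruction + disp

# A CALL at 0x08048100 targeting a function at 0x08048000:
site, target = 0x08048100, 0x08048000
disp = target - (site + 5)                        # displacement the assembler emits
insn = b"\xE8" + struct.pack("<i", disp)
assert invoked_address(insn, site) == target
```

A register-indirect call (e.g., `call *%eax`) carries no target in its bytes, which is exactly the function-pointer limitation the GCC patch described above works around.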


CHAPTER 5
PROCESS FORENSICS

Proposed Process Forensics

All work done on a computer system is done in the form of a process. Processes can be divided into two categories: user-space processes and kernel-space processes. For the purpose of this discussion we refer only to user-space processes, because a given kernel-space process is always acting on behalf of one or more user-space processes. Regardless of the unique methods different platforms use to handle processes, almost all processes contain a great deal of information. Unfortunately, due to the nature of computer forensics, by the time a forensic investigation has begun, most of the relevant processes have already terminated; often, the involved computer system has been completely shut down. The only data a digital forensic investigator has with which to analyze processes is whatever log files were created while a given process was executing. Thus, the digital forensic investigator is left with a very limited amount of computer forensics concerning processes. We believe computer forensics can and should be expanded to include more process information, and we propose checkpointing as one means to create more computer forensics in the form of process forensics.

Knowing the importance of log files, let us analyze their source. Essentially, logging is the same as recording details about the currently running processes. In other words, logging is no more than creating nonvolatile data by recording details about volatile data. This perspective immediately leads us to consider what other volatile data should be made nonvolatile. Knowing that a significant portion of volatile


data comprises user processes, we see how checkpointing can be a means to create more nonvolatile data out of volatile data.

Possible Evidence in a Checkpoint

Let us continue our discussion with a quick overview of the main sources of information found in the checkpoint of a process. Recall that a process exists in main memory, where it has been assigned its own address space. Some of the more useful information found in a process's address space, and therefore included in a checkpoint, might consist of items such as the process identification (PID), the user who owns the given process, and pointers to parent, child, and sibling processes. While an attacker may have altered some of this information, it still provides a starting point for the forensic investigator. Information such as the PID is essential in distinguishing between multiple processes. Furthermore, knowing what user owned a process indicates who started the process or whose account has been compromised. Ownership of a process, whether legitimate or not, also tells us the permission level of the process; clearly, a process run as root can do far more damage than a typical user process. In addition, knowing the relationships between different processes can assist in isolating the source of a process or what other processes resulted from its execution. The parent and sibling relationships between processes are something not likely found in log files.

One of the more notable portions of a process's address space is the stack. The stack contains significant information pertaining to the execution sequence of a process. This sort of information is extremely useful to someone investigating a buffer overflow or stack-smashing attack [31]. Given access to the stack, a digital forensics investigator can determine both where and how such an attack was made possible. Without knowing where and how an attack was made possible, it is very difficult to prevent similar future


attacks without limiting one's use of one's own system. The process address space also contains the heap, bss, and data segments of a process. Analysis of the heap segment may reveal evidence pertaining to a heap smashing attack, much like the stack in a buffer overflow or stack-smashing attack. The stack, heap, bss, and data segments are all potential targets of malicious input attacks; in turn, each would contain essential evidence of such an attack, and as an integral part of the process address space, each is included in a checkpoint image file. An additional example of this sort of malicious input attack is a format string attack.

A process address space also contains information about items we refer to as process peripherals. Process peripherals include opened files, sockets, and pipes. Knowing what files a process accessed can be extremely valuable to the forensic investigator. This can indicate the intruder's objective, help isolate the damage done during the attack, or indicate attempts by the intruder to cover his or her own tracks. The digital forensic investigator and system administrator very much need to know if files such as password or log files have been modified or accessed. Tampering with a password file indicates the likelihood of future attacks via a compromised account, while tampering with log files usually indicates an attacker attempting to cover his or her tracks. Furthermore, socket connections provide additional evidence of communication links involved in a crime; they may indicate from where an attacker is launching an attack or where the attack is dumping stolen data. Pipes are another form of communication in which the digital forensic investigator would take an interest; some checkpoints even include data that is still in the pipe buffer. Process peripherals could also include items such as a process's corresponding tty or terminal.
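The kinds of evidence enumerated above might be organized as in the sketch below. The field names are purely illustrative — they do not reflect UCLiK's or any real checkpoint format — but they show how one record could gather the identity, memory segments, and process peripherals an examiner would want.

```python
# Illustrative sketch only: fields a forensic examiner might pull from a
# checkpoint image. Names and types are invented for this example.
from dataclasses import dataclass, field

@dataclass
class CheckpointEvidence:
    pid: int                                        # distinguishes this process
    owner_uid: int                                  # who started (or hijacked) it
    ppid: int                                       # parent: where did it come from?
    children: list = field(default_factory=list)    # what did it spawn?
    stack: bytes = b""                              # execution sequence; smashing evidence
    heap: bytes = b""                               # heap-smashing evidence
    open_files: list = field(default_factory=list)  # e.g., password or log file access
    sockets: list = field(default_factory=list)     # remote endpoints in the attack
    tty: str = ""                                   # local vs. remote session clue

ev = CheckpointEvidence(pid=4242, owner_uid=0, ppid=1,
                        open_files=["/etc/passwd"])
assert ev.owner_uid == 0            # a root-owned process warrants extra scrutiny
assert "/etc/passwd" in ev.open_files
```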


With this information, the digital forensic investigator might learn that the attack was launched locally. The possibilities for evidence from process forensics are quite vast. However, to gain from process forensics we must know when to collect it. The following section addresses this issue.

Opportunities for Checkpointing

A system administrator must make a number of tough decisions when dealing with an intruder. At times, a system administrator may become aware of an attack while the attack is in progress, whether through the administrator's own monitoring of the system or an alert issued by an intrusion detection system (IDS). The knee-jerk reaction to such a scenario is to kill all the processes related to the attack. While such an approach can be very effective in stopping the attack, it does little toward collecting evidence about the attack. Furthermore, such an approach is likely to tell the intruder that he or she was detected. Most of the time, one does not want the intruder to know he or she was detected until there is enough evidence to prove a crime took place, and by whom it was committed. We believe that when an intrusion is detected, whether by the system administrator or an IDS, the immediate actions should include collection of evidence, or more specifically process forensics. We encourage the use of incremental checkpoints that can be created without alerting the intruder. Once the intruder's session has ended, whether terminated by the system administrator or by the intruder himself, the resulting checkpoints can provide crucial information about the attack.

A recent look at the ICAT vulnerability statistics shows that a significant number of the CVE and CVE candidate vulnerabilities were due to buffer overflows. For the years 2001, 2002, and 2003, buffer overflows accounted for 21, 22, and 23% of the


vulnerabilities, respectively [6]. While much work has been done to detect buffer overflow attacks, to the knowledge of this author, little has been done to enhance our ability to collect evidence resulting from them. We believe process forensics derived from checkpointing can help fill this void. Recall that a checkpoint contains the stack, heap, data, and bss segments of a process. In the case of a buffer overflow attack, creating checkpoints the moment the attack is detected, and even while the attack is in progress, is likely to capture vital evidence. A forensic investigator can use this information to determine more closely how and when the intruder entered the system. A thorough analysis of the stack is likely to show which function contains the exploited vulnerability, and isolating the vulnerability is essential to preventing a similar attack in the future. Furthermore, in the case of a stack-smashing attack, any code injected onto the stack may uniquely correspond to code that is later found on the attacker's computer; likewise for a heap smashing attack and a process's corresponding heap segment. While this alone does not prove anything, it does provide an additional corroborating stream of evidence, and any such additional evidence is desirable in a legal setting. Stephenson [26] reminds us that it takes a heap of evidence to make one small proof.

ICAT's CVE and CVE candidate vulnerabilities classified as buffer overflow attacks are actually a subgroup of a much larger classification. This larger classification, known as input validation errors, accounted for 49%, 51%, and 52% of the CVE and CVE candidate vulnerabilities for the years 2001, 2002, and 2003, respectively. The idea of collecting evidence about a buffer overflow attack from a checkpoint is based on the concept that a buffer overflow attack stems from malicious input. Such input has no


choice but to become part of a process's address space. We believe this approach to process forensics and evidence collection can be expanded far beyond buffer overflow attacks to include other input validation errors. An example of an input validation error is a boundary condition error. While some boundary condition errors result from a system running out of memory, others may result from a variable exceeding an assumed boundary. Inspection of variables in a checkpoint file may reveal such an assumed boundary and expedite the process of closing a vulnerability once it is exposed. SQL injection attacks also take advantage of input validation errors, often allowing the attacker to damage and/or compromise a website's database. The very nature of attacks that exploit input validation errors automatically leaves evidence in a process's address space. The potential for evidence and process forensics from checkpointing intruder-related processes resulting from such vulnerabilities has yet to be explored.

Most intrusion detection systems can be categorized as performing misuse detection or anomaly detection. Misuse detection usually refers to systems that utilize some form of signature or pattern matching to determine whether or not a process is part of an intrusion. Anomaly detection usually refers to systems that attempt to define normal behavior so that processes can be categorized as normal or intrusive. Due to the inherent challenge in defining normal behavior, these systems often rely on some form of threshold to distinguish between normal and anomalous behavior. Markov chain models [32], chi-square statistical profiling [33], and text categorization [34] are examples of such approaches to anomaly detection. We propose that such anomaly detection systems use checkpointing as an evidence collection technique for processes


that are approaching or have passed the given threshold. Incremental checkpoints can be used to continually collect evidence of the behavior of any process that is considered anomalous or nearly anomalous. This would produce process forensics for those malicious processes that never quite reach the threshold and would usually go undetected. In addition, it would create process forensics for processes that do cross the threshold; such forensics could expedite finding out why a process deviated from its normal behavior.

A common dilemma facing the computer crime investigator when entering a crime scene is whether or not to unplug the computer [28]. Any work by the criminal that resides in main memory is lost if the computer is unplugged. However, forensic analysis of a hard disk must always be performed on a copy rather than the original, and in order to create a copy of the confiscated hard disk, the computer must eventually be powered off. Depending on the platform, Stephenson usually recommends directly unplugging the power source [26]; this avoids any booby traps that may be triggered if the machine is not shut down in a particular manner. Regardless of the manner in which a machine is shut down, all volatile data, such as a running process, is lost. This illustrates another example of where additional evidence may be gained by using checkpointing. Prior to shutting down or unplugging a computer, relevant processes could be checkpointed, and the resulting checkpoint files would allow the forensic investigator to analyze the running processes at a later time.

Additional Enhancements

If a computer crime ever reaches the courtroom, any evidence presented before the court must have been preserved through a chain of custody [26]. In other words, one must be able to verify with whom and where the evidence has been held since the


moment it was collected. In the case of a checkpoint, the checkpoint resides in a file and can therefore be digitally fingerprinted immediately following its creation. In a courtroom, this digital fingerprint can be verified to show that the checkpoint file remains unaltered. A time and date stamp can also be included and verified with a digitally fingerprinted checkpoint file. In addition, a checkpoint stored as a file can easily be transferred to a secure location, much like some logging systems. It is often recommended that logs be stored on a secure system separate from the system that generates them; these log files are also commonly stored in an encrypted format. These measures deter an intruder from altering log files to cover up his or her unauthorized access to a system. Checkpoint files can be treated in the same manner: stored on secure systems separate from where they were created, preventing an intruder from modifying or destroying any evidence collected in the form of a checkpoint file.

Sommer [29] has provided a good analysis of why intrusion detection systems fall short of providing quality evidence. We propose that a checkpointing system should be developed separately from an ID system. During an attack, the ID system can trigger the checkpointing system to handle any intrusion-related processes. This allows ID research to focus on detection rather than evidence collection. A checkpointing system, due to its inherent goal of recreating a process, is already aimed at collecting information about a process; we believe this goal can be more easily combined with the goal of evidence collection. Furthermore, by allowing checkpointing systems to provide the evidence collection, we alleviate the need for drastic modifications to existing ID systems.
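The fingerprint-plus-timestamp idea above can be sketched in a few lines. The file name and record layout here are illustrative assumptions; the point is only that a cryptographic digest taken immediately after checkpoint creation lets anyone later verify that the file is unaltered.

```python
# Sketch: seal a checkpoint file with a SHA-256 digest and a collection
# timestamp to support a chain of custody. Record layout is invented.
import hashlib
import time

def fingerprint(data: bytes) -> dict:
    """Digest plus timestamp, recorded immediately after checkpoint creation."""
    return {
        "sha256": hashlib.sha256(data).hexdigest(),
        "collected_at": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
    }

checkpoint_image = b"\x7fCKPT...image bytes..."   # stand-in for the file contents
record = fingerprint(checkpoint_image)

# Later (e.g., in court): recompute the digest and compare with the sealed record.
assert hashlib.sha256(checkpoint_image).hexdigest() == record["sha256"]
```

In practice the sealed record itself would be signed or stored on a separate secure system, for the same reasons the text gives for remote, encrypted log storage.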


We also believe the format of a checkpoint file could be shared among multiple platforms, standardized similarly to the ELF format used on Linux platforms. The standardization of checkpoint file formats would provide a common ground from which law enforcement, academia, and other researchers can work, and would facilitate the development of tools for working with and analyzing checkpoint files. In addition, standardizing any aspect of the forensic investigation aids in training future forensic investigators. Furthermore, standardization would assist in the acceptance of checkpoint evidence in legal proceedings. Likewise, standardization could further facilitate process migration among different platforms.

Carrier [27] has proposed a balanced solution to the open/closed source debate with regard to digital forensic tools. Carrier urges that digital forensic tools be categorized into tools for extraction and tools for presentation; extraction tools should remain open source, while presentation tools may remain closed source. Such a balanced solution could easily be applied to checkpointing tools. The checkpoint/restart engine of a checkpointing system could remain open source, allowing researchers and the digital forensics community to validate the inner workings of such checkpointing tools. Meanwhile, the presentation tools used for presenting the data from a checkpoint file could remain closed source. It is likely that many individuals involved in the legal proceedings following a computer crime do not possess the necessary technical skills for understanding the data found in a checkpoint file. This provides ample opportunity for software developers to create presentation tools for checkpoint files. The goal of making


complex checkpoint file data easily understandable would create ample competition in the private sector.


CHAPTER 6
CONCLUSIONS

In this paper we have shown significant progress toward developing the ideal checkpointing system. We define the ideal checkpointing system as one that satisfies the Three APs of Checkpointing: Any Process on Any Platform at Any Point in time. The system we develop and discuss in this paper supports two of the three APs: Any Process at Any Point in time. To achieve these two APs, we developed UCLiK as a kernel module. As a kernel module, we enjoy levels of transparency not experienced by most checkpointing systems: the application programmer is relieved of any additional responsibility; no system call wrappers are required to log information about a process; and no special compilers or checkpointing libraries are needed for compiling or linking programs. Additional transparency is achieved by developing the system as a kernel module because the running kernel's code does not have to be modified.

The benefits of such a checkpointing system are very broad. A checkpointing system such as UCLiK can be used for process migration, fault tolerance, and rollback recovery. Furthermore, UCLiK can provide system administrators with an alternative to the kill system call. System administrators who substitute checkpointing for kill now have the option to undo ending a given process's execution. This functionality allows system administrators to protect their systems more preemptively against runaway and other suspicious processes without losing valuable work.


Our study has also deepened our belief that development of a single checkpointing system satisfying the third AP, namely Any Platform, is unrealistic. However, to best facilitate the long-term goal of checkpointing processes on Any Platform, we suggest that the development of checkpointing systems be divided into two separate components: the checkpoint engine and the checkpoint file. We refer to the checkpoint engine as the portion of the system concerned with collecting process data, kernel state, and any other interactions with the operating system; this component is also concerned with stopping and restarting a process. The checkpoint file is the actual file where a checkpoint image is saved and can be stored indefinitely.

We believe differences in hardware architecture and kernel data structures make it nearly impossible for a single checkpoint engine to work on all existing platforms. Thus, the development of checkpoint engines for different platforms is probably best kept separate. However, we suggest that this development be coordinated, centered on designing the engines to work with a standardized checkpoint file format. We believe checkpoint file formats should be standardized in a way similar to that of ELF files. Standardizing the checkpoint file format would have numerous benefits. One major benefit would be facilitating cross-platform migration: checkpoint engines on different platforms could all operate on the same checkpoint file. Furthermore, tools for reading and modifying the contents of a checkpoint file could be developed and would immediately be platform independent. Additional benefits of a standardized checkpoint file format with regard to computer forensics are prevalent and discussed shortly.
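To make the engine/file split concrete, a standardized checkpoint file might begin with a small fixed header in a defined byte order, loosely analogous to the ELF identification bytes, so that tools on any platform can parse it. The layout below is entirely hypothetical — no such standard exists — but it illustrates the kind of fixed, self-describing preamble a standard would pin down.

```python
# Sketch of a hypothetical standardized checkpoint header: a magic number,
# format version, architecture tag, and section count, packed little-endian
# so every platform's tools read the same bytes the same way.
import struct

HDR = struct.Struct("<4sHHI")   # magic, format version, arch tag, section count

def pack_header(version, arch, nsections):
    return HDR.pack(b"\x7fCKP", version, arch, nsections)

def unpack_header(blob):
    magic, version, arch, nsections = HDR.unpack_from(blob)
    assert magic == b"\x7fCKP", "not a checkpoint file"
    return version, arch, nsections

blob = pack_header(1, 3, 12)
assert unpack_header(blob) == (1, 3, 12)
```

The architecture tag matters precisely because the chapter argues the engines stay platform-specific: a restart engine can refuse (or translate) images produced for a different architecture while still sharing the common container format.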


In this paper we have also introduced a novel method for detecting stack-smashing and buffer overflow attacks. We have shown how the return addresses extracted from the program call stack can be used, along with their corresponding invoked addresses, to create a weighted directed graph. We have shown that such a graph always contains a Greedy Hamiltonian Call Path (GHCP) for all unobjectionable programs; thus, the lack of a GHCP indicates the existence of a stack-smashing or buffer overflow attack. The benefits of this detection method are independence from specially enhanced compilers and libraries; no need for access to a program's source code; no rewriting of executables; no logging; and no modifications to the operating system. These benefits make our approach unique when compared with most other approaches to detecting stack-smashing and buffer overflow attacks.

In addition, our work has laid the framework for an even more precise detection system for stack-smashing and buffer overflow attacks. Using our method together with an enhanced compiler could remove the limitations our system experiences with function pointers and programs with few active functions. An existing compiler could easily be modified to include a series of dummy functions, called at the beginning of a program's execution, with the sole purpose of placing bounds checking criteria on the stack. Furthermore, enhancing a compiler to push invoked addresses onto the stack would allow our method to handle function pointers.

We have begun implementing a prototype for our method. Early results show a promising outlook for low overhead. Future work includes continued development of our

PAGE 92

83 prototype with more exhaustive testing of overhead and compatibility with items such as setjmp/longjmp calls. We expect such items to be compatible with our method but it remains unconfirmed and beyond the scope of this paper. Researching both checkpointing and intrusion detection results in a unique perspective, namely, that computer forensics is lacking in the subfield we have termed process forensics. Process forensics involves extracting information from a process address space of a given program for the purpose of evidence collection. Since computer forensics is restricted to nonvolatile data, to improve computer forensics we must find new sources of nonvolatile data. Since checkpointing creates nonvolatile data from processes, we believe that including checkpointing technology with intrusion detection systems can create a new source of nonvolatile data. In addition, by increasing the amount of nonvolatile data we have increased the amount of forensic evidence available to the digital forensic investigator. Since this evidence comes from processes, we find it appropriate to refer to it as process forensics. In this paper we have explored different sources and benefits of process forensics. One primary example being the evidence collected by an intrusion detection system enhanced with checkpointing technology. Palmer [35] reminds us that the future is likely to bring even tougher standards for digital evidence. We believe standardizing items such as the checkpoint file format used for process forensics can help meet these standards. Standardizing methods of evidence collection can help thwart some of the scrutiny placed on digital evidence in a courtroom setting. In addition, standardizing the checkpoint file format helps facilitate the training process of future digital forensic investigators. Lastly, it encourages the development of tools used to analyze checkpoint files for the purpose of process forensics.


In [26], Stephenson addresses the importance of reconstructing the crime scene. We suggest that anything less than the ability to recreate an entire process state may leave holes in the evidence required to identify an attacker or prevent similar future attacks. Checkpointing provides the level of detail necessary to recreate an entire process. Although many attacks to date have not necessitated checkpointing, we do not want the needs of the past to limit our preparedness for the future. In closing, researchers and the digital forensics community must continue to find new sources of evidence following computer crimes. We believe that in many cases checkpointing technology can achieve this goal.


LIST OF REFERENCES

1. National Telecommunications and Information Administration, Economics and Statistics Administration, A Nation Online: How Americans Are Expanding Their Use of the Internet, Author, Washington, D.C., February, 2002, Retrieved May 10, 2004, From http://www.ntia.doc.gov/ntiahome/dn/.

2. Carnegie Mellon University, CERT Coordination Center Statistics, Author, 2004, Retrieved May 10, 2004, From http://www.cert.org/stats/cert_stats.html.

3. A.B. Brown, D.A. Patterson, Rewind, Repair, Replay: Three Rs to Dependability, 10th ACM SIGOPS European Workshop, Saint-Emilion, France, September, 2002.

4. A.B. Brown, D.A. Patterson, To Err Is Human, Proceedings of the First Workshop on Evaluating and Architecting System Dependability (EASY '01), Göteborg, Sweden, July, 2001.

5. D. Wagner, J. Foster, E. Brewer, A. Aiken, A First Step Towards Automated Detection of Buffer Overrun Vulnerabilities, Proceedings of the 7th Network and Distributed System Security Symposium, February, 2000.

6. National Institute of Standards and Technology (NIST), ICAT Vulnerability Statistics, Author, September, 2003, Retrieved April 2, 2004, From http://icat.nist.gov/icat.cfm?function=statistics.

7. SecurityGlobal.net, SecurityTracker.com Statistics, Author, 2002, Retrieved April 2, 2004, From http://www.securitytracker.com/learn/securitytracker-stats-2002.pdf.

8. J. Plank, M. Beck, G. Kingsley, Libckpt: Transparent Checkpointing under Unix, Conference Proceedings, USENIX Winter 1995 Technical Conference, New Orleans, LA, January, 1995.

9. J. Plank, Y. Chen, K. Li, M. Beck, G. Kingsley, Memory Exclusion: Optimizing the Performance of Checkpointing Systems, Software: Practice and Experience, Vol. 29, Number 2, pp. 125-142, 1999.

10. M. Elnozahy, L. Alvisi, Y.M. Wang, D. Johnson, A Survey of Rollback-Recovery Protocols in Message-Passing Systems, ACM Computing Surveys, Vol. 34, Issue 3, pp. 375-408, September, 2002.


11. L.M. Silva, J.G. Silva, System-Level versus User-Defined Checkpointing, Proceedings of the Seventeenth IEEE Symposium on Reliable Distributed Systems, pp. 68-74, October, 1998.

12. M. Litzkow, T. Tannenbaum, J. Basney, M. Livny, Checkpoint and Migration of UNIX Processes in the Condor Distributed Processing System, Paper Retrieved June 1, 2003, From http://www.cs.wisc.edu/condor/doc/ckpt97.ps.

13. G. Stellner, CoCheck: Checkpointing and Process Migration for MPI, Proceedings of the 10th International Parallel Processing Symposium (IPPS '96), Honolulu, HI, April 15-19, 1996.

14. S.E. Choi, S.J. Deitz, Compiler Support for Automatic Checkpointing, Proceedings of the 16th Annual International Symposium on High Performance Computing Systems and Applications (HPCS), pp. 213-220, 2002.

15. M. Beck, G. Kingsley, J. Plank, Compiler-Assisted Memory Exclusion for Fast Checkpointing, IEEE Technical Committee on Operating Systems and Application Environments, Winter, 1995.

16. B. Ford, M. Hibler, J. Lepreau, P. Tullmann, User-Level Checkpointing through Exportable Kernel State, Proceedings of the 5th International Workshop on Object Orientation in Operating Systems (IWOOOS '96), Seattle, WA, October, 1996.

17. A. Barak, O. Laadan, The MOSIX Multicomputer Operating System for High Performance Cluster Computing, Journal of Future Generation Computer Systems, Vol. 13, Number 4-5, pp. 361-372, March, 1998.

18. Y. Zhang, J. Hu, Checkpointing and Process Migration in Network Computing Environment, Proceedings of the 2001 International Conferences on Info-tech and Info-net, Vol. 3, pp. 179-184, Beijing, October-November, 2001.

19. E. Pinheiro, Truly-Transparent Checkpointing of Parallel Applications, Author, 2003, Paper Retrieved June 1, 2003, From http://www.research.rutgers.edu/~edpin/epckpt/paper_html.

20. H. Zhong, J. Nieh, CRAK: Linux Checkpoint/Restart As a Kernel Module, Technical Report CUCS-014-01, November, 2001, Paper Retrieved June 1, 2003, From http://www.ncl.cs.columbia.edu/research/migrate/crak.html.

21. C. Cowan, C. Pu, D. Maier, H. Hinton, J. Walpole, P. Bakke, S. Beattie, A. Grier, P. Wagle, Q. Zhang, StackGuard: Automatic Adaptive Detection and Prevention of Buffer-Overflow Attacks, Proceedings of the 7th USENIX Security Conference, San Antonio, TX, 1998.

22. A. Baratloo, N. Singh, Transparent Run-Time Defense Against Stack-Smashing Attacks, Proceedings of the 2000 USENIX Annual Technical Conference, San Diego, CA, June, 2000.


23. H.H. Feng, O.M. Kolesnikov, P. Fogla, W. Lee, W. Gong, Anomaly Detection Using Call Stack Information, IEEE Symposium on Security and Privacy, Berkeley, CA, May, 2003.

24. M. Prasad, T. Chiueh, A Binary Rewriting Defense Against Stack Based Buffer Overflow Attacks, Proceedings of the 2003 USENIX Annual Technical Conference, San Antonio, TX, June, 2003.

25. D. Wagner, D. Dean, Intrusion Detection via Static Analysis, IEEE Symposium on Security and Privacy, Oakland, CA, 2001.

26. P. Stephenson, Investigating Computer-Related Crime, CRC Press, 1999.

27. B. Carrier, Open Source Digital Forensics Tools: The Legal Argument, @stake Research Report, October, 2002, Retrieved May 20, 2004, From http://www.atstake.com/research/reports/acrobat/atstake_opensource_forensics.pdf.

28. A. Yasinsac, Y. Manzano, Policies to Enhance Computer and Network Forensics, Proceedings of the 2001 IEEE Workshop on Information Assurance and Security, West Point, NY, June, 2001.

29. P. Sommer, Intrusion Detection Systems as Evidence, First International Workshop on Recent Advances in Intrusion Detection, Belgium, September, 1998.

30. M. Foster, J.N. Wilson, Pursuing the Three APs to Checkpointing with UCLiK, Proceedings of the 10th International Linux System Technology Conference, October, 2003.

31. M. Foster, J.N. Wilson, S. Chen, Using Greedy Hamiltonian Call Paths to Detect Stack-Smashing Attacks, Proceedings of the 7th Information Security Conference, Palo Alto, CA, September, 2004.

32. N. Ye, A Markov Chain Model of Temporal Behavior for Anomaly Detection, Proceedings of the 2000 IEEE Workshop on Information Assurance and Security, West Point, NY, June, 2000.

33. N. Ye, Q. Chen, S.M. Emran, K. Noh, Chi-Square Statistical Profiling for Anomaly Detection, Proceedings of the 2000 IEEE Workshop on Information Assurance and Security, West Point, NY, June, 2000.

34. Y. Liao, V.R. Vemuri, Using Text Categorization Techniques for Intrusion Detection, 11th USENIX Security Symposium, August, 2002.

35. G.L. Palmer, Forensic Analysis in the Digital World, International Journal of Digital Evidence, Volume 1, Issue 1, Spring, 2002.


BIOGRAPHICAL SKETCH

Mark Foster received the BS degree in Computer Science and Mathematics from Vanderbilt University in the Spring of 1999. Since the Fall of 1999, he has been a graduate assistant in the Department of Computer and Information Science and Engineering at the University of Florida. He completed the MS degree in the Fall of 2001. His academic interests include checkpointing, intrusion detection, and computer forensics. He also enjoys teaching, exercising, and movies.


Permanent Link: http://ufdc.ufl.edu/UFE0008063/00001

Material Information

Title: Process Forensics: The Crossroads of Checkpointing and Intrusion Detection
Physical Description: Mixed Material
Copyright Date: 2008

Record Information

Source Institution: University of Florida
Holding Location: University of Florida
Rights Management: All rights reserved by the source institution and holding location.
System ID: UFE0008063:00001















PROCESS FORENSICS: THE CROSSROADS OF
CHECKPOINTING AND INTRUSION DETECTION


















By

MARK FOSTER


A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL
OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT
OF THE REQUIREMENTS FOR THE DEGREE OF
DOCTOR OF PHILOSOPHY

UNIVERSITY OF FLORIDA


2004

































Copyright 2004

by

Mark Foster















ACKNOWLEDGMENTS

I would like to thank my supervisory committee chair, Joseph N. Wilson. Without

his patience and support this work would not have been possible. Many thanks are also

extended to the supervisor of my teaching assistantship, Rory DeSimone. She has

created a sense of home and family I will never be able to forget. Without this support I

would not have continued my graduate work beyond the master's level. I would also like

to thank my parents for their support. They have provided me with an amazingly reliable

safety net that allowed me to focus solely on school and avoid many headaches. Lastly,

and most of all I would like to thank my wife. Her love, support, and patience have not

wavered in this lengthy process. She has undoubtedly been the single most integral

component to my success. I am proud to say that her days of being the breadwinner are

numbered.
















TABLE OF CONTENTS

page

ACKNOWLEDGMENTS ..................................................... iii

LIST OF TABLES ....................................................... vi

LIST OF FIGURES ..................................................... vii

ABSTRACT ........................................................... viii

CHAPTER

1 INTRODUCTION ........................................................ 1

    Checkpointing ..................................................... 2
    Buffer Overflow Attacks ........................................... 4

2 BACKGROUND AND RELATED WORK ......................................... 7

    Checkpointing ..................................................... 7
        Background .................................................... 7
        Related Work ................................................. 11
            User-level checkpointing ................................. 11
            System-level checkpointing ............................... 17
    Buffer Overflow Attacks .......................................... 22
        Background ................................................... 22
        Related Work ................................................. 23
            Stackguard ............................................... 23
            Libsafe and Libverify .................................... 24
            VtPath ................................................... 25
            RAD ...................................................... 25
            Static analysis .......................................... 26
    Computer Forensics ............................................... 26

3 CHECKPOINTING ...................................................... 29

    A Robust Checkpointing System .................................... 29
    UCLiK Overview ................................................... 33
    UCLiK's Comprehensive Functionality .............................. 36
        Opened Files ................................................. 36
            Restoring the file pointer ............................... 37
            File contents ............................................ 41
            Deleted and modified files ............................... 41
        Restoring PID ................................................ 42
        Pipes ........................................................ 42
        Parallel Processes ........................................... 43
            Restoring PIDs of parallel processes ..................... 48
            Restoring pipes between parallel processes ............... 48
            Restoring pipe buffers between parallel processes ........ 49
        Sockets ...................................................... 50
        Terminal Selection ........................................... 52

4 DETECTING STACK-SMASHING ATTACKS ................................... 53

    Overview of Proposed Technique ................................... 53
    Constructing the Graph ........................................... 54
    Graph Construction Explained ..................................... 57
    Proof by Induction ............................................... 59
    Recursion ........................................................ 63
    Implementation and Testing ....................................... 65
    Limitations ...................................................... 66
    Benefits ......................................................... 68

5 PROCESS FORENSICS .................................................. 70

    Proposed Process Forensics ....................................... 70
    Possible Evidence in a Checkpoint ................................ 71
    Opportunities for Checkpointing .................................. 73
    Additional Enhancements .......................................... 76

6 CONCLUSIONS ........................................................ 80

LIST OF REFERENCES ................................................... 85

BIOGRAPHICAL SKETCH .................................................. 88
















LIST OF TABLES

Table                                                               page

3-1. AP table for the different checkpointing systems ................. 32

3-2. Example values for children and childstart ....................... 46

4-1. Example program we use to demonstrate graph construction ......... 54

4-2. Return address/invoked address pairs ............................. 55

4-3. Variables for the induction proof ................................ 61

4-4. Publicly available exploits used to test Edossa .................. 66
















LIST OF FIGURES

Figure                                                              page

2-1. Typical runtime program layout ................................... 22

3-1. Performing a checkpoint with UCLiK ............................... 34

3-2. File size relative to page size .................................. 37

3-3. Glibc executes in user-space ..................................... 38

3-4. Glibc variables during a read .................................... 38

3-5. The second page of a file loaded into memory ..................... 39

3-6. Process descriptor fields p_osptr and p_cptr ..................... 44

3-7. Building a linear list of processes .............................. 45

4-1. Example program's addresses divided into islands ................. 56

4-2. Edges leading from invoked address node 0x08048418 ............... 57

4-3. Abstract graph once the (n+1)st call is made ..................... 61

4-4. Abstract graph with recursion when the (n+1)st call is made ...... 65















Abstract of Dissertation Presented to the Graduate School
of the University of Florida in Partial Fulfillment of the
Requirements for the Degree of Doctor of Philosophy

PROCESS FORENSICS: THE CROSSROADS OF
CHECKPOINTING AND INTRUSION DETECTION

By

Mark Foster

December 2004

Chair: Joseph N. Wilson
Major Department: Computer and Information Science and Engineering

The goal of our study was to introduce a new area of computer forensics we call

process forensics. Process forensics involves extracting information from a process's

address space for the purpose of finding digital evidence pertaining to a computer crime.

The challenge of this subfield is that the address space of a given process is usually lost

long before the forensic investigator is analyzing the hard disk and file system of a

computer.

Our study began with an in-depth look at checkpointing techniques. After

surveying the literature and developing our own checkpointing tool, we believe

checkpointing technology is the most appropriate method for extracting information from

a process's address space. We make the case that an accurate and reliable checkpointing

tool provides a new source of evidence for the forensic investigator. We also thoroughly

examined the literature and methods for detecting buffer overflow attacks. In addition,

we have developed a new method for detecting the most common form of a buffer









overflow attack, namely, stack-smashing attacks. We believe that the boundary where

these two areas meet (specifically, incorporating checkpointing with intrusion detection)

can readily provide process forensics. The technology of checkpointing is nothing new

when considering process migration, fault tolerance, or load balancing. Furthermore, a

plethora of research has already focused on finding methods for detecting buffer overflow

attacks. However, with respect to computer forensics, the gains from incorporating

checkpointing with intrusion detection systems have yet to be explored.














CHAPTER 1
INTRODUCTION

In recent years, computers and the Internet have become an integral part of our

society. Computer use includes the workplace, home, school, and in some cases, public

areas such as shopping malls and airports. The National Telecommunications and

Information Administration (NTIA) released a report showing that Internet growth in the

United States is estimated at 2 million new users each month [1]. The downside to this

trend of pervasive computing is that the amount of computer-based crime is also on the

rise. Statistics published by the CERT Coordination Center [2] show that the number of

security related incidents have increased every year since 1998. From 2001 to 2003, the

number of security related incidents more than doubled. As computer crime increases, so

do the demands placed on computer security specialists and law enforcement.

To many computer security specialists, intrusion prevention is more important than

intrusion detection. However, as long as intruders continue to be successful, the need for

reliable intrusion detection systems is apparent. In addition, to prevent repetitive or

similar intrusive attacks, we need reliable computer forensics to help us learn why an

attack occurred in the first place. Thus, computer forensics is an integral part of intrusion

prevention.

Our purpose is to introduce a new area of computer forensics, called process

forensics. Process forensics involves extracting information from the process address

space of a given program. We discuss how the information extracted from a process

address space could be a source of evidence after a computer crime. However, to collect









digital evidence in the form of process forensics, one must address two issues. First,

some tool must exist that can extract information from a process address space. Second,

one must know when to extract such information. We propose that checkpointing

technology be used for the extraction process. Checkpointing research is already aimed

at storing key information about a process. We believe that the proper checkpointing tool

can also meet the needs of the evidence collector. Furthermore, we propose that intrusion

detection systems be used to help indicate when to extract information from a process

address space. Intrusion detection systems already aim to detect malicious activity.

Malicious activity sometimes results in the need for evidence in a computer crime.

Checkpointing

Individuals familiar with UNIX systems are probably also familiar with the act of

stopping and restarting a running process. This act is usually achieved by sending a

series of signals to a given process. The signals SIGSTOP and SIGCONT are used for

this purpose. While stopping and restarting processes is a useful capability, it is

important to note that during the time a process is stopped, it still consumes system

resources. In particular, a stopped process consumes system memory associated with

saving its state. Without saving the process's state, we cannot restart the process.

Checkpointing is a way of saving a process's state such that the process may be restarted

from the point at which it was checkpointed without requiring active operating system

information. Checkpoints are made at regular or chosen time intervals. Checkpointing

plays a vital role in fault tolerance and rollback recovery schemes. After a system crash

or failure, processes that were checkpointed can be restarted from the point of their last

checkpoint.









We introduce a new system for checkpointing called UCLiK (Unconstrained

Checkpointing in the LInux Kernel). UCLiK has been implemented as a kernel module

for the Linux operating system. UCLiK is different from most other checkpointing

systems in that it requires no additional programming by the application programmer.

Furthermore, it requires no special compiler or run-time library. UCLiK operates on the

system-level, and therefore has direct access to all of a process's state. When UCLiK

checkpoints a process, it does so by taking a snapshot of a process's state and saving that

information to a file. By saving this state to a file, we no longer use system resources,

except for whatever stable storage is necessary for storing a file. We refer to the file

containing our saved state as our image file. The image file can be stored indefinitely, or

moved to another system where the process could be restarted to achieve process

migration.
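On Linux, much of the state a checkpoint must capture is also visible from user space through the /proc file system. The sketch below is a user-space approximation of the first step of taking a snapshot, enumerating a process's memory regions; it is illustrative only and is not how UCLiK itself works, since UCLiK runs inside the kernel with direct access to process state.

```c
#include <stdio.h>

/* Count the memory regions of a process by reading /proc/<pid>/maps.
 * This approximates the first step of a checkpoint: learning which
 * address ranges must be saved into the image file. Linux-specific. */
int count_memory_regions(int pid) {
    char path[64], line[512];
    snprintf(path, sizeof path, "/proc/%d/maps", pid);
    FILE *f = fopen(path, "r");
    if (!f) return -1;               /* no such process, or not Linux */
    int regions = 0;
    while (fgets(line, sizeof line, f))   /* one mapped region per line */
        regions++;
    fclose(f);
    return regions;
}
```

A real checkpoint engine would then copy the contents of each region (together with registers, open-file state, and credentials) into the image file.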

Some well-known benefits of checkpointing include process migration, fault

tolerance, and rollback recovery. The benefits of our system are even more widespread.

System administrators can use such a system in place of killing what are seemingly

objectionable processes. Killing what seems to be a problem process, but in actuality is

not, can result in a drastic loss of computation for the user and the system. The user is

upset at having to start the process over again, and the system repeats computation it has

already performed. With UCLiK, such a process could be checkpointed to a file and later

restarted if the administrator can determine that running the process presents no problem.

Essentially, UCLiK can serve as an undo option for the kill system call. Brown and

Patterson [3] made a thorough case for the importance of a system-level undo

mechanism. Furthermore, they showed that human errors are inevitable and should be









considered when designing highly-available and highly-dependable systems [4]. A common example of human error, experienced by many (including this author), is issuing the kill system call with the wrong process identifier (PID), which of course kills the wrong process. With UCLiK, one can undo such a

mistake. With UCLiK, a system administrator can be more trigger-happy when killing

suspicious user processes. Rather than waiting until a user process is blatantly degrading

the system's performance, system administrators can take a more preemptive approach

and checkpoint a user's process at the first sign of trouble. UCLiK could also be used by

system administrators when doing preventive maintenance. Maintenance to a system

often requires taking the system down and thus killing a number of user processes. These

processes could be checkpointed instead of killed. Once maintenance is complete, the

user processes could be restarted.

By alleviating the need for a special compiler, a run-time library, or additional

work for the application programmer, we come one step closer to the ideal checkpointing

system. The ideal checkpointing system would allow us to checkpoint any process,

anytime, anywhere. Furthermore, one should be able to restart a checkpointed process

anytime and anywhere. Ultimately, the ideal operating system would have the ability to

checkpoint and restart running processes.

Buffer Overflow Attacks

The term buffer overflow refers to copying more data into a buffer than the buffer

was designed to hold. A buffer overflow attack occurs when a malicious individual

purposely overflows a buffer to alter a program's intended behavior. In most common

forms of this attack, the attacker intentionally overflows a buffer on the stack so that the

excess data overwrites the return address just below the buffer on the stack. When the









current function returns, control flow is transferred to an address chosen by the attacker.

Commonly, this address is a location on the stack, inside the buffer, where the attacker

has injected malicious code. This type of buffer overflow attack is also referred to as a

stack-smashing attack, since the buffer resides on the stack. Stack-smashing attacks are

among the most common buffer overflow attacks because of their simplicity of

implementation.
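The classic vulnerable pattern looks like the following sketch. Copying an input longer than 16 bytes into buf with the unchecked strcpy would spill past the buffer and overwrite the saved return address above it on the stack; the example below is only exercised with in-bounds input, since actually triggering the overflow is undefined behavior.

```c
#include <string.h>

/* A classic stack-smashing target: an unchecked copy into a small
 * stack buffer. An attacker-supplied string longer than the buffer
 * overwrites the saved frame pointer and return address, so that when
 * this function returns, control transfers to an attacker-chosen
 * address (often injected code inside the buffer itself). */
int vulnerable(const char *input, char *out) {
    char buf[16];             /* fixed-size buffer on the stack */
    strcpy(buf, input);       /* no bounds check: the overflow point */
    strcpy(out, buf);         /* pass the copy back for inspection */
    return (int)strlen(buf);
}
```

With a well-behaved input the function works as intended, which is exactly why such bugs survive testing: only a deliberately oversized input reveals the vulnerability.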

Buffer overflow attacks have been a major security issue for years. Wagner et al.

[5], extracting statistics from CERT advisories, found that between 1988 and 1999, buffer

overflows accounted for up to 50% of the vulnerabilities reported by CERT. Other

statistics showed that buffer overflows caused at least 23% of the vulnerabilities in

different databases. More recent statistics from the National Institute of Standards and Technology's (NIST) ICAT database show that a significant number of the common vulnerabilities and

exposures (CVE) and CVE candidate vulnerabilities were due to buffer overflows. For

the years 2001, 2002, and 2003, buffer overflows accounted for 21, 22, and 23% of the

vulnerabilities respectively [6]. From April 2001 to March 2002, buffer overflows

caused 20% of the vulnerabilities reported by SecurityTracker [7]. These more recent

statistics reinforce the case made by Wagner et al. [5]. Buffer overflows are a significant

issue for system security.

We introduce a new method for detecting stack-smashing and buffer overflow

attacks. While much work has been focused on detecting stack-smashing attacks, few

approaches use the program call stack to detect such attacks. Our new method of

detecting stack-smashing attacks relies solely on intercepting system calls and

information that can be extracted from the program call stack and process image. Upon









intercepting a system call, our method traces the program call stack to extract return

addresses. These return addresses are used to extract what we refer to as invoked

addresses. In the process image, return addresses are preceded by call instructions.

These call instructions are what placed the return addresses on the stack and then

transferred control flow to another location. An address that was invoked by a call

instruction is referred to as an invoked address. We use the return and invoked addresses

to create a weighted directed graph. We found that the graph constructed from an

uncompromised process always contains a Greedy Hamiltonian Call Path (GHCP). This

allows us to use the lack of a GHCP to indicate the presence of a buffer overflow or

stack-smashing attack.
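The raw material for this graph, the return addresses on the program call stack, can be collected in user space with glibc's backtrace facility. The sketch below is only an illustration of that first extraction step; our actual method walks the stack at system-call time rather than from within the program, and the noinline attributes are a GCC-specific assumption used to keep the frames visible.

```c
#include <execinfo.h>

/* Collect the return addresses currently on the call stack: the same
 * raw data our method extracts when it intercepts a system call. Each
 * address points just past a call instruction; decoding that call's
 * target yields the corresponding "invoked address". */
__attribute__((noinline))
int collect_return_addresses(void **addrs, int max) {
    return backtrace(addrs, max);   /* glibc stack walk */
}

/* Add one more frame so the collected trace is guaranteed to contain
 * at least a caller/callee pair of return addresses. */
__attribute__((noinline))
int collect_via_helper(void **addrs, int max) {
    return collect_return_addresses(addrs, max);
}
```

Given these addresses and the invoked addresses decoded from the process image, the weighted directed graph of Chapter 4 can be constructed and checked for a GHCP.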














CHAPTER 2
BACKGROUND AND RELATED WORK

Checkpointing

Background

Checkpointing is the technique of storing a running process's state in such a way

that a process can be restarted from the point at which the checkpoint, or stored copy of a

process state, was created. Checkpointing plays an important role in fault tolerance,

rollback recovery, and process migration. After a system crash, processes that have been

checkpointed will be able to restart from their most recent checkpoints, while

uncheckpointed processes have lost their work and will be forced to start over from the

beginning. Process migration is the act of moving a running process from one host to

another. Process migration can be achieved by checkpointing a running process on one

host, moving the checkpoint to another host, and then restarting the process on the new

host.

The amount of stable storage required to save a checkpoint is called the

checkpoint size. The amount of time it takes to make a checkpoint is referred to as the

checkpoint time. The additional amount of time it takes to run an application when

checkpointing that application as opposed to when that application is not being

checkpointed is referred to as the checkpoint overhead [8].

Most checkpointing techniques can be classified into two major categories: user-

level and system-level. User-level checkpointing refers to checkpointing systems whose

code executes in user space.









Checkpointing at the user-level usually involves an enhanced language and compiler, a

run-time library with checkpointing support, or modifications to the source code of a

process to be checkpointed. System-level checkpointing refers to checkpointing systems

whose code executes as part of the operating system, in kernel space. These systems,

also referred to as kernel-level checkpointing systems, usually involve modifications to

an existing operating system. Debate is

ongoing as to whether checkpointing should take place at the user-level or system-level.

The ultimate goals of a checkpointing system are transparency, portability, and

flexibility. The ideal checkpointing system is completely transparent to the application

programmer. Ideally, the application programmer can write code however he wants,

while using any language and any compiler, and still be able to have his application

checkpointed. Furthermore, this checkpointing system should be portable to any type of

system. In addition, the checkpoint of a process should be portable to any type of system

for a restart. All along, the user should have the flexibility to minimize the checkpoint

size, time, and overhead, since these aspects of checkpointing consume storage and

computation.

A common method for checkpointing is to suspend the execution of a process,

write the process's state information to stable storage and then continue with the

execution of the process. This method of checkpointing is called sequential

checkpointing [8]. Another checkpointing method is forked checkpointing [8]. With

forked checkpointing, the process to be checkpointed is forked, and the parent process

continues with execution, while the child process is used for making the checkpoint. For

long-running processes, multiple checkpoints may be taken. Incremental checkpointing









refers to storing only what has changed in the process's state since the previous

checkpoint [8]. All other information can be extracted from the previous checkpoints.

Checkpoints can be made synchronously or asynchronously [8]. When the application

programmer specifies points in the code at which to perform checkpoints, these

checkpoints are said to be synchronous. If a checkpoint is taken at regular time periods

(such as every hour for a long-running process), it is said to be asynchronous.

One common approach to reducing checkpoint size is memory exclusion [8]. The

idea behind memory exclusion is that clean and dead areas of memory need not be

included into a checkpoint. Clean areas of memory are portions of memory that have not

changed since the previous checkpoint. Clean memory is also called read-only memory

[9]. Dead memory includes those portions of memory that will not be read before being

written after the current checkpoint, and thus need not be included in the checkpoint.

Often, one may need to checkpoint a group of parallel applications or distributed

processes. Usually this type of computation takes place on a network or a system of

clustered computers. This scenario is a bit more complex than the sequential

checkpointing of a single process. The challenge here is interprocess communication. If

one node fails, and a process running on that node must roll back to its last checkpoint,

any messages that a process has sent since its last checkpoint are sent again. Therefore,

any other process in the system that has received a message from that process since its

last checkpoint must also roll back. The rollback of one process might invoke the

rollback of another, and in turn invoke the rollback of each process in the system. This

concept (and the idea that rollbacks could propagate through all the processes in the

system and return the system to its initial state) is called the domino effect [10]. To avoid









the domino effect, checkpointing of parallel applications and distributed processes

usually involves some sort of coordinated checkpointing. Coordinated checkpointing

refers to a group of processes coordinating their checkpoints to achieve a global

consistent state [10]. A global consistent state refers to a point in the execution of a

group of processes where, for any process whose state includes having received a

message, there is another process whose state includes having sent that message [10].

Uncoordinated checkpointing can lead to inconsistent states, and thus lead to the domino

effect.

Another technique that assists in rollback recovery of parallel applications and

distributed processes is logging. Some checkpointing systems log pertinent information

and events to assist in the rollback recovery of a process or group of processes. One

common approach to this is to model message receipts as nondeterministic events [10].

Each nondeterministic event then has its own corresponding deterministic time interval.

A nondeterministic event corresponds to the deterministic time interval after it and before

the next nondeterministic event. The main objective behind this is to avoid orphan

processes. Orphan processes are processes that are dependent on an event that cannot be

generated during recovery [10].

We can now classify logging approaches into three categories: pessimistic logging,

optimistic logging, and causal logging. With pessimistic logging, the determinant of

each nondeterministic event is logged to stable storage before any other processing is

done. Pessimistic logging does not allow any process to depend on an event before that

event has been logged [10]. Pessimistic logging never creates any orphan processes, but

imposes more overhead because of its blocking nature. With optimistic logging, the









determinant of each nondeterministic event is immediately logged to volatile storage; but

may not be logged to stable storage until sometime later depending on the protocol [10].

If a failure occurs before the volatile storage is transferred to stable storage, then orphan

processes may exist during a recovery. If this is the case, each orphan process is forced

to roll back until it is no longer dependent on an event that cannot be generated. The

benefit of this approach is a reduction in overhead. Causal logging requires logging all

determinants corresponding to the nondeterministic events that are causally related to the

process's given state [10]. Causal logging applies Lamport's happened-before

relationship to the determinants of the nondeterministic events. This approach does not

allow orphan processes. Unlike pessimistic logging, causal logging does not suffer from

increased overhead.

Related Work

User-level checkpointing

Application programmer-defined checkpointing. Over the years, there have

been a number of different checkpointing schemes. One straightforward checkpointing

scheme has been to leave the responsibility of checkpoints to the application

programmer. After all, the application programmer should know more about the code

than anyone else. The application programmer should know exactly what information is

pertinent and which locations are best for checkpoints. However, without any support,

the task of writing fault-tolerant applications can be challenging. Some studies show that

fault-tolerant routines can take up to 50% of the source code [11]. Fortunately, other

approaches to checkpointing have been developed, to relieve the application programmer

from such a burden.









Checkpointing with library support. To alleviate much of the burden of

checkpointing placed on the application programmer, some programmers use a run-time

library with support for checkpointing. One benefit of this approach is that we still

exploit the application programmer's knowledge of the code. The application

programmer still decides what data is checkpointed, and where the checkpoints should be

placed. This is referred to as user-defined checkpointing [11]. One major advantage of

user-defined checkpointing is its ability to greatly reduce the checkpoint size. Much of

the process state that is saved in a system-level checkpointing scheme is able to be

eliminated from the checkpoint, since the application programmer is familiar with the

data involved in a process. The application programmer can decide what data is truly

necessary for checkpointing the given process.

CHK-LIB is one such run-time library [11]. This run-time library provides three

fault-tolerant primitives. One such primitive is used for the application programmer to

specify which data should be saved in a checkpoint. Another such primitive is used to

allow the programmer to specify where checkpoints should be made. The last primitive

is used to determine whether a process is new or has been restarted.

Another system that uses library support for checkpointing is Condor [12]. Condor

is a distributed processing system whose main goal is to maximize resource utilization

by scheduling processes on idle workstations. Checkpointing and process migration play

a vital role in Condor. If the owner of a workstation being used by the Condor system

needs to use that workstation, Condor must be able to checkpoint any process on that

workstation and migrate those processes elsewhere. Processes to be run in the Condor

system are relinked with the Condor checkpointing library. This checkpointing library









contains system call wrappers that can log information such as names of opened files, file

descriptor numbers, and file access modes.

Checkpoints in the Condor system are invoked by a signal. A checkpoint in the

Condor system is created by copying the process's state information to a file. Restart is

achieved by having the restarting process copy the original process's state information

from that checkpoint file into the process address space of the restarting process.

Routines for handling this signal, and for writing a process's state information into a file,

are provided in the Condor checkpointing library. Process migration is achieved when a

checkpoint file is moved to another location and then restarted. Condor has a number of

benefits. Condor was designed to work on UNIX systems, but does not require any

modifications to the UNIX kernel. Furthermore, since this is a user-level checkpointing

system, the system may be more portable than a system-level checkpointing system.

Condor has been used in conjunction with other systems such as CoCheck [13].

CoCheck is a system for checkpointing parallel applications on networks of workstations.

CoCheck is concerned primarily with finding a global consistent state for all of the nodes

in a network. Once this consistent state is achieved, Condor aids in checkpointing the

parallel applications. Once checkpointed, these applications can be migrated to achieve a

more balanced load across a network.

Condor does suffer from a number of drawbacks. Condor makes no attempt to deal

with pipes, sockets, or any other form of interprocess communication. Condor also

assumes that files opened by a process being checkpointed remain unchanged when the

restart takes place. Furthermore, Condor relies on a stub process running in the location

of the original process to facilitate file access at the original process's location.









Checkpointing with Condor also requires that one know if a process may be checkpointed

before starting that process. A process that is running and was not relinked with the

checkpointing library cannot be checkpointed.

Libckpt [8] is another checkpointing library designed for UNIX systems.

Somewhat similar to Condor, checkpointing with Libckpt takes place at the user-level.

However, Libckpt differs from Condor in that processes to be checkpointed with Libckpt

must be recompiled, not just relinked with the Libckpt checkpointing library.

Furthermore, Libckpt requires that the main() function in a program to be checkpointed

be renamed to ckpt_target(). Libckpt also distinguishes itself from Condor by its

additional performance optimizations. Libckpt supports incremental checkpointing and

forked checkpointing. Libckpt also can do memory exclusion via user-directives.

Memory exclusion with these user-directives is called user-directed checkpointing. The

Libckpt library supplies two procedure calls for memory exclusion: include_bytes() and

exclude_bytes(). These two procedure calls tell the system which portions of memory to

include or exclude in the next checkpoint. In some cases, the checkpoint size was

reduced by as much as 90% when using the user-directives to do memory exclusion [8].

The Libckpt library also supports the procedure call checkpoint_here(). This procedure

call enables the programmer to request a synchronous checkpoint at any point in the

code. By default, Libckpt checkpoints a process every 10 minutes. The time interval

between checkpoints, whether to use incremental or forked checkpointing, and a number

of other options are specified in the .ckptrc file.

Libckpt does have a number of disadvantages. As with Condor, one must know

before compilation that a process needs to be checkpointed. Transparency is lost by









requiring the programmer to rename the main() function. Furthermore, to achieve the

performance optimizations of memory exclusion and synchronous checkpoints,

additional transparency is lost by requiring the programmer to use the user-directives.

Pipes and sockets were not addressed in the literature discussing Libckpt.

Compiler-defined checkpointing. Another common approach to checkpointing at

the user-level is referred to as compiler-defined checkpointing. This approach consists of

using the compiler's knowledge of the program to select optimal locations for

inserting checkpoints. One advantage of this approach is that the application programmer

is not responsible for checkpoints. This results in a checkpointing scheme that is

transparent to the application programmer. Another advantage of this approach is that

when compared with system-level checkpointing, the compiler-defined checkpoints are

typically much smaller. System-level checkpointing usually requires saving most of a

process's address space, also referred to as its memory footprint. This usually results in a

large checkpoint file. However, with the recent reduction in costs for stable storage, one

could argue that larger checkpoint files are often acceptable. For example, a system

administrator might rather use a few megabytes of storage than deal with a disgruntled

user whose process was killed. Furthermore, some proponents of compiler-based

checkpointing think that for processes with small memory footprints, system-level

checkpointing is still best [14]. However, the same individuals think that compiler-

defined checkpointing is better for applications on large cluster computer systems [14].

One approach to checkpointing on large cluster computer systems was to have the

ZPL language and compiler perform automatic checkpointing [14]. A major challenge of

checkpointing processes across clustered computer systems is the difficulty in identifying









a globally consistent state. This approach used the compiler's knowledge of the code to

identify ranges of code that were void of communication. These ranges of code could be

checkpointed without losing messages in the network. The compiler further exploited its

knowledge of the code to insert checkpoints where the fewest live array variables existed.

In some cases, these compiler techniques resulted in as much as a 73% reduction in the

checkpoint size [14].

Additional compiler-based memory-exclusion techniques have used Libckpt [15].

The previously mentioned Libckpt [8] library used user-directives such as

include_bytes(), exclude_bytes(), and checkpoint_here() to perform the checkpointing.

Libckpt was later expanded to include additional directives such as EXCLUDE_HERE

and CHECKPOINT_HERE. These directives tell the compiler when to invoke the

checkpoint_here() or memory-exclusion procedure calls. Having the compiler invoke

these procedure calls allows the compiler to guarantee a correct checkpoint. The

compiler determines what portions of memory to exclude, based on a set of data flow

equations. These equations are solved using an iterative method. This differs from a

careless programmer who might exclude portions of memory that are essential to

recovery. However, the programmer is still responsible for inserting these new

directives. This results in a continued loss of transparency. This technique is called

Compiler Assisted Memory Exclusion (CAME) [15].

Compiler-defined checkpointing schemes do suffer from a number of

disadvantages. One obvious disadvantage is transparency. Only programs that are

compiled with compilers possessing support for checkpointing will be able to be

checkpointed. Furthermore, the compiler will have no knowledge of any message-









passing that takes place during execution. To work around this disadvantage, one could

log the activity across communication channels, but this obviously results in additional

work and overhead. As we have discussed, some compiler-defined checkpointing

schemes just limit the location of checkpoints to specific ranges in the program's

execution [14].

Exportable kernel state. Another interesting approach to checkpointing involves

exporting the kernel state. This approach was explored using the Fluke microkernel [16].

The Fluke microkernel was designed in such a way that the kernel state is exportable to

the user-level. The Fluke microkernel also allows the kernel state to be set, or imported

from the user-level. This allows a user-level checkpointing system to have access to the

necessary kernel objects that pertain to a given process. Most of the checkpointing

systems we discussed so far must infer the kernel state by logging information collected

by system call wrappers.

One of the major drawbacks of this approach to checkpointing is that a process to

be checkpointed must be in the child environment of the checkpointing application. A

process with no checkpointing application as an ancestor apparently cannot be

checkpointed. In addition, portability is limited for a checkpointing system that must run

on the Fluke microkernel.

System-level checkpointing

Many of the system-level checkpointing systems in existence today are focused on

checkpointing parallel applications running on distributed operating systems or clusters

of computers. MOSIX [17] is one such system. MOSIX is a distributed operating

system with checkpointing and process migration support designed for load balancing.

MOSIX was designed for scalable clusters of PCs. While MOSIX does well in









achieving its goal of resource sharing, it does not provide much support for the system

administrator wanting to checkpoint a user's process before killing it. Simply put,

MOSIX does not provide support for storing a checkpoint image in a file.

A similar system is CKPM [18]. CKPM was designed for checkpointing parallel

applications running in a network-based computing environment. CKPM was also

designed on the system-level but requires additional libraries for wrapping the Parallel

Virtual Machine (PVM) libraries. CKPM's checkpointing strategy uses pessimistic log-

based protocols. While this system is efficient in achieving a global consistent state

when checkpointing parallel applications, it does not provide the ability to store a

checkpoint image in a file. Furthermore, checkpointing is restricted to the parallel

applications of the PVM.

One of the more notable and recent attempts to perform system-level

checkpointing that also provides the functionality we're looking for is a tool

called epckpt [19]. Epckpt was designed with a focus on being able to checkpoint

parallel applications; specifically, those parallel applications resulting from a parent

process calling fork(). Epckpt comes as a patch for the Linux kernel. Since epckpt is part

of the kernel, epckpt has a number of advantages over the checkpointing systems we have

discussed so far. Processes to be checkpointed with epckpt need not be recompiled or

even relinked with any special libraries. Furthermore, epckpt can handle a broader range

of applications, since it is not dependent on any special language or compiler. Epckpt,

being part of the kernel, has direct access to a process's address space. Furthermore,

epckpt can write the checkpoint image to a number of file descriptor abstractions. Epckpt









can write the checkpoint image to a file, pipe, or socket. This enables epckpt to migrate a

process at the time of the checkpoint.

One of the main disadvantages of system-level checkpointing is that it usually

results in a rather large checkpoint image. Some refer to this approach as being a core

dump. A core dump suggests a checkpoint image with much unnecessary information.

Epckpt has directly addressed the issue of checkpoint size. Epckpt allows the user to

omit shared libraries and binary files from the checkpoint image. In cases where the user

knows that these files will still be available when the process is restarted, the size of the

checkpoint can be greatly reduced. In some cases, the size of the checkpoint image was

reduced by more than half when the shared libraries and binary files were omitted.

Epckpt does still suffer from a few disadvantages. Like a number of the

checkpointing systems we have previously discussed, epckpt requires that the user know

a process may need to be checkpointed, before starting that process. Epckpt consists of a

new system call, collect_data, that notifies the kernel to start recording information

about a process. This information consists of file names and libraries. Epckpt provides a

tool called spawn that can invoke this system call when starting a process, but the

limitation still exists. In addition, epckpt is designed as a patch for the Linux kernel. In

order to use this system, one must recompile the entire kernel.

CRAK. A number of ideas used in the design of epckpt have been extended in

the work on CRAK [20]. CRAK is a Linux checkpointing system that has been designed

as a kernel module. As a kernel module, CRAK enjoys a number of the same benefits as

epckpt does by functioning on the system-level. CRAK has access to a process's address

space in addition to a number of kernel objects. Furthermore, CRAK requires no special









libraries, compilers or any modifications to the user code. In addition, CRAK requires no

special logging by the kernel or by any user-level application. The main limitation of

CRAK is that it requires an operating system to provide module support. Without

module support, CRAK cannot be loaded. However, once loaded, CRAK can checkpoint

and restart a wide range of applications.

CRAK [20] provides two essential user-level tools: ck and restart. To perform a

checkpoint with CRAK, one invokes ck. Ck accepts two command line arguments. The

first command line argument is the process id of the process to be checkpointed. The

second command line argument is the filename of the file where the checkpoint image

should be stored. This user-level application then invokes the module functions that

perform the necessary actions to create a checkpoint image. These module functions

begin by stopping the execution of the specified process. Once the process is stopped,

the following process information is copied into the checkpoint image file.

* Address space.
* Register set.
* Opened files.
* Pipes.
* Sockets.
* Current working directory.
* Signal handler.
* Termios information.

Just like with epckpt, the size of this checkpoint file can be greatly reduced by

omitting the shared libraries and binary code. CRAK offers the same flags for omitting

these items when possible. Once checkpointed, the checkpoint image files can be stored

for a later restart or moved to another node to achieve process migration.

To restart a process, one simply invokes restart, and provides the filename of the

file containing the checkpoint image on the command line. Restart works much like the









system call execve(). The address space data in the checkpoint image file is copied into

the address space of the restart program. This in conjunction with restoring the other

items from the list above essentially restores the process.

Undoubtedly, the work on CRAK has laid the framework for an excellent

checkpointing system. To the knowledge of this author, no other checkpointing systems

have attempted to handle items such as networked sockets the way CRAK has. However,

CRAK has left a number of areas open for continued work. For example, support for

items such as opened files, pipes and sockets exist at the user-level. We believe support

for these items should exist at the system-level. Furthermore, there are a number of

issues not supported by CRAK. The following is a list of items not supported by CRAK.

* Restoring of PID.

* A PID reservation system for checkpointed processes.

* CRAK only supports saving opened files' pathnames. No support for storing an
opened file's contents.

* No support for handling opened files that have been deleted or modified.

* Does not restore file pointer when restarting a process.

* Only supports TCP Sockets. No support for UDP sockets.

* Does not support loopback address.

* Cannot restore an established TCP connection that is the result of a call to accept().
These established TCP connections are multiplexed on the same port as a listening
socket. A listening socket is created by a call to listen().

* Always restarts a process in the same terminal window as the restart program. No
support for restarting a process in another terminal window.

As discussed in later chapters, the author has expanded the work on CRAK to

include support for the items in the above list. In addition, the author has moved support

for items such as opened files, pipes, and sockets to the system-level.









Buffer Overflow Attacks

Background

A buffer overflow takes place when a larger amount of data is copied into a buffer

than that buffer was designed to hold. Functions such as strcpy, strcat, sprintf, and gets

do not perform bounds checking and thus allow programmers to write code that

overflows. A buffer overflow attack is usually the result of a malicious individual

purposely overflowing a buffer with the goal of altering a program's intended behavior.

[Figure: the runtime stack, with stack frames for functions f1, f2, and f3, where f1 calls f2 and f2 calls f3. Each frame holds the function's local variables, saved frame pointer (SFP), and return address (RA); f3's frame, nearest the top of the stack, contains the local buffer buf[10].]

Figure 2-1. Typical runtime program layout.

Recall that a function's return address resides on the stack just below a function's

local variables. The only item between a function's return address and local variables is

the saved frame pointer. This concept is shown in Figure 2-1. In the case where an array

is declared as a function's local variable, such as buf in Figure 2-1, space for the array is

allocated on the stack. If this array, or buffer, is overflowed with more data than was

allocated for it, this excess data overwrites other items on the stack. For example, a

skilled attacker may overflow buf and thus overwrite f3's SFP and RA. If this address is









overwritten properly, when f3 returns, control flow of the program is sent to the location

of the attacker's choosing. Commonly, the attacker sends control flow to a location on

the stack where the attacker has injected his/her own malicious code. Usually the

malicious code is injected inside the same array or buffer that is being overflowed. The

result of this buffer overflow is that the attacker is able to execute his/her malicious code

with the privileges of the original program. If the compromised program has root

privileges, then the code executed by the attacker also has root privileges. One of the

most common goals of an attacker launching a buffer overflow attack is to spawn a root

shell. The code to spawn a shell is short and can be injected into a buffer rather easily.

The impact of a malicious user gaining access to a root shell is beyond the scope of this

paper, but clearly undesirable for any system administrator.

A buffer overflow attack that takes place on the stack is often referred to as a stack-

smashing attack. Other types of buffer overflow attacks can take place in the heap, bss,

or data segments. A buffer overflow that takes place in the heap is also referred to as a

heap-smashing attack. However, the stack-smashing attack is the most popular since it

allows the attacker to inject code and alter control flow in one step. Other forms of stack-

smashing and buffer overflow attacks involve redirecting control flow to other

preexisting functions or even library functions.

Related Work

StackGuard

One of the most notable approaches to detecting and preventing buffer overflow

attacks is referred to as StackGuard. Cowan et al. [21] created a compiler technique that

involves placing a canary word on the stack next to the return address. This canary word

acts as a border between a function's return address and local variables. It is very









difficult for an attacker to overwrite a return address without overwriting the canary

word. When a function is returning, StackGuard checks to make sure the canary word has

not been modified. If the canary word is unmodified then that implies the return address

is also unmodified. One of the advantages of StackGuard is that it does not require any

changes to the program source code or existing operating system. StackGuard does impose

a performance penalty, though a minor one. The main downfall of StackGuard is that

programs are only protected if they have been recompiled

with a specially enhanced compiler.

Libsafe and Libverify

Baratloo et al. [22] proposed two new methods referred to as Libsafe and Libverify.

Both methods were designed as dynamically loadable libraries and were implemented on

Linux. Libsafe uses saved frame pointers on the stack to act as upper bounds when

writing to a buffer. Libsafe intercepts library calls to functions such as strcpy() or scanf()

that are known to be exploitable. It then executes its own version of these functions that

provides the user the same functionality but also supplies the bounds checking based on

the upper bounds set by the saved frame pointers. Libverify uses a similar approach to

Stackguard in that a return address is verified before a function is allowed to return.

Libverify does this by copying each function into the heap and overwriting the original

beginning and end of each function with a call to a wrapper function. The wrapper

function called at the beginning of a function stores the return address, allowing the

wrapper function called at the end of the function to verify the return address. One

downfall of this method is that the amount of space in memory required for each function

is double that of what the process would require if not using Libverify.









VtPath

One approach proposed by Feng et al. [23] is VtPath. VtPath is designed to detect

anomalous behavior but would also work well in detecting buffer overflow attacks.

VtPath is unique in that it uses information from a program's call stack to perform

anomaly detection. VtPath intercepts system calls. At each system call it takes a

snapshot of the return addresses on the stack. The sequence of return addresses found

between two system calls creates what is referred to as a virtual path. During training,

VtPath can learn the normal virtual paths that a program executes. When online, VtPath

detects any virtual paths that were not experienced in training. When such a path occurs,

it is likely that an anomaly has occurred.

RAD

Another proposed approach to defend against buffer overflow attacks was

introduced by Prasad et al. [24]. This approach involves rewriting binary executables to

include a return address defense (RAD) mechanism. This approach is rather complex

since it requires accurate disassembly in order to distinguish function boundaries. Once

function boundaries are located they can be rewritten to include the RAD code. Upon

entering a function, the RAD code stores a second copy of the return address that is later

used to verify the return address on the stack when a function returns. Unfortunately, this

approach is limited due to the challenges faced in disassembly. As Prasad points out,

distinguishing between code and data in the code region can be an undecidable problem.

Furthermore, we suspect this approach could lead to significant overhead during runtime

when the ratio of lines of code to the number of functions decreases.









Static analysis

Other approaches aimed at the broader issue of anomaly detection include the call

graph and abstract stack models proposed by Wagner et al. [25]. These methods use

static analysis of program source code to model the program's execution as a

nondeterministic finite automaton (NDFA) or nondeterministic pushdown automaton

(NDPDA). These methods monitor a program's execution while using these models to

determine if the sequences of system calls generated by a program are consistent with the

program's source code. The downfall of these approaches is that they require access to a

program's source code. When dealing with legacy applications or commercial software

access to program source is often unavailable.

Computer Forensics

Stephenson defines computer forensics as the field of extracting hidden or deleted

information from the disks of a computer [26]. Carrier [27] refers to computer forensics

as the acquisition of hard disks and analysis of file systems. Simply put, computer

forensics is the art of extracting digital evidence from a computer system usually

associated with a crime. Though not relevant to this discussion, computer forensics does at times

include rescuing data from a damaged or corrupted computer system. Commonly, a

computer forensic investigation takes place on the computer system that has either

suffered an attack from another computer, or on the apprehended computer of a suspected

criminal. In the case of the computer attack, the forensic investigator is usually

attempting to find evidence that can answer questions such as, where did the attack

originate, what vulnerability made the attack possible, and what files were compromised

as a result of the attack. In the case of the suspected criminal's apprehended computer,

the forensic investigator is usually looking for evidence of the suspected criminal's recent









behavior, motives, or planning of future crimes. In either case, the forensic investigator

has a number of tactics for collecting such evidence. The investigation may involve

anything from searching the file system for incriminating text files to analyzing log files

for evidence of the attack. Typically, a computer forensic investigation involves using

special forensic tools to analyze items such as slack space, unallocated space, or swap

files. Slack space is the leftover space in a block or cluster allocated to a file but not used

by the file. Unallocated space is space that is currently not used by any file. Both of

these items may contain bytes from old files that were deleted but have yet to be fully

overwritten. This allows a digital forensic investigator using forensic tools to extract this

data. Swap files can be thought of as scratch paper for an application or the operating

system. These files may have traces of data that allow the digital forensic investigator to

piece together what actions have previously taken place on the given computer. Another

example of data often used in a forensic investigation that does not require special tools

to extract would be log files. During a forensic investigation, log files on the victimized

or suspected computer are of the utmost importance. A survey of the literature on

computer forensics reveals the direct correlation between logging and a successful forensic

investigation [26,28,29]. Stephenson refers to the lack of logs as the single biggest

barrier to a successful investigation of an intrusion. Upon completion of a forensic

investigation all of the extracted evidence is preserved and stored in a secure facility. A

chain-of-custody is maintained to assure that no one tampers with the collected evidence.

A chain-of-custody is simply a system of recording who is responsible for the evidence at

any point in time from the moment it was collected till the moment it is used in a

courtroom.









Slack space, unallocated space, swap files, log files, and most other items analyzed

by the forensic investigator share an important similarity. Each of these items exists as

nonvolatile data. Nonvolatile data is that which has been saved to disk or resides on

some form of stable storage. The opposite of nonvolatile is obviously volatile. Volatile

data is that which resides in main memory such as a process's address space. Once a

computer is unplugged from its power source, all volatile data is lost, but nonvolatile data

remains intact. Due to this inherent nature of digital data, computer forensics is largely

restricted to the analysis of nonvolatile data. We believe one of the major keys to

improving and enhancing computer forensics is to increase the amount of relevant

nonvolatile data available to the forensic investigator. Later in this paper we discuss the

idea of using checkpointing technology to create additional nonvolatile data from one of

the most common forms of volatile data, namely, processes. This would add checkpoint

image files to the collection of items the forensic investigator can analyze for evidence.

As we know from previous sections of this chapter, a checkpoint image file contains a

plethora of information.














CHAPTER 3
CHECKPOINTING

A Robust Checkpointing System

The ideal checkpointing system should be able to handle a wide range of issues. It

should not only meet the needs of process migration in a distributed computing

environment, but also provide checkpointing support for the system administrator who

wishes to checkpoint a user's seemingly runaway process rather than kill it. Developing

a checkpointing system that can support both these issues and the wide range of issues in

between is not a simple task. To aid us in this task we have isolated the Three AP's to

Checkpointing, namely, the ideal checkpointing system should support checkpointing for:

Any Process on Any Platform at Any Point in time [30]. The Three AP's to

Checkpointing serve as our guide in working towards the ideal checkpointing system.

More specific requirements of the ideal checkpointing system could be classified

into three categories: transparency, flexibility, and portability. One of the most common

forms of transparency sought after in a checkpointing system is transparency for the

application programmer. The ideal checkpointing system would not require any

modifications to the application programmer's code to support checkpointing. In

addition, the application programmer would not be restricted to any special language or

compiler. Furthermore, the application would not have to be recompiled or even relinked

with any special checkpointing libraries. The ideal checkpointing system would also be

transparent to the operating system code. In other words, it would not require any

modifications to the operating system code. In addition, this system would not require









any logging of information. Furthermore, no system call wrappers would be required.

Flexibility would be achieved by having the system support all possible aspects of a

process including PID, opened files, pipes, and sockets. Further flexibility could be

achieved by allowing the user to include or exclude items from the checkpoint image.

This would allow the user to reduce the checkpoint size when necessary. Examples of

items that should be able to be included or excluded from a checkpoint are shared

libraries, binary files, and the contents of opened files. The system should also allow the

user to restart a process in whatever terminal or pseudo-terminal the user wants.

Portability is achieved when the checkpointing and/or restarting can take place on a

number of different types of systems.

To date, no checkpointing system has been able to satisfy all the requirements

mentioned in the previous paragraph. This would obviously imply that no system has

ever supported all three AP's. While developing a checkpointing system that supports all

three AP's and satisfies all the previously mentioned requirements is quite difficult, it is

in fact our long-term goal. However, our more immediate goal, and the focus of this

paper, is two of the three AP's: Any Process at Any Point in time.

The checkpointing system developed here operates on the system-level and is designed as

a kernel module. This system satisfies these two AP's and meets the vast majority of the

requirements listed in the previous paragraph. Our system expands the work of CRAK

[20] to work on more recent and stable versions of the kernel and include additional

functionality. Our system is called UCLiK (Unconstrained Checkpointing in the Linux

Kernel).









We believe that such a system should operate at the system-level for a number of

different reasons. One of the most compelling reasons is that it gives us access to all the

kernel and process state information. Without this information, other systems have had

to rely on system call wrappers to perform logging of process information. In addition,

checkpointing at the system-level aids our quest for transparency. It completely

relieves the application programmer of any responsibility. In addition, no special

compiler or checkpointing libraries are needed. Furthermore, by having direct access to

kernel functions and data structures we are more efficient than if we were executing at

the user-level, incurring additional overhead from user code and library routines.

Our only disadvantage is portability. Since our system functions on the system-

level, it is not very portable to other types of operating systems. Hence, the second AP,

Any Platform, is not satisfied at this time. However, it is important to point out that the

major benefit of the second AP, Any Platform, would be cross-platform migration. At

this time, cross-platform migration is not one of our goals. Furthermore, this type of

migration would be extremely difficult considering how differently checkpoints would be

created on different platforms. It is very likely that any single system able to support

cross-platform migration would suffer greatly in areas such as transparency and

flexibility, and development of a completely portable system may ultimately be

impossible. Table 3-1 shows how the different checkpointing systems addressed in the

previous chapter measure up to the AP's of Checkpointing. We can see in this table that

no system is successful in addressing the AP, Any Platform. We can also see where our

system, UCLiK, is more successful at satisfying the other two AP's, Any Process and

Any Point in time than any other existing checkpointing system.










Table 3-1. AP table for the different checkpointing systems.

CHK-LIB
    Any Process: No. Processes must be linked with the run-time library.
    Any Point in time: No. Checkpoints are only made at the points specified by the programmer.
    Any Platform: No.

Condor
    Any Process: No. Processes must be linked with the run-time library and run within the Condor system.
    Any Point in time: Yes.
    Any Platform: No.

Libckpt
    Any Process: No. Processes must be recompiled with the checkpointing library and main() must be renamed to ckpt_target().
    Any Point in time: No. Checkpoints are created at pre-specified time intervals.
    Any Platform: No.

ZPL
    Any Process: No. Processes must be written in the ZPL language and compiled with the ZPL compiler.
    Any Point in time: No. Checkpoints can only be created during certain ranges of code.
    Any Platform: No.

MOSIX
    Any Process: No. Processes must be part of the MOSIX cluster system.
    Any Point in time: Yes and No. A globally consistent state must be achieved. Not targeting a single process.
    Any Platform: No.

CKPM
    Any Process: No. Requires libraries for wrapping the Parallel Virtual Machine (PVM) libraries.
    Any Point in time: Yes and No. A globally consistent state must be achieved. Not targeting a single process.
    Any Platform: No.

Epckpt
    Any Process: No. The new system call, collect_data(), must be invoked to collect data about a process to be checkpointed.
    Any Point in time: Yes.
    Any Platform: No.

CRAK
    Any Process: Almost. CRAK has the freedom to checkpoint any process but does not provide support for items such as the PID, opened file's contents, file pointers, opened files that have been deleted or modified, UDP sockets, and loopback addresses.
    Any Point in time: Yes.
    Any Platform: No.

UCLiK
    Any Process: Yes. UCLiK has the freedom to checkpoint any process and provides support for those items CRAK does not.
    Any Point in time: Yes.
    Any Platform: No.


In conclusion, we believe checkpointing at the system-level brings us closer to

achieving the ideal checkpointing system than any other approach. Checkpointing at the

system-level provides transparency that is unmatched by user-level checkpointing

systems. It is also important to note that by developing the system as a kernel module

additional transparency is achieved. Since a module can be inserted and removed from

the kernel code, modifications to the running kernel's code are unnecessary. Furthermore,

we can provide more flexibility with our checkpointing system than has been supported

by other systems. Additional issues of portability and potential standardization of

checkpoint image files are addressed in a later chapter.

UCLiK Overview

To understand how UCLiK works, it is best to first understand the files

involved with UCLiK. The UCLiK system is primarily composed of the following four

files.

* ukill.C
* ucliklib.c
* uclik.c
* uclik.h

The file ukill.C is the source code for our user-space tool. This tool is what a user

would invoke to perform a checkpoint, or to restart a checkpointed process. The file

ucliklib.c contains helper functions that assist the user-space tool in communicating with

the functions in the kernel module. The kernel module source code is located in the third

file, uclik.c. The fourth file, uclik.h, is a header file shared by all three of the previously

mentioned files.

The user-space tool communicates with the module through a device file. The

name of this device file is /dev/ckpt. This device file is created when the module is










loaded into the kernel. The module registers itself as the owner of this device file. For

simplicity, we refer to the user-space tool as being able to call functions in the kernel

module. However, for preciseness we explain here that the user-space tool actually calls

functions in ucliklib.c. These functions then make ioctl() calls to the device file,

/dev/ckpt, which is owned by the kernel module. Thus the kernel module receives these

ioctl() calls and directs them to the corresponding kernel module functions. Figure 3-1

illustrates this concept more clearly.

[Figure: ukill.C (user space) calls helper functions in ucliklib.c, which issue ioctl() calls on the device file /dev/ckpt; in kernel space, the module's ckpt_ioctl() dispatches each call to functions in uclik.c such as checkpoint() and restart().]

Figure 3-1. Performing a checkpoint with UCLiK.

A checkpoint is created when a user invokes the user-space tool ukill. The ukill

tool receives a minimum of one command line argument. This command line argument

is the PID of the process to be checkpointed. By default, the process's checkpoint will be

saved in a file named with the process's PID. The user can optionally include a filename









as an additional command line argument to specify where the checkpoint image should

be stored. When checkpointing, the main purpose of ukill is to open the file where the

checkpoint should be stored, and pass its file descriptor to the checkpoint function in

the kernel module. Ukill also passes the PID number and any additional flags sent on the

command line to the checkpoint function in the kernel module. The checkpoint

function will use the PID number to locate the given process's process descriptor

(represented with a task_struct structure). The checkpoint function will immediately

send the SIGSTOP signal to the process being checkpointed. If we are checkpointing a

family of processes, the SIGSTOP signal will be sent to each process in that family.

Once the process is stopped, the real checkpointing begins. UCLiK begins by storing

crucial fields of the process descriptor such as pid, uid, gid, and so forth. Following this,

the process address space is stored. This includes fields from the memory descriptor

(represented with an mm_struct structure) such as initial and final addresses of the

executable code, initialized data, heap, command-line arguments, and environment

variables. The initial address of the stack is also saved. Next, UCLiK loops through each

of the memory regions (represented with a vm_area_struct structure). For each memory

region, the linear address boundaries, access permissions and flags of that region are

saved. The contents of each memory region are also saved. Following this, UCLiK

iterates through each opened file descriptor saving necessary information for each file

abstraction. UCLiK provides support for opened files, pipes, and sockets. Lastly,

UCLiK stores information about items such as the current working directory and the

signal handler. Once all of this information is written to a file, UCLiK can optionally kill

the process and then exit.









A process is restarted by invoking the user-space tool ukill and activating the undo

switch. When the undo switch is activated, the command line argument following the

undo switch should be the name of the file containing the image of the checkpointed

process. Very often, this will simply be the process's original PID (i.e., ukill -undo PID).

Ukill will read this file to find out process family information. If it is a single process,

then ukill will call the restart function of the kernel module. If it is a family of

processes, then ukill will fork the appropriate number of processes, in the appropriate

order, and allow each new process to call the restart function separately. The restart

function of the kernel module will subsequently read in all the process information from

the checkpoint image file, copying the original process's information over the ukill

process. Once the kernel module completes this task, it allows the process to run. A

family of processes is handled slightly differently and is addressed later.

UCLiK's Comprehensive Functionality

Opened Files

CRAK's handling of opened files was very straightforward. During a checkpoint,

the following information is stored as part of the checkpoint file.

* File pathname.
* File descriptor number.
* File pointer.
* Access Flags.
* Access Mode.

During a restart, CRAK opens the file using the file pathname, access flags, and

access mode. This open takes place at the user-level. If opening the file is successful,

then the file is duped if necessary to ensure it has the same file descriptor number as









before the checkpoint. This approach worked well but left a number of issues open for

continued work.

One of our objectives is to provide transparent checkpointing at the system-level.

During restart, CRAK would reopen files at the user-level. With UCLiK, during a restart,

files are opened at the system-level. One major advantage of reopening a file at the

system-level is that it allows us to also restore the file pointer. The following subsections

address a number of other issues with opened files such as restoring the file pointer. All

of these issues are also being dealt with at the system-level.

Restoring the file pointer

UCLiK, unlike its predecessor, has the ability to restore the file pointer. To

understand the importance of a file object's file pointer, we must first understand how a

read() system call made in user-space is handled by the kernel. As an example, suppose

we have a file that is 14,361 bytes in length. Suppose we also have a process that is

going to read this entire file line by line.





[Figure: the 14,361-byte file spans four 4,096-byte pages.]



Figure 3-2. File size relative to page size.

We assume the user-space process has already opened the file. At the point when

the user-space process first begins to read the file, the kernel loads the first page of the

file into the process address space of the user-space process. The file object pertaining to

the file being read has its file pointer (i.e., the f_pos field) assigned the value 4096. This

seems strange since the file is being read line by line. But the buffering of line by line










reading is actually handled by glibc. The glibc portion of execution takes place in user-

space, but is hidden from the programmer. The following image illustrates the

relationship of the process, glibc, and the kernel.



[Figure: the user process and glibc both execute in user space; the kernel's system-call interface lies below them in kernel space.]


Figure 3-3. Glibc executes in user-space.

Inside the glibc layer there are three variables of great importance: _IO_read_base,

_IO_read_ptr, and _IO_read_end. When the page is loaded into memory, the variable

_IO_read_base points to the first byte of the page. The variable _IO_read_end points to

the first byte just after the last byte of the page. Meanwhile, while the file is being read

line by line, the variable _IO_read_ptr points to the next unread byte in the page. Essentially,

_IO_read_ptr is our real file pointer.




[Figure: Page 1 loaded in memory with f_pos = 4096; _IO_read_base marks the start of the page, _IO_read_end the byte just past its end, and _IO_read_ptr the next unread byte.]



Figure 3-4. Glibc variables during a read.

_IO_read_ptr is modified with each incremental read. Once the value of

_IO_read_ptr equals the value of _IO_read_end, the kernel removes this page from the









process's address space, and loads the next page (Page 2) into the process's address

space. At that point, the f_pos value is assigned the value 8,192. This process

continues until the end of the file is reached.

For the purposes of process checkpointing and restarting it is essential that the

value of f_pos be saved and restored when the process is restarted. Let us continue our

above example of a process reading a file of size 14,361 bytes, only this time we

checkpoint the process and restart it without restoring the f_pos field. We assume the

process was checkpointed when 4,363 bytes of the file had been read. This would imply

that the second page of the file had been loaded into memory. The following image

illustrates our current state.




[Figure: Page 2 loaded in memory with f_pos = 8192; _IO_read_base, _IO_read_ptr, and _IO_read_end again mark the page boundaries and the next unread byte.]


Figure 3-5. The second page of a file loaded into memory.

If the process is checkpointed at this moment, when the process is restarted, the

f_pos value must be assigned a value of 8192. If it is not, then it is assigned a default

value of 0. Meanwhile, our glibc values, _IO_read_base, _IO_read_ptr, and _IO_read_end,

will be restored to their original values. Since the glibc values are part of the process

address space, when the process address space is restored, the glibc values are also

restored. When reading is continued, the _IO_read_ptr value causes the reading to pick









up where it left off. However, when the value of _IO_read_ptr equals that of

_IO_read_end, the kernel then uses the f_pos value to determine which page to load into

memory next. Since the value of f_pos is 0, the kernel loads the first page of the file into

the process's address space. From here, the entire file is read.

To summarize, when a process is restarted, if the f_pos value is allowed to default

to 0, then the rest of whatever page was already loaded into the process's address space is

read. This is then followed by the entire file being read.

In testing, because the f_pos value was allowed to default to 0, the resulting number

of bytes read after restarting the process was equal to the number of bytes of the file plus

the number of unread bytes in the page that was loaded into memory at the time the

process was checkpointed. For example, the file being read was 14,361 bytes in length.

The process was checkpointed when 4,363 bytes had been read. This meant that the

second page was the page currently loaded into the process's address space. This also

meant that there were 3,829 bytes remaining in that second page that had never been

read. When the process was restarted, the number of bytes read after the restarting was

18,190. This is the sum of 14,361 and 3,829.

Furthermore, it is important to note that the f_pos value must be restored in kernel

space. One could attempt to restore the f_pos value from user-space through the use of

functions like fseek(). However, if this is done from user-space, the _IO_read_ptr value

of the glibc layer is also modified. As we have seen above, the f_pos value is usually

greater than the _IO_read_ptr value. The f_pos value points to the end of the page

currently loaded in the process's address space. If the _IO_read_ptr value is restored

from user-space, the only value available is that extracted from the f_pos. If









_IO_read_ptr is restored to the value of f_pos, we more than likely skip some portion of

the file being read.

File contents

If we consider the list of items recorded in a CRAK checkpoint we immediately see

another important missing file attribute. The actual contents of an opened file were not

included in the checkpoint image. In UCLiK we have added support for packaging the

file contents of opened files in the checkpoint image. Since this can drastically affect the

checkpoint size we leave this as an option for the user. When invoking ukill, the user can

specify a flag, -p, that notifies the system to package the contents of opened files with the

checkpoint image. Later, when invoking ukill -undo, the user can specify the same flag,

-p, to unpack the files that have been packaged with the checkpoint image. Unpacking the

files obviously just creates a copy of the files. If the original files are not available, the

user can specify another flag that tells the system to use the new copies of these files.

Deleted and modified files

Another issue left open by CRAK was how to handle deleted and modified files.

Having already added the functionality to package the contents of opened files with a

checkpoint image, this issue is resolved rather easily. During a restart with UCLiK, if a

file is found to be missing or modified since the time of the checkpoint, the restart alerts

the user and cancels itself. At this point, the user can restart the process again using the

flag that tells the system to force the restart. This restarts the process with the missing or

modified file. Of course the user still has the option to unpack and use any packaged

copies of files that were included in the checkpoint image file.









Restoring PID

CRAK makes no attempt to restore the PID. When CRAK restarts a process, the

PID assigned to the restart process is then inherited by the restarted process. We have

added the functionality of restoring a process's PID to UCLiK. We use the

find_task_by_pid() function to determine if the PID is available. If it is available, then

the restarted process is assigned the original PID. However, if the PID is not available,

then the restarted process will be run with the new PID.

Pipes

CRAK's handling of pipes was rather similar to that of opened files. During a

checkpoint, information such as inode number and whether a pipe was a reader or a

writer was included in the checkpoint image. During a restart, a new pipe was created

and duped if necessary to the correct file descriptor. The creation of this new pipe during

restart took place at the user-level. We have moved this functionality to the system-level.

UCLiK recreates pipes at the system-level. However, creating pipes at the

system-level incurs some additional complexity. Pipes are created with two ends, one for

reading and one for writing. When a user-level application creates a pipe it is usually

followed by a fork. The parent process can then close one end of the pipe, and the child

process will close the other end of the pipe. This allows the two separate processes to

share a single pipe. This pipe then acts as a one-way channel of communication. By the

very nature of our restart mechanism this is hard to replicate. Our restart mechanism

consists of the user-level ukill program whose address space is copied over with the

address space of the checkpointed application. When we are restarting multiple

processes, ukill forks and calls the restart function of our kernel module once for each

process being restarted. Imagine if there are two processes being restarted, and there









exists a pipe between them. When the first process invokes the restart function in the

kernel module, the pipe is created. When the second process invokes the restart function

of the kernel module, it needs the same pipe. If it creates a pipe, it is a new pipe. We

handle this scenario by passing an additional parameter to the restart function of the

kernel module. This parameter is called family_count. The family_count parameter tells the module

how many parallel processes are being restarted. The module then knows to maintain

pointers to any created pipes. Then when other processes need to access those same

pipes, identified by their original inode numbers, the module still has access to them.

This has enabled us to restore pipes from kernel-space. Additional detail concerning

pipes between processes in larger groups of parallel processes is addressed in the next

subsection.

Parallel Processes

One of UCLiK's most beneficial features is its ability to checkpoint parallel

processes. While some checkpointing systems do not support checkpointing parallel

processes, those that do are often constrained by the types of parallel processes they

support. Some such systems only checkpoint parallel processes consisting of a single

parent and its immediate children. UCLiK is not constrained by the type or number of

parallel processes it can checkpoint. Once again, since UCLiK operates on the system-

level it has access to process descriptor fields such as p_cptr and p_osptr. The p_cptr

field of the process descriptor is a pointer to the process descriptor of the process's

youngest child. A process's youngest child is the most recently spawned child. The

p_osptr field points to the process descriptor of a process's older sibling. A process's

older sibling is considered the process spawned by the same parent just before the

spawning of the given process (Figure 3-6).





















Figure 3-6. Process descriptor fields p_osptr and p_cptr.

UCLiK uses these fields to navigate through a tree of processes while creating a

linear list of the processes. This linear list is stored as part of the process family's

checkpoint. Upon restart, this linear list of processes is recursively scanned through to

fork a process for each original process in the original process tree.

The first process to be entered into the linear process list is always the highest

parent process. Using the process tree from Figure 3-6, P1 would be the first process

entered into the list. At this point, P1 becomes our main list item. UCLiK would then

follow P1's p_cptr field to find P4's process descriptor. Using the p_osptr fields, P4, P3,

and P2 would subsequently be added to the list in that order. Now that P1's immediate

children have been added to the list, the item in the list following P1 would now become

the main list item. This would be P4. UCLiK would then follow the same procedure for

adding P4's children to the list. After P4 is the main list item, the next item in the list,

P3, then becomes the main list item (Figure 3-7).

The dotted lines in Figure 3-7 surround the processes that have recently been added

to the list. The arrow indicates which process in the list is currently the main list item.










Figure 3-7. Building a linear list of processes.

We see in Figure 3-7 that when the arrow points to P3, no processes are being added

to the list. However, when the arrow points to P2, the processes P5 and P6 are then

added to the list. This procedure continues until the arrow reaches the end of the list.

Once UCLiK has created a linear process list, it is easy for UCLiK to signal each

process in the list to stop execution. This is done with the SIGSTOP signal. With each

process in the group of parallel processes stopped, UCLiK can incrementally checkpoint

each process to a file. Once all the processes are checkpointed, the execution of this

family of processes can continue or be stopped.

Upon restarting a group of parallel processes, our user-space ukill tool is

responsible for forking a process for each process in the original process tree. Recall

that all of these processes cannot be forked from a single process. We must have each

process fork the same number of processes as its corresponding original process had

forked. To facilitate this procedure we maintain two additional values for each process in









the list. These two values are referred to as children and childstart. The children value is

simply the number of children a given process has spawned. The childstart value tells us

which position in the list holds a given process's oldest child. Table 3-2 contains the values

for the process tree in Figure 3-6.

Table 3-2. Example values for children and childstart.

Position in List   0    1    2    3    4    5    6    7    8    9
Process            P1   P4   P3   P2   P9   P8   P7   P6   P5   P10
Children           3    3    0    2    1    0    0    0    0    0
Childstart         3    6    0    8    9    0    0    0    0    0

At first glance, the childstart values may not be immediately obvious. However, if

we start at the beginning of the list with P1, and move through the list summing the

children values, we quickly see where the childstart values come from. For example,

adding P1's children value of 3 to P4's children value of 3 gives P4's childstart value of 6. Since P3 has zero children, its childstart value is also zero. However, adding P4's childstart value of 6 (which is the current sum of children values) to P2's children value of 2 gives P2's childstart value of 8. We continue this process to fill in the

rest of the table.
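The childstart column can be derived from the children column with a single running-sum pass, exactly as described above. The helper below is our own illustrative sketch of that computation, not UCLiK code:

```c
#include <assert.h>

/* Derive each process's childstart value from the children values.
 * A process with no children gets childstart 0; otherwise childstart
 * is the running total of children values up to and including it,
 * which is the list position of that process's oldest child. */
static void compute_childstart(const int *children, int *childstart, int n)
{
    int sum = 0;
    for (int i = 0; i < n; i++) {
        sum += children[i];
        childstart[i] = children[i] ? sum : 0;
    }
}
```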

Once these values are determined, UCLiK uses this list to fork the appropriate

number of children, thus recreating the original process tree. This procedure is done with a combination of iteration and recursion. A call to our restore_process_tree() function starts with P1 at the beginning of the list. This function uses P1's children and childstart values to iterate through P1's children, forking a process for each one.

This iteration moves in a right-to-left order relative to the process list shown in the top of

Figure 3-8. For P1, the function forks a process for P2, then P3, and then P4. Each new









forked process then recursively calls the restore_process_tree() function on itself. This causes each forked process to have its own children iterated through in the same way as P1. Subsequently, when restore_process_tree() is called for process P2, it will fork a process for P5 and P6. When this function is called for process P4, it will fork a process for P7, P8, and P9. Figure 3-8 illustrates this procedure. The arrows in Figure 3-8 indicate which process the restore_process_tree() function was called for. The dotted lines are used to

represent what processes are being forked.
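The iteration order of restore_process_tree() can be simulated without actually calling fork(). The sketch below only records which list position "forks" which, using the children and childstart values of Table 3-2; the real tool forks a live process at each step and the child performs the recursive call.

```c
#include <assert.h>

/* Simulation of the restore_process_tree() recursion, recording
 * parent->child pairs instead of calling fork(). The list layout and
 * the children/childstart values are those of Table 3-2. */
#define NPROC 10

static const int children[NPROC]   = { 3, 3, 0, 2, 1, 0, 0, 0, 0, 0 };
static const int childstart[NPROC] = { 3, 6, 0, 8, 9, 0, 0, 0, 0, 0 };

static int n_edges;
static int edge_parent[NPROC], edge_child[NPROC];

/* Iterate right-to-left from childstart down, "forking" each child,
 * then recurse into the child just as the forked process would. For
 * position 0 (P1) this forks positions 3, 2, 1 (P2, then P3, then P4),
 * matching the order described in the text. */
static void restore_process_tree(int pos)
{
    for (int c = childstart[pos]; c > childstart[pos] - children[pos]; c--) {
        edge_parent[n_edges] = pos;
        edge_child[n_edges] = c;
        n_edges++;
        restore_process_tree(c);
    }
}
```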


Figure 3-8. Building a process tree from a process list.

While the function restore_process_tree() is rebuilding our original process tree it

is also invoking functions in the kernel module to restore each of the original processes.

It makes a separate call to the kernel module for each process that must be restored.

During this, UCLiK must keep track of how many processes are in a group of processes.

For all the processes except the last one, UCLiK sends them the SIGSTOP signal after

restoring their process address space and kernel state. When UCLiK restores the last









process in a group, it then sends all the rest of the processes the SIGCONT signal. From

here, the group of processes can then continue their execution.

Restoring PIDs of parallel processes

During a checkpoint the original PID of each process is stored as part of the

checkpoint. During a restart, if a process's original PID value is not in use then UCLiK

has the ability to restore a process's original PID. When restarting a group of parallel

processes, at the point in which UCLiK is restoring the first process of this group, UCLiK

must also check to see if the entire group's original PID values are in use. If none of the

original PID values are in use, then UCLiK can restore these PID values for the restarted

group of parallel processes.

To check the availability of the group's original PID values, UCLiK makes use of

the kernel function find_task_by_pid(). This function is called for each of the original

PID values. If a particular PID value is in use, this function will return a pointer to the

process descriptor of the corresponding process. If a particular PID value is not in use,

this function will return NULL.
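A rough user-space analogue of this availability check uses kill() with signal 0, which performs error checking without delivering a signal: ESRCH indicates an unused PID. The helper names below are our own illustration, not part of UCLiK, which gets the same answer directly from find_task_by_pid() returning NULL.

```c
#include <assert.h>
#include <errno.h>
#include <signal.h>
#include <sys/types.h>
#include <unistd.h>

/* A PID is in use if kill(pid, 0) succeeds, or fails for a reason
 * other than "no such process" (e.g., EPERM on another user's task). */
static int pid_in_use(pid_t pid)
{
    return kill(pid, 0) == 0 || errno != ESRCH;
}

/* A group's original PIDs can only be restored if every one of them
 * is currently free; otherwise new PIDs must be assigned. */
static int pid_group_available(const pid_t *pids, int n)
{
    for (int i = 0; i < n; i++)
        if (pid_in_use(pids[i]))
            return 0;
    return 1;
}
```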

Restoring pipes between parallel processes

When checkpointing with UCLiK, we save pertinent information about pipes

between parallel processes. Upon restart, UCLiK restores these pipes from kernel-space.

Recall that for a group of parallel processes, the ukill tool makes an individual call to the

kernel module for each process being restarted. Considering that two different processes

often share a single pipe, a pipe created for one process must be accessible during

subsequent calls to the kernel module. To handle this situation, we created a pipe cache.

When a given process needs one end of a pipe, the pipe cache is always scanned first. If

the pipe is not in the cache, then we create the pipe and add it to the cache.
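The lookup-or-create behavior of the pipe cache can be sketched in user space, with the pipe() system call standing in for the kernel's do_pipe(). The entry layout and helper names below are illustrative assumptions; the kernel module stores inode and file-object pointers rather than file descriptors.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>
#include <unistd.h>

/* Sketch of UCLiK's pipe cache: entries are keyed by the original
 * pipe's inode number (the inode that existed before the checkpoint),
 * and a lookup either returns the already-created pipe or creates one
 * and caches it. */
#define CACHE_MAX 16

struct pipe_entry {
    unsigned long orig_inode;  /* inode number before the checkpoint */
    int fds[2];                /* read and write ends of the new pipe */
};

static struct pipe_entry cache[CACHE_MAX];
static int cache_len;

/* Return the cached pipe for orig_inode, creating it on first use. */
static struct pipe_entry *pipe_cache_lookup(unsigned long orig_inode)
{
    for (int i = 0; i < cache_len; i++)
        if (cache[i].orig_inode == orig_inode)
            return &cache[i];
    if (cache_len == CACHE_MAX || pipe(cache[cache_len].fds) != 0)
        return NULL;
    cache[cache_len].orig_inode = orig_inode;
    return &cache[cache_len++];
}

/* Cleared once the last process of a family has been restored, so
 * entries never carry over to another group of parallel processes. */
static void pipe_cache_clear(void)
{
    for (int i = 0; i < cache_len; i++) {
        close(cache[i].fds[0]);
        close(cache[i].fds[1]);
    }
    cache_len = 0;
}
```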









UCLiK makes use of the do_pipe() system call to create pipes in kernel-space. The do_pipe() function receives as input a pointer to an array of two integers. When do_pipe() returns, these two integer values will correspond to the file descriptors of the

file objects that represent the two ends of the pipe. These two file objects will have been

installed in their corresponding file descriptor positions for the current process. A pointer

to the inode shared by these two file objects and a pointer to each of these file objects are

stored as an item in UCLiK's pipe cache. Items in the pipe cache are identified by the

original pipe inode number. We refer to the original pipe inode as the inode of the pipe

that existed before checkpointing the group of processes. In the case of two processes

that share a single pipe, the first process to be restarted will cause UCLiK to create a new

pipe. After creating this pipe, UCLiK will install the appropriate end of the pipe to the

first process, and place a corresponding item in the pipe cache. When restarting the

second process, UCLiK locates the cache item by the original pipe's inode number, and

installs the pipe from the cache item.

A final note on our pipe cache is that entries in the cache should not carry over

from one group of parallel processes to another group of parallel processes. To prevent

this from happening, any call to the kernel module includes an additional value referred

to as family count. The family count value represents the number of processes in a

group of processes. This allows UCLiK to determine when the first and last processes of

a given group of processes are being restarted. When UCLiK determines that it is

restarting the last process in a group of processes, it can then clear the pipe cache.

Restoring pipe buffers between parallel processes

UCLiK also has the ability to restore the pipe buffers between parallel processes.

UCLiK makes use of the kernel's preprocessor macros PIPE_START, PIPE_BASE, and PIPE_LEN. PIPE_START points to the read position in the pipe's kernel buffer. PIPE_BASE points to the address of the base of the kernel buffer. PIPE_LEN represents

the number of bytes that have been written into the kernel buffer, but have not yet been

read. During a checkpoint, UCLiK utilizes these macros to store a copy of the buffer

along with the rest of the checkpoint. Later on during a restart, these same macros are

used again to refill the buffer with the same contents it had before the checkpoint.

Sockets

CRAK stands out from most checkpointing systems by its ability to checkpoint and

even migrate some sockets. However, much like opened files and pipes, when restarting

a process with open sockets, these sockets were being created and even bound at the

user-level. We have moved support for sockets from the user-level to the system-level.

One issue left open by CRAK was that of loopback addresses: CRAK did not support them, while UCLiK does.

TCP sockets use a client/server relationship. A typical TCP socket connection is

established by the following procedure. On the server side, a socket is created and then

bound to a local address and port number. This is done with the use of the socket and

bind( C Library functions. The server can then begin listening using the li'tei() function

on the port to which it is bound. On the client side, a socket is created and optionally

bound. When the client invokes the connect function, the corresponding server will

accept this connection request with a call to the accept function. If the client does not

call bind( before connect, then the client's socket is automatically bound to a random

port. Once this procedure is complete an established socket exists between the client and

the server. The interesting aspect to this procedure is that the server now has two sockets.

One is the socket that was created by the server and set to listen on the bound port. The second is a socket with an established connection to the client. This second socket is created when the server calls the accept() function, and it is multiplexed on the same

port as the listening socket. Usually, different sockets at the same IP address must have

unique port numbers, but in this case the port number is shared.
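This port sharing is easy to observe in a short, self-contained program: after accept(), the listening socket and the accepted socket report the same local port. The helpers below are our own illustration, using the loopback address within a single process.

```c
#include <arpa/inet.h>
#include <assert.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Create a TCP socket listening on a kernel-chosen loopback port and
 * report the chosen address back through *addr. */
static int make_listener(struct sockaddr_in *addr)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    memset(addr, 0, sizeof(*addr));
    addr->sin_family = AF_INET;
    addr->sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    addr->sin_port = 0;                       /* let the kernel pick */
    bind(fd, (struct sockaddr *)addr, sizeof(*addr));
    listen(fd, 1);
    socklen_t len = sizeof(*addr);
    getsockname(fd, (struct sockaddr *)addr, &len);
    return fd;
}

/* Local port of a listening or connected socket, in host byte order. */
static unsigned short local_port(int fd)
{
    struct sockaddr_in a;
    socklen_t len = sizeof(a);
    getsockname(fd, (struct sockaddr *)&a, &len);
    return ntohs(a.sin_port);
}
```

Connecting to the listener and accepting the connection shows that the accepted socket's local port equals the listening socket's port, which is exactly the multiplexing that complicates restoring established sockets.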

This creates an issue when checkpointing and restarting processes that contain

sockets. Inside the Linux kernel a socket can be in any one of twelve different states. At

this point, we only need to be concerned with three of these states,

TCP_ESTABLISHED, TCP_CLOSE, and TCP_LISTEN. The other states are

transitional states. A socket only exists in a transitional state during kernel mode

execution that results from a system call. Before and after any of the system calls

mentioned in the previous paragraph a socket will always be in one of the three

previously mentioned states. Sockets that are part of an established connection will

obviously be in the TCP_ESTABLISHED state. When restoring a socket in this state, the naive approach would be to bind the socket back to the port to which it was previously bound. However, this does not work since the listening socket has already been bound

to that port. CRAK handles this issue by allowing the established socket to be bound to

another port. This works, but we have now restored the process with a socket that is not

exactly like the socket it had before the checkpoint. Furthermore, this results in the client of this socket connection needing to be notified that the server port for this connection has changed. This is functionality we do want to support, but we believe it should be reserved for times when it is absolutely necessary, such as process migration. In UCLiK, we utilize the function tcp_inherit_port(), which allows us to remultiplex the









established socket onto the same port on which the listening socket is listening. We

believe this is a better method.

Terminal Selection

We have also developed a tool that allows the user to restart a checkpointed

process in the terminal or pseudo-terminal of their choice. This tool makes use of the

ioctl() system call to run a command in a different terminal window. The TIOCSTI ioctl request makes it possible to inject text into another terminal's input queue, as though it had been typed there. This would be

very helpful for the system administrator who wishes to restart a user's process in the

user's terminal window rather than his/her own window.














CHAPTER 4
DETECTING STACK-SMASHING ATTACKS

The ICAT statistics over the past few years have shown that at least one out of every five CVE and CVE-candidate vulnerabilities has been due to buffer overflows. This constitutes a significant portion of today's computer-related security concerns. In this

paper we introduce a novel method for detecting stack-smashing and buffer overflow

attacks. Our runtime method extracts return addresses from the program call stack and

uses these return addresses to extract their corresponding invoked addresses from

memory. We demonstrate how these return and invoked addresses can be represented as

a weighted directed graph and used in the detection of stack-smashing attacks. We

introduce the concept of a Greedy Hamiltonian Call Path and show how the lack of such

a path can be used to detect stack-smashing attacks.

Overview of Proposed Technique

We propose a new method of detecting stack-smashing attacks that deals with

checking the integrity of the program call stack. The proposed method operates at the

kernel-level. It intercepts system calls and checks the integrity of the program call stack

before allowing such system calls to continue. To check the integrity of a program's call

stack we extract the return address and invoked address of each function that has a frame

on the stack. Using the list of return addresses and invoked addresses we can create a

weighted directed graph. We have found that a properly constructed weighted directed

graph of a legitimate process always has the unique characteristic of a Greedy

Hamiltonian Call Path (GHCP). We refer to this as a call path since it corresponds to the









sequence of function calls that lead us from the entry point of a given program to the

current system call. This call path is greedy because when searching for this path within

our weighted directed graph, we always choose the minimum weight edge when leaving a

vertex. Furthermore, this path is Hamiltonian because every vertex must be included

exactly once. Most significantly, we have found that the lack of such a path can be used

to indicate that there has been a stack-smashing or buffer overflow attack.

Constructing the Graph

The task of constructing a weighted directed graph from the program call stack

involves five major steps. We demonstrate these five steps on an example program. The

functions, their source code, starting and ending addresses in memory for the example

program are shown in Table 4-1.

Table 4-1. Example program we use to demonstrate graph construction.

Function Name   Starting Address in Memory   Ending Address in Memory   Function's Code
f3()            0x08048400                   0x0804842a                 execve(...);
f2()            0x0804842c                   0x08048439                 f3();
f1()            0x0804843c                   0x08048449                 f2();
main()          0x0804844c                   0x0804845f                 f1(); return 0;

The five major steps include

Step 1: Collect return addresses. Using the existing frame pointer, trace through

the program call stack to extract the return address from each stack frame.

Step 2: Collect invoked addresses. For each return address extracted from the

stack, find the call instruction that immediately precedes it in memory. Extract the

invoked address from that call instruction. At this point we can create a table of return

address/invoked address pairs. For the program shown in Table 4-1, the return and

invoked addresses in Table 4-2 would be extracted.











Table 4-2. Return address/invoked address pairs.

Return Address   Invoked Address
0x08048321       0x42017404
0x42017499       0x0804844c
0x08048457       0x0804843c
0x08048447       0x0804842c
0x08048437       0x08048400
0x08048425       0x420b4c34
0x420b4c6a       0xc78b1dc8

In Table 4-2 it is easy to see how the addresses starting with 0x0804.... correlate to

the addresses in Table 4-1. The addresses starting with 0x420.... are the addresses of C

library functions used by our program. The last address, 0xc78b1dc8, is the kernel address of the system call function execve(). Addresses such as 0x420b4c34 and 0x420b4c6a correspond to the system call wrapper in our C library. The additional addresses at the beginning of the table (i.e., 0x08048321, 0x42017404, and 0x42017499) are the addresses corresponding to _start and __libc_start_main. The purpose of these

functions is not pertinent to this paper.

Step 3: Divide addresses into islands. Once the values in Table 4-2 have been

obtained we can begin construction of our weighted directed graph. Our final graph

contains a node for each of the addresses in Table 4-2. However, before we can make

each address into a node we must first categorize our addresses into what we refer to as

islands. Our addresses are divided into islands based on their locations in memory. For

example, addresses that begin with 0x0804... are part of a different island from addresses

that begin with 0x420.... Addresses are further divided on whether they are a return

address or an invoked address. In this example we have four islands. These islands are

shown in Figure 4-1.










0x0804... Return Addresses: 0x08048321, 0x08048457, 0x08048447, 0x08048437, 0x08048425
0x0804... Invoked Addresses: 0x08048300, 0x0804844c, 0x0804843c, 0x0804842c, 0x08048400
0x420... Return Addresses: 0x42017499, 0x420b4c6a
0x420... Invoked Addresses: 0x42017404, 0x420b4c34

Figure 4-1. Example program's addresses divided into islands.

Note that the address 0xc78b1dc8 was not placed into an island. This is because this address represents the first instruction of our system call; it becomes a unique node later on, which we add when we begin adding edges. The address 0x08048300, the first instruction in the _start procedure, was added even though it is not in Table 4-2. This address is the program entry point recorded in the ELF header, which is loaded into memory, so it can be extracted at runtime. Every program must have an entry point, and therefore this node can always be part of our graph.

Step 4: Adding edges. Recall the return address/invoked address pairs we have

listed in Table 4-2. Each of these pairs is connected with a zero-weight edge leading from the return address to the invoked address. At this time we can add a node for the address 0xc78b1dc8, and subsequently add a zero-weight edge to it from its corresponding return address. All of the edges added thus far are part of our final GHCP.

To complete this step, we attempt to give each invoked address node an edge to

every return address node in the same memory region. These edges are weighted with

the distance in memory between the two nodes. For example, the node with address

0x0804842c has a directed edge with weight 0x1b leading to the node with address









0x08048447. In addition, the node with address 0x0804842c also has a directed edge

leading to 0x08048457 with a weight of 0x2b. The edges leading from invoked address

node 0x0804842c to every return address node of the same memory region are shown in

Figure 4-2.

Figure 4-2. Edges leading from invoked address node 0x0804842c.

Note that the node 0x0804842c was not given an edge to nodes 0x08048321 and

0x08048425. This is because these edges would have resulted in negative weights which

we do not allow in the graph. This concept is explained more thoroughly in the next

subsection.

Once all of the appropriate edges have been added, the graph is complete. We have

omitted the drawing of our completed graph due to the crowded nature of a graph of such

a simple program.

Graph Construction Explained

Inspection of Table 4-2 allows us to see a relationship between a value in the ith row of column one and the (i−1)th row of column two. For example, we know that 0x0804842c is the address of the first instruction in our f2() function. Let the row with this address be the (i−1)th row. This means that the return address in the ith row is 0x08048437. Since we also know that the last instruction in our f2() function is at









address 0x08048439, we know that this return address is inside of f2(). Furthermore, we can see in Figure 4-2 that when a minimum-weight edge leaving the address node of 0x0804842c is chosen, it leads to the return address node of 0x08048437. Stated more formally, if we let return addresses be denoted by ω and invoked addresses by α, a given invoked address α_i should have a minimum-weight edge leading to the return address ω_(i+1). This leads to the idea that every graph's GHCP is no different from

the Actual Call Path (ACP) of the program.

It turns out that this is exactly what we need. All programs that have not fallen victim to a stack-smashing or buffer overflow attack possess this ACP. We can find this ACP by searching for a GHCP. Our method must be greedy to ensure that we choose the minimum

weight edge when leaving a given node. In addition, since our path must include each

vertex exactly once, our path is Hamiltonian. If we are unable to find such a GHCP, then

we know that our ACP has been disrupted. This implies the likely occurrence of a stack-

smashing or buffer overflow attack.

To demonstrate why this works, suppose the function f2() were vulnerable to a buffer overflow attack. Suppose the attack overwrites the return address of f2() with the address 0x0804844a. This results in the edge from Figure 4-2 that was labeled with 0xb now being labeled with 0x1e. Thus when a minimum-weight edge leaving the address node of 0x0804842c is chosen, it no longer leads to the proper node. It leads to the address node 0x08048447, whose edge is labeled with 0x1b. This same address node is

also the result of choosing a minimum weight edge when leaving the address node of

0x0804843c. Having two edges that both lead to the same node disrupts our GHCP.

There no longer exists a path that is both Greedy and Hamiltonian. When the lack of a









GHCP is detected, we know that a stack-smashing or buffer overflow attack has

occurred.

One assumption we make is that two functions in memory never overlap and that

the initial instruction of a function is always the invoked address. We realize that some

programs written in assembly may not abide by this assumption. However, all compiled

programs and most assembly programs satisfy this constraint.

To summarize, our ACP represents the expected GHCP. However, we provide

multiple paths leaving a given invoked address to give that invoked address a choice

when determining our GHCP. By providing a choice, it allows the other return addresses

to act as upper bounds. The upper bounds created by other return addresses limit the

potential range of addresses with which a given return address can be overwritten by an attacker. There already exists an inherent lower bound, since we do not

include negative weight edges. Recall that invoked addresses are likely the address of the

first instruction for a given function. Thus it makes sense that unaltered execution flow

of a given function should never lead to an instruction that resides at a lower memory

address than the first instruction of that function.
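The detection rule can be condensed into a small check: for each invoked address, take the minimum-weight nonnegative edge to a return address in the same island, and require that it lands on the return address that follows on the actual call path. The code below is a simplified sketch of the GHCP test under that formulation (all names are ours), exercised with the addresses from Tables 4-1 and 4-2.

```c
#include <assert.h>
#include <stddef.h>

/* Addresses are plain integers for this sketch. */
typedef unsigned long uaddr;

/* Greedy choice leaving an invoked-address node: among the island's
 * return addresses at or above the invoked address (negative-weight
 * edges are not in the graph), take the one at minimum distance.
 * Returns 0 when no such edge exists. */
static uaddr greedy_target(uaddr inv, const uaddr *rets, size_t nrets)
{
    uaddr best = 0;
    for (size_t i = 0; i < nrets; i++)
        if (rets[i] >= inv && (best == 0 || rets[i] < best))
            best = rets[i];
    return best;
}

/* The GHCP exists iff every invoked address's greedy edge leads to
 * the return address that follows it on the actual call path. An
 * overwritten return address shifts one expected[] entry and breaks
 * this property, which is how the attack is detected. */
static int ghcp_exists(const uaddr *invs, const uaddr *expected, size_t n,
                       const uaddr *rets, size_t nrets)
{
    for (size_t i = 0; i < n; i++)
        if (greedy_target(invs[i], rets, nrets) != expected[i])
            return 0;
    return 1;
}
```

With the legitimate stack of Table 4-2 the check succeeds; after the hypothetical overwrite of f2()'s return address with 0x0804844a, the greedy edge from 0x0804842c lands on 0x08048447 instead, and the check fails.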

Proof by Induction

In order for us to rely on the nonexistence of a GHCP to indicate the presence of a

stack-smashing attack we must first prove that a GHCP exists for all uncompromised

programs. In this section, we consider the case in which there are no recursive function

calls. Knowing that our graph has two types of edges, those leaving return addresses and

those leaving invoked addresses, we can simplify this proof. Since there is always

exactly one edge leaving a given return address, we know this edge is always part of our

GHCP. We can exploit this feature of our graph to simplify our proof. With this feature,









we now only need to prove that in the ACP each invoked address always has a minimum

weight edge leading to its corresponding return address. We prove, using induction, that

this holds true for all unobjectionable programs. An unobjectionable program is defined

as a program whose call stack represents a possible actual call path. Our formal inductive

hypothesis is as follows:

Theorem 4.1: For all unobjectionable programs in which n different functions have been called, where n ≥ 1, every invoked address α_i, for i < n, has a minimum-weight edge leading to the return address ω_(i+1).

With this stated we must first prove our base case.

Base case. In this case there is one active function and no other calls have been made. Our assertion that α_i, for i < n, has a minimum-weight edge leading to return address ω_(i+1) is vacuously true. Alternatively, we can say that the GHCP corresponds to the ACP, because they are both null.

Inductive case. For our inductive case we must prove that if the GHCP corresponds to an ACP for n calls, it corresponds to an ACP when the (n+1)st call is made. Stated more formally, we assume the following to be true:

GHCP_n = ACP_n = α_1, ω_2, α_2, ω_3, α_3, ω_4, ..., ω_n, α_n

Thus we must prove the following to be true:

GHCP_(n+1) = ACP_(n+1) = α_1, ω_2, α_2, ω_3, α_3, ω_4, ..., ω_n, α_n, ω_(n+1), α_(n+1)

The (n+1)st call results in adding the two additional nodes, ω_(n+1) and α_(n+1), to our graph. This also results in the additional edges, (α_i, ω_(n+1)) and (α_n, ω_(n+1)), being added to our graph. Since we know that GHCP = ACP as long as every invoked address α_i has a










minimum-weight edge leading to its corresponding return address ω_(i+1), we must prove the following proposition.

Proposition 1: For each i, where i < n, weight(α_i, ω_(i+1)) < weight(α_i, ω_(n+1)).

Before proceeding any further we must define some variables.

Table 4-3. Variables for the induction proof.

Variable   Definition
L_i        Length in bytes of the ith function.
a_i        Address of the first byte of the ith function.
ra_i       The offset to the return address inside the ith function (ra_i = ω_(i+1) − a_i).

We also assume that two separate functions loaded into memory never overlap.

Therefore, we must prove our proposition for two different scenarios, namely a_i < a_n and a_n < a_i.

We can construct an abstract version of our graph as it would exist the moment our (n+1)st call is made. This version of our graph, Figure 4-3, illustrates the relationship between the function that made the (n+1)st call and any other invoked/return address pairs. A solid line represents an existing edge. A dotted line represents a new edge.

Figure 4-3. Abstract graph once the (n+1)st call is made.

Now we prove our proposition holds for both scenarios. For the scenario

a_i < a_n,

we know the following must also be true:

ω_(i+1) < ω_(n+1).

Therefore we can conclude

weight(α_i, ω_(i+1)) = ra_i = ω_(i+1) − a_i < ω_(n+1) − a_i = Z1 = weight(α_i, ω_(n+1)).

Thus our proposition holds true for our first scenario. Given the second scenario

a_n < a_i,

we know the following must also be true:

ω_(n+1) < ω_(i+1).

Therefore we can conclude

weight(α_i, ω_(n+1)) < 0,

and since our graph does not contain negative edges, our proposition still holds true.

It might seem logical to conclude that we also need to prove a second proposition. This second proposition is stated below.

Proposition 2: For each i, where i < n, weight(α_n, ω_(n+1)) < weight(α_n, ω_(i+1)).

Proving this proposition for the first scenario, we find

weight(α_n, ω_(i+1)) < 0.

Once again, since our graph does not contain negative edges, our proposition still holds true. With the second scenario we find

weight(α_n, ω_(n+1)) = ra_n = ω_(n+1) − a_n < ω_(i+1) − a_n = Z2 = weight(α_n, ω_(i+1)).

Thus our second proposition also holds for both scenarios. However, if Proposition 1 holds, then this second proposition is unnecessary. When we arrive at the point where we must choose an edge leaving α_n, since we are searching for a GHCP, our only feasible choice is the edge of weight ra_n leading to ω_(n+1). If the first proposition holds, every ω_(i+1), for i < n, has already been visited. Thus the only choice that maintains a Hamiltonian path is ω_(n+1).









In conclusion, we have proven that when the (n+1)st call is made, every invoked address α_i still has a minimum-weight edge leading to its corresponding return address ω_(i+1). This in turn proves that if the GHCP corresponds to the ACP for n calls, it corresponds to the ACP when the (n+1)st call is made. Therefore we know that the lack of a GHCP demonstrates that some form of stack-smashing or buffer overflow attack has occurred.

Recursion

Recursion is the case where α_i = α_n for some i < n. When this is the case, we have two different scenarios that may create a problem.

* ω_(i+1) = ω_(n+1)
* ω_(i+1) > ω_(n+1)

The first scenario creates a problem because α_i has two equal-weight edges leading to ω_(i+1) and ω_(n+1). Subsequently, these two equal-weight edges are also the minimum-weight edges leaving α_i. When searching for a GHCP, we won't know which edge to choose. The second scenario creates a problem because α_i has a minimum-weight edge leading to ω_(n+1). To address these scenarios we add a new graph construction rule.

Rule 1: If α_i = α_n for some i, where i < n, we do not allow the edge (α_i, ω_(n+1)) in our graph.

With this rule being stated, we must now prove that our GHCP corresponds to our

ACP with this condition even when recursion is present. We now revisit each case of our

induction proof in the previous section, dropping the requirement that all active functions

are different from each other.









It is important to note that we are not concerned about the scenario where ω_{i+1} < ω_{n+1}, for the same reasons we were not concerned about Proposition 2 in Theorem 4.1.

We now prove the following theorem.

Theorem 4.2: For all unobjectionable programs in which n functions have been called, where n ≥ 1, every invoked address a_i, for i < n, has a minimum weight edge leading to the return address ω_{i+1}.

Base case (n = 1). Since this case has only one active function, the new condition has no effect on it. Once again, the GHCP corresponds to the ACP, because they are both null.

Inductive case (n > 1). For our inductive case we must prove that if the GHCP corresponds to the ACP for n calls, it corresponds to the ACP when the (n+1)st call is made, even when our new condition is applied. We know that GHCP_n still corresponds to ACP_n. We know this because before the (n+1)st call is made, a_n is the last node in our ACP_n. Hence, ω_{n+1} does not exist yet and neither of our scenarios creates a problem yet.

Once the (n+1)st call is made, we must still prove that when our additional condition is followed, GHCP_{n+1} corresponds to ACP_{n+1}. Fortunately, we know the following:

If i < n, then i ≠ n

Thus,

(a_i, ω_{n+1}) ≠ (a_i, ω_{i+1})

Since the edge (a_i, ω_{i+1}) is never the same edge as (a_i, ω_{n+1}), we can safely remove (a_i, ω_{n+1}) from our graph and our GHCP is not affected. Since (a_i, ω_{i+1}) is always left









unmodified, we know that our GHCP still exists. Figure 4-4 shows our abstract graph when the (n+1)st call is made, both when a_i ≠ a_n and when a_i = a_n.

Figure 4-4 illustrates that when the (n+1)st call is made, regardless of whether a_i ≠ a_n or a_i = a_n, GHCP_{n+1} still corresponds to ACP_{n+1}. Our new condition never alters our (a_i, ω_{i+1}) edges. In Figure 4-4, the left side represents the case a_i ≠ a_n, and the right side represents the case a_i = a_n.








Figure 4-4. Abstract graph with recursion when the (n+1)st call is made.

To summarize, we use other functions' return addresses to perform bounds checking on a specific function, say f(). We do not allow multiple invocations of f() to create bounds criteria for itself.
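For illustration, the minimum-weight-edge invariant and Rule 1 above can be sketched as a check over the (invoked address, return address) pairs extracted from the stack. This is a hypothetical sketch, not the Edossa source; it assumes edge weights are the address differences ω − a, consistent with the derivation above, and it drops edges between frames with equal invoked addresses per Rule 1.

```python
def ghcp_invariant_holds(frames):
    """frames: list of (invoked_addr, return_addr) pairs for active calls,
    outermost first. Returns False if some invoked address a_i no longer
    has a minimum weight edge leading to its own return address w_{i+1},
    which signals a stack-smashing or buffer overflow attack."""
    for i, (a_i, w_i) in enumerate(frames):
        own = w_i - a_i
        if own < 0:                       # return address behind the call:
            return False                  # already inconsistent
        for j, (a_j, w_j) in enumerate(frames):
            if j == i:
                continue
            if a_j == a_i:                # Rule 1: recursion -- drop edges
                continue                  # between equal invoked addresses
            if w_j < a_i:                 # negative-weight edges do not exist
                continue
            if w_j - a_i < own:           # a cheaper edge leaves a_i, so the
                return False              # GHCP no longer matches the ACP
    return True
```

An overwritten return address that lands far from its invoked address breaks the invariant, while ordinary recursion (duplicate invoked addresses) is tolerated by the Rule 1 exclusion.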

Implementation and Testing

We have implemented our method of detecting stack-smashing attacks in a

prototype called Edossa (Efficient Detection Of Stack-smashing Attacks). Edossa was

designed as a kernel module on the Linux 2.4.19 kernel. It was implemented on a system

running RedHat Linux 7.3 with a 1.3 GHz AMD Duron processor. The system is running

with 128MB RAM. The source code for Edossa is freely available under the GPL at

http://www.cise.ufl.edu/~mfoster/research/edossa.html.

Edossa operates by checking the integrity of the program call stack at the point

when a system call is made. As an initial proof of concept version, the current Edossa is









only intercepting the execve() system call. While there are numerous system calls that could be used maliciously, the vast majority of buffer overflow and stack-smashing attacks involve the attacker attempting to spawn a root shell. This is done by passing the string "/bin/sh" to the execve() system call. Intercepting additional system calls could be added to Edossa with minimal effort.

To test Edossa we collected a number of publicly available exploits. These

exploits consisted of a wide range of applications that are known to have some form of

buffer overflow vulnerability in their source code. The results of our testing are shown

below in Table 4-4.

Table 4-4. Publicly available exploits used to test Edossa.
Exploit Date Posted Result Overhead (µs) Reference
efstool Dec. 02 Attack Detected 94 [9], [10]
finger Dec. 02 Attack Detected 104 [9]
gawk April 02 Attack Detected 113 [9]
gnuan July 03 Attack Detected 126 [9]
gnuchess July 03 Attack Detected 103 [9]
ifenslave April 03 Attack Detected 94 [10], [11]
joe Aug. 03 Attack Detected 90 [9]
nullhttpd Sept. 02 Attack Detected 103 [9]
pwck Sept. 02 Attack Detected 212 [9]
rsync Feb. 04 Attack Detected 109 [9]

As we can see from Table 4-4, Edossa was successful at detecting all of these

known exploits. In addition, the overhead for detecting all of these exploits never

exceeded 212 microseconds of CPU time.

Limitations

One limitation of our GHCP analysis is that it depends on the existence of a valid

frame pointer. In most cases when the return address is overwritten, the frame pointer is

also overwritten. Without a valid frame pointer, there is no way to trace through the

stack to extract return addresses. However, in the case where there is no valid frame









pointer, we already know that some form of buffer overflow or stack-smashing attack is

underway. A system that implements our proposed method, such as Edossa, detects that

no valid frame pointer exists even before generating a graph. A system that only tests for

the ability to trace up a stack is too easily evaded by an attacker to warrant a stand-alone

buffer overflow detection system. However, due to the prevalence of attacks that could

be detected with such a test, we believe it should be incorporated.
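The frame-pointer walk described above can be sketched as follows. The word-addressed toy memory `mem`, the layout with the caller's saved frame pointer at `fp` and the return address at `fp + 1`, and the zero terminator are simplifying assumptions for illustration; a real implementation reads the process's actual stack memory.

```python
def walk_stack(mem, fp):
    """Follow the chain of saved frame pointers, collecting return
    addresses. A frame pointer of 0 ends the chain; an unreadable or
    looping chain returns None, which by itself already indicates an
    overwritten frame pointer and thus a suspected attack."""
    ret_addrs = []
    seen = set()
    while fp != 0:
        if fp in seen or fp not in mem or fp + 1 not in mem:
            return None                   # broken chain: attack suspected
        seen.add(fp)
        ret_addrs.append(mem[fp + 1])     # saved return address
        fp = mem[fp]                      # caller's saved frame pointer
    return ret_addrs
```

A detector built this way reports a problem in two distinct situations: the walk itself fails, or the walk succeeds and the extracted return addresses fail the GHCP analysis.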

There are two methods by which an attacker might evade detection by a system using GHCP analysis. The first is to perform a buffer overflow attack that overwrites a return address with a new return address but leaves the frame pointer unmodified. The second is to overwrite the return address and frame pointer while also injecting code onto the stack. The first few instructions

of this injected code must restore the return address and frame pointer to their original

values. The injected code would then jump to preexisting code the attacker wants to

execute. Both of these methods could work, but they exhibit a major limitation. The new

return address or the preexisting code jumped to by the injected code must reside in the

same function as the original return address. Recall that each invoked address must have

a minimum weight edge to its corresponding return address. If the new return address is

inside of another function, the attacker risks destroying the minimum weight edge

between the invoked address and the original return address. Likewise for the second

method. When a system call is made, a return address is placed on the stack. Thus, if the

injected code has jumped to a piece of code in another function, the attacker risks this

return address not having a minimum weight edge from its corresponding invoked









address. While both methods are possible, the challenges facing the attacker are far greater than without GHCP analysis.

Another limitation of our method is its ability to deal with function pointers.

Currently, we use return addresses to trace through memory and find a corresponding

invoked address. These invoked addresses are part of a call instruction in memory. The

bytes in memory representing a call instruction include an address or offset to an address.

In either case we can extract the address invoked by a call instruction. In the case of

function pointers, the call instruction is often calling an address that is in a register. We

have no way to determine what address was in this register when this call instruction

executed. However, we note that one can easily modify a compiler to store invoked

addresses on the stack. We have done this in the form of a patch for the GCC compiler to

verify the technique. This gives us the ability to always determine a return address'

corresponding invoked address. When using programs compiled with the patched GCC

compiler, function pointers are no longer a limitation.

Benefits

A system designed for buffer overflow detection using GHCP analysis has a

number of benefits. First, our system does not require access to a program's source code.

Secondly, it does not require that a program be compiled with any specially enhanced

compiler or even have the executable binary file rewritten unless one wants to verify calls

through pointers to functions. In addition, our system does not require linking with any

special libraries or place any additional burden on the application programmer. Many

intrusion detection systems also rely on a training phase with a program to learn its

normal behavior. After a training phase, the system can monitor a program to ensure that









it doesn't deviate from the behavior observed during the training. Our system does not

require any training phase.

Our method is similar to a nonexecutable stack because it makes it extremely

difficult for an attacker to execute malicious code on the stack. However, our method

provides a number of benefits the nonexecutable stack does not. For example, in addition to

stack-smashing attacks, our method can also detect heap smashing attacks. Furthermore,

it is likely to also detect a similar attack that uses the bss or data segment. Our method

would also detect most attempts to rewrite a return address to another location in

preexisting code. A nonexecutable stack would not detect such an attack. Lastly, our

method allows code with a legitimate stack trace to execute code on the stack. In cases

where an uncompromised process needs to execute code on the stack, the nonexecutable

stack would not allow such a process to proceed.

Our method also provides the framework for an even more concise buffer overflow

detection system. Currently, one of the limitations of our method is that we rely on other

functions in the call path for our bounds checking criteria for a given function. A

compiler could easily be modified to inject a dummy function in between every function

in a given program. The code for the ith dummy function would consist of only the code

required to call the (i+1)st dummy function. By calling the sequence of dummy functions

before starting main() we would place the necessary bounds checking criteria on the stack

that we need for any function in our program. The cost of this is in the compilation and

start up times of the program. In addition, computation of the GHCP would only require

time proportional to the number of active functions.
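The dummy-function idea can be illustrated with a small generator that emits the chain as C-like text; the `dummy_i` names and the emitted skeleton are invented for illustration and are not the proposed compiler modification itself.

```python
def dummy_chain_source(k):
    """Emit C-like source for k dummy functions, where dummy_i's only
    job is to call dummy_{i+1}. Calling dummy_0 before main() seeds the
    stack with a known ladder of return addresses, providing bounds
    checking criteria for every function in the program."""
    lines = []
    for i in range(k - 1, -1, -1):        # define callees before callers
        body = f"dummy_{i + 1}();" if i < k - 1 else "/* end of chain */"
        lines.append(f"static void dummy_{i}(void) {{ {body} }}")
    return "\n".join(lines)
```

Because the chain has a fixed length, the return addresses it contributes are known at compile time, which is what makes them usable as bounds criteria.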














CHAPTER 5
PROCESS FORENSICS

Proposed Process Forensics

All work done on a computer system is done in the form of a process. Processes

can be divided into two categories: user-space processes and kernel-space processes. For

the purpose of this discussion we refer only to user-space processes. The reason for this

is that a given kernel-space process is always acting on behalf of a particular user-space

process or processes. Regardless of the unique methods different platforms use to handle

processes, nearly all processes contain a great deal of information. Unfortunately, due to

the nature of computer forensics, by the time a forensic investigation has begun, most of the

relevant processes have already been terminated. Often, the involved computer system

may have been completely shut down. The only data a digital forensic investigator has with which to analyze processes are the log files created while a given process was

executing. Thus, the digital forensic investigator is left with a very limited amount of

computer forensics concerning processes. We believe computer forensics can and should

be expanded to include more process information. We propose checkpointing as one

means to create more computer forensics in the form of process forensics.

Knowing the importance of log files, let us analyze the sources of log files.

Essentially, logging is the same as recording details about the currently running

processes. In other words, logging is no more than creating nonvolatile data by recording

details about volatile data. This perspective immediately leads us to consider what other

volatile data should be made nonvolatile. Knowing that a significant portion of volatile









data comprises user processes, we see where checkpointing can be a means to create

more nonvolatile data out of volatile data.

Possible Evidence in a Checkpoint

Let us continue our discussion with a quick overview of the main sources of

information found in the checkpoint of a process. Recall that a process exists in main

memory where it has been assigned its own address space. Some of the more useful

information found in a process's address space, and therefore included in a checkpoint, might consist of items such as the process identifier (PID), the user who owns the given

process, and pointers to parent, child, and sibling processes. While an attacker may have

altered some of this information, it still provides a starting point for the forensic

investigator. Information such as a PID is essential in distinguishing between multiple

processes. Furthermore, knowing what user owned a process indicates who started the

process or whose account has been compromised. Ownership of a process, whether

legitimate or not, also tells us the permissions level of the process. Clearly, a process run

as root can do far more damage than a typical user process. In addition, knowing the

relationship between different processes can assist in isolating the source of a process or

what other processes resulted from the execution of a process. The parent and sibling

relationship between processes is something not likely found in log files.

One of the more notable portions of a process's address space is the stack. The

stack contains significant information pertaining to the execution sequence of a process.

This sort of information is extremely useful to someone investigating a buffer overflow

or stack-smashing attack [31]. Given access to the stack, a digital forensics investigator

can determine both where and how such an attack was made possible. Without knowing

where and how an attack was made possible, it is very difficult to prevent similar future









attacks without limiting one's usage of their own system. The process address space also

contains the heap, bss and data segments of a process. Analysis of the heap segment may

reveal evidence pertaining to a heap smashing attack much like the stack in a buffer

overflow or stack-smashing attack. The stack, heap, bss, and data segments are all

potential targets of malicious input attacks. In turn, each of these items would contain

essential evidence of such an attack. As an integral part of the process address space,

each of these items is included in a checkpoint image file. An additional example of this

sort of malicious input attack would be a format string attack.

A process's address space also contains information about items we refer to as

process peripherals. Process peripherals include opened files, sockets, and pipes.

Knowing what files a process accessed can be extremely valuable to the forensic

investigator. This can indicate the intruder's objective, help isolate the damage done

during the attack, or indicate attempts by the intruder to cover his or her own tracks. The

digital forensic investigator and system administrator very much need to know if files

such as password or log files have been modified or accessed. Tampering with a

password file indicates the likelihood of future attacks via a compromised account.

When log files have been tampered with, it usually indicates an attacker is attempting to

cover his/her tracks. Furthermore, socket connections provide additional evidence of

communication links involved in a crime. Socket connections may indicate from where

an attacker is launching an attack or where the attack is dumping stolen data. Pipes are

another form of communication in which the digital forensic investigator would take

an interest. Some checkpoints even include data that is still in the pipe buffer. Process

peripherals could also include items such as a process' corresponding tty or terminal.









With this information, the digital forensic investigator might learn the attack was

launched locally.

The possibilities of evidence from process forensics are quite vast. However, to

gain from process forensics we must know when to collect process forensics. The

following section addresses this issue.

Opportunities for Checkpointing

A system administrator must make a number of tough decisions when dealing with

an intruder. At times, a system administrator may become aware of an attack while the

attack is in progress. This may be a result of the system administrator's own monitoring

of the system or an alert issued by an Intrusion Detection System (IDS). The knee-jerk

reaction to such a scenario is to kill all the processes related to the attack. While such an

approach can be very effective in stopping the attack, it does little towards collecting

evidence about the attack. Furthermore, such an approach is likely to tell the intruder that

he or she was detected. Most of the time, one does not want the intruder to know he or she was detected until there is enough evidence to prove that a crime took place and by

whom it was committed. We believe when an intrusion is detected, whether by the

system administrator or IDS, the immediate actions should include collection of

evidence, or more specifically process forensics. We encourage the use of incremental

checkpoints that can be created without alerting the intruder. Once the intruder's session

is ended, whether by the system administrator or by the intruder himself, the resulting

checkpoints can provide crucial information about the attack.

A recent look at the ICAT vulnerability statistics shows a significant number of the

CVE and CVE candidate vulnerabilities were due to buffer overflows. For the years

2001, 2002, and 2003, buffer overflows accounted for 21%, 22%, and 23% of the









vulnerabilities, respectively [6]. While much work has been done to detect buffer

overflow attacks, to the knowledge of this author, little has been done to enhance our

abilities to collect evidence resulting from buffer overflow attacks. We believe process

forensics derived from checkpointing can help fill this void. Recall that a checkpoint

contains the stack, heap, data, and bss segments of a process. In the case of a buffer

overflow attack, creating checkpoints the moment the attack is detected, and even while

the attack is in progress, is likely to collect vital evidence. A forensic investigator can

use this information to more closely determine how and when the intruder entered the

system. A thorough analysis of the stack is likely to show what function contains the

exploited vulnerability. Isolating the vulnerability is essential to preventing a similar

attack in the future. Furthermore, in the case of a stack-smashing attack, any code

injected onto the stack may uniquely correspond to code that is later found on the

attacker's computer. Likewise for a heap smashing attack and a process's corresponding

heap segment. While this alone does not prove anything, it does provide an additional

corroborating stream of evidence. Any additional such evidence is desirable in the case

of a legal setting. Stephenson [26] reminds us that it takes a "heap of evidence, to make

one small proof."

ICAT's CVE and CVE candidate vulnerabilities classified as buffer overflow

attacks are actually a subgroup of a much larger classification. This larger classification,

known as input validation errors, accounted for 49%, 51%, and 52% of the CVE and

CVE candidate vulnerabilities for the years 2001, 2002, and 2003 respectively. The idea

of collecting evidence about a buffer overflow attack from a checkpoint is based on the

concept that a buffer overflow attack stems from malicious input. Such input has no









choice but to become part of a process's address space. We believe this approach to

process forensics and evidence collection can be expanded far beyond buffer overflow

attacks to include other input validation errors. An example of an input validation error is

a boundary condition error. While some boundary condition errors result from a system

running out of memory, others may result from a variable exceeding an assumed

boundary. Inspection of variables in a checkpoint file may reveal such an assumed

boundary and expedite the process of closing a vulnerability once exposed. SQL

injection attacks also take advantage of input validation errors. SQL injection attacks

often allow the attacker to damage and/or compromise a website's database. The very

nature of attacks that exploit input validation errors automatically leaves evidence in a process's address space. The potential for evidence and process forensics from checkpointing intruder-related processes resulting from such vulnerabilities has yet to be explored.

Most intrusion detection systems can be categorized as either misuse detection or

anomaly detection. Misuse detection usually refers to those systems that utilize some

form of signature or pattern matching to determine whether or not a process is part of an

intrusion. Anomaly detection usually refers to those systems that attempt to define

normal behavior so that processes can be categorized as normal or intrusive. Due to the

inherent challenge in defining what is normal behavior, these systems often rely on some

form of threshold to distinguish between normal and anomalous behavior. Markov Chain

Model [32], Chi-square Statistical Profiling [33], and Text Categorization [34] are

examples of such approaches to anomaly detection. We propose that such anomaly

detection systems use checkpointing as an evidence collection technique for processes









that are approaching or have passed the given threshold. Incremental checkpoints can be

used to continually collect evidence of a process's behavior for any process that is

considered anomalous or nearing anomalous. This would result in process forensics for

those malicious processes that never quite reach the threshold and would usually go

undetected. In addition, this would create process forensics for processes that do cross

the threshold. Such forensics could expedite finding out why a process deviated from its

normal behavior.
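The near-threshold checkpointing policy described above can be sketched minimally as follows; the function name and the 0.8 margin are invented, illustrative choices rather than recommendations.

```python
def plan_checkpoints(scores, threshold, margin=0.8):
    """Given a sequence of anomaly scores observed for a process, return
    the indices at which an incremental checkpoint would be taken under
    a near-threshold policy: checkpoint whenever the score reaches
    `margin` of the detection threshold, so evidence exists even for
    processes that never quite cross the threshold."""
    return [i for i, s in enumerate(scores) if s >= margin * threshold]
```

In a live system the same predicate would trigger the checkpointing engine on each new observation rather than operate on a recorded sequence.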

A common dilemma facing the computer crime investigator when entering a crime

scene is whether or not to unplug the computer [28]. Any work by the criminal that

resides in main memory is lost if the computer is unplugged. However, forensic analysis

of a hard disk must always be performed on a copy rather than the original. In order to

create a copy of the confiscated hard disk, the computer must eventually be powered off.

Depending on the platform, Stephenson usually recommends directly unplugging the

power source [26]. This avoids any booby-traps that may be triggered if the machine is

not shutdown in a particular manner. Regardless of the manner by which a machine is

shutdown, all of the volatile data, such as running processes, is lost. This illustrates

another example of where additional evidence may be gained by using checkpointing.

Prior to shutting down or unplugging a computer, relevant processes could be

checkpointed. The resulting checkpoint files allow the forensic investigator to analyze

the running processes at a later time.

Additional Enhancements

If a computer crime ever reaches the courtroom, any evidence presented before the

court must have been preserved through a chain-of-custody [26]. In other words, one

must be able to verify with whom and where the evidence has been held since the









moment it was collected. In the case of a checkpoint, the checkpoint resides in a file and

can therefore be digitally fingerprinted immediately following its creation. In a

courtroom, this digital fingerprint can be verified to show that the checkpoint file remains

unaltered. A time and date stamp can also be included and verified with a digitally

fingerprinted checkpoint file.
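A minimal sketch of such fingerprinting follows, assuming SHA-256 as the digest (an illustrative choice) and a UTC timestamp recorded alongside it; a real deployment would additionally sign the record and store it off-host.

```python
import hashlib
import time

def fingerprint_checkpoint(path):
    """Hash a checkpoint file immediately after its creation and pair
    the digest with a collection timestamp, so the file can later be
    shown in court to be unaltered since collection."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)                       # stream the file in chunks
    stamp = time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime())
    return {"sha256": h.hexdigest(), "collected": stamp}
```

Verifying the chain of custody then amounts to recomputing the digest and comparing it against the recorded value.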

In addition, a checkpoint stored as a file can easily be transferred to a secure

location much like some logging systems. It is often recommended that logs be stored on

a separate secure system from the system that generates the logs. These log files are also

commonly stored in an encrypted format. These measures deter an intruder from altering

log files to cover-up his or her unauthorized access to a system. Checkpoint files can be

treated in the same manner. They can be stored on secure systems separate from where

they were created. This prevents an intruder from modifying or destroying any evidence

that is collected in the form of a checkpoint file.

Sommer has provided a good analysis of why intrusion detection systems fall short

of providing quality evidence in [29]. We propose that a checkpointing system should be

developed separately from an ID system. During an attack, the ID system can trigger the

checkpointing system to handle any intrusion related processes. This allows the ID

research to focus on detection rather than evidence collection. A checkpointing system,

due to its inherent goal of recreating a process, is already aimed at collecting information

about a process. We believe this goal can be more easily combined with the goal of

evidence collection. Furthermore, by allowing checkpointing systems to provide the

evidence collection, we alleviate the need for drastic modifications to existing ID

systems.









We also believe the format of a checkpoint file could be shared amongst multiple platforms, and that it should be standardized in a manner similar to the ELF format used on Linux platforms. The standardization of checkpoint

file formats would facilitate a common ground from which law enforcement, academia,

and other researchers can work. This would facilitate the development of tools for

working with and analyzing checkpoint files. In addition, standardizing any aspect of the

forensic investigation aids in training future forensic investigators. Furthermore,

standardization would assist in the acceptance of checkpoint evidence in legal

proceedings. Likewise, standardization could further facilitate process migration

amongst different platforms.

Carrier [27] has proposed a balanced solution to the open/closed source debate

with regard to digital forensic tools. Carrier urges that digital forensic tools be categorized

into tools for extraction and presentation. Carrier proposes extraction tools should

remain open source, while presentation tools remain closed source. Such a balanced

solution could easily be applied to checkpointing tools. The checkpoint/restart engine of

a checkpointing system could remain open source. This allows researchers and the

digital forensics community to validate the inner-workings of such checkpointing tools.

Meanwhile, the presentation tools used for presenting the data from a checkpoint file

could remain closed source. It is likely that many individuals involved in the legal

proceedings following a computer crime do not possess the necessary technical skills for

understanding the data found in a checkpoint file. This provides ample opportunity for

software developers to create presentation tools for checkpoint files. The goal of making








complex checkpoint file data easily understandable would create ample competition for

the private sector.














CHAPTER 6
CONCLUSIONS

In this paper we have shown significant progress towards developing the ideal

checkpointing system. We define the ideal checkpointing system as one that satisfies the

Three AP's of Checkpointing which are: Any Process on Any Platform at Any Point in

time. The system we develop and discuss in this paper supports two of the three AP's:

Any Process at Any Point in time. To achieve these two AP's, we developed UCLiK as a

kernel module. As a kernel module we enjoy levels of transparency not experienced by

most checkpointing systems. This transparency includes relieving the application programmer of any additional responsibility: no system call wrappers to log information about a process, no special compilers, and no checkpointing libraries with which one must compile or link programs. Additional transparency is achieved by

developing the system as a kernel module since the running kernel's code does not have

to be modified.

The benefits of such a checkpointing system are very broad. A checkpointing

system such as UCLiK can be used for process migration, fault tolerance and rollback

recovery. Furthermore, UCLiK can provide system administrators with an alternative to

the kill system call. System administrators who substitute checkpointing for kill now have the option to undo the termination of a given process. This functionality allows system administrators to preemptively protect their systems

against runaway and other suspicious processes without losing valuable work.









Our study has also served to deepen our belief that development of a single

checkpointing system that can satisfy the third AP to checkpointing, namely Any

Platform, is unrealistic. However, to best facilitate the long term goal of checkpointing

processes on Any Platform we suggest development of checkpointing systems be divided

into two separate components. The two components are the checkpoint engine and the

checkpoint file. We refer to the checkpoint engine as the portion of the system concerned

with collecting process data, kernel state, and any other interactions with the operating

system. This component is also concerned with stopping and restarting a process. The

checkpoint file is the actual file where a checkpoint image is saved and can be stored

indefinitely. We believe differences in hardware architecture and kernel data structures

make it nearly impossible for a single checkpoint engine to work on all existing

platforms. Thus, the development of checkpoint engines for different platforms is

probably best kept separate. However, we suggest that the development of checkpoint

engines for different platforms should be coordinated. This coordination should be

centered around designing the engines to work with a standardized checkpoint file

format. We believe checkpoint file formats should be standardized in a way similar to

that of ELF files. Standardizing the checkpoint file format would have numerous

benefits. One major benefit would be facilitating cross-platform migration. Checkpoint

engines of different platforms could all function on the same checkpoint file.

Furthermore, tools for reading and modifying the contents of a checkpoint file could be

developed and would immediately be platform independent. Additional benefits to a

standardized checkpoint file format with regard to computer forensics are prevalent and

discussed shortly.











In this paper we have also introduced a novel method for detecting stack-smashing

and buffer overflow attacks. We have shown how the return addresses extracted from the

program call stack can be used along with their corresponding invoked addresses to

create a weighted directed graph. We have shown that such a graph always contains a

Greedy Hamiltonian Call Path (GHCP) for all unobjectionable programs. Thus, the lack

of a GHCP can be used to indicate the existence of a stack-smashing or buffer overflow

attack. The benefits of such a method of detection are independence from specially enhanced compilers and libraries; access to a program's source code is unnecessary; executables do not have to be rewritten; there is no logging involved; and no modifications to the operating system are required. These benefits make our approach unique when

compared with most other approaches to detecting stack-smashing and buffer overflow

attacks.

In addition, our work has laid the framework for an even more complete detection system for stack-smashing and buffer overflow attacks. Combining our methods with an enhanced compiler could remove the limitations our system experiences with function pointers and programs with few active functions. An existing compiler could easily be modified to emit a series of dummy functions, called at the beginning of a program's execution, whose sole purpose is to place bounds-checking criteria on the stack. Furthermore, enhancing a compiler to push invoked addresses on the stack would allow our method to handle function pointers.
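To illustrate why recording invoked addresses helps with function pointers, consider the following toy model (entirely hypothetical, not an implemented scheme): if the compiler pushed each call's target address alongside its return address, a detector could verify the caller/callee relationship directly, even for calls whose targets cannot be predicted statically.

```python
SPAN = 0x1000  # assumed maximum function size (illustrative only)

def check_shadow_records(records):
    """records: outermost-first list of (invoked_address, return_address)
    pairs, as a hypothetical enhanced compiler would push them at each
    call site -- including calls made through function pointers."""
    for (caller_target, _), (_, callee_ret) in zip(records, records[1:]):
        # Each callee must return into the function its caller was invoked at.
        if not (caller_target <= callee_ret < caller_target + SPAN):
            return False
    return True
```

Because the target is recorded explicitly at call time, an indirect call is checked exactly like a direct one.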

We have begun implementing a prototype for our method, and early results suggest that its overhead will be low. Future work includes continued development of the prototype, with more exhaustive testing of overhead and of compatibility with items such as setjmp/longjmp calls. We expect such items to be compatible with our method, but this remains unconfirmed and beyond the scope of this paper.

Researching both checkpointing and intrusion detection yields a unique perspective: computer forensics is lacking in the subfield we have termed process forensics. Process forensics involves extracting information from the address space of a given process for the purpose of evidence collection. Since computer forensics is restricted to nonvolatile data, improving computer forensics requires finding new sources of nonvolatile data. Because checkpointing creates nonvolatile data from processes, we believe that coupling checkpointing technology with intrusion detection systems can create such a new source. Moreover, by increasing the amount of nonvolatile data, we increase the amount of forensic evidence available to the digital forensic investigator. Since this evidence comes from processes, we find it appropriate to refer to it as process forensics. In this paper we have explored different sources and benefits of process forensics; a primary example is the evidence collected by an intrusion detection system enhanced with checkpointing technology.

Palmer [35] reminds us that the future is likely to bring even tougher standards for

digital evidence. We believe standardizing items such as the checkpoint file format used

for process forensics can help meet these standards. Standardizing methods of evidence

collection can help digital evidence withstand some of the scrutiny it faces in a courtroom

setting. In addition, standardizing the checkpoint file format helps facilitate the training

process of future digital forensic investigators. Lastly, it encourages the development of

tools used to analyze checkpoint files for the purpose of process forensics.

In [26], Stephenson addresses the importance of reconstructing the crime scene.

We suggest that anything less than the ability to recreate an entire process state may lead

to holes in the evidence required to identify an attacker or prevent similar future attacks.

Checkpointing provides us with the necessary level of detail to recreate an entire process.

Although many attacks to date have not necessitated checkpointing, we do not want to

use the needs of the past to limit our preparedness for the future.

In closing, researchers and the digital forensics community must continue to find

new sources of evidence following computer crimes. We believe that in many cases

checkpointing technology can achieve such a goal.



LIST OF REFERENCES


1. National Telecommunications and Information Administration, Economics and
Statistics Administration, "A Nation Online: How Americans Are Expanding Their
Use of the Internet," Author, Washington, D.C., February, 2002, Retrieved May 10,
2004, From http://www.ntia.doc.gov/ntiahome/dn/.

2. Carnegie Mellon University, "CERT Coordination Center Statistics," Author, 2004,
Retrieved May 10, 2004, From http://www.cert.org/stats/certstats.html.

3. A.B. Brown, D.A. Patterson, "Rewind, Repair, Replay: Three R's to
Dependability," 10th ACM SIGOPS European Workshop, Saint-Emilion, France,
September, 2002.

4. A.B. Brown, D.A. Patterson, "To Err is Human," Proceedings of the First
Workshop on Evaluating and Architecting System Dependability, EASY '01,
Goteborg, Sweden, July, 2001.

5. D. Wagner, J. Foster, E. Brewer, A. Aiken, "A First Step Towards Automated
Detection of Buffer Overrun Vulnerabilities," Proceedings of the 7th Network and
Distributed System Security Symposium, February, 2000.

6. National Institute of Standards and Technology (NIST), "ICAT Vulnerability
Statistics," Author, September, 2003, Retrieved April 2, 2004, From
http://icat.nist.gov/icat.cfm?function=statistics.

7. SecurityGlobal.net, "SecurityTracker.com Statistics," Author, 2002, Retrieved
April 2, 2004, From http://www.securitytracker.com/learn/securitytracker-stats-
2002.pdf.

8. J. Plank, M. Beck, G. Kingsley, "Libckpt: Transparent Checkpointing under Unix,"
Conference Proceedings, USENIX Winter 1995 Technical Conference, New
Orleans, LA, January, 1995.

9. J. Plank, Y. Chen, K. Li, M. Beck, G. Kingsley, "Memory Exclusion: Optimizing
the Performance of Checkpointing Systems," Software -- Practice and Experience,
Vol. 29, Number 2, pp. 125-142, 1999.

10. M. Elnozahy, L. Alvisi, Y.M. Wang, D. Johnson, "A Survey of Rollback-Recovery
Protocols in Message-Passing Systems," ACM Computing Surveys, Vol. 34, Issue
3, pp. 375-408, September, 2002.

11. L.M. Silva, J.G. Silva, "System-Level versus User-Defined Checkpointing,"
Proceedings of the Seventeenth IEEE Symposium on Reliable Distributed Systems,
pp. 68-74, October, 1998.

12. M. Litzkow, T. Tannenbaum, J. Basney, M. Livny, "Checkpoint and Migration of
UNIX Processes in the Condor Distributed Processing System," Paper Retrieved
June 1, 2003, From http://www.cs.wisc.edu/condor/doc/ckpt97.ps.

13. G. Stellner, "CoCheck: Checkpointing and Process Migration for MPI,"
Proceedings of the 10th International Parallel Processing Symposium (IPPS '96),
Honolulu, HI, April 15-19, 1996.

14. S.E. Choi, S.J. Deitz, "Compiler Support for Automatic Checkpointing,"
Proceedings of the 16th Annual International Symposium on High Performance
Computing Systems and Applications (HPCS'02), pp. 213-220, 2002.

15. M. Beck, G. Kingsley, J. Plank, "Compiler-Assisted Memory Exclusion for Fast
Checkpointing," IEEE Technical Committee on Operating Systems and
Application Environments, Winter, 1995.

16. B. Ford, M. Hibler, J. Lepreau, P. Tullmann, "User-Level Checkpointing through
Exportable Kernel State," Proceedings of the 5th International Workshop of Object
Orientation in Operating Systems (IWOODS '96), Seattle, WA, October, 1996.

17. A. Barak, O. La'adan, "The MOSIX Multicomputer Operating System for High
Performance Cluster Computing," Journal of Future Generation Computer
Systems, Vol. 13, Number 4-5, pp. 361-372, March, 1998.

18. Y. Zhang, J. Hu, "Checkpointing and Process Migration in Network Computing
Environment," Proceedings of the 2001 International Conferences on Info-tech and
Info-net, Vol. 3, pp. 179-184, Beijing, October-November, 2001.

19. E. Pinheiro, "Truly-Transparent Checkpointing of Parallel Applications," Author,
2003, Paper Retrieved June 1, 2003, From
http://www.research.rutgers.edu/-edpin/epckpt/paperhtml.

20. H. Zhong, J. Nieh, "CRAK: Linux Checkpoint/Restart As a Kernel Module,"
Technical Report: CUCS-014-01, November, 2001, Paper Retrieved June 1, 2003,
From http://www.ncl.cs.columbia.edu/research/migrate/crak.html.

21. C. Cowan, C. Pu, D. Maier, H. Hinton, J. Walpole, P. Bakke, S. Beattie, A. Grier,
P. Wagle, Q. Zhang, "StackGuard: Automatic Adaptive Detection and Prevention
of Buffer-Overflow Attacks," Proceedings of the 7th USENIX Security
Conference, San Antonio, TX, 1998.

22. A. Baratloo, N. Singh, "Transparent Run-Time Defense Against Stack-smashing
Attacks," Proceedings of the 2000 USENIX Annual Technical Conference, San Diego,
CA, June, 2000.

23. H. H. Feng, O. M. Kolesnikov, P. Fogla, W. Lee, W. Gong, "Anomaly Detection
Using Call Stack Information," IEEE Symposium on Security and Privacy,
Berkeley, CA, May, 2003.

24. M. Prasad, T. Chiueh, "A Binary Rewriting Defense Against Stack Based Buffer
Overflow Attacks," In Proceedings of the 2003 USENIX Technical Conference,
San Antonio, TX, June, 2003.

25. D. Wagner, D. Dean, "Intrusion Detection via Static Analysis," IEEE Symposium
on Security and Privacy, Oakland, CA, 2001.

26. P. Stephenson, "Investigating Computer-Related Crime," CRC Press, 1999.

27. B. Carrier, "Open Source Digital Forensics Tools: The Legal Argument," @Stake
Research Report, October, 2002, Retrieved May 20, 2004, From
http://www.atstake.com/research/reports/acrobat/atstake_opensource_forensics.pdf.

28. A. Yasinsac, Y. Manzano, "Policies to Enhance Computer and Network Forensics,"
Proceedings of the 2001 IEEE Workshop on Information Assurance and Security,
West Point, NY, June, 2001.

29. P. Sommer, "Intrusion Detection Systems as Evidence," First International
Workshop on the Recent Advances in Intrusion Detection, Belgium, September,
1998.

30. M. Foster, J.N. Wilson, "Pursuing the Three AP's to Checkpointing with UCLiK,"
Proceedings for the 10th International Linux System Technology Conference,
October, 2003.

31. M. Foster, J.N. Wilson, S. Chen, "Using Greedy Hamiltonian Call Paths to Detect
Stack-smashing Attacks," Proceedings of the 7th Information Security Conference,
Palo Alto, CA, September, 2004.

32. N. Ye, "A Markov Chain Model of Temporal Behavior for Anomaly Detection,"
Proceedings of the 2000 IEEE Workshop on Information Assurance and Security,
West Point, NY, June, 2000.

33. N. Ye, Q. Chen, S. M. Emran, K. Noh, "Chi-square Statistical Profiling for
Anomaly Detection," Proceedings of the 2000 IEEE Workshop on Information
Assurance and Security, West Point, NY, June, 2000.

34. Y. Liao, V. R. Vemuri, "Using Text Categorization Techniques for Intrusion
Detection," 11th USENIX Security Symposium, August, 2002.

35. G.L. Palmer, "Forensic Analysis in the Digital World," International Journal of
Digital Evidence, Volume 1, Issue 1, Spring, 2002.


BIOGRAPHICAL SKETCH

Mark Foster received the BS degree in Computer Science and Mathematics from

Vanderbilt University in the Spring of 1999. Since the Fall of 1999, he has been a

graduate assistant in the Department of Computer and Information Science and

Engineering at the University of Florida. He completed the MS degree in the Fall of 2001.

His academic interests include checkpointing, intrusion detection, and computer

forensics. He also enjoys teaching, exercising, and movies.