Rural Road Feature Extraction from Aerial Images Using Anisotropic Diffusion and Dynamic Snakes

7dca469892fe383399549634f9ef9d5c
f042cdfe3d429ec1a5ed0d0dd223f3207aa43eec
1067 F20110114_AAAMOC sivaraman_v_Page_115.txt
415f4e8a35489f6c6a8dc3e0e5083fb4
e6d7ef69b4cc477dc675507badf1f6ce5b17c67b
1651 F20110114_AAAMNN sivaraman_v_Page_076.txt
d9a581169599a104a1d99cf14e9f2be1
c882cce718bb99cee0d6762ad67e7eb7af86559a
2062 F20110114_AAAMMZ sivaraman_v_Page_054.txt
a20167ab6f31ac7fad8032bb5c0a0bbb
5cbbd0fa8977bb2c29b971e117921e90f4e35aec
974514 F20110114_AAALJX sivaraman_v_Page_032.jp2
c9910fd49b31c79faea6831f05a32c59
16fe14fb155f0d01afe5f15508af89f08da960cf
F20110114_AAALLA sivaraman_v_Page_025.tif
272591bb1520a87d4def09089ed072cc
ff25bf66a15be1c9f69b26566a30e11ca3cd1c91
20562 F20110114_AAALKL sivaraman_v_Page_106.QC.jpg
d50fd4e78a297a3d8b4ba214df3b62a4
9bc2192554e0c5741cc5f8cde391b3f0e125c586
1063 F20110114_AAAMOD sivaraman_v_Page_116.txt
ca14ca4ff5832721b5f4efb9dfae676c
35cba018c8386e159f430ff5ba3d98f1d2c2e8fb
1751 F20110114_AAAMNO sivaraman_v_Page_082.txt
ab2a3f2d03458c6fe446d23c8a12a544
267830eefcecf33e2c81d0591fdb20281e84ff9a
4297 F20110114_AAALJY sivaraman_v_Page_004thm.jpg
66418c9f6161fad697104b987e3a0a5c
8005a0b6b775aca4323e78a26158a41805ad2d5c
68816 F20110114_AAALLB sivaraman_v_Page_017.jpg
fb15f71a81ccbc9db46534102b6c13a0
4320d04139de8324e4b5e74a5e2ead0af9a2b13c
54008 F20110114_AAALKM sivaraman_v_Page_123.jpg
29e7380218a967e90bc4b73027f0b3d2
8a5f36fa61faeeb2f799147aa8e349dc75c489c8
996 F20110114_AAAMOE sivaraman_v_Page_120.txt
00b2e6988a6a004e518aeb39802b788a
babea19d86d50ce2f73c62282aa8fcedcb44d3e3
1722 F20110114_AAAMNP sivaraman_v_Page_086.txt
ebf723114f3ca5498688ecd7d6ce1e54
06afc62e849f91b2d9f835753c7dc43b3711af19
107082 F20110114_AAALJZ sivaraman_v_Page_013.jp2
cbdee4c0e60035e8527b1c5817889faf
a4ec2bfe201bc9f28461626e5a68aa032835850d
97176 F20110114_AAALLC sivaraman_v_Page_011.jp2
cf09be950fb7ced5cdd1cbb5ac566ce5
30083bfbe4d1becb4828b21ef58af1e2ed4feff8
23097 F20110114_AAALKN sivaraman_v_Page_019.QC.jpg
bdf98b2c637989b62ae87fef9a5914b4
3732f6429ae4e9026f1351debb6c687f8488e651
1813 F20110114_AAAMOF sivaraman_v_Page_121.txt
5941364dc9d6aa26dce249561488b35e
1f1ebe565583936f35cff86e6b6c4c76028d3f38
1129 F20110114_AAAMNQ sivaraman_v_Page_087.txt
be4b29eca165c1efc3bc728d163ae17f
8836dc5f8bfadcd7c0dd5b650a0aa548011a79e6
F20110114_AAALLD sivaraman_v_Page_041.tif
96b666f96c43dbe3be3c8906f7e41dbb
b511145399d0d9c0f2e54ed05da2378c9ebca568
20471 F20110114_AAALKO sivaraman_v_Page_096.QC.jpg
2c9d977c65207e3ee253fce642bf8cbc
098d74d4e1b184115e8c3b07972268f3db6689ec
1438 F20110114_AAAMOG sivaraman_v_Page_122.txt
f6a254a5c7b86a26833c24c3bc18eb98
f14af1dfa1c5aea468e0f025588887389b0b6edc
1800 F20110114_AAAMNR sivaraman_v_Page_090.txt
d08b88f225104d6e1a3e2043870fb100
4b0d9680b18d5d37a339c6d47113804fee7c1ad1
22328 F20110114_AAALLE sivaraman_v_Page_005.QC.jpg
4390b159c83e72887466566af8d0c7df
a402566e1d6be429da80a2878515b54a80a87efe
47368 F20110114_AAALKP sivaraman_v_Page_075.jpg
f7a1b85eb1965902771a27a425de5d39
0ccdba55e0d1c22d5609d7bf72b0fa27fc407105
570 F20110114_AAAMOH sivaraman_v_Page_123.txt
5cac71f90be0e24778a4d5c8a4b780a7
e4fc9f3803470afe1f37f86757a2c6f5a825a77e
1595 F20110114_AAAMNS sivaraman_v_Page_091.txt
be3dbff9ddb29d4e23e37847bed1ac52
ad101745d9657f49f14087ab1e467884df6862eb
47907 F20110114_AAALLF sivaraman_v_Page_033.pro
a2fe9b84d9d32dd4128971874851911f
8d406d28a3cb18a058fa6da19ffd63509d0d3c78
54652 F20110114_AAALKQ sivaraman_v_Page_038.jpg
9ff1d3bdd823af96f5ee8016eb0437fc
e6fc65828a6ef731b4965c989fe95f9ba086eafb
1567 F20110114_AAAMOI sivaraman_v_Page_125.txt
ad31e430eeb6cf0d0d15a198d88a2914
3a3ba4c211fb94e35a9ab96958c1826a0917aafe
1930 F20110114_AAAMNT sivaraman_v_Page_095.txt
07e2f505079703cf670ba702ac51256c
52c33c1e7a27daf3285e11798e462a0208dad208
6247 F20110114_AAALLG sivaraman_v_Page_132thm.jpg
2b0e858e76586240237062d9f856fcb8
b8386e71cc631fd387095c8ed38cdc315987b2ec
759302 F20110114_AAALKR sivaraman_v_Page_039.jp2
c4b544e40bebf3c574d8f3a1b306ae20
ba9f3b0af760e8843e6f75020588a97a9af1b83c
2417 F20110114_AAAMOJ sivaraman_v_Page_127.txt
9e260560139b20d913f09f19526a3764
bc59b0cc10ebe5fc93f35e7bd4dd3c763b9a910d
1789 F20110114_AAAMNU sivaraman_v_Page_096.txt
c763531d0e5fc3a05541c738c64d2756
e4b6ced653039878a8f191b2ee13f9f16cabcf0f
19963 F20110114_AAALLH sivaraman_v_Page_065.QC.jpg
2c0389fe3a7a8b5006954a5f8a9e2d59
9c2464409fe83ba17f1105db9941f1c64f4921f9
F20110114_AAALKS sivaraman_v_Page_008.jp2
2deccb249182ccb8879d2e956d0fccc1
f04d4ab529e0821e410c3c814f57a914907a28ee
F20110114_AAAMOK sivaraman_v_Page_129.txt
c8d84de9b2ff36fd57ac7aeb7fc7c78b
e4948949b8f955b8f6c9a1b478ac5411f9d58074
1601 F20110114_AAAMNV sivaraman_v_Page_098.txt
3f204d2e9febb2bf5b135ebf16d5f84f
6739e042d8b7b08b7b8ddef7ec3639eee412ccfb
1412 F20110114_AAALLI sivaraman_v_Page_002.pro
7ac4007c0e59150fbb0a5d7c8e506bef
be13813265d9eba5af3c9109995daf6dc752be20
F20110114_AAALKT sivaraman_v_Page_001.tif
5e9b3cbb990786eac75e7bf82fe2cbd0
2e25c22a26725328e7680c378e9c786c2c4f8cde
2040 F20110114_AAAMNW sivaraman_v_Page_101.txt
74612e5e5adfaa6625816b42abd75864
620d4d6404388be2596cf1232800d5c0c8045f40
23514 F20110114_AAALKU sivaraman_v_Page_036.QC.jpg
7bb3b466a80e41a7db563f723e2d91f2
78cf9f800ef8a9ba09a5484079584d45d9cbe97d
20577 F20110114_AAAMPA sivaraman_v_Page_071.QC.jpg
83898341832c88585c343b8a5c730c6e
a3e02065c5d43b33bfbc7c030f419f10f5fb94e5
218 F20110114_AAAMOL sivaraman_v_Page_131.txt
bc3b88a7236d021db46d3ff417368e1b
3675e85fb0b4b77d145518a7b788ceee46bcc5b8
1039 F20110114_AAAMNX sivaraman_v_Page_102.txt
1bfacdee004da991e73d9682ac7764da
6db9a33af2c5524e216e64e3c7ee692d4ea059ef
F20110114_AAALLJ sivaraman_v_Page_021.tif
618719837869842ae77a90592912b2c8
6a9e7adbe83491c6281fae35ff5477bebdeb0c29
49679 F20110114_AAALKV sivaraman_v_Page_021.pro
90d8922ed95a6ae75f12051d5e195070
67c0bfd16ace048ad9acf821efe5566c2853c7a9
4982 F20110114_AAAMPB sivaraman_v_Page_131.QC.jpg
b9e0e4ead14fe22710e8b37a647aae5d
a5bc643107404fb446950f358a897f3beb991b94
2411 F20110114_AAAMOM sivaraman_v_Page_132.txt
14fabd4c6c43aa373982d326f578fd65
f6c0f3031f1f0b3a6bd9177f28919a93e6987747
815 F20110114_AAAMNY sivaraman_v_Page_103.txt
e357b6a61911cc7d0fe994c0a6787e46
c69ac1b4395806863d0936c83a7227b39a3aaea3
1051952 F20110114_AAALLK sivaraman_v_Page_081.jp2
720547a8c35586a5ba80696f2b7127ef
79c93432f58e0d3330012214ce0c575d67708add
61554 F20110114_AAALKW sivaraman_v_Page_111.jp2
32bd07f03a754949e25e010592451922
33b236ca1f3c4d6dfe2a719055b0c019f388af1c
6535 F20110114_AAAMPC sivaraman_v_Page_069thm.jpg
fbe9bd8c7232eda12ac06a2383b340d5
d6760f2658458ce4eb0710da04c7fd227c2a7a90
1868 F20110114_AAAMON sivaraman_v_Page_134.txt
e44cc5a023c89fba5dd6008a0e56e2f5
1bd73ec4dee1adcdf63d091c5c6b320e576e97c6
1210 F20110114_AAAMNZ sivaraman_v_Page_104.txt
b14b40dde5ff0a047b776d09ae068af1
11d02d2fa79aa9bb3b5ae62cc240d0a6016293db
F20110114_AAALMA sivaraman_v_Page_063.tif
76638b4865d00cbf25a1b06963ba7c56
c631ccb6bad65ba99b641ded71ce9796526fc309
6553 F20110114_AAALLL sivaraman_v_Page_055thm.jpg
858825250f30d2affb5253ee188af7f8
f3d8cc9050ca3eee568025bc9cc97bcd3d2d3aef
7629 F20110114_AAALKX sivaraman_v_Page_140thm.jpg
64165067de06264bba8b7f24c5fbafec
e0c34b2f79edbd53cb4ad5187f47a55b33ac314d
13870 F20110114_AAAMPD sivaraman_v_Page_084.QC.jpg
dc6bdff712de42b27156a0d17f8c5128
e8feaa9863dbd5a051b96c53be75048ce2dc40dc
1513 F20110114_AAAMOO sivaraman_v_Page_135.txt
e380785547481401e886b09b2e0139a3
616acfffefae06dd432944141ddba31afc0a8ec9
2948 F20110114_AAALMB sivaraman_v_Page_139.txt
77afc58e2ced5f765697d6d4cb547ec1
3e877648e849aa871e354453e68f84b63973b2b9
5459 F20110114_AAALLM sivaraman_v_Page_141thm.jpg
7db2f86d8505f6ee6b2427b62b12a519
9ef8d2653304480374515044a1750e38c7bbbf3d
105742 F20110114_AAALKY sivaraman_v_Page_016.jp2
a5b9a696c4d2120e7502e6a24e8f61d6
5ddc990208705ac97a67827d55cf32ccba16cc02
4550 F20110114_AAAMPE sivaraman_v_Page_084thm.jpg
8b6dc657e1bf958ef4ba6bc128c66893
9971e6647b2c16e3767578e3afeab1698ef5fa69
820 F20110114_AAAMOP sivaraman_v_Page_136.txt
9326ae2829246996304ed7865d1452b8
437c6bf7501eabe4a0039bef0ededc1fdd3d2576
26295 F20110114_AAALMC sivaraman_v_Page_113.pro
d5e7f370f6b94f0ab3cb2f982dc26bff
8a6483f2faecd3cd50148c46d5821707cc292cd7
792902 F20110114_AAALLN sivaraman_v_Page_097.jp2
26fdf71c2d1a1982962336df8f823c73
39589786014fa453cd0e44cc05f9572653ccf19b
22597 F20110114_AAALKZ sivaraman_v_Page_040.QC.jpg
c6cb7c19246c47f656d8077ea36e29c0
ed059ab84d695cddcabbd4073ff953ac8d942ccf
23367 F20110114_AAAMPF sivaraman_v_Page_138.QC.jpg
1a904ed29fb6ba6b1f59223861e20260
e7d35544ad1010635e48c19274e4e5be9b784c5e
2896 F20110114_AAAMOQ sivaraman_v_Page_137.txt
8485ee632e99b51f318e55e36062ee4a
42cd63d270527a272f8bda3e7d62620d18a888f4
68482 F20110114_AAALMD sivaraman_v_Page_049.jpg
2508b4da8d40f29b271315260aa8b860
3db5ff7962f5677b7838a41fc87418b99de2a90d
568491 F20110114_AAALLO sivaraman_v_Page_043.jp2
f5cfe9f6428f9d55d9536cc57ed6ec6f
448dfc7410289beb2428daf3988a0c1815e1e3e8
17286 F20110114_AAAMPG sivaraman_v_Page_085.QC.jpg
c18e6a693bc708b96f8df85cf9186407
6bd8d709222dd3d05de88f3e095006232b4f7c3c
2664 F20110114_AAAMOR sivaraman_v_Page_138.txt
fb1693885b3abc11fed366ffc98b6d71
b31492663d6d450f6a4b6aad726d331b0b4ce94c
39132 F20110114_AAALME sivaraman_v_Page_014.pro
c2e8d710281e8b57b6c667dd77ded4c2
46c1096fa595d4ec758725a569f4178bb6cb360d
1627 F20110114_AAALLP sivaraman_v_Page_048.txt
062d55d827ea16b8d83f41d88785fba3
4397e64e0cffbc98a475e68cac14a567faee5cb2
22262 F20110114_AAAMPH sivaraman_v_Page_017.QC.jpg
01b4034c236fc9aa54efe8df15ebeb20
52e0c8c2c138e3658343ade5391cd8585a2eb672
3579 F20110114_AAAMOS sivaraman_v_Page_140.txt
482e2ab6ad3e6155aec25ca0d1c933ef
a00927871d9d90ea7ecba45f7cb4c43569e5bc5f
65225 F20110114_AAALMF sivaraman_v_Page_009.pro
00285b36b6fd3fd9b51735181409169c
5985ef820537533461ab4e02014b5f96d158dd92
1386 F20110114_AAALLQ sivaraman_v_Page_089.txt
74a78243d3bbd6aa60867729e5295ed3
df5825964fd9ac577614fe7d95486e4384d6779e
24165 F20110114_AAAMPI sivaraman_v_Page_063.QC.jpg
8d9360e989da90803510600a618204d8
cae2c08c0444b3bc277503133f8a1758949de266
1852904 F20110114_AAAMOT sivaraman_v.pdf
51c91052e25b4b1f5340fe5b3b802a05
7a1721fa28779bfa70ff339691d61c1c50ae0bca
61184 F20110114_AAALLR sivaraman_v_Page_115.jpg
e74297ae9870f0f08718dfeb7b043189
7830745088f853ca37ee94382f4d5ebc92c97b86
19009 F20110114_AAALMG sivaraman_v_Page_082.QC.jpg
c393fbf3191afe0c3adebcaee8c5d167
494a8e52e4a51e911afeb565e1420f8234bfd70f
6412 F20110114_AAAMPJ sivaraman_v_Page_118thm.jpg
efb5a0e869b4ce36dfb9effaf37209bc
4540cff421c59a28a1674b15416b4a83f89c136a
6415 F20110114_AAAMOU sivaraman_v_Page_046thm.jpg
fb4e6cd5f4f92b58f00193f01f2b33f8
3ac0e73eee2eac2fed0e3c38ccba45d9e2f962b3
21211 F20110114_AAALLS sivaraman_v_Page_043.pro
55ff22aeec8e26a2eb33dded9d3e7b46
28add0ff25a936351045dd53394e5911de7ff296
1611 F20110114_AAALMH sivaraman_v_Page_099.txt
09757a16421d6024e1addd79a933b651
65d5623c919c9941bb83b2b6c12e3c5b26ab899d
19205 F20110114_AAAMPK sivaraman_v_Page_102.QC.jpg
7640ba403b52e39b7a4f08fcd4331bf2
997cb038629c5fde0933ca40c01eea0276c03552
5677 F20110114_AAAMOV sivaraman_v_Page_107thm.jpg
1cabdc77a7317ffd9a8c400a1eaa4f0b
9bf7fa91a1bcbdee131437fda3e2e45cd825eb23
91013 F20110114_AAALLT sivaraman_v_Page_005.pro
d1e6e82de4775078b2dba3598c3ea5bd
90b66e42f908c5fe3bb2ed70c9a9b6968c768f43
F20110114_AAALMI sivaraman_v_Page_044.tif
728e09c293885ae5cd91740dc6684702
0ace7cb2c3bbe94267ab725b95d6880f05c00af7
6797 F20110114_AAAMPL sivaraman_v_Page_063thm.jpg
376b24f7c7e94992c23e5bed753719f6
de34da423bdcbc0830646511bbd94bf8de84de46
5727 F20110114_AAAMOW sivaraman_v_Page_082thm.jpg
9d3ec9362374f5be0c71a461354c6a6f
41c5a1ac3bcd038c53c8151ab853b932a38385c2
F20110114_AAALLU sivaraman_v_Page_006.tif
3b3867ea96229029c32919ac1b08dd02
480ffdb1f526477d5fe1197b7c22d255ca6033b6
F20110114_AAALMJ sivaraman_v_Page_071.tif
709dfcd63e48837388935fa4645736e2
4a511ea3d40ad00091d830cca89fa367d228913f
22475 F20110114_AAAMQA sivaraman_v_Page_143.QC.jpg
eab4d3dd6e76a66d1b8014361009e97d
b580009729d31423aebe84202ac90145f1bdacb7
19913 F20110114_AAAMOX sivaraman_v_Page_030.QC.jpg
49f5fd219bf9790fdde94971f907c892
7d742e6d015d1ba20f3cb26ff6fb1dc8139866eb
1655 F20110114_AAALLV sivaraman_v_Page_083.txt
ab7c63bd393b75dcfe46aa8c9ea5710a
43dc8a8fe0235fff0e6f9eb383cb47e93ec86e72
5553 F20110114_AAAMQB sivaraman_v_Page_113thm.jpg
f9a46f58b7c2c65087756191b88a6c38
56c0d183ed2c66951bcf5d0521a763ecc5373188
22490 F20110114_AAAMPM sivaraman_v_Page_024.QC.jpg
71d06c4c843ead5b1d0e6a5f7137d9d3
28a4f47d447674acde0c028c0a407977a8eafb9b
5340 F20110114_AAAMOY sivaraman_v_Page_072thm.jpg
abf454da917bd167d07f9eba77ca5a70
0500ac4408bc6c66911e3963390759dd55e9e27e
73486 F20110114_AAALLW sivaraman_v_Page_110.jpg
8ea6517a02c8941fb9443419cfbc77b5
0309df29b8ddbd4a6bc54e3b280ac32ad030dd78
1911 F20110114_AAALMK sivaraman_v_Page_024.txt
b319ae9fef5d1fe89ba61e20021b5310
7aff8c68a5d1110b5206d01aa1be4525177f751a
6646 F20110114_AAAMQC sivaraman_v_Page_036thm.jpg
184f3c14b667b41fe24a274ba26cadcf
073d1e2dff36af3a4e11a17518a1592324c8d852
6282 F20110114_AAAMPN sivaraman_v_Page_017thm.jpg
5acc94488d33f2b23c6c0ddfa0e47bf2
12760793a24a1fd7246d3a8a73df282d5819a6b6
25448 F20110114_AAAMOZ sivaraman_v_Page_139.QC.jpg
5652fc495eb66d5929f9fdd781f0b697
cce4ad3011b121bfebecfafd1c3a5eb2a272ee22
1908 F20110114_AAALLX sivaraman_v_Page_079.txt
c21c5312a65d4045489731c3a48dc4b0
62a4e18f5c72768c15f6a439b8d0b07856c589c9
19692 F20110114_AAALNA sivaraman_v_Page_116.QC.jpg
42980509e63fd84e41a5e387bb3557f6
fb5b11202aff3608e4330ad8b2720f91d55e042b
1996 F20110114_AAALML sivaraman_v_Page_068.txt
4dbb2d1e650107f23e5adcd29fee6fd5
95406b527be354d71f254f96c0299890d4fa7ba8
6435 F20110114_AAAMQD sivaraman_v_Page_137thm.jpg
1acd417f68f94365b8b57aa0de08d092
494955043488986b138d5863c5ccb1630a10ef87
22841 F20110114_AAAMPO sivaraman_v_Page_021.QC.jpg
1cc9725e1bf7128aebee880104266ed0
b4cc634a14baa6dc8f7e77777b562d751865ac79
57045 F20110114_AAALLY sivaraman_v_Page_120.jpg
a60cd5c81579796f681f9b6bf0660936
914f1ce160dc55879098f5576e1ef591107f69df
4481 F20110114_AAALNB sivaraman_v_Page_089thm.jpg
4756ab799185e2a580196c887fd65c28
1209900dbf3a0bf7744ff7992674b2270c838dc0
17994 F20110114_AAALMM sivaraman_v_Page_122.QC.jpg
264aecc0f895fbf92ca52dc92d01697f
2dd446d0e29163e906b674a796068480c32abdfb
5277 F20110114_AAAMQE sivaraman_v_Page_083thm.jpg
8fa3cd6a075f4225154e7598fd70c676
47228696c527c9af47141737bc4697b9756b062d
6394 F20110114_AAAMPP sivaraman_v_Page_044thm.jpg
eef9f8473164c2dc006a195db1f62f21
57aba71a7f31d8d4b885aba04201bd3fa746300a
21560 F20110114_AAALLZ sivaraman_v_Page_094.QC.jpg
0c035a63700079de378a85ceb3cbfc7f
d0ec52c03fef42580286e411eb0aba57afcd4686
F20110114_AAALNC sivaraman_v_Page_032.tif
ebda02ac1d25e6c326063d82f91290c9
bc889c84882b9efd761ca7b7577163810c384279
F20110114_AAALMN sivaraman_v_Page_083.tif
3bfe7230673fc2e44a03d9320fccd346
2aa45a7c47cb6f929c9ca28b7485a1d4b7362b94
24339 F20110114_AAAMQF sivaraman_v_Page_132.QC.jpg
54cc821d2a524f8cabe7efb151e5b392
4a85c3b5857d228dbfb6a6c6526d5f19e2a37998
6306 F20110114_AAAMPQ sivaraman_v_Page_051thm.jpg
7111e1cc4a8afa7864f0a82bd77bc232
abfa8284dc8c6aa8dfc8eea1bb2fe861e086f774
66975 F20110114_AAALND sivaraman_v_Page_093.jpg
98fc38f6d5654711854c3e826e437b8f
07e2aa716dfa06c053df1b080b6ec20ffe56faa7
54666 F20110114_AAALMO sivaraman_v_Page_098.jpg
60974c07e8e1ee5b4f8335f49e4abdb3
cbac3072c4a210c323d740cf0bfd8cf4020d53bb
6545 F20110114_AAAMQG sivaraman_v_Page_138thm.jpg
9a7185f2529cae3ea5a977cf959a41e4
8bb6af269ecb9a5d629a7aeaeff646287d5cd92c
5865 F20110114_AAAMPR sivaraman_v_Page_092thm.jpg
3b9b70015c5d54d026cee8d531290166
6d65197d66ff35d072901d200a80e165f8b71816
21926 F20110114_AAALNE sivaraman_v_Page_046.QC.jpg
c367f0cd363ef39cc492f6980284a58f
bb33ed9acc11ac5147d1de402b8c7835e1b05111
2031 F20110114_AAALMP sivaraman_v_Page_080.txt
fa4068ddba66ea715c7dcc9cc3e8ae00
78245aea8cba7772a966f7de0f3f9fa5910e8d66
21818 F20110114_AAAMQH sivaraman_v_Page_060.QC.jpg
e2ecc1a48f521e22790fe2bceb62e11d
f8b2b3a99500f2ef70c91a6d5d73137c109688d9
23064 F20110114_AAAMPS sivaraman_v_Page_047.QC.jpg
1d9deba0d0bbe8be6e4a146c88c91927
2b6bb24c9cf64049de9b2d559e1c7dbcc75e814e
1887 F20110114_AAALNF sivaraman_v_Page_142.txt
df67ec9ed45cdf35c52b3bcb565e895f
301d66eb19ed8b05c2374fbbc4155ea0745934a5
106010 F20110114_AAALMQ sivaraman_v_Page_047.jp2
3cf88e6764f8dc3a9593c1e940a850a8
affaf8eeebaa457a3b8066963b44a3956b3ac6eb
24716 F20110114_AAAMQI sivaraman_v_Page_127.QC.jpg
0b37e16971ab74e0028b7eb9638d2f6e
178d96f37fc941afff438ba058e3c3bdfe260e04
20835 F20110114_AAAMPT sivaraman_v_Page_073.QC.jpg
0705e56cce642bdc03ee811c479e0c1d
502ab94327244c7c552e22e40b599b3df65e7d92
6558 F20110114_AAALNG sivaraman_v_Page_127thm.jpg
10ab99e5ada9411c2cb02cbc2ce82363
7041e9bdb33e188cc9a702b243ea0c070fcca5ac
36746 F20110114_AAALMR sivaraman_v_Page_091.pro
803a15e458da2d2036ae8701fa470f88
bfd5046b49b83b9698d4c87fd7a95e1f062395e6
6626 F20110114_AAAMQJ sivaraman_v_Page_110thm.jpg
80f1f061a65609a171b301d81f7f9986
5bbb8a3dcfab6797dafc7661782df3af43347563
21290 F20110114_AAAMPU sivaraman_v_Page_126.QC.jpg
f25e76a597c5e0dab1130973de83f6ab
95ebe614d60be9d8f0bb03343bea4ccf0950ae3e
1201 F20110114_AAALNH sivaraman_v_Page_107.txt
37c2b8538cbebca63a71137c068658cf
eccd139bf539820b3e81dbe697de44417a869ea4
F20110114_AAALMS sivaraman_v_Page_035.tif
a6cce9dde4667857b713b5e522019916
5b112e5b6ee4a76f6ec528649d86b9ef01c0ff74
23027 F20110114_AAAMQK sivaraman_v_Page_118.QC.jpg
2216933fb255f395abb31bfc4d0dc2a5
4d48939546dbcf1afd2b7b8b379a1dbe53069959
5329 F20110114_AAAMPV sivaraman_v_Page_076thm.jpg
9ecee7e611c27148ff2b0c4ad9264f33
3e99bb96e11b9ffba835b12d5810520b2809613e
F20110114_AAALNI sivaraman_v_Page_119.tif
28ce363716850c642af74df63e68777f
35fba16e9fe92e77f0b380e75a77d407305a55f5
21035 F20110114_AAALMT sivaraman_v_Page_066.QC.jpg
ab34e16974c46e7322f74f1fea7f564a
80785929969e8c4a8b2bd569cf96bd0458af2d7c
6291 F20110114_AAAMQL sivaraman_v_Page_049thm.jpg
fa8706172ffa7d33ecdc41aee9867231
a9278d45d9b70f7607526b4257a7a16328190085
6600 F20110114_AAAMPW sivaraman_v_Page_020thm.jpg
457069fdea3e01ae92fc1350f9207599
6bf1f79aa26bd0729fe374a83910fef988c1251e
5974 F20110114_AAALNJ sivaraman_v_Page_105thm.jpg
5c7e8294daa5a27d350766a14fe154c6
61cfd882981ba04a71e66f56a036440b7fe28d31
22734 F20110114_AAALMU sivaraman_v_Page_064.QC.jpg
baaadd45a552e008e4908aeceec3622d
05c5dc1627e69c30d89c3ca06c9e3667f4c8c92b
23123 F20110114_AAAMRA sivaraman_v_Page_080.QC.jpg
79fd3304a0f40420f20d71774e7dd754
c37b3424c668af41792c716724f11ddec9dfb233
6414 F20110114_AAAMQM sivaraman_v_Page_081thm.jpg
e64849a8443c0c5e6b724a099e4d48ff
d29f6c0631dcc9c019f9444aba7581d291fd4dd1
5665 F20110114_AAAMPX sivaraman_v_Page_025thm.jpg
21b334a89321c16989f153a0b4d2a055
1a72455c07671fec9b4feab395701ae7466ceb62
1051984 F20110114_AAALNK sivaraman_v_Page_103.jp2
6f2b982116f85cb37be63992bf0ced0c
0123de5d56120fcd818e407305badfe2fe684d29
64585 F20110114_AAALMV sivaraman_v_Page_027.jpg
3b7152a0fbea8abfaa14a46a464d37d9
7a2bab3e8230b30973b59a99dc959301569687b3
23754 F20110114_AAAMRB sivaraman_v_Page_055.QC.jpg
06addbc795c15c26b6e575f3c187cbaf
226fdd60884fed36c4823c64ec873d1edcf44405
23147 F20110114_AAAMPY sivaraman_v_Page_033.QC.jpg
fdfd51e4c388cbada7c3fb221a2781f3
8a28a8b2baa111b70504058cd83415e0b4f35309
2013 F20110114_AAALMW sivaraman_v_Page_052.txt
6d73c982239a32472e88337a1aa316fd
10dbc3a035911638cff6454b0e9483f0783bd47c
2693 F20110114_AAAMRC sivaraman_v_Page_145thm.jpg
75c2d9e3a1b222fcd7e796e7a4443a10
54d1d352a3c9cc90222ea3cd4705a220266960d7
23699 F20110114_AAAMQN sivaraman_v_Page_052.QC.jpg
91153dd91f807f9d8c37c1da8ff9624a
64a0b828ee02baf89ba1a747a0bc08da16e7d655
26404 F20110114_AAAMPZ sivaraman_v_Page_137.QC.jpg
544173a2cee20581e15a1a63c5ad2a00
d9de7a09381f9f45cba099d979738b800c9ac0ee
25326 F20110114_AAALOA sivaraman_v_Page_116.pro
c3242258650dccd8c76386e389b40cc5
7dfe9b7e869c829a01a19c1ab2c784d71f940e95
10526 F20110114_AAALNL sivaraman_v_Page_128.QC.jpg
10fa3cd0c8bf2680c5f011729416162d
2c5af262f76575be2bfba83de406764d28c63350
37090 F20110114_AAALMX sivaraman_v_Page_074.jpg
aedcc15c9d3ef39be3b524efd1df0ac9
8c86dbd37f846e63c767edc2019f47956b3aa809
F20110114_AAAMRD sivaraman_v_Page_121.QC.jpg
a053b865365486c07a64dea2b97b264e
0833d3a11f19bbc345568f8a0b4833bd4d8e3056
24579 F20110114_AAAMQO sivaraman_v_Page_069.QC.jpg
434c85f7f9332f1e46964fc270cad090
fc2b740cf9701e4d9a320bd46a7eda710306ea1d
41900 F20110114_AAALOB sivaraman_v_Page_086.pro
5ad338604f6575e80bf802ef4dc7a3bc
4d09db182bf211b7bc4ff466779a87c1c510236a
1827 F20110114_AAALNM sivaraman_v_Page_078.txt
79c89375bccccff2b7450052e4eb780f
4db6309df6bcc0e274bc2e75656b2fa600a5658c
11516 F20110114_AAALMY sivaraman_v_Page_123.pro
c0c86661e4943b674ae2b8bab88462d6
b69b3a3be5fc922f8787871a8afcf9e58745d995
19166 F20110114_AAAMRE sivaraman_v_Page_099.QC.jpg
0d039014120e2dff7125c126306156db
feb062a0a80dd10b236d8285c84e967fdf858f4c
6117 F20110114_AAAMQP sivaraman_v_Page_129thm.jpg
2586210cc6144ff1ef0504b1134bb1c5
6a62b2e43fa610bbb3d5f4f576c7861dde15a906
994034 F20110114_AAALOC sivaraman_v_Page_129.jp2
736819dac38305ae5d1cb0579c3b4ec4
e309d2cab1c549dc2599152ddedfca40221b3e01
5392 F20110114_AAALNN sivaraman_v_Page_085thm.jpg
68911073b209f614313ab979702a8ea4
72c165d6a5c770eda06d692ac0c9fafb6d8daf14
42596 F20110114_AAALMZ sivaraman_v_Page_029.pro
529025124ddf572d3f472b2e4a203579
a4d0e3072c68409faf9408b1e778dc642613b3bf
F20110114_AAAMRF sivaraman_v_Page_006.QC.jpg
a1d0c81fe205851632532800f138ed99
35a95c9aafb2271283c3cda8f68e712c72ee9780
20674 F20110114_AAAMQQ sivaraman_v_Page_100.QC.jpg
67b4a2e97b2219c0b186ca0e8a4c68fd
af7325a23329ca6989b377738d702dc9deb79e86
23302 F20110114_AAALOD sivaraman_v_Page_068.QC.jpg
7241bcb4f22aae2691fc8a695b36481e
4048a380595575d172363bd5cf8ef82c0782489d
173223 F20110114_AAALNO sivaraman_v_Page_140.jp2
363f648c2e91b0b84b0871c83947ff0b
6d4095ccfb9acec236503606a5934e65bc45b640
15741 F20110114_AAAMRG sivaraman_v_Page_043.QC.jpg
56ba04fec556435d9aa63c17f6be02d0
d2188c4cd0eaa378afd1e14259bef8f3b73a2bfb
5731 F20110114_AAAMQR sivaraman_v_Page_086thm.jpg
09f0a5a184c62884fc5d0be9f2b51b57
1316712fa67223e152d6a585d5f99514eb83e8f1
4795 F20110114_AAALOE sivaraman_v_Page_075thm.jpg
1b638e9c6d189621c32e0a3e26165280
c1d2349c0fffd3a6e03badf166d24caac4505ef8
F20110114_AAALNP sivaraman_v_Page_042.tif
a515404db7c123e27420f5e2c6f62acb
3b19af3ebcbe4e93aa781d177d7b7b7edaf4f896
23658 F20110114_AAAMRH sivaraman_v_Page_034.QC.jpg
6027a2a771b2a38ac5616617e19168e3
27e4215e5318b93ac65779ca376c378d289b9aed
22968 F20110114_AAAMQS sivaraman_v_Page_119.QC.jpg
33e6f464829dbd846709aed8f351e163
a86c9ed92f5f21aaeee0ccf3ddb0b8eaa2eb32ee
F20110114_AAALOF sivaraman_v_Page_123.tif
d7096bf2678dd5bb7237dd7d6b8fd988
f3b0a6fd412d804fb62417b39b14f25a24b967b3
5751 F20110114_AAALNQ sivaraman_v_Page_116thm.jpg
c3da7c18d48c3c2eae7f28bea4ed46b5
ed7da56149363c8f0ee35731fcafcde7b786a10e
F20110114_AAAMRI sivaraman_v_Page_103thm.jpg
e665c8f90fdce80aa828fce581043f81
1aef8e689185a439a331552ca6179f57e40aaa54
19927 F20110114_AAAMQT sivaraman_v_Page_097.QC.jpg
6c3adbd59b50c8002f52453c154a3b36
03a592da8fd0b2e45913c9752479bdf71a5956e0
50169 F20110114_AAALOG sivaraman_v_Page_134.jpg
ecbfbdcdb33bd5701a14634930b364cd
4ef574d400d4adc0be078bc607b227d45b038c5d
F20110114_AAALNR sivaraman_v_Page_040.tif
692f807e10ce832ac9f741480e1eb3fd
3ddb00396d8df7db10efb13a3557037ac2974846
8112 F20110114_AAAMRJ sivaraman_v_Page_077.QC.jpg
c297be3f8702be1128a530b05b3d45cb
99ddaec44476078000338ebe9dc4bcbbe456cadb
18178 F20110114_AAAMQU sivaraman_v_Page_103.QC.jpg
4246b321045009c448bbfc2e0b8570e9
e78e03d04fcfb66fb405496477cc4b08110cfcda
48259 F20110114_AAALOH sivaraman_v_Page_024.pro
ade0e2e1f26065de50af6951a5abed4e
a142c99085b819cf10e5ecb51f391b434d429aa9
18370 F20110114_AAALNS sivaraman_v_Page_098.QC.jpg
dabacec9ec0b4bb14dcba89bff8adcd4
013210963ce06ccdcef82e9bbab5663dd252773f
3675 F20110114_AAAMRK sivaraman_v_Page_133thm.jpg
19858d317ef416c5d28742c000fe350f
e76947c576d58803b39d304ad81921fa367272c7
6472 F20110114_AAAMQV sivaraman_v_Page_060thm.jpg
aaeffdf699d63b00531523c2899f0a9f
8aff5fbcbf37ec1f06579a3be8501023f208d3a2
1917 F20110114_AAALOI sivaraman_v_Page_033.txt
d2b094176343b64ab281695583e1adea
ae920386031440e90f624b5dee8c33e5f5fe6fc1
F20110114_AAALNT sivaraman_v_Page_050.tif
b921d61143102d3076d0f6f5fb97a3ae
735d6d4aa4ac9392288a7bb4aa4f484473096816
5887 F20110114_AAAMRL sivaraman_v_Page_071thm.jpg
43efbd69ddc4622b6478ceb83b325f21
333c8969ca3c2044ef0925e2e4faf48b08caf423
5156 F20110114_AAAMQW sivaraman_v_Page_087thm.jpg
2624a7e6c74d58c281c4edcd3d0003c9
527097c3b7684cd0055f80a9a7f042467e3136b8
6331 F20110114_AAALOJ sivaraman_v_Page_019thm.jpg
1bde4622b1765ce83a1ef087ba9260ce
aab792393dbe3263463267acb6ec74c21114c749
64443 F20110114_AAALNU sivaraman_v_Page_025.jpg
27b91cb001ba3b73a4445d0b624e8923
6c5092a1b485c8cb315ea64132958e519f8bf25b
3680 F20110114_AAAMSA sivaraman_v_Page_074thm.jpg
95420fe37ae9b6a935db16068982af9a
08437a8495c3d86515b2f5967dee898ddabc8756
3416 F20110114_AAAMRM sivaraman_v_Page_002.QC.jpg
56fdaa8012942a2f4320b3ccaaaf5fc0
44163bf0ee1d19d3bbb4497b9216492cfbb88d91
22290 F20110114_AAAMQX sivaraman_v_Page_051.QC.jpg
ba11e078af7a6e76a92ebff3a3b906f9
c1cfa120d73d4ff4c5690183539d87f31b3bba4b
19942 F20110114_AAALOK sivaraman_v_Page_027.QC.jpg
fc008cfc8336ac4c9dd0d4aeb9600757
30bff3b6b6fa74ede232125ba1f48f8b64f2bdd8
2149 F20110114_AAALNV sivaraman_v_Page_006thm.jpg
fc6bb36da2235623c2a48275d4a1d233
7ca779241bae6aba4b61ba1f4443be1e7f5b3999
6645 F20110114_AAAMSB sivaraman_v_Page_101thm.jpg
165e2214205c64747e361ee12b6fe91c
11fd9956626bf89bcb0275578c2c86289330c577
6269 F20110114_AAAMRN sivaraman_v_Page_024thm.jpg
ec2f6d92d472a9544355458a6423a918
989485d10c809c97924bdf2d4417e71f960c8d8c
17532 F20110114_AAAMQY sivaraman_v_Page_010.QC.jpg
b0ec413fc04b06d0293cc305018fea37
c9075bc375043f0dc8a6c76ae5174ac1b1c90263
14852 F20110114_AAALOL sivaraman_v_Page_088.QC.jpg
6bc7a806ab5de89a54146e3c9ed62f70
53ff8a11cb0a5dc7d46170db8ae244c5259c372c
55295 F20110114_AAALNW sivaraman_v_Page_059.jpg
80cf50dacbd7516b0fa3c36852f3b342
258fe950241705849b518cf023c73165d6fa9576
5888 F20110114_AAAMSC sivaraman_v_Page_012thm.jpg
640fd7131be432c0ce2111a7d212f982
62e0d49fafe6a43329bef1b095ba1bbe3c2b216e
9872 F20110114_AAAMQZ sivaraman_v_Page_003.QC.jpg
64dc4298587bafb6abba4a79613b6fa2
79215954d606c1724a4077c9edabf73215f3971e
1694 F20110114_AAALNX sivaraman_v_Page_072.txt
d31c7ff3cfc023f8ee83e9bd856c1469
2fa3edc73dba7eb8534f81335ce8887969cfa2cc
F20110114_AAALPA sivaraman_v_Page_128.tif
12d6788ca8af52fa1ce56d9d48cae760
ce68f22d6fb2895c1775c1a9ec141f28c514f7c0
19015 F20110114_AAAMSD sivaraman_v_Page_107.QC.jpg
319521ae952c35acc83e57cafd0de76d
2a55991788e0c8420453294e265740bc5b585424
21453 F20110114_AAAMRO sivaraman_v_Page_045.QC.jpg
3578a6b1c8af5e7816f54ed8ef56091e
df625fbf361abecb8dc7318af33578cd3edf5cfd
17027 F20110114_AAALOM sivaraman_v_Page_112.pro
ac96d3b8182908d150361bedd6cf5bce
4b9ecebd5fe1951d334d3ac035c4821d41e99232
23863 F20110114_AAALNY sivaraman_v_Page_130.QC.jpg
9eea605abe03ea58f80a5b1a8ea79127
9c3a9c2119c2d5f7b867bc9ef226ec79db0502eb
66223 F20110114_AAALPB sivaraman_v_Page_117.jpg
b7203a174cf98cf99d2e3cb184edee1b
5e5dd506107a6412c7fff3791d70d3e582fb662d
6057 F20110114_AAAMSE sivaraman_v_Page_121thm.jpg
ceb5cd4b28066dbb544092897befc477
766d76f01ef13c4c2812f70140b37e1b6a2e854d
18354 F20110114_AAAMRP sivaraman_v_Page_113.QC.jpg
b79b98b32daa0d4c753ea70b0abe7334
cee9f2a5383ea47fbe37f04c81196e0c2f94fa4a
48806 F20110114_AAALON sivaraman_v_Page_031.pro
1f9c386ac636422c442a4a641574b447
6cfb26ed9b4675646b8e6d16fcfb87eacdcd3f3e
21042 F20110114_AAALNZ sivaraman_v_Page_012.QC.jpg
b3b1b9d8d7b37ca01d84a4ee940c1a94
3be88395f63fa03476caef851e0d339a3c08cf9c
16648 F20110114_AAALPC sivaraman_v_Page_039.QC.jpg
e8de127e4c466f1aeb7cee93515bb435
73bafe4e9f61d05129012c5c90e920517eb17d1d
6122 F20110114_AAAMSF sivaraman_v_Page_045thm.jpg
b598a78aa58b19f05b6942bb6c396bb0
feebaee15aadbcb5171e8d2e6c08d6d445fcf793
21078 F20110114_AAAMRQ sivaraman_v_Page_056.QC.jpg
ef94b468fc9e304331a4ddbce485ab31
432438114af58dfb3c51fcec9cb819c44c14691e
F20110114_AAALOO sivaraman_v_Page_003.tif
e0c14ff026b71e8a624591468e667b4a
f45e8b96cf9538b9ed9249fa6d1b3024fb970008
993342 F20110114_AAALPD sivaraman_v_Page_040.jp2
3f63e6db781ed5870e209b2610789781
28aa4007335bd6d0894a56da7b19511d276c849f
6309 F20110114_AAAMSG sivaraman_v_Page_016thm.jpg
711fbdf6f47e827078ae369a9f461a98
1d605c65283a143b75925ba39c7b83828609532f
22452 F20110114_AAAMRR sivaraman_v_Page_016.QC.jpg
5fede48d080b42f050054e18908a9b2e
c2a94296a6986a1c635f7f89ca24afded304c294
2046 F20110114_AAALOP sivaraman_v_Page_110.txt
1e72a0241924b035f915ac8b7582f1d5
6ba93fcc57397a3b80cab4dfa042373c29f0b203
F20110114_AAALPE sivaraman_v_Page_080.tif
75163b7d75d4ea0f6073a2e13b1695f1
4597eb29c526c16fb48d81b7ad80e20354b73942
5983 F20110114_AAAMSH sivaraman_v_Page_095thm.jpg
2a0aa340e0e9bcc64a01507a440ec5cb
bc4f61cefa1203ca710576658ce645adf2b3905d
4385 F20110114_AAAMRS sivaraman_v_Page_112thm.jpg
3113ca00fb5e7d0aa5edf8d68c4c7bc1
bf5afec1e8a24ff5e302c0d28acd02caacd555ec
F20110114_AAALOQ sivaraman_v_Page_036.txt
9ef1c1d3cfabf7b8838faf53f0ca2a29
e98dff2f90b855b7bc3e27f642f71fabda7910c6
6640 F20110114_AAALPF sivaraman_v_Page_029thm.jpg
37a4c3eb5117aa363c09c15fc1dbeafc
48f466bad8ef23bad805323784b812a123a5a892
5268 F20110114_AAAMSI sivaraman_v_Page_059thm.jpg
15b62c270622b3e17b07353dc1c6f16b
b4e66dd198674367a5e1c535f21e06272123b1c4
22482 F20110114_AAAMRT sivaraman_v_Page_093.QC.jpg
5cf329c61bbeac1530df9e572ea126ae
ca80dcb09a347ec6c0df85a42484fbddf8544cee
F20110114_AAALOR sivaraman_v_Page_122.tif
5836388b6575cfab49833e33c541a70b
a469be893de2e237604e27f18820ae263e8d5cb1
7667 F20110114_AAALPG sivaraman_v_Page_001.QC.jpg
711a129a6d1cfcfe40488a0a31fc27df
9986937f2ea0bbaaeca0e3907c3cbab81898de40
5752 F20110114_AAAMSJ sivaraman_v_Page_109thm.jpg
975ec1916251713407fd0864e337da71
de8830963666aaeb68a707a540650e195c3da6aa
6074 F20110114_AAAMRU sivaraman_v_Page_050thm.jpg
850c2d63059cee68de9ec4bf1e06c77e
66a7561111c6d704394284705ff321b6509e7518
54644 F20110114_AAALOS sivaraman_v_Page_122.jpg
ad0957e1aae8b0ea233c2986b0434888
451da050994fd4a4759c5cf30be85075e4c4cdde
18122 F20110114_AAALPH sivaraman_v_Page_059.QC.jpg
d0ea02bb085e956eb32cade4ce917f30
8be01ebfe2cad36a37966b4a43d5c15de1bc72d7
24102 F20110114_AAAMSK sivaraman_v_Page_110.QC.jpg
f597d4c4a5d0d7ee93496b7c41559437
d015df31f9ae67b626e3d428c03848be3386b3af
6352 F20110114_AAAMRV sivaraman_v_Page_093thm.jpg
bd560e5814c7d4df30b5403ecb6ee7bb
bf523654219704ee7b18b98a3cf1439dd447baa8
18051 F20110114_AAALOT sivaraman_v_Page_142.QC.jpg
958523d176994827b850f9fad996c123
cbe1cac8a047adde46eede3a4768334f0fd9faba
4008 F20110114_AAALPI sivaraman_v_Page_111thm.jpg
8da07b092fea2868b0b1436025c85270
e2d168d9000c61f9bd72c92614ff890e511feb63
21006 F20110114_AAAMSL sivaraman_v_Page_144.QC.jpg
aac9fd8d0ddc7dbe93ed99cd088340bb
37970e51574679166efdd12fa76a1f165645205d
16071 F20110114_AAAMRW sivaraman_v_Page_123.QC.jpg
123aa200d606ffbe082776c3c51f417d
2896185bfe40b31ef625f437a2bb52593fc90515
63005 F20110114_AAALOU sivaraman_v_Page_041.jpg
f4db23419e9ff6b4fd2fd1456083b768
25bb9a207895c1705a71bad60f9e12a772e441c2
853427 F20110114_AAALPJ sivaraman_v_Page_067.jp2
8a09c01055396388ef28c3ea2165a11f
ff214321048c94befd8687f4a6c86f6828069fcb
23602 F20110114_AAAMTA sivaraman_v_Page_020.QC.jpg
3948fcf454005ed25b81a9d910243807
a26c7aeb2503360b33eabf579ceef21e4ed77605
15807 F20110114_AAAMSM sivaraman_v_Page_004.QC.jpg
2c7861aefea2b79e81dfa3c9a12fdc80
a97d57e1fcf85639d6913db9bef8f78e479aad87
6578 F20110114_AAAMRX sivaraman_v_Page_119thm.jpg
656113f39582fed7c99896062f26fa08
21e9c37f9828acee48278aaa6593f22dc3d4ad66
F20110114_AAALOV sivaraman_v_Page_132.jp2
c366c318fbc04186b868a07087136fa3
5804e8a458c59f1ea9675f33f165e3364f11d8f7
58667 F20110114_AAALPK sivaraman_v_Page_132.pro
3be62042f6f6fbf1d55fe321268e2176
afb9ddd8f7e4c9433430debdd88f9d3cfc1fcee5
6371 F20110114_AAAMTB sivaraman_v_Page_034thm.jpg
667133b2a5b846281bef6bd835828c7a
c3df6e9e2c7cd6717a373997a676eeb85a9a54bd
6181 F20110114_AAAMSN sivaraman_v_Page_053thm.jpg
19bb0bd7c22be46a35a19a2b45c1c6fd
ee2ff8d207e152434bd3c6b928ec96f3580b0ce1
4945 F20110114_AAAMRY sivaraman_v_Page_010thm.jpg
8c75fdaa20897102fc9e04f2e9952df9
b3083b4a1094aae592e3a40f45acb2cc1319342a
28845 F20110114_AAALOW sivaraman_v_Page_109.pro
3048b1d204ab627350d0173d41fa609c
c1faa200ba725ca463ae825be4b3026793271ad2
1355 F20110114_AAALPL sivaraman_v_Page_059.txt
7ff3ab1b780c09eed83c3147c6594832
7b7122fb8195630f6cf7c847ddc0a1863f33014f
5482 F20110114_AAAMTC sivaraman_v_Page_102thm.jpg
bce4f66ab6e1b444daf6668c14872fde
b6b13e926b39653f7156732aa251b45c6dd179b1
7676 F20110114_AAAMSO sivaraman_v_Page_136.QC.jpg
44c7e30fb80ce6107c0107fe61ada1a9
73022d946476a4ff5bc1da4b7efbf52f8d6dcb0c
6044 F20110114_AAAMRZ sivaraman_v_Page_143thm.jpg
51bcb2cd6ae37db0c909449f13f77649
629459e46a5c296b8f9a094e798754f4b128c96f
43397 F20110114_AAALOX sivaraman_v_Page_011.pro
e509b8c81d371b9f0582a0e37f168e9d
8ef7bb17d412eef1afbc2e75f3084a9de1a5aecd
F20110114_AAALQA sivaraman_v_Page_087.tif
63ffc42651393a3d6407afe0f8ec80d8
4951cc18a7eec5e8c66765042eea9cbf11d0659a
F20110114_AAALPM sivaraman_v_Page_015.tif
acbbe79e3e691aac2282d4ef52841a5a
301d050e4631d6809f7494875b0f97b92f26abf5
18996 F20110114_AAAMTD sivaraman_v_Page_025.QC.jpg
4f50d27a9118da8c6f190c921ded07ff
07cb84b2d2087bdc11c04df7ede43d9c095909e6
40427 F20110114_AAALOY sivaraman_v_Page_133.pro
2422e305903f4435eddfa9149d3d357c
555b95aa3d781962b5527b44300440953a1e039a
5596 F20110114_AAALQB sivaraman_v_Page_067thm.jpg
b44b2e606cdedb0239f063ddbd326fc8
a2571eb69be1e5b6dc0601c384529a171439775d
12286 F20110114_AAAMTE sivaraman_v_Page_074.QC.jpg
aecd2a16772c353e2b2d5ca5123337c5
9ffdc1d8be008c5310de8be8868c2b750defb4ac
5649 F20110114_AAAMSP sivaraman_v_Page_115thm.jpg
d85674179f1146d0d04aa2021ce95fc8
27cb65698a4c6bbab4742c81343aa08df8e6e357
F20110114_AAALOZ sivaraman_v_Page_024.tif
8f98ed039ffd479f1d0cac25f4d2ee82
5f97c3ceb5cea3591a8f603ca11905f1dea7f097
1994 F20110114_AAALQC sivaraman_v_Page_119.txt
248004cf968382260adb537e90b2fe44
6f7c2cef140a3085d927edc90a15676bc0291b7e
1979 F20110114_AAALPN sivaraman_v_Page_044.txt
70419a4378d90c9dc745245390730c70
6ca23e9e94c2d948dc3d5fead800ed9f79f6a54e
6376 F20110114_AAAMTF sivaraman_v_Page_124thm.jpg
f8387ffa6ace9a4d1d50ee91316754bf
566db136b015fd232259bc0172d95291c5860c24
5841 F20110114_AAAMSQ sivaraman_v_Page_106thm.jpg
f2095c651c6f5fbb69558441c15a6d34
52baf2775484c99ed7efe40f2ab7768413ca2818
19104 F20110114_AAALQD sivaraman_v_Page_128.pro
5474f1ed7fdff425c30a2a9aa4121ccf
d73a9e1982a76be076902cfa72c22d22a5d3d919
114623 F20110114_AAALPO sivaraman_v_Page_127.jp2
e56820ed762e4fcf6c173c9a4b8a363c
abfe77199d7bcb14c1af4c4ea9fe790bed9b6cce
19198 F20110114_AAAMTG sivaraman_v_Page_009.QC.jpg
f0fb5c9d0a408f2fdf4db55ff407b61e
0820b188aac11b1c806207a0bcf240f1dc9d2bd1
5687 F20110114_AAAMSR sivaraman_v_Page_100thm.jpg
63784f96d388743151ba758f431d35fe
63b3a51780812e37784bb0df9303084170ee4c7b
2021 F20110114_AAALQE sivaraman_v_Page_130.txt
f4e7c924b7ccd468ce14aa2df914ab8d
eaa740b2f8c64552c3f689b64b5d4aa2b1d31b64
8435 F20110114_AAALPP sivaraman_v_Page_145.QC.jpg
0f1136f6f2312fd1ce5102a52b80e9f3
ad50bfac3bf4ebef5e8227d033554d5bdaecd2bb
22811 F20110114_AAAMTH sivaraman_v_Page_053.QC.jpg
ab892effa5f8812b80b9f32fc65309f0
f04ec60e21963cc0f33713eb0815aaa7e7246592
15595 F20110114_AAAMSS sivaraman_v_Page_075.QC.jpg
d8208b9aad4bb33f09eafe807c2e01b4
66a052614b9af5252dc23d9e242ea83b13aff119
6063 F20110114_AAALQF sivaraman_v_Page_057thm.jpg
abbb3dec6c871d7070643b570ce0c251
05c3167d2cd96698854d331ff64c35d368f81f33
20650 F20110114_AAALPQ sivaraman_v_Page_095.QC.jpg
81ca918d13944d8c02ca14b60db5c619
a47ecf5ecfe13ff95fa8dd0227ce30eba607dbc0
6473 F20110114_AAAMTI sivaraman_v_Page_037thm.jpg
8a4d80d7e0322dc9f28f5802c925a3a4
a33a7029c1cb95a48aded005402d1398805f3849
5267 F20110114_AAAMST sivaraman_v_Page_005thm.jpg
f64632c57fb20d2848d10e03f56380c4
b9daed527a1ac1cb914ba332d7628187b0df5c48
100569 F20110114_AAALQG sivaraman_v_Page_026.jp2
b6712ef7b7d5a97722d35886f6169306
e33c9295ad56735a6b18b71f9320ac39c7d93b48
23963 F20110114_AAALPR sivaraman_v_Page_037.QC.jpg
3ffeefbc7eeda722d5588c08d9b921c8
442d9b49f3d63786d756a26fc95472d6d905fd3b
21869 F20110114_AAAMTJ sivaraman_v_Page_062.QC.jpg
8895fc54967bb9409c389c2ae488414f
33cf735c0222e67760b45db8816a130659ee84be
5720 F20110114_AAAMSU sivaraman_v_Page_104thm.jpg
9c37958d83cdc16cbfe17f40851a0b8a
10ded409a588391911f449b00a00bf6e03910aeb
933 F20110114_AAALQH sivaraman_v_Page_117.txt
12e5e138b5cc8f190b0ecbeebe766399
7c0c350437a3de4f134c15b042902eece24f0af0
111387 F20110114_AAALPS sivaraman_v_Page_124.jp2
4f9875daaa712a0b5ee8ecceeab7e4a3
3990b01d973ada07b0bf1613637b71130170f968
20349 F20110114_AAAMTK sivaraman_v_Page_117.QC.jpg
1f16095980d2a4cb21c18f3b73376500
9ada321ffc0c75fac0ee4837536c33ea39068b14
19263 F20110114_AAAMSV sivaraman_v_Page_105.QC.jpg
5d0a31471025bb7f484b7f9a53409056
9e51ace03437d196b1ef2c055cec13b05fadbdba
1326 F20110114_AAALQI sivaraman_v_Page_032.txt
33f34d756a087e5c031320d266ab73d5
1c033eebe5671a91ab4fc5a724c5e5449dec8c85
40971 F20110114_AAALPT sivaraman_v_Page_142.pro
3f3870a60e31c16d1a2506bd5ffe52ef
e893d3a8bdc94e468b39f648f582ea151c8cd705
13877 F20110114_AAAMTL sivaraman_v_Page_111.QC.jpg
72fd71777ffecf06cf2982cb135a4cd1
9506331a03beb23b37586c66be0a07198e446c5e
18403 F20110114_AAAMSW sivaraman_v_Page_038.QC.jpg
35bde5fd4af3551ccf1c3d36cb5ff09f
31db6fc9b4a3f5c84b5b6a6867c32881f3182a0f
65647 F20110114_AAALQJ sivaraman_v_Page_094.jpg
7ffdc4181fad4877b079d2c358f182e5
cd48598f55498d67f1264012c34d918b6ed879bc
92586 F20110114_AAALPU sivaraman_v_Page_137.jpg
97ccfe99c3ac6d35488ed7b8226fb6fb
ac809d6d57d72077ba93e1c3fd46dd4d37791e72
22572 F20110114_AAAMUA sivaraman_v_Page_058.QC.jpg
7d1f621d28b5ac18d0494246a680892d
c29e1a743d4e617ed9f1833832e0caf9c4996cff
23408 F20110114_AAAMTM sivaraman_v_Page_022.QC.jpg
a671b23b978684ffd845677d55bb1cfd
829b571cfc40b550d5d83c7b3458b0ea4a0a1ee3
22895 F20110114_AAAMSX sivaraman_v_Page_044.QC.jpg
118eb8234421e448b2928981e98bd76f
0b09924f4bdaa1766f2899bd87ccdb32877a3c76
69278 F20110114_AAALQK sivaraman_v_Page_024.jpg
f7cfcf92c1b24695190c6702ee0f0f95
c6ccd7971663750843ef811cce399a2ce9a63d46
5830 F20110114_AAALPV sivaraman_v_Page_030thm.jpg
bb52804beaba259715070853ea99a415
2e46d8bd2d5d68eb1d755a6e53e10c23da46ac48
4494 F20110114_AAAMUB sivaraman_v_Page_008thm.jpg
08c4337ff67229985138a8df2c1e8884
9be7e98d86f3534a4135380eaa58b8466f4e4031
6386 F20110114_AAAMTN sivaraman_v_Page_040thm.jpg
e9478a3e0442ee19c95cfc1eb43ecba0
8cc43948079407a785d0029b908a70a9ee8b6f6f
16649 F20110114_AAAMSY sivaraman_v_Page_087.QC.jpg
7899a9a9dee2e06c4b565cb673951ee4
6772eafa5e00925f0204e03807ed354923fd95b9
46002 F20110114_AAALQL sivaraman_v_Page_062.pro
f327dcea7ff4ffe9442e92e483463788
90f6da265863a41cf25dc262076d686b6a8ac1ec
30374 F20110114_AAALPW sivaraman_v_Page_140.QC.jpg
d4e5dbccb571ada3f9959dfe16d15698
0ef7c62e56857bb461aec03db3dddce6704187d1
6316 F20110114_AAAMUC sivaraman_v_Page_028thm.jpg
3ae3b23c3b4fdd5327f84ce5d2bae89c
b7bff0d10aad616c70021cdfd4095dd005b3589c
18464 F20110114_AAAMTO sivaraman_v_Page_023.QC.jpg
0e14cd98a1965db7681f572fd69a1e86
09387e0dcd23f84e8273038d73b480168f96ee7f
5805 F20110114_AAAMSZ sivaraman_v_Page_094thm.jpg
42f7a968283cf3d5029ce934dc131ffb
596aa80ec54cb362667773c7acdeca76f853ce83
2512 F20110114_AAALRA sivaraman_v_Page_001thm.jpg
c2ee690df4fede140122e47800f40745
4ace41f1dced5e9a51a7821e6344e6c8ac91e5dd
F20110114_AAALQM sivaraman_v_Page_060.tif
8611eba6cea3e616ac12c49523e96208
a9b00206ba8e56121720cba4082fe109f7a0f1fd
F20110114_AAALPX sivaraman_v_Page_013.tif
1669980abc2570d57e681b958f8b71b5
54ca193ccbed48c36679227d6494ab1e57ee98b8
F20110114_AAAMUD sivaraman_v_Page_122thm.jpg
d05120362179710a56431b481441f83b
9427e2f25926a4cd23b4ce40b18b1eecffe5a541
1402 F20110114_AAAMTP sivaraman_v_Page_002thm.jpg
6bfe646c243db8b19da549af70400d1d
c538bf6937f6a8b03e2853a49b0f9e097c45cd31
52811 F20110114_AAALRB sivaraman_v_Page_034.pro
42f972f2957183661477f6db1c670e78
9280df4df4290957c79a1dd0ad7e3fb625a3a350
F20110114_AAALQN sivaraman_v_Page_008.tif
b2d19613d03834eb5b637649b6933c58
7030f0ae4556c3eae67be9d7c1d1abe2bde95dc7
1629 F20110114_AAALPY sivaraman_v_Page_030.txt
3bfac7dcd205ba3d200b790ae6e6d3c3
58128207bb7628e63859b35a4179aef6b0129960
1767 F20110114_AAAMUE sivaraman_v_Page_131thm.jpg
7b0d1088254ed7e70794126d720a8b5c
4f4797a9cc53da31f23ea0cef2133d55e60c1e05
F20110114_AAALRC sivaraman_v_Page_145.tif
07eae3e986690a1ab29559229b38ebec
91b27bf095325e226cb2c21959ebd598095c825d
6482 F20110114_AAALPZ sivaraman_v_Page_047thm.jpg
d15925516ff1c7cbff191cda1e356435
d3d231bf8fe227d09068350510cade08b8ca8f7b
6097 F20110114_AAAMUF sivaraman_v_Page_096thm.jpg
c771f8e577f90b6a7c5f5f4c5a5a71ad
415797867082901fb69e7a2b34092bb9f3e5a71b
21475 F20110114_AAAMTQ sivaraman_v_Page_078.QC.jpg
e94cead22c3d0a167c266a6b6dbbd415
e52311ec6b93d4cf13b9dd5507d38778ad328b3f
65035 F20110114_AAALQO sivaraman_v_Page_066.jpg
e6ce9d79b16d48e0973facb88edc74ee
7f98e308cd28374bd8631dfcb91163dcc162fe40
62407 F20110114_AAALRD sivaraman_v_Page_103.jpg
f26e79bfb3f8680b0a3f5a4c8401a36c
dc90030f5f4130122b4ee938bd03310ae5358771
6059 F20110114_AAAMUG sivaraman_v_Page_026thm.jpg
4cc33e71403cc4956b77c3c758e97b55
03a38098411234078f08cafc5f10fb74cb877c73
6132 F20110114_AAAMTR sivaraman_v_Page_062thm.jpg
175cfed8f017daea7781baa440c1606a
46f9dba533803ebceec80be74b428cfa4a81d36f
21522 F20110114_AAALQP sivaraman_v_Page_050.QC.jpg
ae9a9eef278715f12d40bcd58a086879
cc1f76420df88e0ca404992750ec92251f5cab80
1714 F20110114_AAALRE sivaraman_v_Page_015.txt
18ea65c43e888c0caa2b854914fc76d0
85b84e2f4003850abdc762d59393ea6de446eb61
22406 F20110114_AAAMUH sivaraman_v_Page_028.QC.jpg
34d15ddf270bdc9b7637e19f6c97f2a8
24e93460812c0fe41b004439c500e2227ce80d73
5554 F20110114_AAAMTS sivaraman_v_Page_099thm.jpg
136be4c64621e901861647bc50552686
9d46e352060f6bd5353f9e2c81987ebfd76947ae
F20110114_AAALQQ sivaraman_v_Page_064.tif
e68409036cb4c6250bce24e163e32a10
b131c5dfb6c9c141aa9aea055954088d8df27f55
795351 F20110114_AAALRF sivaraman_v_Page_082.jp2
8710dc0099e2a2d4627d15f58c7bac26
b73555321ecc70f1cd6c645410abd77869a6ab32
5214 F20110114_AAAMUI sivaraman_v_Page_070thm.jpg
a9615eec05a36aa0000fde4f99755886
a382599a1f463a8aa79f76ef5dfe6276f7406001
16896 F20110114_AAAMTT sivaraman_v_Page_076.QC.jpg
6043f013e795a17c8236b46aed9e7b14
6f0ce5dbef39d1e9c0cb40a8a983b942eca7f8b2
31649 F20110114_AAALQR sivaraman_v_Page_075.pro
fd8bf6188ae6f46b2b340d68ce207edc
9ed375fdb9b9e2eb556b182c958975655cf39443
12584 F20110114_AAALRG sivaraman_v_Page_133.QC.jpg
adace2581bd9240503d266ba34baf984
beef447b1e904696ca8e07ebf58e0e0e3f78bd2e
4988 F20110114_AAAMUJ sivaraman_v_Page_043thm.jpg
f7541807161252cb530896057871dbec
023932b75cb41fdc46e5486c3c40d77589f4846a
6516 F20110114_AAAMTU sivaraman_v_Page_130thm.jpg
cdf5a642d1fb5e1719d784f42c3d2672
3e165d04a77572b5d850d949ecb3066c81ad1191
63459 F20110114_AAALQS sivaraman_v_Page_088.jp2
5d65c9d28a77e5eec77754440269c597
87c8e6c4d0ac92ebfd2cb139678465afe7abcd36
1051946 F20110114_AAALRH sivaraman_v_Page_116.jp2
c98a3d2ed2d7f5eff74b5464f9f74acb
72937b6f9b11b3490813dd28ddba0914cee88975
6161 F20110114_AAAMUK sivaraman_v_Page_033thm.jpg
2980602e6b0eba76425c56bf6bcd3470
0b20805a50943531c2e3304df67ee14dbf4b09e9
23305 F20110114_AAAMTV sivaraman_v_Page_108.QC.jpg
ab96f997a1f8cc62e5b173ea480baf1c
61f5a010ca3ed4f7f4fce28a74abb10226c25ffa
6303 F20110114_AAALQT sivaraman_v_Page_114thm.jpg
785620db2d5333b1a5d8a700571cfb58
55cc45916e1cd6c69bd270065db56972ed28a2f2
F20110114_AAALRI sivaraman_v_Page_017.tif
e0a0727e3249c154ffc5093a7ca2072f
d86cbfe197dc699ffb55a8e9dc75c03956b3bfa4
219941 F20110114_AAAMUL UFE0007100_00001.xml
8f506df1184996196f38bdd576bf10a7
df20372b80def4257fbe9417d18e9cfe1e8f37fd
16121 F20110114_AAAMTW sivaraman_v_Page_008.QC.jpg
84ab88298f7221be995c9404ea560251
d874629b5e397db2f7f5b483eaa47b2ee3bf787d
29393 F20110114_AAALQU sivaraman_v_Page_105.pro
05febdbbac3880cc06444789c01504e8
57fde9a9ce406bfe3bf70da886de6ae3aa4e76af
5286 F20110114_AAALRJ sivaraman_v_Page_125thm.jpg
6eec50836392efde2a52b8faad289da6
6b04fe0de3363989ff6d81d7400a36463b64fb1f
21165 F20110114_AAAMUM sivaraman_v_Page_011.QC.jpg
5996e1abcf13a30cc32b846e8340e27c
73f732314500fa27e89f5418b3af394248ef26d4
20929 F20110114_AAAMTX sivaraman_v_Page_057.QC.jpg
69d0b23f2d0fa3cb75fcad7fde49c4fb
095e43694290cf404570fbe947562024c276ae3d
44951 F20110114_AAALQV sivaraman_v_Page_088.jpg
fd5ee2f58cc562ed8cd4c8e82dfa53b3
d4e5687e1e7c89607041356fd06be7b882b04401
71259 F20110114_AAALRK sivaraman_v_Page_137.pro
7f5703348312f96ea4c4276072dc10da
2e322df1fa3e01bf2b90a3cb0637b885e01aa47a
6460 F20110114_AAAMUN sivaraman_v_Page_013thm.jpg
e45f1aeaf936134d9db6cdf0c4fd7a6d
ec7dd6948546cf5409168b4c31032a00754d5cbc
5035 F20110114_AAAMTY sivaraman_v_Page_142thm.jpg
bff2a2c2c34e7df57c092a7c486cebe3
18207e64f2fb357c31586aab9306c86bdb4cef9d
F20110114_AAALQW sivaraman_v_Page_093.tif
857a13c51ab4b01ba7b5b4f28714fd99
4b49fec04ed213c8c7b2b1e518dd8ded555df9b0
1348 F20110114_AAALRL sivaraman_v_Page_088.txt
df924c2bf1a1c965ae4232574512b387
86826a8d6e7cd8a11d26652c0ff9f991fa763780
21740 F20110114_AAAMUO sivaraman_v_Page_026.QC.jpg
6bdb9c05782c3667b188f94fb094bffa
374b067bcd0393c032b5bfe52b3d5323694aef74
22083 F20110114_AAAMTZ sivaraman_v_Page_048.QC.jpg
637f015d854a1bb69783510c493225cb
acac8d2e8301d13d47f21f8933243354d43feaa8
73306 F20110114_AAALQX sivaraman_v_Page_040.jpg
26757b7e89aa8af6cde3f018e81e1958
2e4ca176c046b62f78e865ae7daca4131f058253
31501 F20110114_AAALSA sivaraman_v_Page_097.pro
d18ed9c0c5179c54d4e68e44bb226110
8cc196c01f66ae1d9f11cf21ccf5fe4682e50b02
39473 F20110114_AAALRM sivaraman_v_Page_057.pro
0ef4a6305fbded0ae6e9bb8b34a305d0
fe26e48b079edefa408677eddce8f7ec06344fad
F20110114_AAAMUP sivaraman_v_Page_041thm.jpg
e689400261f78d884bda6492e14400c8
56687caef8373b49ef182a26dcbdfe0dd0b10453
32038 F20110114_AAALQY sivaraman_v_Page_032.pro
9c8b55170b42baa56f271255ebe03b6b
490f1220662dfcbcdb8ba4c60e5ecbd8c09a9613
106884 F20110114_AAALSB sivaraman_v_Page_019.jp2
ce84358ffc6d7d763b73043a4814037b
33faa9ce603065d8b0cc961b95a8fabffe5e7b50
19507 F20110114_AAALRN sivaraman_v_Page_018.QC.jpg
0c43c37b1c05afa24ca9985c78d91db7
d078bf2f1588ad9b4e130d28d59cf5f4c241d6a3
6358 F20110114_AAAMUQ sivaraman_v_Page_058thm.jpg
9540e025e6886fd9c5921b11c1d0a2b6
84a7016f36c5212f04ba50f68d136973486ed29b
23480 F20110114_AAALQZ sivaraman_v_Page_029.QC.jpg
d3903cf31db9c774e6a7f9464d30f615
19c0e97e66871ffa0797794cfee980f8be20a221
5448 F20110114_AAALSC sivaraman_v_Page_120thm.jpg
584f8669c13d0b771c9fd18cd7f6b317
caf5a7055252f5c4b95ca39987083311e9a2794a
34884 F20110114_AAALRO sivaraman_v_Page_122.pro
8a0c27d5a4609b45f8a99396bc9fd293
b812865189d5859dede8f2ef2e160be67d373bba
22609 F20110114_AAALSD sivaraman_v_Page_129.QC.jpg
a689afcdae065855479caabd0de9b2bf
494551622a4923f2bbebdd37f7797238c4dc88b5
21218 F20110114_AAAMUR sivaraman_v_Page_061.QC.jpg
d2c7233a6fe02675d36dde73541e513e
7b39a26c51abc2910636ba3cac1e8ac98c2badc7
73759 F20110114_AAALSE sivaraman_v_Page_037.jpg
28b2951bb4c6a2245441f0f1750e36b5
c64b6a8228d4065e1af4e1f6b430a3fda141b599
F20110114_AAALRP sivaraman_v_Page_053.txt
7ad3f303c6d91f498761bae3b4186f2f
b1ee3704abfdf4f7cc8647a27b3987c807053859
19281 F20110114_AAAMUS sivaraman_v_Page_067.QC.jpg
f849a1d5b0a6cf7267248ed1dfbb61f1
c54032fa8c3a5fca25274a460e6c34e0a8ba8fa8
1861 F20110114_AAALSF sivaraman_v_Page_062.txt
3675f4fe00ddba7e971dd89f16a2a787
0ae3036915334e2653b6cf3fb66dec6b8360c54a
50602 F20110114_AAALRQ sivaraman_v_Page_036.pro
4f29f3cb742640b16113cd2c7910b2a2
2cce784d204b4d136386138dc21a6c64313ce02b
5970 F20110114_AAAMUT sivaraman_v_Page_078thm.jpg
4aff09fe8d0c46a0e7954dad2e0bca1a
091ffb2b1ea5e7fb659d5e58b29b4d1e837da441
1807 F20110114_AAALSG sivaraman_v_Page_026.txt
e2769e7716c6c689223d39c9b33c5084
675f05f9f43f90900f134215a22ac7755f871de0
1051982 F20110114_AAALRR sivaraman_v_Page_120.jp2
ae6c902f964f874c1709a7e46e4db7e0
8c3f7f877604b6d8f5f91c9f87c2d364aed329ff
21641 F20110114_AAAMUU sivaraman_v_Page_079.QC.jpg
048ac081c2df2e6ae3dc722b7fe9e6a6
6a7c24e62baef67e6098c6602a4773d4f8572631
65955 F20110114_AAALSH sivaraman_v_Page_004.pro
b1a43e1b30f82bc6a61c19b6e329fc43
e66188816ff1caca43dcd4ab6a38aa2f0e015f89
90970 F20110114_AAALRS sivaraman_v_Page_015.jp2
fb058f543ae7185933ffaf06fa42f928
73fd0cde3cc8b1308e5c1a80f90f7a0c61710f08
20203 F20110114_AAAMUV sivaraman_v_Page_092.QC.jpg
b9ddd7bc4b790e68e1693a8e86fb854d
2d3746e4c89d71cad01532ba08338e59455bcf55
F20110114_AAALSI sivaraman_v_Page_098.tif
76802b9a2d6ca2c40085d5e1e5f98651
b87de6c2e399f5f48671a838716a78707905a373
1051965 F20110114_AAALRT sivaraman_v_Page_004.jp2
ae6bfd22f12ff8fb039ea34a9b339573
d06b5b87d536f41ddb936d4f291f5a24d7b3c764
17696 F20110114_AAAMUW sivaraman_v_Page_120.QC.jpg
9f593102e6185ac519463596bb8b1cfa
bf8f9ad3d698de12c56db9e6a388e3d9f354b80d
44517 F20110114_AAALSJ sivaraman_v_Page_061.pro
86a501b82567e0ca2932069557466e33
418558730586a0d9b071228dc5a1cd6b9dfaa377
1367 F20110114_AAALRU sivaraman_v_Page_084.txt
fa544fa9880a3782ca1791505b6c036d
278f6cf24e3a4975c15a43699df00555f4364646
11554 F20110114_AAAMUX sivaraman_v_Page_135.QC.jpg
5f0cfa8df0926b6a10c5a74019bc9edb
f1e9f5011386366e189dd17a18f741c5662ea15d
5976 F20110114_AAALSK sivaraman_v_Page_073thm.jpg
f9dc2f296bc95b14b7e93250c624fa74
e404f18f65f7da5997d9301a18fe9f9a4007ab75
5287 F20110114_AAALRV sivaraman_v_Page_014thm.jpg
e33e0ccee9284afd8f029b185d7d43e2
b046d5a222d421f2e854bb1c4040dc237844a633
2391 F20110114_AAALSL sivaraman_v_Page_136thm.jpg
db417269d4cdfb9a87bbc7563015250a
27899d7472c76c7d11cac88ed32617292fbaa062
1400 F20110114_AAALRW sivaraman_v_Page_025.txt
b7e8b26e67ab238ed7ab91b10a8d5853
b9c0bb2333e353b1ed730e96911ed3d1f5719f4d
6447 F20110114_AAALTA sivaraman_v_Page_068thm.jpg
325686f96fcc893241728305e983b339
83c7748c0e819e3d96c91d88dff40d9dffc88195
6604 F20110114_AAALSM sivaraman_v_Page_048thm.jpg
afcd7734e637a9d0c04a5edbdc3aab7a
ce6b4ff6ec314ad9c600d9e7b8d64efde473a1d0
1984 F20110114_AAALRX sivaraman_v_Page_108.txt
a773ee329bf456e11a53ce1b4e324cb3
bd44a1d15158a54ee2e677a34c35e9fdfceaf6b1
1032218 F20110114_AAALTB sivaraman_v_Page_045.jp2
06824ac210f7831dc203e7d2dfe0dc12
4405d24a61608d13b0d7e87b69525109bbae1fd1
RURAL ROAD FEATURE EXTRACTION FROM AERIAL IMAGES USING ANISOTROPIC DIFFUSION AND DYNAMIC SNAKES

By

VIJAYARAGHAVAN SIVARAMAN

A THESIS PRESENTED TO THE GRADUATE SCHOOL OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE

UNIVERSITY OF FLORIDA

2004

Copyright 2004 by Vijayaraghavan Sivaraman

ACKNOWLEDGMENTS

I sincerely thank Dr. Bon A. Dewitt for his continuous support and encouragement throughout the course of this research. He provided much-needed technical help and constructive criticism by taking time out of his busy schedule. I thank Dr. Michael C. Nechyba for getting me started with the right background to do research in the field of image processing. I would like to thank Dr. Grenville Barnes and Dr. Dave Gibson for their patience and support, and their invaluable contribution in supervising my research. Finally, I wish to express love and respect for my parents, family, and friends. They are always with me.

TABLE OF CONTENTS

ACKNOWLEDGMENTS
LIST OF TABLES
LIST OF FIGURES
ABSTRACT

CHAPTER

1 INTRODUCTION
  1.1 Road-Feature Extraction Objectives and Constraints
  1.2 Feature Extraction from a Geomatics Perspective

2 BACKGROUND
  2.1 Road Characteristics
    2.1.1 Geometric
    2.1.2 Radiometric
    2.1.3 Topologic
    2.1.4 Functional
    2.1.5 Contextual
  2.2 Image-Processing Techniques
    2.2.1 Low-Level Processing
    2.2.2 Medium-Level Processing
    2.2.3 High-Level Processing
  2.3 Approaches to Road Feature Extraction
    2.3.1 Road Extraction Algorithm Using a Path-Following Approach
    2.3.2 Multi-Scale and Snakes Road-Feature Extraction
      2.3.2.1 Module I
      2.3.2.2 Module II
      2.3.2.3 Module III

3 ANISOTROPIC DIFFUSION AND THE PERONA-MALIK ALGORITHM
  3.1 Principles of Isotropic and Anisotropic Diffusion
  3.2 Perona-Malik Algorithm for Road Extraction
    3.2.1 Intra-Region Blurring
    3.2.2 Local Edge Enhancement
  3.3 Anisotropic Diffusion Implementation

4 SNAKES: THEORY AND IMPLEMENTATION
  4.1 Theory
    4.1.1 Internal Energy
    4.1.2 External Energy
    4.1.3 Image (Potential) Energy
      4.1.3.1 Image functional (Eline)
      4.1.3.2 Edge functional (Eedge)
      4.1.3.3 Term functional (Eterm)
    4.2.1 Dynamic Programming for Snake Energy Minimization
    4.2.2 Dynamic Programming
    4.2.3 Dynamic Snake Implementation

5 METHOD OF EXTRACTION
  5.1 Technique Selection
  5.2 Extraction Method
    5.2.1 Selection of Road Segments
    5.2.2 Image Diffusion
    5.2.3 Interpolation of Road Segments
    5.2.4 Diffused Road Segment Subset and Road Point Transformation
    5.2.5 Snake Implementation and Transformation of Extracted Road
  5.3 Evaluation Method
    5.3.1 Goodness of Fit
    5.3.2 F-Test

6 RESULT AND ANALYSIS
  6.1 Results
  6.2 Analysis of Result on Test Images

7 CONCLUSION AND FUTURE WORK
  7.1 Conclusion
  7.2 Future Work

APPENDIX

A MATLAB CODE FOR ROAD FEATURE EXTRACTION
B PROFILE MATCHING AND KALMAN FILTER FOR ROAD EXTRACTION

LIST OF REFERENCES
BIOGRAPHICAL SKETCH

LIST OF TABLES

2-1 Image pixel subset
2-2 Convolution kernel
2-3 Methods of extraction
2-4 Module of extraction
4-1 Proposals
4-2 Stage 1 computation
4-3 Proposal revenue combination
4-4 Stage 2 computation
5-1 Stages of development
6-1 Summary of evaluation for extracted road features

LIST OF FIGURES

2-1 Road characteristics
2-2 Gaussian kernel
2-3 Edge detection
2-4 Sobel edge detector
2-5 Hough transform
2-6 Path-following approach
2-7 Road seed selection
2-8 Width estimation
2-9 Cost estimation
2-10 Road traversal at intersection
2-11 Global road-feature extraction
2-12 Salient road
2-13 Nonsalient road
2-14 Salient road-feature extraction
2-15 Nonsalient road-feature extraction
2-16 Road linking
2-17 Network completion hypothesis
2-18 Segment insertion
2-19 Extracted road segments
3-1 Anisotropic diffusion using Perona-Malik algorithm
3-2 Isotropic diffusion using Gaussian
3-3 Nonlinear curve
3-4 Square lattice example
4-1 Snaxel and snakes
4-2 Scale space representation of Snake
4-3 Internal energy effect
4-4 Spring force representation
4-5 Dynamic snake movement
5-1 Input image for Hough transform
5-2 Extracted road using Hough transform
5-3 Input image for gradient snake extraction
5-4 Road extracted using gradient snakes
5-5 Road extracted using Gaussian and dynamic Snakes
5-6 Perona-Malik algorithm and dynamic Snakes
5-7 Process of road-feature extraction
5-8 Selection of road segment
5-9 Perona-Malik algorithm vs Gaussian
5-10 Interpolated road points
5-11 Road segment subset and its transformed road point
5-12 Extracted road using Perona-Malik and dynamic snake algorithm
5-13 Desired and extracted road edges
6-1 Road extracted using Gaussian and Perona-Malik with dynamic Snakes
6-2 Road extracted on test images

Abstract of Thesis Presented to the Graduate School of the University of Florida in Partial Fulfillment of the Requirements for the Degree of Master of Science

RURAL ROAD FEATURE EXTRACTION FROM AERIAL IMAGES USING ANISOTROPIC DIFFUSION AND DYNAMIC SNAKES

By

Vijayaraghavan Sivaraman

December 2004

Chair: Bon A. Dewitt
Major Department: Civil and Coastal Engineering

The advent of information technology led to the implementation of various engineering applications. The Geographic Information System (GIS) is one such application that is being used on a large scale in the field of civil engineering. A GIS is used in the tracking and maintenance of roads: graphical representations of roads, including attribute information, are stored in a GIS to track and maintain them. Graphical representation of road features is obtained through a process of digitization. Research groups over the past couple of decades have been working toward developing methods of extraction that automate the process of digitization. Our study reviewed methods of extraction developed by various research groups, and further developed a method of extraction using a combination of image-processing techniques, proceeding through four stages to extract road features from a rural image. In general, a method of extraction is composed of three steps: pre-processing, edge detection, and feature extraction.

The method of extraction developed in Stage 1 was implemented using Gaussian blurring, the Sobel edge detector, and the Hough transform. Results obtained using this method were not as desired, because roads were extracted as straight lines while they existed as curvilinear features. Hence, this method was modified in Stage 2 by implementing Snakes, using the gradient-descent algorithm. This method yielded better results than Stage 1 by extracting curved as well as straight roads. The resultant extracted road had a jagged appearance due to the Snake's movement to the steepest gradient within the image. This problem was overcome by using dynamic programming in Stage 3, to restrict the movement of the Snake to its neighborhood. Results thus obtained in Stage 3 were smooth and continuous. However, these results deviated from desired road edges at locations with noise. The problem was due to the implementation of Gaussian blurring at the pre-processing stage, because of its isotropic nature. This problem was overcome by implementing the Perona-Malik algorithm, an anisotropic diffusion technique, instead of Gaussian blurring, leading to better results compared to Stage 3.

Results obtained in Stage 4 were better than those of Stage 3 at locations with noise. Overall, Stage 4 performed better than Stage 3 on visual inspection. To support this conclusion, results from Stage 3 and Stage 4 were evaluated over a set of 10 rural road segment images, based on their goodness of fit and a hypothesis test implemented using the F-test. Based on goodness of fit and the hypothesis test, results were better for roads extracted in Stage 4 than in Stage 3.

CHAPTER 1
INTRODUCTION

Road networks are essential modes of transportation, and provide a backbone for human civilization. Hence, it is vital to maintain and restore roads to keep our transportation network connected. To do this, we must track their existence in both the temporal and spatial domains. The configuration of a road network depends on human needs: a road may be constructed or abandoned depending on the needs of the neighboring community that the road serves. Spatial representation of roads (along with their attributes, or aspatial information) is managed well in a Geographic Information System (GIS). A GIS is a graphical representation of geographic features, with attribute information related or linked to these features. A GIS is used as an analysis and management tool, allowing the detection of changes over time and space. Spatial representation of geographic features, such as linear structures (e.g., roads) and point features (e.g., power poles or manholes), in a GIS is usually maintained in a vector format, as opposed to a raster. Digitization of desired features in a raster image leads to their vector representation. Digitization can be either a manual or an automated process. However, manual digitization of features is a time-consuming and labor-intensive process.

1.1 Road-Feature Extraction Objectives and Constraints

Ongoing research has led to a gamut of methods that automate the digitization process. Digitization methods are either automatic or semi-automatic in nature. In the literature, an automatic method implies a fully automatic process. Theoretically, a fully automatic approach requires no human intervention, but this is not practical. Our study

considered a method automatic if no human intervention was needed for road-feature extraction at the initial or processing stage. In a semi-automatic method, human intervention is required at the initial stage and at times during the processing stage. In both methods, human intervention is needed at the post-processing stage. Post-processing intervention is essential in both methods, to extract undetected but desired features from the raster image, and to fix incorrectly extracted features. An automatic method scores over a semi-automatic method due to its ability to automate the operations of the initiation and processing stages. Road-feature extraction from a raster image is a nontrivial and image-specific process; hence, it is difficult to have one general method to extract roads from any given raster image.

According to McKeown (1996), roads extracted from one raster image need not be extracted in the same way from another raster image, as there can be a drastic change in the values of important parameters based on nature's state, instrument variation, and photographic orientation. The existence of other features, both cultural (e.g., buildings) and natural (e.g., trees), and their shadows, can occlude road features, thus complicating the extraction process. This ancillary information provides a context for many of the approaches developed (Section 2.3.2). Thus, it is necessary to evaluate the extent of inclusion of other information needed to identify a road. Some extraction cases need minimal ancillary information; some need a great deal. These limitations point to a need to develop a method that evaluates multiple criteria in detecting and extracting roads from images.

Our study extracted roads solely based on the road characteristics stored in an implicit manner in a raster image. Parameters used for extraction are the road's shape (geometric

property) and gray-level intensity (radiometric property). These purely image-based characteristics are affected by external sources, as discussed earlier. No contextual information was used; the method works solely on image characteristics. The method is semi-automatic, with manual selection of the start and end of road segments in the input image. Future work is needed to automate the initiation process, using Kalman filter and profile-matching processes (Appendix B) to automate road selection.

1.2 Feature Extraction from a Geomatics Perspective

Feature extraction spans many applications, ranging from the field of medical imaging to transportation and beyond. In Geomatics and Civil Engineering, the need for feature extraction is project-oriented. For example, extracting features from an aerial image is dependent on project needs; the goal may vary from detecting canopies of trees to detecting manholes. The ability to classify and differentiate the desired features in an aerial image is a critical step toward automating the extraction process. Difficulties faced in the implementation of extraction methods are due to the complexity of the varied information stored in an aerial image. A good extraction technique must be capable of accurately determining the locations of the necessary features in the image. Detection of a feature object, and its extraction from an image, depends on its geometric, topologic, and radiometric characteristics (Section 2.2).

CHAPTER 2
BACKGROUND

Road-feature extraction from aerial images has been studied over the past two decades, and numerous methods have been developed to extract road features from an aerial image. Road-feature extraction from an aerial image depends on the characteristics of roads, and on their variations due to external factors (man-made and natural objects). A method of extraction is broadly classified into three steps: pre-processing, edge detection, and feature extraction (initialized by a feature-identification step). The efficiency of a given method depends on image resolution and the input road characteristics (Section 2.1), and also on the algorithms used (developed to extract the desired information, using a combination of appropriate image-processing techniques). The task is to extract identified road features that are explicit in nature and visually identifiable to a human, from implicit information stored in the form of a matrix of values representing either gray levels or color information in a raster image.

Digital raster images are portrayals of scenes, with imperfect renditions of objects (Wolf and Dewitt, 2000). Imperfections in an image result from the imaging system, signal noise, atmospheric scatter, and shadows. Thus, the task of identifying and extracting the desired information or features from a raster image is based on criteria developed to determine a particular feature (based on its characteristics within any raster image), while ignoring the presence of other features and imperfections in the image (Section 2.2).

Methods of extraction developed in past research are broadly classified into semi-automatic and automatic methods. Automatic methods of extraction are more complex than semi-automatic methods, and require ancillary information (Section 1.1), as compared to semi-automatic methods that extract roads based on information from the input image alone. As part of a literature survey, Section 2.3 explains in detail a semi-automatic method of extraction developed by Shukla et al. (2002), and an automatic method of extraction developed by Baumgartner et al. (1999), chosen from the various methods developed in this field of research.

2.1 Road Characteristics

An aerial image is usually composed of numerous features besides roads, both man-made (e.g., buildings) and natural (e.g., forests, vegetation). Roads in an aerial image can be represented based on the following characteristics: radiometric, geometric, topologic, functional, and contextual, as explained in detail later in this section. Factors such as intensity of light, weather, and orientation of the camera can affect the representation of road features in an image based on the afore-mentioned characteristics. This in turn affects the road extraction process. Geometric and radiometric properties of a road are usually used as initial input characteristics in determining road edge features. Both cultural and natural features can also be used as contextual information to extract roads, along with external data apart from the information in the image (geometric and radiometric characteristics). Contextual information, and information from external sources, can be used to develop topologic, functional, and contextual characteristics. The automatic method of extraction implemented

by Baumgartner et al. (1999) uses these characteristics, as explained in detail in Section 2.3.2.

Human perceptual ways of recognizing a road come from looking at the geometric, radiometric, topological, functional, and contextual characteristics of an image. For example, in Figure 2-1, a human will first recognize a road based on its geometric characteristics, considering a road to be a long, elongated feature with constant width and uniform radiometric variance along its length. As shown in Figure 2-1, Road 1 and Road 2 have different overall pixel intensities (a radiometric property) and widths (a geometric property); however, both exist as long continuous features.

Thus, it is up to the discretion of the user to select the appropriate roads to be extracted at the feature-identification step. If the feature-identification step is automated, the program needs to be trained to select roads based on radiometric variance, which varies depending on the functional characteristics of a road, explained later in this section. As an example, in Figure 2-1, Road 1 and Road 2 have different functional properties and different radiometric representations. In the case that a human is unable to locate a road segment due to occlusion by a tree (Figure 2-1) or a car, a human would use contextual information or topological characteristics: the existence of trees or buildings/houses in the vicinity is used as contextual information, whereas topologic properties of the roads are used to determine the missing segment of the road network. Thus, to automate the process of determining the presence of a road, there is a need to develop a technique for extracting roads using the cues that humans would use, giving the system the ability to determine and extract the roads in an aerial image based on the road characteristics described.

Figure 2-1. Road characteristics. This picture illustrates the various characteristics of roads explained in this section.

The road characteristics explained in this section are derived from human behavior in road identification, based on the above explanation of the human interpretation of roads in an image. The following discussion explains each of these road characteristics in detail. Road characteristics are classified into five groups (Vosselman and de Knecht, 1995). Here follows a brief description of each of these characteristics, a couple of which (geometric and radiometric) are used in the semi-automatic method explained in Section 2.3.1, and all of which are used in the automatic method in Section 2.3.2, to identify and extract road features from an aerial image.

2.1.1 Geometric

Roads are elongated, and in high-resolution aerial images they run through the image as long parallel edges with a constant width and curvature. Constraints on detection based purely on such characteristics come from the fact that there are other features, like rivers, that may be misclassified as roads if an automated procedure to identify road segments is implemented in an extraction method. This leads to a requirement for the use of additional characteristics when extracting roads. In addition, roads within an image may have different widths, based on their functional classification. In Figure 2-1, Road 1 and Road 2 have different widths because of their functional characteristics: they are a local road and a highway, respectively (this issue is discussed in Section 2.1.4). Thus, this characteristic alone cannot be used as a parameter in the automatic extraction of a road from an aerial image.

2.1.2 Radiometric

A road surface is homogenous and often has a good level of contrast with adjacent areas. Thus, the radiometric properties, or overall intensity values, of a road segment remain nearly uniform along the road in an image. A road's radiometric properties, as a parameter in road characterization, identify a road segment as part of a road based on its overall intensity value when compared to a model, or to other road segments forming the road network in the image. This works well in most cases, with the exception of areas where buildings or trees occlude the road, or where the presence of cars affects the feature detection process. It also varies with the weather and the orientation of the camera at the time of exposure. For example, in Figure 2-1, A illustrates the complete occlusion of a road segment and B illustrates the partial occlusion of a road segment due to the trees near the occluded road segment.

A method of extraction based on radiometric properties may not identify segments A and B (Figure 2-1), due to its inability to match the occluded road segment with the other road segments in the image based purely on radiometric properties, since the radiometric characteristics of the occluded road segments would be very different from those of the unoccluded road segments in the image. In addition, if the process of identification is automated and the program is not trained to deal with different pavement types, detection would be affected, since an asphalt road surface may have different road characteristics from a tar road. Hence, a group of characteristics used together would better identify a road segment, as compared to identification based on individual characteristics.

2.1.3 Topologic

Topologic characteristics of roads are based on the ability of roads to form road networks with intersections/junctions, and terminations at points of interest (e.g., apartments, buildings, agricultural lands). Roads generally tend to connect regions or centers of activity in an aerial image; they may begin at a building (e.g., a house in Figure 2-1) and terminate at another center of activity, or continue to the end of an image. Roads tend to intersect and to connect to the other roads in an image. Topological information, as explained above, can be used to identify and extract missing segments of roads. As an example, if we have to extract the roads from the image in Figure 2-1, the radiometric and geometric characteristics of the road would help to extract all the road segments in the image, though they would not suffice for certain segments, due to shadow occlusion (A) or the presence of cars and buildings (B) in the vicinity (Figure 2-1). These missing or occluded road segments could be linked to the extracted segments based on the topological information of the neighboring segments. This characteristic is used in the

automatic method of extraction developed by Baumgartner et al. (1999), as explained in detail later in this chapter (Section 2.3.2).

2.1.4 Functional

Roads, as discussed in the previous section, connect regions of interest, such as residences, industries, and agricultural lands. Therefore, roads may be classified based on their function as being a local road or a highway. This functional information is relevant in determining the width of the road and the characteristics of the pavement, which would in turn be used to set the radiometric properties, allowing the road to be identified based on its functional classification. In Figure 2-1, Road 1 and Road 2 have different widths (geometric) and overall intensity values (radiometric), since they belong to different functional classes. However, to support the extraction process by using this characteristic, there would need to be an external source of information characterizing the road, besides the information stored in the image.

2.1.5 Contextual

With this characteristic we may use additional information, such as shadows, occlusions due to buildings, trees along the side of the road, and GIS maps, to reference roads using historical and contextual information. This information is generated using a combination of information deduced from the image and from external sources, such as a GIS database. In Figure 2-1, the occluded road segment could be extracted by combining information about the extent to which the segment is occluded in the image with the information stored in the GIS database concerning the corresponding road's history.

Of the various characteristics of roads discussed in this section, only the geometric and radiometric properties are inherent and exist as implicit information in any image, whereas functional, topological, and contextual information can be used both as

information from the image and from an external data source, to develop an intelligent approach to the identification and extraction of occluded and missing road segments in the image. The semi-automatic method explained in Section 2.3.1 illustrates the use of the geometric and radiometric properties of a road as input information for the road-feature extraction technique implemented by Shukla et al. (2002). Furthermore, in Section 2.3.2, the automatic method implemented by Baumgartner et al. (1999) illustrates an extraction process where the initial extraction is carried out using the geometric and radiometric characteristics of the road in an image, supported by extraction using topologic, functional, and contextual characteristics.

Furthermore, this chapter reviews various image-processing techniques that could be implemented to identify and extract road features from an aerial image. In brief, an image-processing system is composed of three levels of image-processing techniques. These techniques are used in combination to develop methods for road-feature extraction from an aerial image, using the characteristics of the features in an image to identify and extract road features. Section 2.2 introduces the various levels of an image-processing system, with an example to illustrate each level.

2.2 Image-Processing Techniques

According to the classical definition of a three-level image-processing system (Ballard and Brown, 1982; Turton, 1997), image processing is classified into low-level, medium-level, and high-level processes. Low-level processes operate with characteristics of pixels, like color, texture, and gradient. Medium-level processes are symbolic representations of sets of geometric features, such as points, lines, and regions. In high-level processes, semantic knowledge and thematic information is used for feature

extraction. Sections 2.2.1 through 2.2.3 explain the various levels of image processing, with an illustration from each level explaining a technique and its implementation.

2.2.1 Low-Level Processing

This step is concerned with cleaning and minimizing noise (i.e., pixels with an intensity value different from the average intensity value of the relevant region within an image) in the image, before further operations can be carried out to extract the desired information from the image. One of the simplest low-level processes is to blur an image by averaging the values of the pixels forming the image, thereby minimizing noise; here a mean, or average, value is calculated for a group of pixel values forming an image, reducing the variation in intensity between the pixels in the image.

Table 2-1. Image pixel subset

    2 3 3 3 2
    4 2 3 4 4
    5 2 3 4 5
    3 6 6 4 4

The image pixel subset represents an image; the 3x3 block in rows 1 through 3, columns 2 through 4 (highlighted in the original figure) contains the pixels considered for convolution with the kernel in Table 2-2, as explained in this section.

Table 2-2. Convolution kernel

    1/9 1/9 1/9
    1/9 1/9 1/9
    1/9 1/9 1/9

The convolution kernel is convolved across the image whose pixel values are represented in Table 2-1; the convolution of Table 2-2 with the highlighted pixel subset of Table 2-1 is explained in this section.

For example, given an image whose subset pixel values are as in Table 2-1, an average is calculated using a convolution kernel (Table 2-2). This kernel calculates an average intensity value from the intensity values of the pixels masked by the kernel. The average intensity value calculated by the kernel is then assigned to the pixel coinciding with the central cell of the kernel. The kernel, while moving across the image, calculates and assigns an intensity value for each pixel in a similar fashion. In Table 2-1, the highlighted portion of the image pixel subset is masked by the kernel in Table 2-2; the sum of these cells is 27, and as the kernel is a 3x3 window composed of 9 pixel masks, the average of the 9 pixels amounts to 3. Thus, the pixel coinciding with the central cell of the kernel is assigned a value of 3. This process assigns the average pixel value to the pixel coinciding with the central cell of the convolution kernel, while moving across the image.

Other low-level image-processing techniques include convolution using various forms of weighting functions, such as Gaussian and Laplacian. Blurring using a Gaussian as the weighting function involves generating a Gaussian convolution mask that is then convolved with the image to be blurred, in a fashion similar to the averaging by kernel convolution discussed earlier in this section using Table 2-1 and Table 2-2. During Gaussian blurring, the generated mask, when convolved with the input image, gives a weighted average value for each pixel relative to the values in its neighborhood, with more weight assigned to values toward the center pixel. The resultant blurred image is thus different from averaging or mean blurring, where the average is a uniformly weighted average.
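As an illustration, a minimal MATLAB sketch of the mean filtering described above follows. This is only an illustrative sketch, not the thesis's Appendix A code; the input file name is hypothetical, and a single-band (grayscale) image is assumed.

    % 3x3 mean (box) filter, matching the kernel of Table 2-2.
    I = double(imread('road.tif'));   % hypothetical grayscale input image
    K = ones(3) / 9;                  % every kernel weight is 1/9
    B = conv2(I, K, 'same');          % each output pixel = average of its 3x3 block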

    G(x, y) = (1 / (2*pi*sigma^2)) * exp(-(x^2 + y^2) / (2*sigma^2))    (2-1)

The Gaussian function is calculated using Equation 2-1, resulting in a distribution as shown in Figure 2-2. Here, x and y are the x and y coordinates of the convolution kernel cells, and sigma is the standard deviation. A convolution kernel is calculated based on its size, with the mean at the center of the kernel, and the weights assigned to the kernel cells based on the standard deviation. The convolution kernel for a Gaussian distribution is usually truncated at three standard deviations from its center, because beyond 3*sigma the Gaussian distribution is close to zero. Using this kernel, the convolution is performed along the x and y directions, to blur the whole image.

Figure 2-2. Gaussian kernel. The Gaussian weighting distribution kernel is analogous to the kernel in Table 2-2, with higher weights assigned to pixels close to the central pixel.

The conventional Gaussian blurring process is isotropic in nature, as it blurs the image in a similar fashion in all directions. This process does not respect the boundaries between regions in an image, and so it affects edges, moving them from their original position.

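Before turning to the anisotropic alternative, here is a minimal sketch of the Gaussian kernel construction just described, assuming sigma = 1 purely for illustration; normalizing the kernel absorbs the 1/(2*pi*sigma^2) constant of Equation 2-1.

    % Build a Gaussian kernel from Equation 2-1 and blur by convolution.
    sigma = 1;                                  % illustrative standard deviation
    r = ceil(3 * sigma);                        % truncate the kernel at 3*sigma
    [x, y] = meshgrid(-r:r, -r:r);              % kernel cell coordinates
    G = exp(-(x.^2 + y.^2) / (2 * sigma^2));    % Equation 2-1, up to a constant
    G = G / sum(G(:));                          % normalize weights to sum to 1
    B = conv2(I, G, 'same');                    % isotropic Gaussian blurring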
Hence, in our study the Perona-Malik algorithm (Malik and Perona, 1990), an anisotropic diffusion technique, is implemented in the developed method of extraction to blur the image, instead of the conventional blurring process using a Gaussian. In the Perona-Malik algorithm, images are blurred within regions while the edges are kept intact and enhanced, preserving the boundaries between regions. Chapter 3 introduces the principle of isotropic and anisotropic diffusion in Section 3.1, and its implementation in the Perona-Malik algorithm in Section 3.2.

2.2.2 Medium-Level Processing

Medium-level processing is a step toward image classification; some image-processing techniques at this level classify the image into regions by themselves. One of the simplest forms of image classification is thresholding. When thresholding an image, the pixels within the image are classified based on a threshold intensity value. For example, given a grayscale image with intensity values ranging from 0 to 255, a binary (two-class) image is obtained by assigning an intensity of 0 to all pixels below a set threshold value, and 1 to all those above it.

Other techniques involve detecting the edges within an image, which can be further used to visually identify boundaries between regions and to support high-level feature extraction processes. This level of processing is mostly used to determine edges, or boundaries between regions, in an image. What follows is an explanation of the principle of edge detection in an image, and the workings of the Sobel edge detector, a medium-level image-processing technique.
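Before moving on to edge detection, the thresholding operation described above can be sketched in two lines of MATLAB; the threshold value of 128 is an arbitrary illustration, not a value used in the thesis.

    T = 128;      % hypothetical threshold for an 8-bit grayscale image I
    B = I > T;    % binary image: 1 where intensity exceeds T, 0 elsewhere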

Figure 2-3. Edge detection. A) Edge image with bright regions in the center and dark regions on the boundary of the image. B) Edge image with dark regions in the center and bright regions along the boundary of the image. C) Horizontal line profile of edge image A. D) Horizontal line profile of edge image B. E) First derivative of edge image A. F) First derivative of edge image B. G) Second derivative of edge image A. H) Second derivative of edge image B.

An edge in an image represents a significant change in intensity between pixels in the image. Edges detected in an image are usually used as information concerning the boundaries between regions in the image, or to allow a shape description of an object in the image. The concept of edge detection is explained further using the illustration in Figure 2-3. An edge exists as a ramp within an image, as in Figure 2-3, where two edges exist in each of A and B. In A and B, the edges delineate a dark region and a bright region, with a bright region in the center of A, and a dark region in the center of B.

    |grad f(x, y)| = sqrt( (df(x, y)/dx)^2 + (df(x, y)/dy)^2 )    (2-2)

    theta(x, y) = arctan( (df(x, y)/dy) / (df(x, y)/dx) )    (2-3)

A and B in Figure 2-3 are considered to be continuous along x and y; f(x, y) then represents the image. Derivatives along the x direction (df/dx) and the y direction (df/dy), also known as directional derivatives, are calculated from the input image. Edges within the image are determined based on Equation 2-2 and Equation 2-3, which are calculated using the directional derivatives: Equation 2-2 gives the magnitude of the gradient and Equation 2-3 gives the orientation of the gradient.

Simple edge detectors, developed at the medium level, detect edges based on the gradient information obtained for an input image using Equations 2-2 and 2-3. In Figure 2-3, C and D show the profiles of pixel intensity across A and B respectively, while E and F give a graphical representation of the gradient calculated using Equations 2-2 and 2-3. The gradient graphs in E and F represent the change in intensity of pixels across the image. The edges within an image are detected by determining the local maxima of the magnitude of the image gradient (Equation 2-2). The peaks in E and F represent the locations of the edges in images A and B of Figure 2-3. Detecting edges using the magnitude of the gradient (first derivative) gives a region rather than a specific edge location. Edges can be better detected using the second derivative, or rate of change of the gradient.

In Figure 2-3, G and H give a graphical representation of the rate of change of the gradient (second derivative). Here, the second derivative becomes zero where the first derivative reaches a maximum. Hence, edges can be easily identified by locating the points at which the second derivatives of the image become zero, instead of identifying local maxima within the image using the first derivative.
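As a small illustration, Equations 2-2 and 2-3 can be computed directly using MATLAB's built-in finite-difference gradient; this is a generic sketch rather than any routine from the thesis.

    % Directional derivatives by central differences, then the gradient
    % magnitude (Equation 2-2) and orientation (Equation 2-3).
    [fx, fy] = gradient(I);        % df/dx and df/dy of the image I
    mag   = sqrt(fx.^2 + fy.^2);   % Equation 2-2
    theta = atan2(fy, fx);         % Equation 2-3
    % Edge candidates lie at local maxima of mag, or equivalently at
    % zero crossings of the second derivative, as discussed above.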

This section further discusses the workings of the Sobel edge detector, which performs gradient measurement and locates regions with high gradients that correspond to edges within an image. The Sobel edge detector is a convolution kernel commonly used in image processing to determine regions having high spatial gradients, that is, regions in the image where there is a significant change in gradient from the neighboring pixels. Generally, these regions lie along boundaries within an image, or exist as noise within a homogenous region. A Sobel edge detector usually consists of two 3x3 kernels, as shown in Figure 2-4. In Figure 2-4, a pseudo-convolution kernel representing the input image is convolved along the x and y directions to determine the edges in the image using Gx and Gy. Here the convolution masks (Gx and Gy), when moved through an image, compute the gradient along the x and y directions and respond maximally to edges along x and y.

Figure 2-4. Sobel edge detector. A) Convolution kernel along x to compute gradients along x, represented Gx. B) Convolution kernel along y to compute gradients along y, represented Gy. C) Pseudo-convolution kernel on which the gradients are determined from the pixel values.

In Figure 2-4, the gradients along the x and y directions are computed by convolving the Sobel convolution kernels with the pseudo-convolution kernel, to get the gradient in the x and y directions, using Equation 2-4 and Equation 2-5.

Section 2.2.3 introduces High-level processing techniques in an image processing system, which identify and extract desired objects from an image based on information obtained through Low- and Medium-level image processing techniques.

2.2.3 High-Level Processing

In this step, information gathered from the Low- and Medium-level image processing techniques is used as input to identify and extract desired objects or regions from an image. The simplest form of High-level processing is to label the desired regions with one value, while leaving the rest of the image at zero, by applying a threshold to the original image. More complex image processing techniques at this level involve detecting and extracting shapes within an image. Prominent techniques at this level include the Hough transform and the Snakes (deformable contour model) method. During various stages of the development of a method of road extraction in our study, both of these techniques were implemented.

The Hough transform is an image processing technique used to extract or detect features of a particular shape in an image. It is applied to features that can be represented in parametric form, and it detects regular geometric features such as lines, ellipses, circles, and parabolas. The Hough transform works best with large images where the effects of noise and undesired features are minimal. However, it is difficult to implement for the detection of higher-order curves, those with order greater than 2. An explanation of how the Hough transform extracts linear features from an image follows.

Consider an edge-detected image with a set of point locations, or edge pixels, that represent a boundary in the image, as shown in Figure 2-5. In Figure 2-5, a number of line segments can connect combinations of points 1, 2, and 3 to represent a linear edge. The parametric representation of a line that is central to the Hough transform implementation is

\rho = x\cos\theta + y\sin\theta   (2-5)

Each of the possible segments connecting the set of points can be represented in the form of Equation 2-5 by varying the values of \rho and \theta, which uniquely identify a single line. \theta is the orientation of the line with respect to the origin, and \rho is the length of the normal to the line from the origin, as in A (Figure 2-5). The objective is to pick the best-fit line, the one passing through the maximum number of edge pixels (here, all 3), as shown in Figure 2-5. In A (Figure 2-5), there are three edge pixels, and each of these points can have many lines passing through it, as shown with the red and bold black lines. The objective of the Hough transform is to pick the line that passes through the maximum number of edge pixels: the black line in A (Figure 2-5).

Figure 2-5. Hough transform. A) Cartesian, or spatial, representation of the points and the possible lines that could pass through each of points 1, 2, and 3. B) Representation of points 1, 2, and 3 as curves passing through the possible lines, represented in parametric form and defined by cells.

As shown in A (Figure 2-5), numerous lines pass through each of the points. The lines passing through an edge pixel can be uniquely identified by the values of \rho and \theta in the parametric form of Equation 2-5. Each (\rho, \theta) pair uniquely identifies a cell, corresponding to one line, in the Hough accumulator space in B (Figure 2-5). The splines in B (Figure 2-5) are the representations of the edge pixels in Hough space; the three curves represent the three edge pixels in A (Figure 2-5). As the splines representing the edge pixels pass through the accumulator cells in Hough space, they increment the count of each cell, tallying the number of edge pixels through which the corresponding line passes, where each line is uniquely identified by its \rho and \theta values. Thus, the best-fit line, the one passing through the maximum number of edge pixels in the image, corresponds to the accumulator cell with the highest count, and the line for that cell is picked to represent an edge in the original image. In B (Figure 2-5), the cell in which all three splines intersect has the highest count of edge pixels; it is therefore considered the best-fit line, drawn as the black line representing an edge in A (Figure 2-5).
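A minimal sketch of the accumulator voting just described, assuming the edge pixels are given as (x, y) coordinates; the bin counts and helper name are illustrative choices.

    import numpy as np

    def hough_best_line(edge_pixels, image_shape, n_theta=180, n_rho=400):
        # Accumulator over the (rho, theta) cells of Equation 2-5:
        # rho = x*cos(theta) + y*sin(theta).
        rows, cols = image_shape
        rho_max = np.hypot(rows, cols)
        thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
        rho_bins = np.linspace(-rho_max, rho_max, n_rho)
        acc = np.zeros((n_rho, n_theta), dtype=int)
        for x, y in edge_pixels:
            # Each edge pixel votes along its sinusoidal curve in Hough space.
            rho = x * np.cos(thetas) + y * np.sin(thetas)
            idx = np.digitize(rho, rho_bins) - 1
            acc[idx, np.arange(n_theta)] += 1
        # The best-fit line is the accumulator cell with the highest count.
        i, j = np.unravel_index(acc.argmax(), acc.shape)
        return rho_bins[i], thetas[j]

    # e.g. hough_best_line([(10, 10), (20, 20), (30, 30)], (64, 64))
    # returns the (rho, theta) of the line through the three collinear points.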

During the initial stage of this research, an implementation of the Hough transform to extract road features was attempted, but it was set aside, as it extracted road features from the image as straight lines, whereas roads typically exist as splines, or curvilinear features, in an image. This led to the implementation of Snakes (the active contour model) to extract roads, as Snakes represent road features better than Hough lines.

Section 2.3 introduces various methods of road feature extraction developed over the past couple of decades, and discusses in detail a Semi-automatic and an Automatic approach to road feature extraction.

2.3 Approaches to Road Feature Extraction

Numerous methods have been developed to extract road features from an aerial image. Table 2-3 lists a few of the road extraction methods reviewed here as part of the literature survey conducted before work began on the development of a method of extraction in our study. Methods of extraction developed by researchers combine several image processing techniques, and the techniques implemented may be common to one or more of the listed methods. Road extraction methods are broadly classified into Semi-automatic and Automatic approaches, as discussed in Section 1.1. The methods listed in Table 2-3 include a group of Semi-automatic approaches and an Automatic approach developed by Baumgartner et al. (1999). According to McKeown (1996), one of the early researchers involved in developing road feature extraction methods, every image considered for the extraction of a desired feature is unique.

Hence, it is difficult to have a general method for extracting road features from any image.

Table 2-3. Methods of extraction
Method of Extraction | Research Group
Cooperative road tracking using a road follower and correlation tracker | McKeown and Delinger (1988)
Road feature extraction using a camera model and Snakes | Gruen and Li (1995)
Road feature tracing by profile matching and Kalman filter | Vosselman and de Knecht (1995)
Multi-scale and Snakes for automatic road extraction | Baumgartner et al. (1999)
Detection of roads from satellite images using optimal search and Hough transform | Rianto et al. (2002)
Semi-automatic road extraction algorithm for high-resolution images, using a path-following approach | Shukla et al. (2002)

Methods for road feature extraction have been pursued for the past couple of decades. The methods developed in the early days of this field of research were carried out using a manual initialization of the process; these are also known as Semi-automatic extraction approaches. A cooperative method of extraction (McKeown and Delinger, 1988), one of the earliest road feature extraction methods, was developed using a combination of image processing techniques: it extracted roads by edge tracking and texture correlation matching from the input image. These processing techniques (edge tracking and correlation matching) supported each other in detecting road features in case either of them failed during the extraction process; hence it is called a cooperative method of extraction. Later, in 1995, a Semi-automatic approach for road extraction was developed using a digital terrain model, a type of camera model, along with dynamic programming and Snakes (Gruen and Li, 1995). This approach extracted road edges by concatenating a set of points that represented road locations.

Another Semi-automatic approach, developed around the same time, extracted road features using the Kalman filter and profile matching (Vosselman and de Knecht, 1995). During the evolution of the various methods of road feature extraction, a research group led by Baumgartner et al. (1999) developed an Automatic approach. Most of the methods developed until that date had similar extraction steps, but this method tried and tested a different combination of image processing techniques working in cooperation with each other in modules. Our study discusses further a Semi-automatic method of extraction, the Semi-automatic road extraction algorithm for high-resolution images using the path-following approach (Shukla et al. 2002), and an Automatic method of extraction, the Multi-scale and Snakes road feature extraction method developed by Baumgartner et al. (1999).

Furthermore, a method of extraction is developed in our study that uses a combination of image processing techniques, evolved over stages that take cues from past research. An initial attempt was made to extract roads using the Hough transform, based on a concept from the method of extraction developed by Rianto et al. (2002), although the results obtained were not as desired. Hence many combinations were tested; the final method of extraction implemented in our study, which uses the Perona-Malik algorithm (Malik and Perona, 1990), based on the anisotropic diffusion principle, together with Snakes, was developed at the final stage, stage 4 (Section 5.1). As part of our study, an attempt was also made to automate the initialization, or road segment identification, stage prior to extraction (Section 5.2.1), using the Kalman filter and profile matching (Vosselman and de Knecht, 1995). Appendix B gives a detailed explanation of the principle and working of the Kalman filter, along with its implementation for detecting road segments using profile matching.

Sections 2.3.1 and 2.3.2 explain in detail the methods of extraction developed by Shukla et al. (2002) and Baumgartner et al. (1999), under the Semi-automatic and Automatic approaches to road feature extraction respectively.

Before discussing and evaluating the approaches that have been developed for road feature extraction from an aerial image, some general observations about roads are needed. Roads are generally uniform in width in high-resolution images, and appear as lines in low-resolution images, depending on the resolution of the image and the functional classification of the road. In the Automatic approach discussed below, road features are extracted at various resolutions, using contextual information to complete the extraction of roads from an input aerial image. In both approaches (Automatic and Semi-automatic), there is a need for human intervention at some point during the extraction process. A Semi-automatic approach requires initial human intervention, and at times requires intervention during the extraction process, whereas an Automatic approach needs human intervention only at the post-processing stage. In the Semi-automatic approach, road detection is initialized manually with points representing roads, also called seed points. The roads are tracked using these seed points as an initial estimate of the road feature identifiers. In the case of a fully Automatic approach, the roads are extracted without any human intervention. Post-processing is carried out for misidentified and unidentified roads in both approaches.

2.3.1 Road Extraction Algorithm Using a Path-Following Approach

A Semi-automatic method is usually implemented using one of the techniques below.

- Post initialization, the road is mapped using a road-tracing algorithm.
- A sparse set of points is distributed along a road segment and then concatenated to extract the desired road segment.

McKeown and Delinger (1988) developed a method to track and trace roads in an aerial image using an edge detector and texture correlation information (Table 2-3), whereas Gruen and Li (1995) implemented a road-tracing technique using a sparse set of points spaced along the road, mapped using dynamic programming. This section explains in detail the Semi-automatic method of extraction using the path-following approach developed by Shukla et al. (2002).

In the method developed using the path-following approach, a road extraction algorithm extracts roads using the width and variance information of a road segment, obtained through the pre-processing and edge detection steps, similar to McKeown and Delinger (1988) and Vosselman and de Knecht (1995). The process, being a Semi-automatic approach, is initialized by the selection of a minimum of two road seed points. These seed points are used to determine the center of the road programmatically from the edge-detected image. Once the desired points representing the initial road segment are obtained, its orientation and width are calculated. The orientation of the initial seed points determines the three directions along which the next road segment could exist. Of the three directions, the one having minimum cost (i.e., the minimum variance based on intensity, or radiometric, information) is taken as the next road segment. This process is carried out iteratively, as long as the cost remains within a predefined variance value. Below is a detailed, systematic explanation of this approach. Figure 2-6 gives a flow diagram of the extraction process developed using the path-following approach (Shukla et al. 2002).

Figure 2-6. Path-following approach. The flow chart gives a brief overview of the extraction process using the path-following approach, explained in detail in this section.

Pre-processing (scale-space diffusion and edge detection). The original image is diffused, or blurred, at this step (Figure 2-6) into a sequence of images at different scales. The blurring is carried out using non-linear anisotropic coherence diffusion (Weickert, 1999), as this minimizes variance within the regions of an image. Non-linear anisotropic coherence diffusion helps maintain the homogeneity of regions within an image.

Variance across sections of the road segment is then used to estimate the cost on the basis of which the road is traced. The anisotropic diffusion approach is a non-uniform blurring technique, as it blurs regions within an image based on pre-defined criteria. This differs from Gaussian blurring, which blurs in the same manner across the entire image. The image diffused using the above technique is then used to compute the radiometric variance across the pixels in the image. Edges are then detected from the diffused image using a Canny edge detector. The edge-detected image is used later in the extraction process to calculate the width of the road across road segments.

Figure 2-7. Road seed selection. The black line represents the initial seed points selected by the user.

Figure 2-8. Width estimation. The road width and direction are estimated from the initial seed points selected as in Figure 2-7.

Selection of initial seed points. As this algorithm is a Semi-automatic approach to road feature extraction, the process of detecting and extracting road segments is initialized by the manual selection of road seed points. Road seed points, as in Figure 2-7, are two points, selected by the user, on or near a road segment in an image; they form a line segment representing the road to be extracted. Figure 2-8 illustrates a road seed with ends a and b. Comparing Figure 2-7 and Figure 2-8, a-b corresponds to the end points of the black road seed in Figure 2-7.

Orientation and width estimation. In Figure 2-8, the orientation of the current seed points a-b gives the direction of the road, on the basis of which the rest of the road segments can be determined. The width of the road at the given seed points is estimated by calculating the distance from the parallel edges, g-h and e-f, to the road seed a-b, as in Figure 2-8. At this point the width of the road at the initial seed points has been estimated, along with the orientation of the road. The orientation of the road at the initial seed points suggests the three directions in which the road could propagate to form the next road segment.
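The geometry of this step can be sketched as follows; the helper names are hypothetical, and estimating the width as the sum of the two perpendicular distances (one sample point per parallel edge) is an assumption, since the exact formula is not spelled out here.

    import numpy as np

    def seed_orientation(a, b):
        # Direction of the road seed a-b, in radians from the x axis.
        return np.arctan2(b[1] - a[1], b[0] - a[0])

    def distance_to_edge(p, a, b):
        # Perpendicular distance from an edge point p to the seed line a-b:
        # 2-D cross-product magnitude divided by the seed length.
        ax, ay = a
        bx, by = b
        px, py = p
        return abs((bx - ax) * (py - ay) - (by - ay) * (px - ax)) / np.hypot(bx - ax, by - ay)

    # With one sample point on each parallel edge (g-h and e-f), the road
    # width at the seed could be estimated as the sum of the two distances:
    # width = distance_to_edge(p_gh, a, b) + distance_to_edge(p_ef, a, b)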

Figure 2-9. Cost estimation. This figure gives the possible orientations of the next road segment, based on the information about the initial road segment obtained from Figure 2-7 and Figure 2-8.

Cost estimation in three directions. As shown in Figure 2-9, there are three directions, b-c, b-d, and b-e, along which the road segment could propagate, based on the current orientation of the seed points a-b. The edges g-g' and h-h' are road edges parallel to the current road seed a-b. Thus, if a-b is the current direction of the road segment, then b-c, b-d, and b-e are the possible choices of direction for the next road segment. As per this algorithm, the minimum of the lengths in the three directions b-c, b-d, and b-e is considered to be the width of the road at the current node b, as in Figure 2-9. Furthermore, each of the three directions b-c, b-d, and b-e is assigned a weight, with the line closest in direction to the previous road segment assigned the minimum weight, b-d in Figure 2-9. After assigning weights to each direction, a cost factor is computed using Equation 2-6:

Cost(bd) = \frac{Variance(bd) \cdot Direction(bd)}{Length(bd)}   (2-6)

where

Variance(bd) = \frac{\sum \left( pixelvalue - mean(bd) \right)^{2}}{Length(bd)}   (2-7)

Once the cost has been estimated in the three directions using Equations 2-6 and 2-7, the path having the minimum cost is selected. The cost value is stored and is used to determine the road direction in the next target window. This process continues as long as the cost factor remains within the set values, forming consecutive target windows and thereby determining the minimal-cost road direction at each node. Once all the road points are obtained, the road is traced through the set of points to extract the minimum-cost path.

This approach is also called the minimum-path-following approach, as the path having the minimum cost is selected until the end of the road is reached, and the points are then connected to form the final extracted road. While tracing roads, the parameters at intersections vary drastically, as explained below.
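Under the reconstruction of Equations 2-6 and 2-7 above, the cost of one candidate direction might be computed as in this sketch; the candidate representation is hypothetical.

    import numpy as np

    def segment_cost(pixels, direction_weight, length):
        # Equation 2-7: radiometric variance of the pixels along the candidate.
        pixels = np.asarray(pixels, dtype=float)
        variance = np.sum((pixels - pixels.mean()) ** 2) / length
        # Equation 2-6: cost combines variance, direction weight, and length.
        return variance * direction_weight / length

    # The next road segment is the candidate direction (b-c, b-d, or b-e)
    # with the minimum cost, e.g.:
    # best = min(candidates, key=lambda s: segment_cost(s.pixels, s.weight, s.length))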

Figure 2-10. Road traversal at an intersection.

There are instances, such as junctions or road intersections, where the width of the road at a point on the junction suddenly exceeds the width at the previous point traversed on the road segment, and the minimum path has the same value in all directions. As seen in Figure 2-10, at the junction, point c has a greater width than the other road segment points, and the paths in all directions have an equal minimum cost. The width problem is overcome by backtracking: the width at that point is reduced by adopting the width of the predecessor point traversed by this method. The problem of the equivalence of the minimum path values is resolved by following one path and tracing the remaining paths after the whole extraction process is completed.

An issue with this method of extraction, as with any Semi-automatic approach, is its inability to extract road segments occluded by shadows and other occlusions; these must be initiated manually by the user. Section 2.3.2 illustrates the working of an Automatic approach to road feature extraction, implemented by Baumgartner et al. (1999). This method of extraction, as its name suggests, does not need any initialization or feature identification step; these functions are performed by the feature extraction method itself.

This method of extraction includes some processes that, if implemented as stand-alone processes, would work as a Semi-automatic method of extraction.

2.3.2 Multi-Scale and Snakes Road-Feature Extraction

The Automatic method of extraction developed by Baumgartner et al. (1999), explained in this section, gives an idea of the working of an Automatic method of extraction, using information from various sources to extract road features from an aerial image without any human intervention (Section 2.1).

Figure 2-11. Global road-feature extraction. This picture illustrates the two models used to extract road features in an image automatically over three modules.

Figure 2-11 illustrates the automatic method of extraction developed by Baumgartner et al. (1999) to extract road features from aerial images using information from coarse-resolution and fine-resolution images. The method of extraction is divided into two models, a road model (A) and a contextual model (B), as shown in Figure 2-11. The road model extracts the roads from fine and coarse resolutions of an input aerial image. At coarse resolution, the roads exist as splines, or long linear features, with intersections and junctions as blobs. At fine resolution, roads exist as long homogeneous regions with uniform radiometric variance. The road model extracts roads at coarse resolution by assuming that road segments exist as long, bright linear features. At fine resolution, the road model uses real-world information (A1) (e.g., road pavement markings and geometry). It also uses material information (A2), determined from the width of the road segment and its overall radiometric variance, which depend on the pavement type or material (e.g., asphalt or concrete), together with the image characteristic of whether the identified road segment is an elongated bright region. In brief, the road model introduced above extracts roads based on the road segment's geometric, radiometric, and topological characteristics (Section 2.1).

The method of extraction developed by Baumgartner et al. (1999) also includes a context model, (B) in Figure 2-11, which extracts road segments from the input image using information about other features that exist near the road segment. The context model extracts the road from an input image using a global context and a local context; these contexts support each other in the process of extraction. The global context (B) classifies an input image as urban (B1), rural (B2), or forest (B3), as in Figure 2-11.

The local context exists within the input image: for example, a tree or building near a road segment, with the road occluded by the feature or its shadow, or individual road segments existing by themselves. A tree occluding a road segment can occur whether the global context is urban, rural, or forest, whereas a building or its shadow occluding a road segment can only occur in an urban or a rural area, where buildings such as residences or industrial infrastructure may exist.

Thus, the global and local contexts within the context model work together to extract road segments. This section explains in detail the method of extraction that uses the road model and context model, with the use of an example of rural (global context) road feature extraction. Another significant point is that roads existing in an urban area may not be extractable in the same fashion as those in a rural area, since they may have different geometric and radiometric characteristics and contextual information. Thus, the local context within an input image is assigned to a global context, on the basis of which roads are extracted. The model used depends on what information is needed to extract a road. Salient roads (Figure 2-12), those that are clearly visible and are neither occluded nor missing sections, may be extracted using geometric and radiometric characteristics: the geometry and material parts of the road model.

Figure 2-12. Salient road. The road in gray in this picture is a salient road: it is not occluded, no section of the road is missing, and it exists as a continuous feature across the image.

Figure 2-13. Nonsalient road. The road in this picture is a nonsalient road, as it is partially occluded by the shadows of trees, affecting the radiometric and geometric properties of the road.

Nonsalient roads (Figure 2-13), road segments within an aerial image that are occluded by the shadow of a tree or building, may need the context model to extract them from the image.

Table 2-4. Modules of extraction
Module I (Local Extraction) | Module II (Global Extraction) | Module III (Network Completion)
Salient road | Low-level processing | Generation of link hypotheses
Nonsalient road | Fusion | Verification of hypotheses
Road junction linking | Graph representation | Insertion of accepted road hypotheses
- | Road network generation | -
The extraction is composed of three modules, through which roads in an image are extracted using the road and context models in combination.

As per the strategy of extraction developed by Baumgartner et al. (1999), salient roads are extracted first, followed by the extraction of nonsalient roads. This order is followed because extracted salient road segments can help guide the extraction of nonsalient road segments, as explained in detail later in this section. After the extraction of all roads, a network is generated by connecting salient and nonsalient roads, forming a road network within the input aerial image. The method of extraction developed using the road model and context model can be broadly classified into three modules, as in Table 2-4.

Module I performs road extraction in a local context, using a high-resolution image; it begins with the extraction of salient road segments, followed by nonsalient road segment extraction and the extraction of the junctions, or intersections, that connect the extracted road segments. Module II performs extraction in a global context, beginning with a low-level processing step that uses a low-resolution image as input. This is followed by the fusion of the road segments extracted at the local level in Module I with those from the first (low-level processing) step of Module II. The final step of Module II generates a graph representing the road network from the road segments produced by the fusion: the road segments represent the edges of the generated graph, and their end points represent the set of vertices. Module III improves the extracted road network obtained through Modules I and II. It does so through the generation of link hypotheses and their verification, leading to the insertion of links. This allows complete extraction of the road segments forming a network, without any broken road segment links. What follows in this section explains in brief the implementation of each module.

2.3.2.1 Module I

This module uses edge and line information to begin extraction. Hypotheses for the locations of the salient roads in the image are determined from the extracted line and edge information. Extracted salient roads, along with local contextual information, are then used for the extraction of nonsalient roads. In the final step of Module I, the road junctions are constructed geometrically, using the road information extracted by the end of this module. Information about salient roads, nonsalient roads, and road junctions is passed on as input to Module II.

Figure 2-14. Salient road-feature extraction. A) The extracted road centerlines in black and edges in white. B) The road quadrilaterals formed from the extracted road edge and centerline information in A. (Picture courtesy of Baumgartner et al. (1999), Figure 5, Page 6.)

Salient road extraction. In this step, roads are extracted at a local level, using edge and line information extracted from the fine-resolution input image and from the image at coarse resolution. In Figure 2-14, A represents the road lines, extracted from the coarse-resolution image, in black, and the road edges, extracted from the fine-resolution image, in white. The distance between a pair of extracted edges must be within a certain range; the minimum and maximum distances depend on the class of road being extracted. For an extracted edge to be considered a road edge, it must fulfill the following criteria:

- Extracted pairs of edges should be almost parallel.
- The area enclosed by a pair of parallel edges should have homogeneous radiometric variance along the road segment.
- There should be a road centerline extracted along the center of the extracted road edges. As in A (Figure 2-14), the black road centerlines lie along the middle of the extracted white road edges.

The edges are selected as road edges by the local fusion of the extracted lines and road edges. Using the road edge information, road segments are constructed as quadrilaterals (Figure 2-14), generated from the parallel road edges.

Quadrilaterals sharing points with neighboring quadrilaterals are connected. The points on their axes, along with the road width, represent the geometry of the road segments. This road information is used as semantic information for the extraction of the nonsalient parts of the road network in the next step of Module I.

Nonsalient road extraction. Nonsalient road segments cannot be extracted in the same way as salient road segments, since they are occluded by the presence of cultural objects (e.g., buildings) or natural objects (e.g., trees) or their shadows. Thus, extracting a nonsalient road requires additional knowledge beyond the information needed for the extraction of salient roads. This step of Module I extracts nonsalient road segments by linking the extracted salient roads obtained from the previous step, on the assumption that the nonsalient road segments are the gaps between salient road segments. In addition to the linking of nonsalient roads, incorrect hypotheses for salient road segments are eliminated at this step. As most of the road segments extracted by the fusion of local edge and line information in the previous step are short, the linking of correct road segments and the elimination of false road segments is achieved by grouping salient road segments into longer segments. This process is performed using the following hypothesize-and-test paradigm, which groups short salient road segments, bridging the gaps as well as extracting the nonsalient road segments.

Hypotheses concerning which road segments should be bridged are generated by comparing the geometric properties (width, collinearity, and distance) and radiometric properties (mean gray value, standard deviation) of the new segment and the segments to be linked.

The road segments are verified through three stages, using the following hypotheses:

- In the first stage, the radiometric properties of the new road segment are compared to those of the segments to be linked. If the difference between the radiometric properties is not too great, the connection hypothesis is accepted.
- If the connection hypothesis is not accepted in the first stage, the "ribbon snake" approach is applied to find an optimum path connecting the salient road segments.
- If this also fails, a final verification is performed using local context information. This final verification is the weakest form of hypothesis testing; at this stage, local contextual information is used to extract the nonsalient roads.

Figure 2-15. Nonsalient road-feature extraction. A) An occluded road segment, with the extracted salient road edges in white. B) The occluded road segment extracted using the optimal path. C) The road extracted using optimal width information. D) The road extracted by the constant-width hypothesis. (Picture courtesy of Baumgartner et al. (1999), Figure 8, Page 8.)

Figure 2-15A illustrates an occluded, or nonsalient, road segment, with the corresponding extracted salient road segments in white; these provide the initial hypothesis. In Figure 2-15, B represents the road extracted using the optimal path process, C the road extracted by optimal width verification, and D the road extracted by selecting the hypothesis on the basis of constant width.

As can be understood from the results, the road extracted by the hypothesis based on the geometric characteristics of the road gives a better result than the other verification stages.

Figure 2-16. Road linking. This figure illustrates the extracted road edges in white, with their links represented in black and junctions as white dots. (Picture courtesy of Baumgartner et al. (1999), Figure 9, Page 8.)

After the extraction of salient and nonsalient roads in Module I, the extracted road segments need to be connected. This connection is performed in the final step of Module I, road junction linking.

Road junction linking. The hypotheses concerning junctions are based on geometric calculations. In this step of Module I, the extracted road segments are extended at their unconnected ends. If an extension intersects an already existing segment, a new road segment is constructed that connects the intersection point with the extended road. The verification of these new road segments is performed in the same manner as for nonsalient road segment extraction in the previous step. In A (Figure 2-16), the black dotted lines represent the extension of a road segment to form a new road segment, and B (Figure 2-16) illustrates the extracted road segments with junctions as white dots.

Although this approach leads to the extraction of most of the roads in rural images, it does not work in the same way for urban and forest images, as the local context for rural images is different from that of urban and forest images.

In the case of urban images, the network of roads may be denser, and its appearance may also differ from that of the road segments in a rural image. The road features extracted in Module I were based on local context, i.e., criteria within an image: Module I extracted roads using the geometric and radiometric properties of road segments, concentrating on local criteria to extract road edges. Module II performs extraction in a global context, considering the whole image. In Module II, the topological properties of the roads are used to extract roads, to support the extraction process implemented in Module I and to improve upon the extracted results. The road network extracted in Module II has more road segments than that of Module I, as Module II is less constrained.

2.3.2.2 Module II

An intrinsic topological characteristic of roads is that they connect places. Roads are constructed along paths that provide the shortest and most convenient way to reach places. This property leads to searching for an optimal connection between places, and the method of determining the best connection between two points is of importance for road extraction. This approach is feasible on low-resolution satellite images, where roads exist as linear bright features forming a network; it is not feasible on high-resolution images, which are more specific concerning individual road segments and their geometric and radiometric properties. In this module, the approach adopted is modified to integrate and process road-like features from various input sources, i.e., lines extracted at different scales. Module II performs extraction over four steps: low-level processing, fusion, graph representation, and road network generation. The information obtained from Module I of the extraction process is passed on as input to Module II.

During the initial step of low-level processing, the roads that exist as long bright features are extracted. These extracted features are further merged with the road edges extracted by the local extraction in Module I, in the fusion step of Module II. The graph representation step constructs graphs from the fused road segments of the previous step, with road segments represented by edges and the junctions of road segments by vertices. The final step of Module II uses the output of the graph representation step to generate the road network. The discussion below briefly explains each step.

Low-level processing. In this step, road segments are obtained by extracting lines from a low-resolution image. This approach returns lines as sets of pixel chains, as well as junction points, with sub-pixel precision. Some of the extracted lines may represent actual roads, while others may not be roads at all; they may be other features, such as rivers, misidentified as roads. In the analysis of roads, the behavior of several line attributes is important, but the most significant indicator is high curvature: since the probability of a road having a very steep curve is low, the lines extracted at low resolution are split into road segments and non-road segments at points of high curvature. If some road segments are misidentified, or not identified at all, they will be handled in the fusion step that follows. Each extracted line feature classified as a road segment is given an extended description, based on the following calculated properties (a sketch of these descriptors follows the list):

- Length.
- Straightness, i.e., the standard deviation of its direction.
- Width (the mean width of the extracted line).
- Constant width (the standard deviation of the width).
- Constant radiometric value of a road segment (the standard deviation of the intensity value along the segment).
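A sketch of how these descriptors might be computed for one extracted line, assuming the line is available as a polyline with widths and intensities sampled along it; the function name and dictionary keys are illustrative.

    import numpy as np

    def line_descriptors(points, widths, intensities):
        # points: (N, 2) polyline of the extracted line; widths and
        # intensities are values sampled along the line.
        pts = np.asarray(points, dtype=float)
        steps = np.diff(pts, axis=0)
        length = np.hypot(steps[:, 0], steps[:, 1]).sum()
        directions = np.arctan2(steps[:, 1], steps[:, 0])
        return {
            'length': length,
            'straightness': np.std(directions),        # std. dev. of direction
            'width': np.mean(widths),                   # mean width
            'constant_width': np.std(widths),           # std. dev. of width
            'constant_intensity': np.std(intensities),  # std. dev. of intensity
        }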

Fusion. In this step, the road segments obtained from the previous step are combined with the roads extracted by the local extraction performed in Module I. On fusion, both types of road segments are stored as one set of linear data. Segments in this linear data set are unified if they lie within a buffer of suitable width and their directional difference is less than a set threshold; otherwise they are evaluated as intersections. Overall, the roads extracted in this module form a more complete network than that extracted in Module I. However, the process may also result in falsely detected road segments. Next, the extracted output is represented in the form of a network graph.

Graph representation. Once the segments are fused, a graph is constructed, with the road segments as edges and the points of connectivity as vertices. In cases where two or more segments intersect, only one point/vertex is retained, to preserve the topology of the road. The attribute values of the road segments, assigned in the low-level processing step of this module, are used to weight the graph, by associating every edge with a single weight. At this step of the extraction it is difficult to determine whether a road junction between two segments truly represents a connection of road segments. Thus, an additional hypothesis is generated to determine the connections between the edges of the graph. The following criteria are used to measure the quality of the hypotheses:

- The direction difference between adjacent road segments; either collinearity (within a road) or orthogonality (at a T-junction) is assumed as a reference.
- The absolute length of the connection.
- The relative length of a connection compared to the length of the adjacent road segment with the lower weight.
- An additional constraint that prevents a connection hypothesis from being assigned a higher weight than its adjacent road segments.

A linear fuzzy function is then defined to obtain a fuzzy value for the hypothesis on each of the above criteria; these values are then aggregated into an overall fuzzy value using the fuzzy AND operation. For example, a fuzzy function is defined for the difference in direction, to determine collinearity within a road segment or orthogonality at a junction. To prefer either a continuation of the road segment or a possible road junction, a fuzzy function with two peaks is considered, one at 0 degrees and one at 90 degrees; this supports both collinearity and junctions. Thus, a road connection may be classed as either a collinear road segment or a T-junction. This classification can be combined with the other parameters used for evaluating junction hypotheses; for example, the length of the connection compared to the lengths of the road segments to be connected can be used as a weighting function in determining whether the connection is a junction or a road segment, by using the fuzzy value defined above.
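A minimal sketch of such a two-peaked linear fuzzy function and of the fuzzy AND aggregation; the 15-degree tolerance is an assumed parameter, as the actual fall-off is not given here.

    def direction_fuzzy(delta_deg, tolerance_deg=15.0):
        # Two-peaked linear fuzzy value for the direction difference between
        # adjacent segments: the peak at 0 deg supports a collinear
        # continuation, the peak at 90 deg supports a T-junction.
        peak0 = max(0.0, 1.0 - abs(delta_deg) / tolerance_deg)
        peak90 = max(0.0, 1.0 - abs(abs(delta_deg) - 90.0) / tolerance_deg)
        return max(peak0, peak90)

    def fuzzy_and(*values):
        # Fuzzy AND aggregation: the overall value is the minimum of the
        # per-criterion fuzzy values.
        return min(values)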

Next, the roads are generated using road seeds (points or places of interest to which a road connects) in the final step of Module II.

Road network generation. Here, the road seeds are used in extracting roads, by determining the optimal path between the seeds representing the origin and the destination. The seeds in this step are points of interest, such as buildings and industrial areas. The algorithm for road network generation finds the shortest path by running the Dijkstra algorithm on the weights assigned to the road segments. Weights are assigned to road segments depending on their fuzzy values: the weight w of a road segment is computed from its fuzzy value r, which varies between 0 and 1, and from the true distance between its vertices. If a segment does not form a link between vertices, a fuzzy value of 0 is assigned, leading to an infinite weight on the road segment and thereby removing it from the shortest-path calculation for road network generation.

w_{ij} = \begin{cases} \dfrac{d_{ij}}{r_{ij}}, & \text{if vertices } i \text{ and } j \text{ are connected and } r_{ij} > 0 \\ \infty, & \text{otherwise} \end{cases}   (2-8)

In Equation 2-8, the weight w_{ij} is assigned based on r_{ij}, the fuzzy value introduced earlier, and d_{ij}, the Euclidean distance between the vertices i and j. The weights calculated in this way are the inputs to the Dijkstra algorithm, which determines the optimal paths used in generating the road network.
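Given weights assigned by Equation 2-8, the shortest paths can be found with a standard Dijkstra implementation such as this sketch; the adjacency-map representation is an illustrative choice.

    import heapq

    def dijkstra(adjacency, source):
        # adjacency maps a vertex to a list of (neighbour, weight) pairs,
        # where weight w_ij = d_ij / r_ij follows Equation 2-8; segments with
        # fuzzy value 0 are omitted, equivalent to an infinite weight.
        dist = {source: 0.0}
        heap = [(0.0, source)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist.get(u, float('inf')):
                continue  # stale queue entry
            for v, w in adjacency.get(u, []):
                nd = d + w
                if nd < dist.get(v, float('inf')):
                    dist[v] = nd
                    heapq.heappush(heap, (nd, v))
        return dist  # shortest network distance from source to each reachable vertex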

Most of the road segments are extracted from an input image through the extraction processes implemented in Module I and Module II. The extracted road segments exist as fragments of a disconnected road network. Since some road segments are not extracted in either Module I or II, the resulting road network is further connected to complete the network, using the functional characteristics of a road along with verification against the image, based on hypotheses generated in Module III. Module III, the final module of the extraction process, is implemented over three steps: the first step generates link hypotheses, on the basis of which the extracted road segments are connected through the verification and insertion steps.

2.3.2.3 Module III

In this module, information about the utility of a road network, its topographical characteristics, and various factors such as environmental restrictions and the locations of cities and industrial areas is used in the process of linking the extracted road segments. The results obtained up to this step, through Module I and Module II, along with information on missing segments, are used again, and the whole road network is reconstructed based on hypotheses generated in this module. These hypotheses are then used in completing the network at this final stage of the three-module extraction process developed by Baumgartner et al. (1999).

Figure 2-17. Network completion hypothesis. This figure illustrates the process of extraction using the functional and topological characteristics of the road, explained in detail in this section.

Hypotheses for network completion, as implemented in this research, work as follows. A sample network is shown in Figure 2-17, with four nodes A, B, C, and D being considered. Among this set of points, or nodes, the shortest path in the network is determined; optimal paths along the diagonals are also considered for evaluation. These distances are evaluated for the shortest path, as this is the best means of fast and cheap transport among a set of options. The network distance nd in Figure 2-17 depends on the actual length and the road class along which the shortest path is found, whereas the optimal distance od in Figure 2-17 depends on factors such as topography, land use, and environmental conservation, given that this information is readily available for the generation of hypotheses.

Generation of link hypotheses. A preliminary link hypothesis is defined between each possible pair of points, or nodes. A so-called "detour factor" is calculated for each preliminary hypothesis as per Equation 2-9; in Figure 2-17 the calculation is done for each possible pair of nodes (A-D and A-C).

Detour factor = \frac{\text{Network distance } (nd)}{\text{Optimal distance } (od)}   (2-9)

In this step, potentially relevant link hypotheses are selected. The selection is based on the detour factor, in the sense that links with locally maximum detour factors are of interest, and that there is no preferred direction within the road network. The generated link hypotheses are verified in order of their detour factor: if the link with the highest detour factor is rejected, the link with the next highest detour factor is considered for verification. Verification is carried out based on image data; whether a detour is accepted depends on whether the hypothesis actually matches a road in the image. Once a link is accepted, it is included in the road network, thus changing the topology of the road network. Link hypotheses, once rejected, are not considered again in the iterative process of hypothesis verification.
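A sketch of Equation 2-9 and of picking the next candidate link by its detour factor; the pair representation is hypothetical.

    def detour_factor(network_distance, optimal_distance):
        # Equation 2-9: ratio of the distance along the existing network (nd)
        # to the optimal distance (od) between the two nodes.
        return network_distance / optimal_distance

    def next_link_candidate(pairs):
        # pairs: iterable of ((node_a, node_b), nd, od); the pair with the
        # largest detour factor is the most worthwhile link to hypothesise.
        return max(pairs, key=lambda p: detour_factor(p[1], p[2]))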

Verification of hypotheses. The verification of the hypotheses is carried out in relation to the image data. In the verification stage, the roads extracted in the prior modules are used: each link hypothesis is verified against the roads extracted using the road seed generation of Module II. Verification of a link hypothesis is carried out by determining the optimal path between the road seeds using the weighted graph. If the graph provides no connection between the two end points, the hypothesis is rejected; otherwise, if a path is found, it is inserted into the road network and replaced with a geometrically improved link.

Figure 2-18. Segment insertion.

Insertion of accepted road hypotheses. At this stage, if a road connects two end points, a link hypothesis is accepted. The new road is inserted into the whole road network, as shown in Figure 2-18. Sections of the new road that overlap with already existing road segments (the redundant part of the new link in Figure 2-18) are eliminated. In most insertions, a larger portion of the new road remains; this is then inserted into the network by connecting its two ends to the nearest points on the network (the red dot in Figure 2-18 could have been connected to the blue dot). If the verified segment does not end at an existing segment, a junction is introduced, following the process explained in Module I. In instances where a completely new link segment is eliminated on the basis of the hypothesis, no portion of the segment is introduced into the road network. Figure 2-19 shows a completely extracted road network from Baumgartner et al. (1999).

Figure 2-19. Extracted road segments. (Picture courtesy of Baumgartner et al. (1999), Figure 10, Page 9.)

Figure 2-19 illustrates a complete road network extracted using the three-module, two-model process developed by Baumgartner et al. (1999). In this method, the road model and the context model supported each other through the modules of extraction. Many of the processes implemented within this technique could be used to develop an individual Semi-automatic road extraction method; as discussed earlier in this chapter, many of the modules from Automatic approaches are implemented in Semi-automatic approaches.

The extraction results thus obtained are evaluated further, based on their connectivity and their deviation from the reference data. The method of extraction discussed above is an example of road feature extraction for a rural road.

In the case of urban road extraction, information from sources such as digital surface models, along with contextual information, is needed to make the approach automatic.

This chapter gave an overview of the various characteristics that affect the road extraction process, and of different approaches to road extraction. Chapters 3 and 4 introduce the Perona-Malik algorithm and the theory of Snakes. In our study, a Semi-automatic road feature extraction method is developed using anisotropic diffusion, implemented through the Perona-Malik algorithm (explained in Chapter 3), rather than Gaussian blurring (isotropic diffusion). In Chapter 4, the theory and concept of Snakes, and their implementation for feature extraction, are explained; Snakes will be implemented in our study to extract road features from the diffused image information using dynamic Snakes. The method of road feature extraction, which uses the anisotropic diffusion approach developed by Perona and Malik together with Snakes to extract roads, is explained in Chapter 5. Chapter 6 discusses the results obtained, followed by an evaluation and analysis of the results. Chapter 7 concludes the thesis with an overview of the method of extraction implemented in our study, and the future work to be pursued in this research. The automation of the initial step of feature identification and the selection of road segments is one of the essential pieces of work to be carried out in the future; automation of the initial identification using the Kalman filter and profile matching is explained as a possibility for the initial road identification step, prior to the feature extraction method implementation (Appendix B).

CHAPTER 3
ANISOTROPIC DIFFUSION AND THE PERONA-MALIK ALGORITHM

An image is, in general, a photometric representation of real-world features. Objects or features from the real world are represented in a digital image as regions composed of groups of pixels, typically with similar intensity values. Features or objects represented in an image may have similar pixel intensity values, at least within each feature or object existing as a region in the image. Ideally, such features may be represented as homogeneous regions within an image; for example, buildings, trees, or agricultural land within a high-resolution image may each be represented as a region with similar overall pixel intensity values. During the capture and development of this information into a digital image, noise, or undesired information, is also generated, affecting the representation of the real-world features in the image. The noise exists as blobs on the image, with pixel intensity values that differ from the overall, or average, pixel intensity values representing a particular region or feature.

Many fields, such as medical image analysis, use information extracted from images for varied purposes. During the process of extraction, the existence of noise leads to misrepresentation or false feature extraction. A feature extraction method usually extracts the desired features from an image based on shape and feature boundary descriptions, obtained through the edge detection step of the feature extraction method. The existence of noise within an image affects the feature extraction step, as noise results in false edges being detected: edges that do not exist in the real world and should not exist in the representation of the feature in the image.

To overcome this problem, noise across the image is minimized by implementing blurring, or smoothing, operations at the initial pre-processing step of the feature extraction method. In general, smoothing operations assign each pixel within the input image a new intensity value calculated from the intensities of the pixels in its neighborhood in the digital image. This process minimizes variation across pixels, and consequently reduces the noise within the image; the resultant image is a blurred, or smoothed, variant of the original input image. The image obtained from the pre-processing step is thus significant in the extraction of desired features. Below, Section 3.1 explains the principles of isotropic and anisotropic diffusion; this is followed by a discussion of the need for, and implementation of, anisotropic diffusion in the Perona-Malik algorithm in Section 3.2. Section 3.2.1 explains the process of intra-region blurring carried out using the Perona-Malik algorithm, performed simultaneously with local edge enhancement, which is explained in Section 3.2.2. The chapter concludes with an illustration of the algorithm's implementation on an image lattice structure (Malik and Perona, 1990).

3.1 Principles of Isotropic and Anisotropic Diffusion

Conventional smoothing operations implemented at the pre-processing step are usually performed using a Gaussian, Laplacian, or similar system. Blurring performed using a Gaussian system blurs the image by assigning each pixel a value based on the weighted average of the local pixel intensity values, calculated using a Gaussian distribution kernel (Section 2.2.1). Conventional smoothing techniques perform well when used to minimize variation across the image. However, the blurring performed by a conventional technique such as the Gaussian is isotropic in nature: the technique blurs the whole image in a similar fashion in all directions. This isotropic property of conventional techniques, while achieving the desired minimization of noise and variation across the image, also blurs the boundaries between regions or features in the image.

The isotropic blurring therefore shifts, or even loses, the locations of the actual boundaries between regions when they are sought in the edge-detection step.

In the method of road feature extraction developed in this thesis, the pre-processing step of blurring the image is carried out using the Perona-Malik algorithm, an anisotropic diffusion method of blurring, used instead of Gaussian blurring, an isotropic diffusion technique. This anisotropic diffusion approach blurs regions in an image based on location information; i.e., the blurring within an image is carried out according to a predefined set of criteria that specify the locations where blurring can be performed. In this algorithm, blurring is carried out within regions in an image, while blurring across regions is restricted by the criteria, which are discussed in this chapter. The method thus preserves the boundary information in the blurred output image. The blurred image is then used to extract the desired boundaries between regions or shapes, after edge detection.

K_{\sigma}(x,y) = \frac{1}{2\pi\sigma^{2}} \exp\left(-\frac{|x|^{2} + |y|^{2}}{2\sigma^{2}}\right)   (3-1)

The idea behind the use of the diffusion equation in image processing arose from the use of the Gaussian filter in multi-scale image analysis (Weeratunga and Kamath, 2001). Equation 3-1 illustrates a Gaussian filter K_{\sigma}, where \sigma is the standard deviation and x and y represent the coordinates of the generated Gaussian mask. The Gaussian mask, or kernel, generated using Equation 3-1 has cell values corresponding to weights that are used in calculating new pixel intensity values by convolution with the input image (Section 2.2.1). Through this convolution, the image is blurred, with a weighted average value for each pixel arising from the distribution.
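A sketch of sampling Equation 3-1 into a discrete convolution mask; the 3-sigma truncation radius is a common choice, not something specified here.

    import numpy as np

    def gaussian_kernel(sigma, radius=None):
        # Sample Equation 3-1 on a (2r+1) x (2r+1) grid and normalise so the
        # weights sum to 1 before convolving the mask with the image.
        if radius is None:
            radius = int(round(3 * sigma))   # common truncation choice
        y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
        k = np.exp(-(x ** 2 + y ** 2) / (2.0 * sigma ** 2)) / (2.0 * np.pi * sigma ** 2)
        return k / k.sum()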

\frac{\partial I(x,y,t)}{\partial t} = \frac{\partial^{2} I(x,y,t)}{\partial x^{2}} + \frac{\partial^{2} I(x,y,t)}{\partial y^{2}}   (3-2)

Equation 3-1 can also be written in the form of the diffusion equation illustrated in Equation 3-2. In Equation 3-2, I(x,y,t) is the two-dimensional image I(x,y) at time t = 0.5\sigma^{2}, so that t denotes the variance. Here time t represents the variance: an increment in the value of t corresponds to, or results in, images at coarser resolutions than the original resolution of the image. As the initial condition, the variance is zero, which represents the original image I(x,y).

\frac{\partial I(x,y,t)}{\partial t} = \nabla \cdot \left( c(x,y,t)\, \nabla I(x,y,t) \right)   (3-3)

Equation 3-3 represents a more general form of Equation 3-2, and is used to calculate an output image at any variance t. In Equation 3-3, c(x,y,t) is the diffusion conductance, or diffusivity, of the equation; \nabla and \nabla\cdot are the gradient and divergence operators respectively. The general form illustrated in Equation 3-3 reduces to a linear, or isotropic, diffusion equation, as shown in Equation 3-2, if the diffusivity c(x,y,t) is kept constant, independent of the location (x,y) within the input image. This leads to blurring in a similar fashion in all directions within the image. Gaussian blurring, implemented using Equations 3-1 and 3-2, is an example of isotropic diffusion: it depends only on the standard deviation, and not on the location within the image where the blurring is being carried out.


The ability of a diffusion method to blur regions within an image based on location criteria is known as anisotropic diffusion; the blurring process becomes image dependent, and is not the same in all directions or at all locations within the image. Anisotropic diffusion in images is derived from the principle of heat diffusion: the distribution of the intensity gradient, or change in intensity values, in an image is analogous to the temperature distribution in a region. In heat diffusion, the temperature distribution in a region is a function of space and time; in images, the intensity gradient information is likewise a function of space and time (scale). The need to restrict diffusion across boundaries between regions in an image, while permitting diffusion within regions and along boundaries, leads to the development of a criterion, implemented in Equation 3-3, based on the diffusion conductance, or diffusivity, c(x, y, t).

\frac{\partial I(x, y, t)}{\partial t} = \operatorname{div}\big( c(x, y, t)\, \nabla I \big) = c(x, y, t)\, \Delta I + \nabla c \cdot \nabla I    (3-4)

Equation 3-4 is an anisotropic diffusion equation that evolves from the general diffusion equation shown in Equation 3-3. In Equation 3-4, c(x, y, t) is a symmetric positive-definite tensor that allows diffusion parallel to the gradient and limits any diffusion perpendicular to the gradient, thereby restricting blurring across edges. Here div is the divergence operator, and \nabla and \Delta are the gradient and Laplacian operators, respectively.

Malik and Perona (1990) developed an algorithm that converts the linear diffusion into a non-linear, or anisotropic, diffusion taking place depending on location: it occurs within regions and along boundaries, while it is restricted across edges in the image.


The anisotropic diffusion implemented in the Perona-Malik algorithm is carried out locally, at the pixel level and within its neighborhood, based on the diffusivity value c. In addition to the diffusivity c, a conductance K is used to perform blurring within regions while enhancing the local edges; this process is explained later in this chapter. Section 3.2 explains the anisotropic diffusion implementation using the c and K values in the Perona-Malik algorithm.

Figure 3-1. Anisotropic diffusion using the Perona-Malik algorithm. The red block highlights the well-defined edge boundary of the intersection.

Figure 3-2. Isotropic diffusion using a Gaussian. The green block highlights the blurred, less well-defined edge boundary of the same intersection as in Figure 3-1.

As can be seen in Figure 3-1, the Perona-Malik algorithm, an anisotropic diffusion process, preserves, and gives a better representation of, the boundaries of road intersections than the boundary information in Figure 3-2, obtained through Gaussian blurring, an isotropic diffusion process. The boundaries of the road intersection are blurred more in Figure 3-2 (green block) than in Figure 3-1 (red block). The road edges extracted from the Perona-Malik algorithm therefore give a more complete and accurate set of road edge information than would result from a Gaussian blurring process.


The Perona-Malik algorithm was implemented in the pre-processing stage of the road feature extraction method developed in our study for the following reasons:

- Its ability to implement intra-region smoothing without inter-region smoothing.
- Region boundaries remain sharp and coincide with meaningful boundaries at that particular resolution (Malik and Perona, 1990).

Section 3.2 further explains the implementation of the Perona-Malik algorithm through intra-region blurring and local edge enhancement, performed using the diffusivity value c and the conductance value K.

3.2 Perona-Malik Algorithm for Road Extraction

From a road feature extraction perspective, this algorithm helps retain the edge information that is essential in delineating road edges for extraction, while also preserving the radiometric characteristics of the road across the image by preventing blurring across regions. Hence, using a road's uniform radiometric characteristics, along with semantically meaningful geometric properties representing the road edges, the initial step of road identification and road seed generation could be automated, although this step is performed manually in the method developed in this thesis (Section 5.2.1). The identified road segments are further used as inputs for the feature extraction method implemented using Snakes (Chapter 4).

Roads are represented as long network structures with constant width at fine resolutions, and as bright lines in low-resolution images. The diffusion process is implemented in high-resolution images rather than low-resolution ones, since blurring a low-resolution image would make roads that exist as bright lines disappear.


The process of obtaining a coarse-scale (blurred) image from the original image involves convolving the original image with a blurring kernel. In the case of an image I(x, y) at a coarse scale t, where t represents the variance, the output image is obtained by convolving the input image with a Gaussian kernel K_\sigma, as was illustrated in Equation 3-1.

I(x, y, t) = I_0(x, y) * K_\sigma(x, y, t)    (3-5)

Equation 3-5 represents the scale-space convolution of an original image I_0 with a Gaussian kernel, producing a blurring that depends on the variance t; increasing the time value (variance) produces coarser-resolution images. The success of blurring within regions and along region boundaries, as per the principle of the Perona-Malik algorithm, depends on determining the boundaries between regions in the image; this is done based on the value of c(x, y, t). Blurring is carried out within regions in an image depending on the value of the coefficient of conductance, or diffusivity, c. This can be achieved by assigning the diffusivity c a value of 1 within regions and 0 at the boundaries (Perona and Malik, 1990).

However, we cannot assign the conduction coefficient value to each pixel or location within an image a priori, as the boundaries between regions are not known. Instead, the location of boundaries between regions is estimated, as explained in further detail in Section 3.2.1, in order to assign diffusivity values and to perform intra-region blurring.

3.2.1 Intra Region Blurring

Assume the existence of an original image I(x, y), with t representing the scale to which the image is to be blurred. At a particular scale t, if the location of boundaries between regions were known, the conduction coefficient c(x, y, t), defined in Equation 3-4, could be set to 1 within regions and 0 at the boundaries, as was discussed earlier.


This would result in blurring within regions, while the boundaries are kept sharp and well defined. The problem is that the boundaries at each scale are not known in advance; the location of boundaries is instead estimated at the scale of the input image (Malik and Perona, 1990).

The estimation of the location of boundaries is carried out as follows. Let E(x, y, t) be a potential edge at a particular scale t. E is a vector-valued function with the following properties: its value is set to 0 if the pixel or location lies within a region; otherwise it is assigned a value that is the product of the conductance K, or local contrast, and a unit vector e normal to the edge at the given location:

E(x, y, t) = 0 if the pixel is within a region
E(x, y, t) = K e(x, y, t) at an edge point

Figure 3-3. Nonlinear curve. This curve represents the function of the gradient magnitude used for estimating boundary locations within an image.

Here, e is a unit vector normal to the edge at a given point, and K is the local contrast (i.e., the difference in image intensities on the left and right of the edge), equivalent to the flux in a heat diffusion equation.


Once an estimate of the edge, E(x, y, t), is available, the conduction coefficient c(x, y, t) is set to a function g(\lVert E \rVert) of the magnitude of E. The value of g(.) is non-negative and monotonically decreasing, with g(0) = 1, as illustrated in Figure 3-3. Once the diffusivity has been estimated for all locations within the image, diffusion is carried out in the interior of regions, where E = 0, while diffusion is restricted along boundaries between regions, where the magnitude of E is large. The boundaries of the roads are thus preserved at each scale of the image. The remainder of this section explains how the diffusion coefficient, chosen as a local function of the magnitude of the gradient of the brightness function (Malik and Perona, 1990), preserves and sharpens boundaries through the appropriate selection of the g(.) function.

c(x, y, t) = g\big( \lVert \nabla I(x, y, t) \rVert \big)    (3-6)

In general, scale-space blurring of images is used to obtain coarse-resolution images; this helps in filtering out noise, but also loses much of the edge information in the process, leading to the problem of blurred edges in the image. In anisotropic diffusion as implemented by Malik and Perona (1990), the conduction coefficient, also known as the diffusion conductance (Equation 3-6), is chosen to be an appropriate function of the magnitude of the local image gradient. This choice enhances edges while running forward in time/scale, and maintains the stability of the diffusion principle (Malik and Perona, 1990). Section 3.2.2 explains how the edge enhancement process acts locally while the diffusion process steps forward to derive coarse-scale images.
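As a concrete illustration of Equation 3-6, the sketch below (ours, not code from the thesis) computes a diffusivity map c = g(||∇I||) for an image, using the exponential choice of g(.) that appears later as Equation 3-18; the value of K and the finite-difference gradient are assumptions of this sketch.

```python
import numpy as np

def diffusivity(image, K=20.0):
    """Equation 3-6 with g(s) = exp(-(s/K)^2): c is close to 1 inside smooth
    regions and falls toward 0 where the gradient (an edge) is large."""
    gy, gx = np.gradient(image.astype(float))   # finite-difference gradient
    grad_mag = np.hypot(gx, gy)                 # ||grad I||
    return np.exp(-(grad_mag / K) ** 2)         # monotone decreasing, g(0) = 1
```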


3.2.2 Local Edge Enhancement

This section explains how the edges in an image are enhanced, during the process of blurring within regions, from one scale or time step to the next. Malik and Perona (1990) modeled an edge as a step function convolved with a Gaussian mask. To explain the concept, the edge is assumed to be aligned with the y-axis (Malik and Perona, 1990), so the diffusion equation can be written in the one-dimensional form of Equation 3-7:

\frac{\partial I}{\partial t} = \operatorname{div}\big( c(x, y, t)\, I_x \big) = \frac{\partial}{\partial x}\big( c(x, y, t)\, I_x \big)    (3-7)

Here c, the diffusivity or conductance coefficient, is chosen to be a function of the gradient of I, as illustrated in Equation 3-8:

c(x, y, t) = g\big( I_x(x, y, t) \big)    (3-8)

Let \phi(I_x) = g(I_x)\, I_x denote the flux c \cdot I_x of the intensity between pixels along x. Thus, the one-dimensional version of the diffusion equation becomes

I_t = \frac{\partial}{\partial x}\,\phi(I_x) = \phi'(I_x)\, I_{xx}.    (3-9)

The interest here lies in determining the variation in time t (variance) of the slope of the edge, \partial(I_x)/\partial t. If c(.) > 0 the function I(.) is smooth, and the order of differentiation may be inverted:

\frac{\partial}{\partial t}(I_x) = \frac{\partial}{\partial x}\left( \frac{\partial I}{\partial t} \right) = \frac{\partial}{\partial x}\big( \phi'(I_x)\, I_{xx} \big) = \phi''(I_x)\, I_{xx}^{2} + \phi'(I_x)\, I_{xxx}    (3-10)

Instead of differentiating the image by the change in scale of the time step t, the image at a particular scale t is differentiated in space. As explained in Malik and Perona (1990), if the edge is oriented such that I_x > 0, then at the point of inflection I_{xx} = 0 and I_{xxx} << 0, as the point of inflection corresponds to the point of maximum slope (Ivins and Porill, 2000).


The result is that, in the neighborhood of the point of inflection, \partial(I_x)/\partial t has a sign opposite to \phi'(I_x). If \phi'(I_x) > 0, the slope of the edge will decrease with time; if \phi'(I_x) < 0, the slope of the edge will increase with time. There should not be an unbounded increase in the slope of the edge with time, as this would contradict the maximum principle, which states that no new information should be created in coarse images derived from the original image (Malik and Perona, 1990). Thus a threshold is set, based on the value of K: below it, \phi(.) is monotonically increasing, and above it, \phi(.) is monotonically decreasing, giving the desirable result of blurring small discontinuities while enhancing and sharpening edges (Malik and Perona, 1990). The final section of this chapter explains the whole process of anisotropic diffusion carried out on a square lattice as an example.

3.3 Anisotropic Diffusion Implementation

This section explains anisotropic diffusion on a square lattice, with brightness values associated with the vertices. The equation for anisotropic diffusion is discretized for a square lattice. In Figure 3-4, the brightness values are associated with the vertices and the conduction coefficients are shown along the arcs. Equations 3-11 and 3-12 are, respectively, the general and discrete representations of anisotropic diffusion for the square lattice shown in Figure 3-4, which represents an image subset.

\frac{\partial I}{\partial t} = \operatorname{div}\big( c(x, y, t)\, \nabla I \big) = c(x, y, t)\, \Delta I + \nabla c \cdot \nabla I    (3-11)

I_{i,j}^{t+1} = I_{i,j}^{t} + \lambda\,\big[ c_N \cdot \nabla_N I + c_S \cdot \nabla_S I + c_E \cdot \nabla_E I + c_W \cdot \nabla_W I \big]_{i,j}^{t}    (3-12)


Figure 3-4. Square lattice example. This example explains the working of the Perona-Malik algorithm, with the vertices representing the image pixels and the lines representing the conductances.

In the discrete anisotropic diffusion equation (3-12), a four-neighbor discretization of the Laplacian operator is used, where 0 \le \lambda \le 1/4 and N, S, E and W are subscripts for the vertex locations in each direction; here the symbol \nabla represents the difference between nearest neighbors on the lattice, and not the gradient:

\nabla_N I_{i,j} = I_{i-1,j} - I_{i,j}
\nabla_S I_{i,j} = I_{i+1,j} - I_{i,j}
\nabla_E I_{i,j} = I_{i,j+1} - I_{i,j}
\nabla_W I_{i,j} = I_{i,j-1} - I_{i,j}    (3-13)


The conduction coefficients, or diffusion conductances, are updated at every iteration as a function of the brightness gradient (Equation 3-6), as shown in the list of conductances in Equation 3-14, where the nearest-neighbor differences of Equation 3-13 approximate the gradient on the arcs (the half-pixel positions) of the lattice:

c_{N\,i,j}^{t} = g\big( \lVert \nabla_N I_{i,j}^{t} \rVert \big)
c_{S\,i,j}^{t} = g\big( \lVert \nabla_S I_{i,j}^{t} \rVert \big)
c_{E\,i,j}^{t} = g\big( \lVert \nabla_E I_{i,j}^{t} \rVert \big)
c_{W\,i,j}^{t} = g\big( \lVert \nabla_W I_{i,j}^{t} \rVert \big)    (3-14)

Perona and Malik, in their paper on scale-space edge detection using anisotropic diffusion, proved that the image information at the next scale lies between the maximum and minimum values in the neighborhood of the pixel under consideration at the previous time step, or scale. Hence, with \lambda \in [0, 1/4] and c \in [0, 1], the maximum and minimum of the neighbors of I_{i,j} at iteration t are

(I_M)_{i,j}^{t} = \max\{ (I, I_N, I_S, I_E, I_W)_{i,j}^{t} \} \quad \text{and} \quad (I_m)_{i,j}^{t} = \min\{ (I, I_N, I_S, I_E, I_W)_{i,j}^{t} \}.

Thus the new value I_{i,j}^{t+1} lies between the maximum and minimum values in its neighborhood, as illustrated in Equation 3-15:

(I_m)_{i,j}^{t} \le I_{i,j}^{t+1} \le (I_M)_{i,j}^{t}    (3-15)

Hence, local maxima or minima cannot arise in the interior of the discretized scale space. This can be seen by expanding the update rule:

I_{i,j}^{t+1} = I_{i,j}^{t} + \lambda\,\big[ c_N \cdot \nabla_N I + c_S \cdot \nabla_S I + c_E \cdot \nabla_E I + c_W \cdot \nabla_W I \big]_{i,j}^{t}


I_{i,j}^{t+1} = I_{i,j}^{t}\,\big( 1 - \lambda\,(c_N + c_S + c_E + c_W)_{i,j}^{t} \big) + \lambda\,(c_N I_N + c_S I_S + c_E I_E + c_W I_W)_{i,j}^{t}
\le (I_M)_{i,j}^{t}\,\big( 1 - \lambda\,(c_N + c_S + c_E + c_W)_{i,j}^{t} \big) + \lambda\,(I_M)_{i,j}^{t}\,(c_N + c_S + c_E + c_W)_{i,j}^{t} = (I_M)_{i,j}^{t}    (3-16)

Similarly,

I_{i,j}^{t+1} \ge (I_m)_{i,j}^{t}\,\big( 1 - \lambda\,(c_N + c_S + c_E + c_W)_{i,j}^{t} \big) + \lambda\,(I_m)_{i,j}^{t}\,(c_N + c_S + c_E + c_W)_{i,j}^{t} = (I_m)_{i,j}^{t}    (3-17)

The scale-space diffused edges can be obtained using either of the following functions for g(.), as used by Perona and Malik in their work to blur images using anisotropic diffusion:

g(\nabla I) = \exp\big( -\left( \lVert \nabla I \rVert / K \right)^{2} \big)    (3-18)

g(\nabla I) = \frac{1}{1 + \left( \lVert \nabla I \rVert / K \right)^{2}}    (3-19)

The scale spaces generated by these two functions differ in the edges they favor: the first function (Equation 3-18) prioritizes high-contrast edges over low-contrast edges, whereas the second (Equation 3-19) favors wide regions over smaller ones.

This chapter has presented an explanation of the Perona-Malik algorithm, and of how it detects edges through the scale space of an image using anisotropic diffusion. The main reason for implementing this approach in road extraction is to obtain appropriate edge information at each scale, and to obtain a uniform radiometric variance within the desired features. In this thesis, road edges are detected using information from the diffused image, and then extracted using Snakes (deformable contour models).


Snakes, as implemented in this thesis, use the information about an edge, gained from the diffused image around the position of each snaxel, in the process of relocating the snaxels closer to the road edges. A detailed discussion of this process, with an explanation of the concepts of dynamic programming and Snakes, is provided in Chapter 4, which introduces the working of a Snake and its implementation using dynamic programming.
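For concreteness, the discrete scheme of Section 3.3 can be collected into a few lines of code. The sketch below is our illustration, not part of the original study: it implements Equations 3-12 through 3-14 with the exponential g(.) of Equation 3-18, and the parameter values (K, lambda, iteration count) are arbitrary assumptions.

```python
import numpy as np

def perona_malik(image, n_iter=20, K=20.0, lam=0.25):
    """Discrete Perona-Malik diffusion (Equations 3-12 to 3-14, 3-18).

    lam must lie in [0, 1/4] for the maximum principle (Equation 3-15)
    to hold on the four-neighbor lattice of Figure 3-4.
    """
    I = image.astype(float).copy()
    g = lambda d: np.exp(-(np.abs(d) / K) ** 2)   # Equation 3-18
    for _ in range(n_iter):
        # Nearest-neighbor differences (Equation 3-13); np.roll wraps at the
        # image border, which is acceptable for this sketch.
        dN = np.roll(I, 1, axis=0) - I
        dS = np.roll(I, -1, axis=0) - I
        dE = np.roll(I, -1, axis=1) - I
        dW = np.roll(I, 1, axis=1) - I
        # Conductances updated from the brightness differences (3-14),
        # then the update rule (3-12): diffusion within regions only.
        I += lam * (g(dN) * dN + g(dS) * dS + g(dE) * dE + g(dW) * dW)
    return I
```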


CHAPTER 4
SNAKES: THEORY AND IMPLEMENTATION

There are numerous methods for extracting road features from the edges detected in an aerial image. In this research, road feature extraction is performed using Snakes (Kass et al., 1988) on an image that was pre-processed using the Perona-Malik algorithm (Malik and Perona, 1990), explained in Chapter 3. A Snake is a vector spline representation of a desired boundary; it describes the shape of an object or feature in an image, existing as a group of edges detected from a pre-processed image. This vector is obtained by concatenating snaxels, or points, initially located close to the desired edge of the feature in the image, and then recursively relocating and re-concatenating them to align to the desired shape. In our study, working toward the objective of extracting road segment edges from an aerial image, an initial set of road point locations, or snaxels, is generated and used as input; these points are recursively relocated over a series of iterations to align to the desired edge.

The reason for implementing Snakes on an image processed with the Perona-Malik algorithm is the algorithm's unique behavior of blurring the image within regions while preserving boundaries and edges, as was discussed in Chapter 3. This retains, and further defines, the boundaries of the road edges in the image, which is significant for extracting road edges, since it is the edge information that the Snake implementation requires. According to Kass et al. (1988), a Snake is an energy-minimizing spline, guided by external forces and influenced by image forces that pull the spline toward desired objects that are defined and predetermined by the user, as is discussed in further detail in this chapter.


Snakes are also called Active Contour Models: "active" because they exhibit dynamic behavior, recursively relocating their snaxels to align the Snake to the desired feature in the image. When implementing a Snake on an image, the first step in extracting the desired object is initialization, where a set of points is placed close to the desired feature. This set of points, or snaxels, can be generated automatically or semi-automatically. In a semi-automatic approach, the user selects points in or around the vicinity of the desired object; in the case of roads, points are placed randomly, but close to the road edge features in the image. In automatic approaches, the desired features are identified automatically, and this is followed by the generation of road seeds/points, or snaxels.

Snakes relocate the snaxels from their initial positions recursively, moving each snaxel individually to minimize its energy and the overall energy of the Snake, so as to obtain the best possible alignment of the Snake to the shape of the desired feature. The snaxels are iteratively moved closer to the true location of the edge, using either dynamic programming or the gradient descent technique to minimize the overall energy of the Snake, as is explained in detail in Section 4.2.

What follows is a discussion of the theory and concepts behind Snakes and their implementation. The basic mathematical explanation of Snakes is based on Euler's theory, as implemented by Kass et al. (1988), and is given in Section 4.1; the implementation, and how Snakes are used in the road feature extraction process, is explained in Section 4.2.


4.1 Theory

Snakes are splines, or deformable contours, that take different shapes based on a given set of constraints. Various forces act on a Snake to deform it so that it aligns closely with the desired object; in general, these forces can be classified as internal forces, image forces and external forces, as discussed in detail later in this section. The internal forces (Section 4.1.1), the energy developed due to bending, impose a smoothing constraint that produces tension and stiffness in the Snake, restricting its behavior so that it fits the desired object using minimal energy. The image forces (Section 4.1.3) push the Snake toward the desired edges or lines. External constraints (Section 4.1.2) are responsible for placing the Snake near the desired local minimum; they can be specified manually by the user, or can be automated.

Geometric curves can be as simple as a circle or a sine curve, represented mathematically as x^2 + y^2 = 1 and y = sin(x), respectively. Mathematical representations of splines, or higher-order curves, are much more complex than sine and circular curves. To initialize a Snake, a spline is produced by picking a desired set of points in the image that are in the vicinity of the edge of the desired object. Snakes are also called deformable contours, and they are expected to pass through points that have similar characteristics. The snaxels, or road points, that form a Snake are located on pixels that have intensity values similar to the desired object, and are spread along the road feature. The Snake is started as a contour traced through this set of points, which represents the edges of the desired feature in the image.


Figure 4-1. Snaxels and Snake. The Snake (active contour model), in yellow, with snaxels in red, is relocated iteratively through an energy minimization process to align the Snake to the road edge.

The initialization process can be manual or automated; automation of the initialization can be done using high-level image processing techniques. Figure 4-1 is a sketch giving a visual illustration of the Snake initialization points, or snaxels (red points), and the Snake as a contour (yellow). Here the red points represent the initial Snake points, and the yellow spline is the deformable contour, or Snake, whose shape changes depending on the relocation of the snaxels, also called Snake or road points in our study.

Snakes cannot simply detect road edge features and align themselves to the desired feature's boundary or shape; they first need some high-level information (i.e., someone to place them near the desired object). In this research, the snaxels, or edge points, are relocated iteratively to deform the Snake, or contour, to align it to the desired feature by minimizing the total energy of the Snake.


Figure 4-2. Scale-space representation of a Snake. A) The orientation of the snaxels forming a Snake. B) The position of a snaxel along x as a function of s. C) The position of a snaxel along y as a function of s.

The elements of the Snake/contour, its snaxels (i.e., the points forming the Snake), are influenced by space and time parameters, and can be implicitly represented on that basis as follows. Consider each snaxel position (the red points in Figure 4-1) to have coordinates x(s, t) and y(s, t) that depend on the parameters s (space) and t (time/iteration); this is explained in Figure 4-2. In Figure 4-2, s represents the spatial location along the contour in the image, and t represents the time step, or iteration, of the energy minimization process. The contour constructed through these snaxels (Snake elements) is affected by the energy developed from the internal and external constraints and the image forces; Sections 4.1.1 through 4.1.3 explain these constraints. These forces move the snaxels over time and space to new coordinates, while minimizing the energy of each individual snaxel and of the whole Snake.


The objective is to minimize the overall energy so as to align the Snake with the desired edge; the energy minimization process is what allows the Snake to detect the edge. Here the energy possessed by the contour, E_snake, is the sum of three energy terms: internal, external and image. The total energy is the force that makes the Snake move toward the desired edge objects, and is used to detect lines, edges, and terminations in the image; the image-derived component of this energy is also known as the potential energy. The total energy of a Snake is the sum of the energies of the snaxels that form the Snake, or deformable contour. The position of a snaxel can be represented parametrically as shown in Equation 4-1:

V(s) = \big( x(s),\, y(s) \big)    (4-1)

Thus, the contour in Figure 4-2A can be represented as:

V(s) = [x(s),\, y(s)]^{T}, \quad s \in [0, 1]    (4-2)

The Snake represented by Equation 4-2 is composed of a number of snaxels whose locations (i.e., x and y coordinates) are indexed by the value of s, restricted to fall between 0 and 1. The objective of aligning the Snake to the desired object is met by minimizing the total energy of the Snake, i.e., the sum of the energies of the individual snaxels forming the Snake, or contour:

E_{snake} = \int_{0}^{1} E_{element}\big( V(s) \big)\, ds    (4-3)


Equation 4-3 expresses the total energy of the Snake as the integral of the energies of the individual Snake elements, or snaxels, forming the Snake in Figure 4-1. The energy of a Snake, or contour, written as an integral over the snaxels forming the Snake, with the forces affecting the energy of each individual snaxel V(s), is expressed in Equation 4-4:

E_{snake} = \int_{0}^{1} E_{element}\big( V(s) \big)\, ds = \int_{0}^{1} E_{int}\big( V(s) \big)\, ds + \int_{0}^{1} E_{extern}\big( V(s, t) \big)\, ds + \int_{0}^{1} E_{image}\big( V(s, t) \big)\, ds    (4-4)

Here, \int_{0}^{1} E_{int}(V(s))\, ds is the internal constraint, which provides the tension and stiffness requiring the Snake to be smooth and continuous; \int_{0}^{1} E_{extern}(V(s, t))\, ds is the external constraint, taken from an external operation that imposes an attraction or repulsion on the Snake (such external factors can be human operators or automatic initialization procedures); and \int_{0}^{1} E_{image}(V(s, t))\, ds, also known as the potential energy, is used to drive the contour toward the desired features of interest, in this case the edges of the road in the image.


Figure 4-3. Internal energy effect. A) The shape of the contour under high internal energy. B) The shape of the contour under low internal energy.

4.1.1 Internal Energy

The internal energy of a Snake element is composed of two terms: a first-order term controlled by \alpha(s), and a second-order term controlled by \beta(s). The first term makes the Snake act like a membrane, or elastic band, by imposing tension on it, while the second-order term makes the Snake act like a stiff metal plate that resists bending. The relative values of \alpha(s) and \beta(s) control the membrane and thin-plate terms (Kass et al., 1988). Thus, the internal energy of the spline can be expressed as in Equation 4-5:

E_{int}\big( V(s) \big) = \frac{ \alpha(s)\, |V_s(s)|^{2} + \beta(s)\, |V_{ss}(s)|^{2} }{2}    (4-5)

In Figure 4-3, the objective is to trace the edge of the circle using a Snake. If the internal energy is kept high, the Snake remains stiff; Figure 4-3A shows the shape of the contour when the energy is high, and Figure 4-3B the shape when the energy is low. Thus, increasing \alpha increases the stiffness of the contour, as it serves as a tension component, while keeping it low keeps the contour more flexible.
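A finite-difference version of Equation 4-5 is sketched below. This is our illustration, not code from the thesis; the discretization by simple differences and the uniform alpha and beta values are assumptions.

```python
import numpy as np

def internal_energy(snaxels, alpha=1.0, beta=1.0):
    """Discrete Equation 4-5: tension (first difference) plus
    stiffness (second difference), summed over the snaxels."""
    v = np.asarray(snaxels, dtype=float)   # shape (n, 2): one row per snaxel
    d1 = np.diff(v, n=1, axis=0)           # V_s  ~ first differences
    d2 = np.diff(v, n=2, axis=0)           # V_ss ~ second differences
    return 0.5 * (alpha * (d1**2).sum() + beta * (d2**2).sum())
```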


4.1.2 External Energy

This energy is derived from processes initialized either manually or automatically; either kind of process can be used to control the attractive and repulsive forces that move the contour model toward the desired features. The energy generated is a spring-like force (Kass et al., 1988). One point is considered fixed (the prior position of a snaxel), and another point is taken to be free in the image (the estimated current position of the snaxel, where it may be relocated in a given iteration). The energy develops between the snaxels (the pixels where the points are located) and another point in the image that is considered fixed. Mathematically, consider u to be a Snake point and v a fixed point in the image (Ivins and Porill, 2000); the attractive external energy is given by:

E_{extern} = k\, |v - u|^{2}    (4-6)

This energy is minimal when u = v, i.e., when the image point and the Snake point coincide. Along the same lines, a part of the image can be made to repel the contour:

E_{extern} = \frac{k}{|v - u|^{2}}    (4-7)

This energy grows without bound as v approaches u. In Figure 4-4, the fixed end represents a point in the image, and the free end is a Snake point. A spring-like force develops between the Snake point and the fixed point in the image, adding an external constraint to the Snake that is implemented as an external energy component in the development of the Snake.
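In code, the two spring terms of Equations 4-6 and 4-7 are one-liners. This sketch is ours; the small eps guarding the division in the repulsive term is an assumption added to avoid dividing by zero when u = v.

```python
import numpy as np

def spring_attract(u, v, k=1.0):
    """Equation 4-6: pulls snake point u toward image point v."""
    return k * np.sum((np.asarray(v, float) - np.asarray(u, float)) ** 2)

def spring_repel(u, v, k=1.0, eps=1e-9):
    """Equation 4-7: pushes snake point u away from image point v."""
    return k / (np.sum((np.asarray(v, float) - np.asarray(u, float)) ** 2) + eps)
```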


Figure 4-4. Spring force representation. This force aligns the Snake to the desired edge based on user-supplied information, as explained in this section.

4.1.3 Image (Potential) Energy

To make the Snake move toward the desired feature, we need energy functionals: functions that attract the Snake toward edges, lines, and terminations (Kass et al., 1988). Kass et al. (1988) developed the three functionals shown below, along with their weights; by adjusting the weights of these three terms, the Snake's behavior can be drastically altered. The nearest local minimum of the potential energy is found using dynamic programming, as explained in Section 4.2; dynamic programming is therefore applied in our study for implementing the Snakes that extract road edge features from an aerial image.

E_{image} = P = w_{line}\, E_{line} + w_{edge}\, E_{edge} + w_{term}\, E_{term}    (4-8)

F = -\nabla P(\mathbf{x})    (4-9)

Here, the image force F produced by each of the terms in Equation 4-8 is derived below in Sections 4.1.3.1 to 4.1.3.3.


4.1.3.1 Image functional (E_line)

This is the simplest image functional of the three terms in Equation 4-8:

E_{line} = \int_{0}^{1} I\big( \mathbf{x}(s) \big)\, ds    (4-10)

If the image intensity at a pixel is taken as E_line, then depending on the sign of w_line in Equation 4-8, the Snake will be attracted either to dark or to light lines. Thus, subject to the other constraints, the Snake will align with the nearest dark or light contour of image intensity in its vicinity (Kass et al., 1988). The image force is proportional to the image gradient, as expressed in Equation 4-11:

F = -\nabla P(\mathbf{x}) \propto \nabla I(\mathbf{x})    (4-11)

Thus, a local minimum near a snaxel can be found by taking small steps in x:

\mathbf{x} \leftarrow \mathbf{x} + \epsilon\, \nabla I(\mathbf{x})    (4-12)

where \epsilon is the positive time step used to find the local minimum.

4.1.3.2 Edge functional (E_edge)

Edges in an image can be found using a simple energy functional:

E_{edge} = -\,|\nabla I(x, y)|^{2}    (4-13)

Here the Snake is attracted to contours with large image gradients. Edges can be found using gradient-based potential energies as:

E_{edge} = -\int_{0}^{1} |\nabla I(\mathbf{x})|^{2}\, ds    (4-14)


As an example, if \mathbf{x} = (x, y) has potential energy P(\mathbf{x}) = -\,|\nabla I(\mathbf{x})|^{2}, then the image force acting on the element is given by:

F = -\nabla P(\mathbf{x}) = \nabla\big( |\nabla I(\mathbf{x})|^{2} \big) = 2\, \nabla I(\mathbf{x}) \cdot \nabla\big( \nabla I(\mathbf{x}) \big)    (4-15)

Hence, strong edges can be found using Equation 4-16:

\mathbf{x} \leftarrow \mathbf{x} + \epsilon\, \nabla\big( |\nabla I(\mathbf{x})|^{2} \big)    (4-16)

4.1.3.3 Term functional (E_term)

Term functionals are used to find the end points, or terminal points, of Snakes. To do this, the curvature of the level lines in a slightly smoothed image is used:

C(x, y) = G_{\sigma}(x, y) * I(x, y)

Here C(x, y) is the image convolved with a Gaussian of standard deviation \sigma. The gradient direction (angle) is given by

\theta = \tan^{-1}\!\big( C_y / C_x \big)    (4-17)

and \mathbf{n} = (\cos\theta, \sin\theta) and \mathbf{n}_{\perp} = (-\sin\theta, \cos\theta) are the unit vectors along, and perpendicular to, the gradient direction at (x, y), respectively. Using this information, the curvature of the level contours in C(x, y) is determined using Equation 4-18:

E_{term} = \frac{\partial \theta}{\partial \mathbf{n}_{\perp}} = \frac{ \partial^{2} C / \partial \mathbf{n}_{\perp}^{2} }{ \partial C / \partial \mathbf{n} } = \frac{ C_{yy} C_x^{2} - 2\, C_{xy} C_x C_y + C_{xx} C_y^{2} }{ \big( C_x^{2} + C_y^{2} \big)^{3/2} }    (4-18)

Equation 4-18 helps to attract the Snake to corners and terminations.
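As a quick illustration of the edge functional (Equations 4-13 and 4-14), the sketch below (ours, not the thesis code) computes the potential -|grad I|^2 on the pixel grid and sums it at the snaxel positions; nearest-pixel sampling in place of bilinear interpolation is an assumption made for brevity.

```python
import numpy as np

def edge_potential(image):
    """Equation 4-13: P = -|grad I|^2, low (attractive) on strong edges."""
    gy, gx = np.gradient(image.astype(float))
    return -(gx**2 + gy**2)

def snake_image_energy(snaxels, potential):
    """Discrete Equation 4-14: sum the potential at each snaxel location;
    snaxels are given as (row, col) pixel coordinates."""
    idx = np.rint(np.asarray(snaxels)).astype(int)   # nearest-pixel sampling
    return potential[idx[:, 0], idx[:, 1]].sum()
```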


4.2 Snakes Implementation

Section 4.1 discussed the theory and working principles of Snakes, based on the various energy functionals. The objective is to make the Snake, a deformable contour, align with the desired boundary edge, which requires minimizing the overall energy of the Snake: the sum of the energies of the individual snaxels forming it. The aim is therefore to optimize the deformable contour model by finding the contour that minimizes the total energy. From the discussion in Section 4.1, the energy E of the active contour model V(s) is:

E\big( X(s) \big) = \int_{0}^{1} P\big( V(s) \big)\, ds + \int_{0}^{1} \frac{\alpha}{2}\, |x_s(s)|^{2}\, ds + \int_{0}^{1} \frac{\beta}{2}\, |x_{ss}(s)|^{2}\, ds    (4-19)

In Equation 4-19, the first term is the potential energy, and the second and third terms control the tension and stiffness of the Snake. The objective is to minimize this energy. Minimization can be performed using the gradient descent algorithm or dynamic programming. In this research, both methods were tried, and dynamic programming was chosen because of its ability to trace the edge better than the gradient descent algorithm; Chapter 5 illustrates and explains the difference in results between the two methods. Dynamic programming does better because it can restrict the detection of local minima to a localized region around each snaxel's location.

The energy function E(x), as in Equation 4-19, can be minimized by changing the variable by a small value \Delta \mathbf{x}, where x represents a position in the (x, y) coordinate system:

\mathbf{x} \leftarrow \mathbf{x} + \Delta \mathbf{x}

By linear approximation, an expression for the new energy can be obtained, as expressed in Equation 4-20.


E(\mathbf{x} + \Delta \mathbf{x}) = E(\mathbf{x}) + \frac{\partial E}{\partial \mathbf{x}}\, \Delta \mathbf{x}    (4-20)

Hence, choosing \Delta \mathbf{x} = -\epsilon\, \frac{\partial E}{\partial \mathbf{x}} reduces, or minimizes, the energy. Thus, the energy function is modified as follows:

E(\mathbf{x} + \Delta \mathbf{x}) = E(\mathbf{x}) - \epsilon \left( \frac{\partial E}{\partial \mathbf{x}} \right)^{2}    (4-21)

The second term in Equation 4-21, with its negative sign and squared factor, ensures that E decreases on each iteration until a minimum is reached. The remainder of Section 4.2 illustrates and explains the implementation of Snakes using dynamic programming.
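Before turning to dynamic programming, the gradient descent update of Equations 4-20 and 4-21, applied to snaxel positions against a precomputed potential, might look like the following sketch. This is our illustration; the step size and the nearest-pixel gradient lookup are assumptions, and snaxels are taken as (row, col) coordinates.

```python
import numpy as np

def gradient_descent_step(snaxels, potential, eps=0.5):
    """Move each snaxel a small step down the potential's gradient
    (Equations 4-20 and 4-21): delta x = -eps * dE/dx."""
    py, px = np.gradient(potential)               # dP/drow, dP/dcol on the grid
    v = np.asarray(snaxels, dtype=float)
    idx = np.rint(v).astype(int)
    grad = np.stack([py[idx[:, 0], idx[:, 1]],    # dE at each snaxel (row part)
                     px[idx[:, 0], idx[:, 1]]],   # dE at each snaxel (col part)
                    axis=1)
    return v - eps * grad                         # new snaxel positions
```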


4.2.1 Dynamic Programming for Snake Energy Minimization

Dynamic programming determines a minimum by searching within given constraints; it is a discrete, multi-stage decision process. Applied to minimizing the energy of a Snake, or deformable contour, dynamic programming treats the locations of the snaxels, or pixels, as stages. Here the decision to relocate a snaxel to a new location, to minimize the energy, is made while restricting the movement of the snaxel to a window around its present location.

Section 4.2.2 explains the principle of dynamic programming using an illustration, a capital budgeting problem (Trick, 1997), to build an understanding that leads to the Snake implementation; Section 4.2.3 then explains how dynamic programming is used to minimize the total energy of the Snake so as to optimally align it to the desired edge in the image.

4.2.2 Dynamic Programming

This section explains the principle of dynamic programming with a capital budgeting problem as an example. In this demonstration, the objective is to maximize the firm's revenue from the allocated fund.

Problem definition. A corporation has $5 million to allocate to its three plants for possible expansion. Each plant has submitted a number of proposals on how it intends to spend the money. Each proposal gives a cost of expansion (c) and the total revenue expected (r). Table 4-1 lists the proposals generated.

Table 4-1. Proposals
            Plant 1       Plant 2       Plant 3
Proposal    C1    R1      C2    R2      C3    R3
1           0     0       0     0       0     0
2           1     5       2     8       1     4
3           2     6       3     9       --    --
4           --    --      4     12      --    --

Solution. There is a straightforward enumeration approach to this problem, but it is computationally infeasible; here, the dynamic programming approach is used to solve the capital budgeting problem. It is assumed in this problem that if the allocated money is not spent, it will be lost; hence, the objective is to utilize the entire allocated amount.


The problem is split into three stages, each stage representing the money allocated to one plant: Stage 1 represents money allocated to Plant 1, and Stages 2 and 3 represent money allocated to Plants 2 and 3, respectively. In this approach, the order of allocation is Plant 1 first, then Plants 2 and 3.

Each stage is further divided into states. A state includes the information required to go from one stage to the next. In this case the states for Stages 1, 2 and 3 are as follows:

x1 in {0, 1, 2, 3, 4, 5}: amount of money spent on Plant 1,
x2 in {0, 1, 2, 3, 4, 5}: amount of money spent on Plants 1 and 2, and
x3 in {5}: amount of money spent on Plants 1, 2 and 3.

Each stage is associated with a revenue, and to make a decision at Stage 3, only the amount spent on Plants 1 and 2 needs to be known. As can be seen from the states above, x1 and x2 offer a set of options for the amount that can be invested, whereas state x3 offers only the option of 5, as the total amount invested in Plants 1, 2 and 3 must equal $5 million: we cannot spend more, and any unspent amount is lost, as per the problem definition.

Table 4-2. Stage 1 computation
If capital available (x1)   Then optimal proposal   And revenue for Stage 1
0                           1                       0
1                           2                       5
2                           3                       6
3                           3                       6
4                           3                       6
5                           3                       6

The computation at each stage illustrates the working principle of dynamic programming. In Table 4-2, for each value of the available capital x1 there is an optimal proposal, and the corresponding revenue from investing in Plant 1, inferred from Table 4-1.


Next, the process evaluates the best solution for Plants 1 and 2 in Stage 2, with a number of pre-defined options for the states represented by x2. At Stage 2, to calculate the best revenue for a given state x2, the process goes through all the Plant 2 proposals, allocates that amount of funds to Plant 2, and then uses the remainder optimally for Plant 1, based on the information in Table 4-2. For example, suppose the state is x2 = 4; then in Stage 2 one of the following proposals could be implemented. From Table 4-1, if a particular proposal is selected for Plant 2 in Stage 2, the remainder of the amount is utilized for Plant 1. Table 4-3 shows the total revenue for each combination of proposals for Plants 1 and 2.

Table 4-3. Proposal revenue combinations (x2 = 4)
If Plant 2 proposal   Then Plant 2 revenue   Funds remaining for Stage 1   Maximum revenue from Stage 1   Total revenue from Plants 1 and 2
1                     0                      4                             6                              6
2                     8                      2                             6                              14
3                     9                      1                             5                              14
4                     12                     0                             0                              12

Thus, the best choice for Plants 1 and 2 would be either proposal 2 for Plant 2 with proposal 3 for Plant 1, or proposal 3 for Plant 2 with proposal 2 for Plant 1, each returning a revenue of 14. Table 4-4 lists the set of options available for state x2 in Stage 2, with the corresponding optimal proposal for each option and the total revenue returned from Stages 1 and 2. Stage 3 is then considered, with only one option for the state, x3 = 5.


Table 4-4. Stage 2 computation
If capital available (x2)   Then optimal proposal   Revenue for Stages 1 and 2
0                           1                       0
1                           1                       5
2                           2                       8
3                           2                       13
4                           2 or 3                  14
5                           4                       17

Similar computations are carried out for Stage 3, but here the capital available is x3 = 5. Once again, the process goes through all the proposals for this stage, determines the amount of money remaining, and uses Table 4-4 to decide the previous stages. From Table 4-1, there are only two proposals for Plant 3:

- Proposal 1 gives revenue 0 and leaves 5. From Table 4-4, the previous stages give 17; hence a total revenue of 17 is generated.
- Proposal 2 gives revenue 4 and leaves 4. From Table 4-4, the previous stages give 14; hence a total revenue of 18 is generated.

Hence, the optimal solution is to implement proposal 2 at Plant 3, proposal 2 or 3 at Plant 2, and proposal 3 or 2 (respectively) at Plant 1; each option gives a revenue of 18. This example illustrates the recursive procedure of the approach: at any particular state, all decisions about the future are made independently of how that state was reached. This is the principle of optimality, on which dynamic programming rests (Trick, 1997).

The following recurrences are used to perform the dynamic programming calculation. If r(k_j) is the revenue of proposal k_j at Stage j, and c(k_j) the corresponding cost, let f_j(x_j) be the revenue of state x_j in Stage j. Then

f_1(x_1) = \max_{k_1 :\, c(k_1) \le x_1} \{\, r(k_1) \,\}    (4-22)


and

f_j(x_j) = \max_{k_j :\, c(k_j) \le x_j} \big\{\, r(k_j) + f_{j-1}\big( x_j - c(k_j) \big) \,\big\} \quad \text{for } j = 2, 3    (4-23)

These formulas compute the revenue function in a forward procedure; it is also possible to compute it in a backward procedure, which gives the same result. Using the same principle, dynamic programming is implemented in the next section for the energy minimization of Snakes.
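Before moving to Snakes, the recurrence of Equations 4-22 and 4-23 can be stated compactly in code. The sketch below is ours: it solves the Table 4-1 instance with the forward procedure and recovers the optimal revenue of 18; the data layout is an assumption.

```python
# Proposals per plant as (cost, revenue) pairs, from Table 4-1.
plants = [
    [(0, 0), (1, 5), (2, 6)],           # Plant 1
    [(0, 0), (2, 8), (3, 9), (4, 12)],  # Plant 2
    [(0, 0), (1, 4)],                   # Plant 3
]
BUDGET = 5

# f[x] = best revenue obtainable from the plants processed so far while
# spending at most x (Equations 4-22 and 4-23, forward procedure).
f = [0] * (BUDGET + 1)
for proposals in plants:
    f = [max(r + f[x - c] for c, r in proposals if c <= x)
         for x in range(BUDGET + 1)]

print(f[BUDGET])   # -> 18, matching the Stage 3 computation above
```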


4.2.3 Dynamic Snake Implementation

In a manner analogous to the capital budgeting example, the snaxels, the point locations along the Snake, are relocated in the deformable model based on an energy minimization procedure, in a fashion similar to the stage-by-stage computations explained in Section 4.2.2. Where the budget in Section 4.2.2 was restricted to $5 million, in Snakes the movement of a snaxel is restricted to a search window around its current position. The objective of the process is to minimize the total energy by minimizing the energy at each stage (i.e., at each snaxel location in the model). Figure 4-5 illustrates the movement of a snaxel within its search window, and the resulting change in the orientation of the Snake. Here each snaxel is analogous to a stage, and the positions in the search window represent the states.

At any snaxel position, the energy is given by the sum of the energies at the preceding position and the current snaxel position; the minimal sum of these energies is retained as the optimal value. The process continues through all the snaxels, and at the end of each iteration of the minimization process the snaxels move toward the new locations that generated the optimal path in each of their neighborhoods, as illustrated in Figure 4-5. Here the optimal path is equivalent to the minimization of the total energy.

Figure 4-5. Dynamic snake movement.

The total energy of the snake is given by:

E(v_1, v_2, \ldots, v_n) = E_1(v_1, v_2) + E_2(v_2, v_3) + \cdots + E_{n-1}(v_{n-1}, v_n)    (4-24)

where each variable v, or snaxel, may take m possible locations, generally corresponding to adjacent locations within a search neighborhood. Each new snaxel location v_i, corresponding to the state variable of the i-th decision stage, is obtained by dynamic programming as follows. A sequence of optimal value functions \{s_i\}, i = 1, \ldots, n-1, is generated; the function s_i for each stage (i.e., snaxel) is obtained by a minimization performed over v_i. To minimize Equation 4-24 with n = 5, the state variable is minimized at each of the n snaxel locations, which requires minimizing the sum of the energy between the snaxel location under consideration and its preceding location, as in the illustration of Section 4.2.2. Hence, for n = 5, the following energy minimizations are performed at each stage:

s_1(v_2) = \min_{v_1} \{ E_1(v_1, v_2) \}


s_2(v_3) = \min_{v_2} \{ s_1(v_2) + E_2(v_2, v_3) \}
\ldots
s_4(v_5) = \min_{v_4} \{ s_3(v_4) + E_4(v_4, v_5) \}
\min_{v_1, \ldots, v_5} E = \min_{v_5} \{ s_4(v_5) \}

Thus, in general:

s_k(v_{k+1}) = \min_{v_k} \big( s_{k-1}(v_k) + E_k(v_k, v_{k+1}) \big)    (4-25)

Considering Equation 4-25, with k representing the stage and v_k the states, the recurrence relation used to compute the optimal function for the deformable contour is given by:

s_k(v_{k+1}) = \min_{v_k} \big\{ s_{k-1}(v_k) + E_{ext}(v_k) + |v_{k+1} - v_k|^{2} \big\}    (4-26)

Assuming that the possible states, or new locations, of a snaxel lie in a 3x3 window around its current location, there are nine possible states per stage (i.e., per snaxel location). The cost associated with each of these possible states is equivalent to the internal energy of the snaxel at that location. The objective is to minimize this energy over the n snaxel points using Equation 4-26. The optimal deformable contour is obtained through an iterative process, repeated until E_min(t) no longer changes with time.

This approach is significant because it enforces constraints on the movement of the Snake, which is not possible in the gradient descent approach to minimizing the Snake energy; hence, the dynamic programming approach yields better results than the gradient descent algorithm. Chapter 5 explains the overall extraction process using anisotropic diffusion and dynamic Snakes.
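The recurrence of Equation 4-26 maps directly onto a Viterbi-style table computation. The sketch below is our illustration, not the thesis code: it performs one dynamic-programming iteration over a snake with a 3x3 search window, with a precomputed image potential standing in for E_ext and a weight alpha on the tension term; it assumes the snaxels do not lie on the image border.

```python
import numpy as np

# The nine candidate moves of a 3x3 search window (states per stage).
MOVES = np.array([(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)])

def dp_snake_step(snaxels, potential, alpha=0.1):
    """One dynamic-programming pass (Equation 4-26): each snaxel may move
    within a 3x3 window; the minimal cumulative-energy path is kept."""
    v = np.asarray(snaxels, dtype=int)          # (n, 2) as (row, col)
    n, m = len(v), len(MOVES)
    cost = np.full((n, m), np.inf)              # s_k for every state
    back = np.zeros((n, m), dtype=int)          # argmin pointers
    pos = lambda k, s: v[k] + MOVES[s]
    for s in range(m):                          # stage 0: image energy only
        y, x = pos(0, s)
        cost[0, s] = potential[y, x]
    for k in range(1, n):                       # forward recursion over stages
        for s in range(m):
            y, x = pos(k, s)
            prev = [cost[k - 1, p]
                    + alpha * np.sum((pos(k, s) - pos(k - 1, p)) ** 2)
                    for p in range(m)]
            best = int(np.argmin(prev))
            back[k, s] = best
            cost[k, s] = prev[best] + potential[y, x]
    s = int(np.argmin(cost[-1]))                # backtrack the optimal path
    out = np.empty_like(v)
    for k in range(n - 1, -1, -1):
        out[k] = pos(k, s)
        s = back[k, s]
    return out
```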


CHAPTER 5
METHOD OF EXTRACTION

Numerous methods exist to extract road features from an aerial image. Most of the feature extraction methods that have been developed are implemented using a combination of image processing techniques drawn from the various levels of an image processing system. Road representation varies from image to image, depending on the resolution of the image, the weather conditions prevailing at the time of the photograph, and the sun's position, as was discussed in Chapter 2. Hence, it is very difficult to define a single method that extracts roads from any image. To overcome the hurdle of devising new methods to identify and extract road features for each image, depending on its nature, recent research has targeted the development of a global method: a combination of image processing techniques able to extract road features from any aerial image.

Our study has developed a feature extraction method that could be implemented as an initial road extraction step in such a global model, or as an independent semi-automatic road extraction method. The method evolved through stages, with implementation of the Perona-Malik algorithm and Snakes (deformable contour models) using dynamic programming. The following sections explain this stage-based evolution of the method during its development, followed by a detailed explanation of the implemented road feature extraction method.


5.1 Technique Selection

A generic feature extraction method is a three-step process involving pre-processing, edge detection and feature extraction. The road-edge feature extraction method developed in our study evolved over stages, and implements a combination of image processing techniques at each step. At each stage, the method developed was inspected and then evaluated based on its ability to extract road-edge features from an aerial image; the roads extracted at each stage were visually inspected and compared to the desired road edge locations in the image. Methods were developed in stages, using different combinations of image processing techniques, until the extracted roads were close to the desired, or actual, road edges in the aerial image, based on visual inspection and comparison of the results.

Table 5-1. Stages of development
Step                 Stage 1           Stage 2          Stage 3         Stage 4
Pre-processing       Gaussian          Gaussian         Gaussian        Perona-Malik algorithm
Edge detection       Sobel             Sobel            Sobel           Perona-Malik algorithm
Feature extraction   Hough transform   Gradient snake   Dynamic snake   Dynamic snake

Table 5-1 briefly lists the image processing techniques implemented at each stage in developing the method of extraction. The road edges extracted at Stage 4 (Table 5-1) gave results close to the desired road edge locations in the image. The method developed in Stage 4 involved the Perona-Malik algorithm (Chapter 3) and dynamic Snakes (Chapter 4). Results obtained using Stage 3 and Stage 4 were both quite close to the desired road edges upon visual inspection of the test data.


Hence, the performance of road-edge extraction using the methods developed in Stages 3 and 4 was evaluated by calculating their goodness of fit to the desired road edge, along with an F-test on the results obtained from a set of 10 image sets. The method of evaluation and the test image data are discussed further in Section 5.3. Chapter 6 illustrates the results obtained using the methods developed in Stages 3 and 4; as part of the analysis, it evaluates and compares these results based on goodness of fit and the F-test, both of which are explained in detail in Section 5.3.

The following illustrations and discussion of the results obtained at each stage explain the evolution of the method. As per Table 5-1, the road edges extracted at the initial stage (Stage 1) were very rudimentary. Because the Hough transform was used at the feature extraction step, the extraction process was constrained to roads that existed as straight lines across the input image; this restricted the extraction of curved roads and of road features that exist as splines. Figures 5-1 and 5-2 illustrate the input image and the corresponding output (the extracted road edge feature) of the method developed in Stage 1. In Figure 5-1, the road exists as a spline, or curved feature, instead of a straight line across the image; road segments 1, 2 and 3 exist as long edges within the image. Implementing Stage 1 to extract road edges results in multiple lines extracted across the image, representing the long and prominent edges within it. Upon Hough transform implementation in Stage 1 (the feature extraction step in Table 5-1), these three edges (1, 2 and 3 in Figure 5-1) are extracted with multiple lines passing through them, since many lines in Hough space (Chapter 2) have edge pixels from the road edges of sections 1, 2 and 3 passing through them.


Figure 5-1. Input image for the Hough transform.

Figure 5-2 illustrates the road extracted from the input image in Figure 5-1 using the Hough transform. Following the principle of the Hough transform, the lines passing through the maximum number of edge pixels are traced. The results obtained using the method developed in Stage 1 require a great deal of post-processing, involving selection of the best-fit line, trimming the line to fit the actual road segments in the image, and linking the extracted segments. All considered, roads extracted using the Hough transform will not align with curved and spline road features, and alternative image processing techniques would be required to extract curves and splines. Hence, Active Contour Models (Snakes) were introduced as the high-level feature extraction technique to be implemented at Stage 2 of the development process.


Figure 5-2. Extracted road using the Hough transform.

In Stage 2, the high-level process implemented with the Hough transform in Stage 1 was replaced by Active Contour Models (Snakes) (Table 5-1), developed by Kass et al. (1988). Figure 5-3 displays a road feature to be extracted using the method developed in Stage 2; here the Snake is implemented using the gradient descent algorithm during the feature extraction step. Figure 5-4 illustrates the road edges extracted with Figure 5-3 as the input image.


Figure 5-3. Input image for gradient snake extraction.

Figure 5-4 shows the initial road segment, or snake, in red, with green points representing the snaxels. The initial and extracted road segments are overlaid on a diffused, or blurred, version of Figure 5-3. The extracted road segment in yellow represents the snake after one iteration of the gradient descent algorithm, and the road segment in blue represents the snake after two iterations. The snake in yellow, after the first iteration, does not align well with the road edge; instead it shows a jagged appearance due to the movement of the snaxels to the steepest gradient near each current snaxel location. Furthermore, the road extracted on the second iteration, represented in blue, shows the snake moving away from the desired road edge location and becoming more jagged.


Figure 5-4. Road extracted using gradient snakes.

Although the results obtained did not align closely enough with the desired edge location, the road edge extracted using the Active Contour Model (Snakes) as the high-level processing technique in the feature extraction step of Stage 2 still gave significantly better results than the road edge extracted using the Hough transform in Stage 1.

From further research and inspection of the results obtained using the method developed in Stage 2, it was deduced that implementing snakes using the gradient descent algorithm in the feature extraction step resulted in a jagged-looking snake instead of a continuous, smooth feature along the road edge. This occurs because the gradient descent algorithm, in the process of minimizing the energy of the snake over several iterations, moves the snaxels to the steepest image gradient near each current snaxel location. Without this movement being restricted, the orientation and alignment of the snake changed drastically: over the time steps, or iterations, the algorithm made the snaxels converge to the steepest gradients in the image and moved the snake away from the desired road edge location.


The problem of aligning the snake to the desired road edge led to the implementation of snakes using dynamic programming, developed during Stage 3 of the development process. The principles and implementation of dynamic programming for snake energy minimization are explained in detail in Chapter 4. The results obtained using the method developed in Stage 3 showed appropriate alignment of the snakes, or extracted road edges, to the desired road edge locations; the roads extracted at this stage showed better fits than any of the prior stages. However, some results were still significantly affected by noise, or by other road edges with similar gradients near the desired road edge, resulting in major movement of the snake away from the desired road edge locations. The central portion of the road segment extracted in Figure 5-5 is a good example of this problem.

Figure 5-5 illustrates a road segment extracted using Gaussian blurring and the dynamic snake, as developed in Stage 3. The extracted road segment, or snake, is represented in green, with the final snaxel positions in red; the initial locations of the snaxels are represented in blue. In Figure 5-5, the extracted road segment is away from the desired road edge in the central portion of the segment, whereas toward the ends of the segment it aligns closely with the outer edge of the road.


Upon further inspection and research, the major reason for this movement of the snake, or extracted road edge, away from the desired road edge was found to be the implementation of Gaussian blurring at the pre-processing stage.

Figure 5-5. Road extracted using Gaussian blurring and dynamic snakes.

Blurring, or diffusion, was performed using the Gaussian system to minimize noise and variation across pixels within homogeneous regions of the image (i.e., regions representing a single entity, such as agricultural land or buildings). The need to minimize noise within homogeneous regions and to minimize variation across pixels is very well met by blurring the image using Gaussian convolution. However, this process of blurring treats the whole image as one homogeneous region, due to its isotropic nature, whereas an image usually consists of several different homogeneous regions, representing agricultural fields, houses, roads, and trees.

PAGE 108

Consequently, upon implementing blurring using Gaussian convolution, the boundaries between regions, represented by a high variation in intensity, were blurred. This affected the edge representation between regions in the image, resulting either in the loss of an edge or in shifting it from its original location in the image. Upon further research and evaluation of the results, a need was identified for a blurring process that could preserve the edges between regions in an image while minimizing noise and variation between pixels within regions. This objective was accomplished by implementing the Perona-Malik algorithm in place of the Gaussian blurring and Sobel edge detection implemented in Stages 1 through 3 of the development process. As the Perona-Malik algorithm performed blurring within homogenous regions in the image by anisotropic diffusion, blurring across edges was restricted, thereby preserving the edges. Stage 4 of the development process implemented the Perona-Malik algorithm, which performed both the blurring and the edge detection steps of the feature extraction process, followed by feature extraction using Dynamic Snakes (i.e., snakes implemented using dynamic programming, Table 4-1).

Figure 5-6 illustrates the road extracted for the same input image as Figure 5-3 using the Perona-Malik algorithm and the Dynamic Snake. The points, or snaxels, in blue are the same initial road segment points as shown in Figure 5-5; the road extracted using the Perona-Malik algorithm and Dynamic Snakes is represented in blue, with its snaxel, or road-edge, locations represented by red points. On visual inspection, the road extracted using the Perona-Malik algorithm as the image processing technique at the pre-processing stage gives a better fit than the Gaussian technique, especially in regions surrounded by noise.
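The difference between the two pre-processing schemes can also be seen in code. The following is a minimal MATLAB sketch of the Perona-Malik diffusion loop using the discrete four-neighbor scheme of Malik and Perona (1990); the study itself used the anisodiff implementation cited in Appendix A, and the image and parameter values below are illustrative assumptions only.

I = [50*ones(100,50), 200*ones(100,50)] + 10*randn(100,100); % synthetic noisy image
kappa  = 30;    % contrast threshold K: gradients well above it are treated as edges
lambda = 0.25;  % integration constant, at most 1/4 for numerical stability
for t = 1:15
    dN = [I(1,:);     I(1:end-1,:)] - I;    % four-neighbor intensity differences,
    dS = [I(2:end,:); I(end,:)]     - I;    % with replicated image borders
    dE = [I(:,2:end), I(:,end)]     - I;
    dW = [I(:,1),     I(:,1:end-1)] - I;
    cN = exp(-(dN/kappa).^2); cS = exp(-(dS/kappa).^2);  % conduction: near 1 inside
    cE = exp(-(dE/kappa).^2); cW = exp(-(dW/kappa).^2);  % regions, near 0 across edges
    I  = I + lambda*(cN.*dN + cS.*dS + cE.*dE + cW.*dW); % anisotropic update
end
% Setting every conduction coefficient to 1 turns the same loop into plain isotropic
% (Gaussian-like) diffusion, which blurs the edges along with the noise.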

Figure 5-6. Perona-Malik algorithm and dynamic Snakes.

To support this conclusion, a performance evaluation was carried out using the methods explained in Section 5.3, comparing the roads extracted using the methods developed in Stage 3 (Gaussian blurring and the Sobel edge detector) and Stage 4 (the Perona-Malik algorithm).

5.2 Extraction Method

This section explains in detail the feature identification step and the method of extraction. The method developed in Stage 4 (Table 5-1) is explained in detail in this section, as it is the final method of feature extraction, developed over the previous stages. The feature extraction methods developed through Stages 2 to 4 were all implemented after the feature identification step. During feature identification, the road points representing the start and end of road segments were manually identified in the input image, and then used to generate further road points.

The generated road points, or snaxels, are used as inputs for the final step of feature extraction in Stages 2 to 4. Hence, road extraction is initialized by the feature identification step, in which a user manually identifies and selects the start and end of those road segments forming the road network in a given input image. The coordinates of the start and end of the road segments, selected during the feature identification step, are stored in the process, and are used to interpolate further road points, or snaxels. They represent the initial estimated locations of the road edges in the image, and define the subset road segment image taken from the pre-processed input image. During the first step of extraction, the input image is pre-processed, or diffused, to minimize noise and variation within regions in the image. In the method developed in Stage 4, the input image is pre-processed using the Perona-Malik algorithm, which performs both edge detection and blurring simultaneously, restricting blurring across edges. Using the road segment information, road points, or snaxels, are interpolated for each segment, and a corresponding road segment image is cropped as a subset of the diffused/blurred image obtained through the pre-processing and edge detection steps. The interpolated road point, or snaxel, coordinates are transformed to the coordinate system of the road segment image. The road segment image, along with the transformed snaxel coordinates, is used as the input for the feature extraction step, which is implemented using Dynamic Snakes. The resultant road edges, detected through the feature extraction step, are transformed back to the coordinate system of the input image, thereby aligning the extracted road to the desired road edge feature.

The image considered for road feature extraction was a DOQQ (Digital Ortho Quarter Quadrangle) with dimensions of 1000x1000 pixels, obtained from www.labins.org.

The image was converted to grayscale and divided into 250x250-pixel image subsets. The road segments were identified and extracted from the converted images using the implemented method.

The process of extraction is broadly separated into feature identification and feature extraction. Feature identification is the initial step and, as the name suggests, involves the selection of a set of points representing the desired features that are to be extracted. This step is followed by feature extraction, which is split into pre-processing, edge detection, and the final step of feature extraction. The method implemented in our study involves the Perona-Malik algorithm, which performs both the pre-processing and edge detection steps simultaneously, followed by feature extraction implemented using Dynamic Snakes. The extraction process, including feature identification and the method of extraction developed through Stage 4, is explained in detail in the flow chart in Figure 5-7. The Matlab code used to perform the implemented method of extraction is presented in Appendix A.

Figure 5-7. Process of road-feature extraction.

5.2.1 Selection of Road Segments

Figure 5-8. Selection of road segment.

In relation to the flowchart shown in Figure 5-7, this section illustrates the step of selecting a road segment in the input image. This is the initialization stage of the feature extraction method. The user manually selects the start and end of the road segments forming the road network in the input image, as in Figure 5-8; the points in blue are the start and end of the road segments forming the road network in the image, and are selected by the user. The start and end of the road segments can also be considered as nodes of the road network. The coordinates of the start and end of each road segment are used in the process to interpolate road seed points, or snaxels, and to crop the road segment from the output image after pre-processing. These inputs are used together in the feature extraction step.

5.2.2 Image Diffusion

Image diffusion, or blurring, is performed as a pre-processing operation prior to road point interpolation, that is, the generation of the snaxels that represent the estimated road edge locations. As per Stage 4, the Perona-Malik algorithm is implemented at the pre-processing stage to diffuse the input image instead of Gaussian blurring, as explained earlier in Section 5.1. The Perona-Malik algorithm blurs the image based on the principle of anisotropic diffusion, according to a predefined criterion, whereas Gaussian blurring, which uses isotropic diffusion, blurs in a similar fashion across the whole image. The criterion of the Perona-Malik algorithm is set so that blurring occurs only within homogenous regions in an image and is restricted between regions, thus preserving the edge information in the image. This ability of the Perona-Malik algorithm helps retain the much needed edge information used for road edge extraction later in the process.

Figure 5-9 illustrates the blurring of a highlighted intersection region in the original image. The insets overlaid on the original image illustrate the blurred portion of the highlighted intersection. The inset on the left, highlighted in the red block, represents the edge between the road and the adjacent region pre-processed using the Perona-Malik algorithm, and the inset on the right, highlighted in blue, represents the edge between the road and the same adjacent region pre-processed using Gaussian convolution. As can be seen on close inspection, the road edge is better defined in the anisotropically diffused image (the inset on the left) than in the Gaussian-blurred inset highlighted in blue on the right, which has a jagged appearance.
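The blurring criterion takes the form of a conduction function g applied to the local intensity gradient. Perona and Malik (1990) proposed two such functions, shown below in LaTeX notation, where K is a user-chosen contrast threshold; both stay near 1 where the gradient magnitude is well below K, so homogenous regions are blurred, and fall toward 0 where it is well above K, so edges are preserved:

$$g(\nabla I) = \exp\!\left(-\left(\frac{\|\nabla I\|}{K}\right)^{2}\right) \qquad \text{or} \qquad g(\nabla I) = \frac{1}{1 + \left(\frac{\|\nabla I\|}{K}\right)^{2}}$$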

Figure 5-9. Perona-Malik algorithm vs. Gaussian.

5.2.3 Interpolation of Road Segments

In this step, road points, or snaxels, are generated by interpolation. Interpolation of road segments is carried out by assuming that the road exists as a straight line; the interpolation interval is set by the user. A line equation is generated for each road segment, based on the start and end coordinate information gathered during the initial step of feature identification. The process, using the start and end of each road segment, computes the slope and the intercept of its line equation. Using the equation of a straight line, with the calculated slope and intercept for each road segment, the coordinates of road points are determined at set intervals between the start and the end of the road segment, for all the segments selected during the initial step of feature identification.
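A minimal MATLAB sketch of this interpolation is given below; the endpoint coordinates and the 5-pixel interval are illustrative assumptions (the Seginterp function in Appendix A additionally switches to interpolating along y when a segment is closer to vertical):

sx = 10; sy = 12; ex = 60; ey = 40;  % hypothetical start and end of one segment
res = 5;                             % interpolation interval set by the user
m = (ey - sy)/(ex - sx);             % slope from the segment's line equation
b = sy - m*sx;                       % intercept of the same line
xi = sx:sign(ex - sx)*res:ex;        % x coordinates sampled at the set interval
yi = m*xi + b;                       % corresponding y from the straight line
snaxels = [xi(:) yi(:)];             % interpolated road points, or snaxels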

Figure 5-10. Interpolated road points.

Figure 5-10 shows a plot of the interpolated points, calculated by the above-explained process, for the selected road segments. The interpolated road points, or snaxels, are used to estimate the initial road edge location, and serve as input information for the feature extraction step, along with the diffused segment image.

5.2.4 Diffused Road Segment Subset and Road Point Transformation

This step of the feature extraction process uses the start and end coordinate information of each road segment, stored from the initial feature identification step, to crop each road segment image as a subset of the diffused, pre-processed input image.

Figure 5-11 illustrates, in the inset to the left of the original image, the diffused image subset of the northwest road segment, overlaid in green with the road points transformed onto the road-segment image subset.

Figure 5-11. Road segment subset and its transformed road points.

Similarly, the diffused image subset and road points, or snaxels, are transformed for all the selected road segments to be extracted, and are passed on to the feature extraction step as input. The diffused image subsets of the road segments, with their corresponding transformed road points, or snaxels, are used as the input for Snakes implemented using dynamic programming.
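The cropping and coordinate transformation amount to a translation by the corner offset of the subset, sketched below in MATLAB; the image, points, and the 5-pixel margin are illustrative stand-ins for the quantities produced by the earlier steps (the Segtrans function in Appendix A performs the equivalent operations):

Idiff = rand(250);                    % stand-in for the diffused input image
pts   = [40 60; 45 80; 50 100];       % stand-in interpolated road points (x, y)
pad   = 5;                            % margin around the segment, as in Appendix A
minx = max(floor(min(pts(:,1))) - pad, 1);
miny = max(floor(min(pts(:,2))) - pad, 1);
maxx = min(ceil(max(pts(:,1))) + pad, size(Idiff,2));
maxy = min(ceil(max(pts(:,2))) + pad, size(Idiff,1));
SegIm  = Idiff(miny:maxy, minx:maxx); % road segment subset of the diffused image
locPts = [pts(:,1) - minx + 1, pts(:,2) - miny + 1]; % snaxels in subset coordinates
% After snake extraction, adding [minx-1, miny-1] back to each snaxel returns the
% result to the coordinate system of the input image.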

5.2.5 Snake Implementation and Transformation of Extracted Road

The dynamic snake implementation takes the road segment subset and the road points as input. The snake works on the principle of energy minimization. The energy here is the total energy of the snake, composed of the energy of each road point, or snaxel. The energy of each snaxel is composed of internal and external energy, as explained in detail in Chapter 4. The energy of each snaxel is minimized by relocating the snaxel to the location of maximum gradient; in this piece of research, that is the edge of the road in the image. The objective of aligning the snake to the road edge is achieved by minimizing the overall energy of the snake (i.e., the sum of the energy at each road point). This is done by dynamic programming, in which the snaxels, or road points, are relocated iteratively, until every snaxel is located nearest to the road edge, by minimizing the overall energy of the snake over all the road points, or snaxels. A detailed explanation of the implementation of the snake, using energy minimization, is given in Chapter 4.
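In the dynamic programming formulation of Amini et al. (1990), this minimization can be written as a stagewise recurrence over candidate positions for each snaxel; a sketch of the recurrence, in LaTeX notation, where $S_i$ is the optimal accumulated energy up to snaxel $v_i$, is:

$$S_i(v_i) = \min_{v_{i-1}} \left[ S_{i-1}(v_{i-1}) + E_{\mathrm{int}}(v_{i-1}, v_i) \right] + E_{\mathrm{ext}}(v_i)$$

Because each snaxel's candidate positions are restricted to a small neighborhood of its current location, the movement of the snake is constrained at every iteration, which is precisely the restriction that the gradient descent implementation of Stage 2 lacked.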

Figure 5-12. Extracted road using Perona-Malik and dynamic snake algorithm.

Figure 5-12 illustrates the road edge extracted using the Snakes implementation. Here the points in blue represent the snaxels that are aligned to the road edge through the energy minimization process implemented using dynamic programming. These points are further concatenated to form a spline, represented in blue. The red points in Figure 5-12 represent the start and end of the road segments selected at the initial feature identification step. Section 5.3 describes the method of evaluation used to support the implemented method, in comparison to the road edge features extracted using Stage 3.

5.3 Evaluation Method

The roads extracted during the first two stages were, on visual inspection, considered undesirable, as these stages could not extract the desired road edge locations, as discussed in Section 5.1. Hence, further stages were developed through experimentation with combinations of image processing techniques. The results obtained through the methods developed in Stage 3 and the final stage (i.e., Stage 4) of the development process were close to the desired road edges in the image. Inspection of the results on a dataset of 10 road segments showed that the method developed in Stage 4 gave better results than the method developed in Stage 3. To support this conclusion, an evaluation of the method of extraction was performed, comparing the extracted road edge information from each stage (i.e., Stage 3 and Stage 4) against the road edge extracted by a manual process.

Performance on road extraction, when implemented using the methods developed in Stage 3 and Stage 4, was determined using goodness of fit and F-test statistics. The mean squared error was calculated for each extracted road segment, based on the differences between the coordinates of the desired road point, or snaxel, locations and the final road point, or snaxel, locations obtained using Stage 3 and Stage 4. Further, hypotheses were developed, and the performance of the methods developed in Stage 3 and Stage 4 was evaluated against them. Thus, the criteria for evaluation of the method of extraction were based on the goodness of fit of the method of extraction, and on hypothesis testing using the F-test.

Figure 5-13. Desired and extracted road edges.

5.3.1 Goodness of Fit

The goodness of fit of the method of extraction was evaluated based on the values of the maximum absolute error, the mean squared error, and the root mean squared error. As an example, in Figure 5-13, the road extracted using the method developed in Stage 4 (i.e., the Perona-Malik algorithm and Dynamic Snakes) is represented in blue, and the road extracted using the method developed in Stage 3 (i.e., Gaussian blurring and Dynamic Snakes) is represented in green. The desired road edge in Figure 5-13 is represented in yellow, with its corresponding snaxel, or road point, locations in red on the yellow line. The maximum absolute error for each of the methods developed in Stage 3 and Stage 4 was calculated using the following formula:

If $d_i$ is the distance between the actual/desired road point, or snaxel, and the extracted road point, or snaxel, where $i = 1$ to $n$ and $n$ is the number of snaxel, or road point, locations, then

$$\text{maximum absolute error} = \max_i |d_i|$$

With respect to Figure 5-13, the distances between the desired snaxel locations (red dots on the yellow road edge in Figure 5-13) and their corresponding snaxel positions obtained through the methods implemented in Stage 3 (red dots on green in Figure 5-13) and Stage 4 (red dots on blue in Figure 5-13) were calculated. Of the calculated differences between the desired snaxel positions and the extracted snaxel positions, the one with the maximum value in each of the methods gave an estimate of how far a snaxel in each method could deviate from its desired location; this is the maximum absolute error.

Furthermore, an overall measure of the deviation of the curve, or extracted road edge, from the desired road edge was determined from the mean squared error. This value was calculated as the mean of the squares of the difference, or distance, of each extracted snaxel from its corresponding desired location:

$$\text{Mean Squared Error} = \frac{1}{n} \sum_{i=1}^{n} d_i^{2}$$

The square root of the mean squared error was used as a performance indicator for the extraction process: the lower the root mean squared error, the better the method of extraction. The mean squared error value calculated in this method of evaluation was also used in the generation of a hypothesis for the F-test.
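These three measures translate directly into a few lines of MATLAB; in the sketch below, the desired and extracted snaxel coordinates are illustrative placeholders:

xd = [10 15 20 25]; yd = [12 14 16 18];  % hypothetical desired snaxel positions
xe = [11 15 22 24]; ye = [13 14 15 18];  % hypothetical extracted snaxel positions
d = hypot(xe - xd, ye - yd);             % per-snaxel deviation d_i in pixels
maxAbsErr = max(abs(d));                 % maximum absolute error
mse  = mean(d.^2);                       % mean squared error
rmse = sqrt(mse);                        % root mean squared error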

5.3.2 F-Test

An F-test was performed by generating hypotheses based on the comparison of the mean squared error values for the roads extracted using the methods developed in Stage 3 and Stage 4. This test was performed on road segments extracted from a set of test data of 10 road segment image subsets, based on the following hypotheses:

H0: Mean Squared Error Anisotropic = Mean Squared Error Gaussian
HA: Mean Squared Error Anisotropic < Mean Squared Error Gaussian
HG: Mean Squared Error Anisotropic > Mean Squared Error Gaussian

where Mean Squared Error Anisotropic is the mean squared error for the road extracted using the method developed with the Perona-Malik algorithm and Dynamic Snakes, implemented in Stage 4, and Mean Squared Error Gaussian is the mean squared error for the road extracted using the method developed with Gaussian blurring and Dynamic Snakes, implemented in Stage 3.

If H0 is true, the two methods performed similarly; if HA is true, anisotropic diffusion (Stage 4) performed better than the Gaussian (Stage 3); and if HG is true, the Gaussian (Stage 3) performed better than anisotropic diffusion (Stage 4).
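A sketch of the corresponding test in MATLAB is given below; it treats the ratio of the two mean squared errors as the F statistic and looks up the one-sided critical value with finv from the Statistics Toolbox. The error values and degrees of freedom are illustrative numbers, not results from the study, and the significance level follows Section 5.3:

mseGauss = 36.61; msePM = 1.62;     % hypothetical MSE values for one road segment
dof   = 20;                         % degrees of freedom: snaxels in the segment
alpha = 0.05;                       % significance level used in this study
F     = mseGauss / msePM;           % F statistic, here 22.6
Fcrit = finv(1 - alpha, dof, dof);  % one-sided critical value
acceptHA = F > Fcrit;               % true: reject H0 in favor of HA (Stage 4 better)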

Further evaluation was carried out on the set of 10 image subsets extracted using Stage 3 and Stage 4, using goodness of fit and the F-test. Chapter 6 illustrates the results obtained using the methods developed in Stage 3 and Stage 4, followed by an evaluation and analysis of the results obtained.

CHAPTER 6
RESULTS AND ANALYSIS

6.1 Results

This chapter illustrates and evaluates the results obtained through the methods of road feature extraction developed in Stage 3 (with Gaussian blurring as the pre-processing step) and Stage 4 (with the Perona-Malik algorithm as the pre-processing step) of the development process (Table 5-1).

Figure 6-1. Road extracted using Gaussian and Perona-Malik with dynamic Snakes.

Figure 6-1 illustrates a final extracted road, using the method from Stage 3 for the green line (Gaussian) and the method from Stage 4 for the red line (Perona-Malik algorithm). The methods of extraction developed in Stage 3 and Stage 4 were similar except for the initial pre-processing step: in Stage 3, the initial pre-processing was implemented using the Gaussian, an isotropic diffusion technique, and in Stage 4, it was implemented using the Perona-Malik algorithm, an anisotropic diffusion technique.

6.2 Analysis of Results on Test Images

The performance of the methods of extraction developed in Stage 3 and Stage 4 was evaluated from their implementation on a set of 10 road segments. The performance of Stage 3 (Gaussian) and Stage 4 (Perona-Malik algorithm) was determined by evaluating the goodness of fit and the F-test statistic for each extracted road segment; the method of evaluation is explained in detail in Section 5.3. Goodness of fit was determined for each extracted road segment, and gives basic statistical evidence of how well the road is extracted by each method of extraction, based on the following factors:

Maximum absolute error: This value gives a numerical description of how far (in pixels) an extracted edge (snaxel or road edge) is from the actual/desired road edge in an image.

Mean squared error: This gives a numerical description of how well the overall snake, or extracted road, is aligned to the desired road edge. The square root of the mean squared error is used as a performance indicator, where lower values of the root mean square suggest better performance from that method of extraction.

An F-test was also performed to compare the methods of extraction implemented on each road segment, to evaluate the best method of extraction. This test was based on a hypothesis developed using the mean squared error value that was calculated for each road segment from the evaluated goodness of fit, as explained in Section 5.3.

Figure 6-2 (A to J) illustrates the road segment images and their extracted output. The roads extracted using the Gaussian as the pre-processing step (Stage 3) are represented in green, and the roads extracted using the Perona-Malik algorithm as the pre-processing step (Stage 4) are represented in blue. Green dots along the green lines in each image represent the final snaxel, or road edge, positions after the Stage 3 implementation; blue dots along the blue lines represent the final snaxel, or road edge, positions after the Stage 4 implementation.

On visual inspection of the roads extracted, the method implemented in Stage 4 (Perona-Malik algorithm) performed better than the one implemented in Stage 3 (Gaussian). To support this evaluation, goodness of fit and an F-test, as per the hypotheses explained in Section 5.3, were performed on each of the 10 images in Figure 6-2 (A-J). The final locations of the snaxels, or road edge points (green and blue for Stages 3 and 4, respectively), were determined and compared to their desired locations, as specified by the user on the image. The distance between the desired and the actual location gave an absolute error for each extracted edge position. A maximum absolute error was determined for each road extracted, giving a numerical estimate of the maximum extent to which an extracted edge deviated in the implemented method of extraction.

Figure 6-2. Roads extracted on test images A to J. This figure illustrates the road features extracted using Stage 3 (green) and Stage 4 (blue) over images A to J.

Table 6-1. Summary of evaluation for extracted road features

         Maximum          Mean Squared     Root Mean             F-test
         Absolute Error   Error            Squared Error
Label    G      PM        G      PM        G      PM       F-value   DOF   Accept
A        10.28  3.82      36.61  1.62      6.05   1.27     22.60     20    HA
B        5.45   1.63      8.19   0.84      2.86   0.92     9.67      13    HA
C        9.94   5.01      17.67  4.43      4.20   2.10     3.98      23    HA
D        9.72   3.44      18.67  2.96      4.32   1.72     6.30      15    HA
E        10.09  2.43      33.64  1.51      5.80   1.22     22.28     26    HA
F        14.63  8.54      82.83  23.7      9.10   4.86     3.49      22    HA
G        7.98   4.20      21.40  3.97      4.62   1.99     5.38      19    HA
H        10.81  3.99      34.81  3.40      5.90   1.84     10.24     18    HA
I        7.48   8.46      17.37  5.68      4.16   2.38     3.06      29    HA
J        20.47  13.91     88.01  51.8      9.38   7.20     1.70      44    HA

Hypotheses were tested at a significance level of α = 0.05, with DOF representing the degrees of freedom, i.e., the number of snaxel, or road point, locations forming the extracted road feature in each label (image subset).

Table 6-1 gives a summary of the statistics obtained for the road edges extracted; G and PM respectively denote the methods of extraction of Stage 3 (Gaussian) and Stage 4 (Perona-Malik algorithm) for images A-J in Figure 6-2. Table 6-1 lists the maximum absolute error, a value in pixels giving the distance between the detected road edge and the desired road edge location, together with the mean squared error and the root mean squared error calculated for the roads extracted using the Gaussian and the Perona-Malik algorithm as pre-processing steps; the mean squared errors were used in the generation of the hypotheses. As can be seen in Table 6-1, according to the evaluation of the hypotheses explained in Section 5.3, the performance of Stage 4 turned out to be better than that of Stage 3 on all the test images. The maximum absolute error values in the case of Gaussian blurring showed more deviation than the corresponding Perona-Malik algorithm values.

Images A, E, and H in Figure 6-2 had high maximum absolute errors, mean squared errors, and F-test values for Stage 3 (Gaussian), because the road edges extracted using the Gaussian deviated drastically from the desired road edge, as can be seen in the results obtained for those images.

Thus, from the visual interpretation of the results in images A, E, and H (Figure 6-2), along with the basic statistics and hypothesis evaluation presented in Table 6-1, the method from Stage 4 performed much better than the method of extraction developed in Stage 3. Hence, the method of extraction developed using a combination of the Perona-Malik algorithm, an anisotropic diffusion pre-processing approach, and Snakes implemented using dynamic programming gives the best result when compared to any other combination of techniques tested through Stages 1 to 4, as listed in Section 5.1.

CHAPTER 7
CONCLUSION AND FUTURE WORK

7.1 Conclusion

In our study, the road-extraction methods that were developed in stages were compared in Chapter 5 and, based on the evaluation in Chapter 6, the method of extraction developed using the Perona-Malik algorithm and Dynamic Snakes in Stage 4 gave the best results. The test images used to evaluate the method of extraction were made available from the land and boundary information system (Source: http://data.labins.org last accessed: 16 August 2004). The implementation of the Perona-Malik algorithm in pre-processing, and of Snakes using dynamic programming at the feature extraction step (Section 5.1), was significant in getting the desired result. By blurring homogenous regions within an image while restricting blurring between regions, the Perona-Malik algorithm led to well-defined boundary information. In comparison, Gaussian blurring, an isotropic blurring technique, blurred the image uniformly, which led to shifts in boundaries, or incomplete boundary information. Thus, pre-processing using the Perona-Malik algorithm performed better than conventional Gaussian blurring.

In addition to the Perona-Malik algorithm, implementing Snakes (Active Contour Models) led to the extraction of the appropriate road edges, whereas the Hough Transform, which performed a general straight-line feature extraction within an input image, failed to do so (Section 5.1). Above all, the results using Snakes were enhanced when implemented using dynamic programming rather than the gradient descent algorithm; this is because of the ability of dynamic programming to restrict the movement of the snake, as was explained in detail in Chapter 4 (Section 5.1).

The method of extraction developed in this research, and the proposals for future work aimed at automating the initial step of the identification and selection of road segment points (Section 5.2), may only work on high-resolution images. These processes need edge information for extraction, along with geometric (width) and radiometric (intensity variation) characteristic information across the road and along its direction, to identify and select road segment points. This information may exist only in high-resolution images, where roads appear as long continuous features with uniform width; in low-resolution images, roads appear as long bright lines that may disappear, or remain only as very thin features, after the pre-processing step. Thus, the method of extraction developed here, and the proposed future work, may not work with information obtained from low-resolution images. Section 7.2 gives an overview of future work to be carried out to automate the initial step of identification and selection of road segments; the suggestion is based on a concept developed by Vosselman and de Knecht (1995) using the Kalman filter and profile matching approach.

7.2 Future Work

As part of future work in this area, the proposal is to automate the initial step of identification and selection of the start and end of road segments. To do this, the profile matching and Kalman filter approach (Vosselman and de Knecht, 1995) should be used to identify and select road segments. This process would use an initial model of the road segment, selected by the user from the input image, that is updated on a regular basis using the Kalman filter. Appendix B explains the identification and extraction of road segments using profile matching and Kalman filtering (Vosselman and de Knecht, 1995), preceded by a tutorial on the Kalman filter principle (Simon, 2001).

APPENDIX A
MATLAB CODE FOR ROAD FEATURE EXTRACTION

This appendix contains the code for road feature extraction implemented in our study. The code for feature extraction is divided into the following functions:

FunctionPickSegmentPoints: This is the main routine, used for selecting the start and end of the road segments to be extracted; it calls the rest of the functions.

FunctionSegmentInterpolation: This function (Seginterp) generates road seeds, or points, by interpolation, locating road points between the start and end of each road segment picked by FunctionPickSegmentPoints.

FunctionTransformation: This function (Segtrans) subsets the input image and transforms the road seeds generated by FunctionSegmentInterpolation to the coordinate system of the subset image.

FunctionAnisotropicDiffusion: This function performs blurring using the Perona-Malik algorithm, an anisotropic diffusion technique. The code used for anisotropic diffusion using the Perona-Malik algorithm is from "MATLAB Functions for Computer Vision and Image Analysis" (available at http://www.csse.uwa.edu.au/~pk/Research/MatlabFns last accessed: 7 August 2004).

FunctionDynamicSnakes: This function performs energy minimization of active contour models (Snakes) using dynamic programming. The code used for implementing active contour models using dynamic programming is from the course website "CS 7322: Computer Vision II Spring 1997" (available at http://www.cc.gatech.edu/classes/cs7322_97_spring/midterm/midterm.html last accessed: 7 August 2004).

The MATLAB code for each of the functions explained above, apart from the anisotropic diffusion and the snake implementation using dynamic programming, follows.

FunctionPickSegmentPoints

ImOrig = imread('c:\research\imgtst\Imin_bw.jpg'); %input image
ImFine = ImOrig(1300:1550, 1200:1450);
%ImFine = ImOrig(1:250, 500:750);
hold on
figure(1); colormap gray; imagesc(ImFine);
%msgbox('Start picking segments (start and end point)')
i = 1;
but = 1;
segloc = [];
while but ~= 3
    %when the left button is clicked, pick two values (two clicks giving the
    %start and end point of a segment); if the second button clicked is 2
    %(the middle button), control passes to case 2 on the next iteration
    switch but

        case 1
            [x y but] = ginput(2);
            but = but(2,1);
            segloc(i,:) = [x(1) y(1) x(2) y(2)];
            h = plot([segloc(i,1); segloc(i,3)], [segloc(i,2); segloc(i,4)], 'r');
            plot(segloc(i,1), segloc(i,2), 'c*');
            plot(segloc(i,3), segloc(i,4), 'c*');
            i = i + 1;
            if but == 3
                i = i - 1;
                segloc(i,:) = [];
            end
        %while the middle button is clicked, keep picking end points; the
        %start point of each new segment is taken to be the end point of the
        %previous segment
        case 2
            [x y but] = ginput(1);
            but = but(1,1);
            segloc(i,:) = [segloc(i-1,3) segloc(i-1,4) x y];
            h = plot([segloc(i,1); segloc(i,3)], [segloc(i,2); segloc(i,4)], 'r');
            plot(segloc(i,1), segloc(i,2), 'c*');
            plot(segloc(i,3), segloc(i,4), 'c*');
            i = i + 1;
            if but == 1
                i = i - 1;
                segloc(i,:) = [];
            end
            if but == 3
                i = i - 1;
                segloc(i,:) = [];
            end
    end %switch
end
%uiwait(msgbox('Need to Pick More Segments','Segment Selection','Modal'))
segrec = [];
ImOrig = imread('c:\research\imgtst\Imin_bw.jpg');
m = max(size(segloc(:,1)));
segrec = segloc;
ImFine = ImOrig(1300:1550, 1200:1450);
%ImFine = ImOrig(1:250, 500:750);
ImFineA1 = anisodiff(ImFine, 4, 80, 0.25, 1);
hold on; colormap gray; imagesc(ImFine);
for i = 1:m

    %h = plot([segrec(i,1); segrec(i,3)], [segrec(i,2); segrec(i,4)], 'r');
    plot(segrec(i,1), segrec(i,2), 'r*');
    plot(segrec(i,3), segrec(i,4), 'r*');
    %use this to use a Gaussian input for road extraction
    %[xint yint] = intpoint(i);
    pause(1);
end
%ImGauss = gaussianblur(ImFine, 1);
%ImFineA1 = gradient(ImGauss);
j = [];
for j = 1:m
    %perform segment interpolation for each segment
    ssegx = segrec(j,1); ssegy = segrec(j,2);
    esegx = segrec(j,3); esegy = segrec(j,4);
    intpts = Seginterp(ssegx, ssegy, esegx, esegy);
    origcx = Segtrans(intpts, ssegx, ssegy, esegx, esegy, ImFineA1);
    %compute the dynamic snake for each segment, return the new location of
    %the road segment, and plot it
    plot(origcx(:,1), origcx(:,2), 'b-');
    %plot(origcx(:,1), origcx(:,2), 'r*');
end
hold off;

FunctionSegmentInterpolation

function intpts = Seginterp(ssegx, ssegy, esegx, esegy)
%calculate the interpolated points
ssx = ssegx; ssy = ssegy;
esx = esegx; esy = esegy;
segslope = (esy - ssy)/(esx - ssx);
%the intercept is the value of y at x = 0
ces = esy - segslope*esx;
css = ssy - segslope*ssx;
segvecx = [ssx esx];
segvecy = [ssy esy];
minsegvecx = min(segvecx); maxsegvecx = max(segvecx);
minsegvecy = min(segvecy); maxsegvecy = max(segvecy);
diffminmaxx = abs(maxsegvecx - minsegvecx);
diffminmaxy = abs(maxsegvecy - minsegvecy);
%interpolate on the basis of x if the x extent is greater than the y extent
if diffminmaxx > diffminmaxy
    res = 5;
    cip = 1;

    minx = minsegvecx;
    maxx = maxsegvecx;
    xi = []; yi = [];
    xi(cip) = minx;
    yi(cip) = segslope*xi(cip) + ces;
    while xi < maxx
        cip = cip + 1;
        xi(cip) = xi(cip-1) + res;
        yi(cip) = segslope*xi(cip) + ces;
    end
    intpts = [xi' yi'];
end
%interpolate with respect to y if the y extent is greater
if diffminmaxx < diffminmaxy
    res = 5;
    cip = 1;
    miny = minsegvecy;
    maxy = maxsegvecy;
    xi = []; yi = [];
    yi(cip) = miny;
    xi(cip) = (yi(cip) - ces)/segslope;
    while yi < maxy
        cip = cip + 1;
        yi(cip) = yi(cip-1) + res;
        xi(cip) = (yi(cip) - ces)/segslope;
    end
    intpts = [xi' yi'];
end
%the interpolated x and y values for each segment are passed to the snake;
%they could also be passed to a Hough transform to calculate the angle and
%width of the road at each point, as a constraint to be used along with

FunctionTransformation

%original image position of the initial interpolated points for a road segment
function origcx = Segtrans(intpts, ssegx, ssegy, esegx, esegy, ImFineA1)
x1 = round(intpts(:,1));
y1 = round(intpts(:,2));
svx = [ssegx; ssegy];
svy = [esegx; esegy];
D = sqrt(sum((svx - svy).^2));

minx = round(min([ssegx, esegx])) - 5;
maxx = round(max([ssegx, esegx])) + 5;
miny = round(min([ssegy, esegy])) - 5;
maxy = round(max([ssegy, esegy])) + 5;
cwwidth = round(maxx - minx);
cwheight = round(maxy - miny);
%crop the anisotropically diffused (or Gaussian blurred) image
SegIm = imcrop(ImFineA1, [minx, miny, cwwidth, cwheight]);
%interpolated set of snake points, translated to subset coordinates
SegImOrig = round(intpts);
tx = minx .* ones(size(SegImOrig(:,1)));
ty = miny .* ones(size(SegImOrig(:,2)));
SegImP = [SegImOrig(:,1) - tx, SegImOrig(:,2) - ty];
SegImPx = SegImP(:,1);
SegImPy = SegImP(:,2);
%plot(SegImP(:,1), SegImP(:,2), 'r-');
[snake_pnts e] = roadsnake(SegImP, SegIm);
origcx = [snake_pnts(:,1) + tx, snake_pnts(:,2) + ty]; %snake back in image coordinates

APPENDIX B
PROFILE MATCHING AND KALMAN FILTER FOR ROAD EXTRACTION

The profile matching and Kalman filter approach, developed by Vosselman and de Knecht, identifies and extracts roads from an image based on their geometric characteristics (the width of the road) and radiometric characteristics (the intensity variation across the road segment); for more detail, see Section 2.1. This approach is proposed as part of the future work needed to automate the initial step of identification and selection of road segments in the method of extraction (Section 5.2), as implemented in our study using the Perona-Malik algorithm and Dynamic Snakes. In the approach developed by Vosselman and de Knecht, the profile of a road segment is built using the geometric and radiometric characteristics of a road segment selected by the user to initiate the process. The approach then identifies and selects other road segments in the image based on the initial profile. The profile is updated on a regular basis using the Kalman filter, which updates the geometric and radiometric characteristics of the road using the last identified road segment, until all the road segments in the image are identified.

Prior to explaining the implementation of road feature selection and identification using the Kalman filter and profile matching (Vosselman and de Knecht), the principles of the Kalman filter are explained with an illustration of the movement of a vehicle; this illustration is derived from a tutorial on Kalman filtering (Simon, 2001).

Kalman Filter Principle

The Kalman filter is a recursive procedure used to estimate the parameters of a dynamic system. It is a tool that can estimate the variables of a wide range of processes, as long as they are described by a linear system; examples include the trajectory of a rocket, or of a moving vehicle. The concept of the Kalman filter is explained in detail later in this section, with the example of a moving vehicle (Simon, 2001). A linear system is described by two equations: a state equation (B-1) and an output equation (B-2):

$$x_{k+1} = A x_k + B u_k + w_k \quad \text{(B-1)}$$

$$y_k = C x_k + z_k \quad \text{(B-2)}$$

Here, each of these quantities is a vector containing more than one element: A, B, and C are matrices; k is the time index; x is the state of the system; u is the input to the system; y is the measured output; and w and z are the noise terms.

The vector x contains information about the current state of the system. This information cannot be measured directly. Hence, y is measured, which is a function of x corrupted by the noise z (Equation B-2); by using the value of y, an estimate of x can be obtained. The measured quantity y cannot, however, be accepted at face value, as it is corrupted by the noise z (Simon, 2001). Thus, a Kalman filter is used to minimize this measurement error, or variance, and give the best possible estimate of the current state.

The workings of the Kalman filter can be explained with the example of a moving vehicle. In the case of a model representing a vehicle moving along a straight line, the state consists of the vehicle position p and velocity V. The commanded acceleration u is given as an input to the system, and the output position y can be measured; this is the case if the user has the ability to change the acceleration of the vehicle. Measuring the position every T seconds, and using the laws of physics, the velocity one time step ahead equals the present velocity plus the product of the time step and the acceleration, as shown in Equation B-3:

$$V_{k+1} = V_k + T u_k \quad \text{(B-3)}$$

Equation B-3 does not give a precise value of $V_{k+1}$, the next velocity of the moving vehicle, as it does not include the variance, or errors, introduced by wind gusts, potholes, and accidents. A more precise value is obtained using Equation B-4, developed along the lines of Equation B-1, which includes the noise; here $\tilde{V}_k$ is the velocity noise:

$$V_{k+1} = V_k + T u_k + \tilde{V}_k \quad \text{(B-4)}$$

Similarly, the measured position of the vehicle is determined using Equation B-5, where $\tilde{p}_k$ is the position noise, T is the time step in seconds, and $u_k$ is the acceleration:

$$p_{k+1} = p_k + T V_k + \frac{T^{2}}{2} u_k + \tilde{p}_k \quad \text{(B-5)}$$

The state vector x then contains the current position $p_k$ and velocity $V_k$, as in Equation B-6:

$$x_k = \begin{bmatrix} p_k \\ V_k \end{bmatrix} \quad \text{(B-6)}$$

Knowing that the measured output is equal to the position, the linear equations representing the current state and the measured position are Equations B-7 and B-8, in which $z_k$ is the measurement noise due to instrumentation errors:

$$x_{k+1} = \begin{bmatrix} 1 & T \\ 0 & 1 \end{bmatrix} x_k + \begin{bmatrix} T^{2}/2 \\ T \end{bmatrix} u_k + w_k \quad \text{(B-7)}$$

$$y_k = \begin{bmatrix} 1 & 0 \end{bmatrix} x_k + z_k \quad \text{(B-8)}$$

To get an accurate estimate of the position p and the velocity V of Equation B-6, so as to control the movement of the vehicle with a feedback system, a way to estimate the state x of Equation B-7 is needed. Thus, we need the best estimate of the state of the system, x, in Equation B-6, given the measured position y of Equation B-8, for the moving vehicle.

The criteria for an estimator that gives an estimate of the state of the system (x) are as follows:

The average value of the state estimate must be equal to the true value; it should not be biased. Mathematically, the expected value of the estimate should be equal to the expected value of the state (Simon, 2001).

The estimator should also give an estimate of the state with the smallest possible variance. Mathematically, the estimator should have the smallest possible error variance.

The Kalman filter satisfies both of the above requirements, under certain assumptions about the sources of noise ($w_k$, the process noise, and $z_k$, the measurement noise) that affect the system. It is assumed that the average value of the process noise $w_k$ and of the measurement noise $z_k$ is zero, and that there is no correlation between the process noise and the measurement noise. The covariances of the process noise and the measurement noise are given by Equations B-8 and B-9, where $w_k^{T}$ and $z_k^{T}$ are the transposes of the noise vectors $w_k$ and $z_k$, and $E(\cdot)$ denotes the mean value:

$$S_w = E(w_k w_k^{T}) \quad \text{(B-8)}$$

$$S_z = E(z_k z_k^{T}) \quad \text{(B-9)}$$

Using variants of the equations derived above, the Kalman filter is defined by the three Equations B-10, B-11, and B-12, in which the superscript $-1$ indicates a matrix inversion and the superscript $T$ indicates a matrix transposition; the matrix $K$ of Equation B-10 is the Kalman gain, and the matrix $P$ of Equation B-12 is the estimated error covariance:

$$K_k = A P_k C^{T} \big(C P_k C^{T} + S_z\big)^{-1} \quad \text{(B-10)}$$

$$\hat{x}_{k+1} = \big(A \hat{x}_k + B u_k\big) + K_k \big(y_{k+1} - C \hat{x}_k\big) \quad \text{(B-11)}$$

$$P_{k+1} = A P_k A^{T} + S_w - A P_k C^{T} S_z^{-1} C P_k A^{T} \quad \text{(B-12)}$$

The first term in Equation B-11 is the state estimate at time $k+1$ as it would be with no measurement: the product of $A$ and the state estimate at time $k$, plus $B$ times the input at time $k$; with no measurements, the state estimate would propagate in time just as the state vector in the system model does. The second term in Equation B-11 is the correction term, and represents the amount by which to correct the propagated state estimate due to the measurement. Inspection of $K$ shows that if the measurement noise is large, then $S_z$ will be large; therefore, not much credibility is assigned to the measurement $y$ in computing the next state estimate $\hat{x}$.

On the other hand, if the measurement noise is small, $S_z$ will be small, so $K$ will be large, thereby giving a lot of credibility to the measurement $y$ when it is used in computing the next state estimate $\hat{x}$. Thus, using this set of Kalman filter equations, the next state estimates can be found for a moving vehicle. The next section discusses in detail the process of road tracing, using profile matching and Kalman filtering, as developed by Vosselman and de Knecht.
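The vehicle example lends itself to a compact MATLAB sketch, given below, which implements Equations B-10 through B-12 literally; the time step, noise levels, and commanded acceleration are illustrative assumptions, not values from the tutorial.

T = 0.1;                                     % time step in seconds (illustrative)
A = [1 T; 0 1]; B = [T^2/2; T]; C = [1 0];   % the model of Equations B-7 and B-8
Sz = 10^2;                                   % measurement noise variance (illustrative)
Sw = 0.2^2*(B*B');                           % process noise from acceleration disturbances
x = [0; 0]; xhat = [0; 0]; P = Sw;           % true state, state estimate, error covariance
for k = 1:300
    u = 1;                                   % commanded acceleration
    x = A*x + B*u + B*0.2*randn;             % simulate the true system with process noise
    y = C*x + sqrt(Sz)*randn;                % noisy position measurement
    K = A*P*C' / (C*P*C' + Sz);              % Kalman gain      (Equation B-10)
    xhat = A*xhat + B*u + K*(y - C*xhat);    % state estimate   (Equation B-11)
    P = A*P*A' + Sw - A*P*C'*(C*P*A')/Sz;    % covariance       (Equation B-12)
end
% xhat(1) tracks the vehicle position and xhat(2) its velocity despite the noisy sensor.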

Road Tracing Using the Kalman Filter and Profile Matching

Vosselman and de Knecht (1995) developed a road tracing technique using profile matching and the Kalman filter. In this method, an operator initializes the road tracer by placing two points that indicate a short road segment. Between these two points, grey value cross sections are computed at intervals of one pixel. The model profile built in this way is then used as a template in profile matching. Using the profiled road segment, an initial estimate is made of the parameters that describe the road's position and shape. This estimate is used to predict the position of adjacent road profiles. The profile at the predicted position is matched with the model profile; a successful match yields the shift between the two profiles. The Kalman filter uses this shift to update the parameters that describe the road's position and shape in the model profile. Thus, by an iterative process, the next position of a road segment is predicted, the road profile there is matched to the model profile, and the parameters are updated. The road tracer continues until a break-off criterion is reached.

As was discussed earlier in this section, the matching between profiles (i.e., the predicted road segment profile and the model profile) is carried out based on the following three checks, which also determine whether the equation parameters are updated using the Kalman filter:

The cross correlation between the grey values of the predicted road profile and the model road profile must be greater than 0.8.

The estimated values of the geometric and radiometric parameters should be reasonable; e.g., if the estimated contrast parameter has a high value, say 10, then the match cannot be accepted. A contrast value of 10 indicates that the contrast in the model profile is 10 times the contrast in the profile at the predicted position, and such a high value suggests that there is hardly any match between the profile at the predicted road position and the model (Vosselman and de Knecht, 1995).

A match is only accepted if the estimated standard deviation of the estimated shift parameter is less than 1 pixel. Here the shift is the estimated offset between the grey value profile of the model and the profile at the predicted position.

If any of the above conditions is not satisfied, the Kalman filter does not perform the measurement update, and instead continues with another time update. If these conditions fail on a number of consecutive iterations, the road is assumed to end and the road tracer terminates.

Below is a mathematical representation of how the Kalman filter updates the parameters of the road tracer. For the parameterization of the road, it was assumed that roads exist with constant curvature. For each position of the road, the following parameters were estimated, with position being analogous to time t: the row and column coordinates of the road, $r(t)$ and $c(t)$; the road direction $\phi(t)$; and the change in road direction $\dot{\phi}(t)$. These four parameters (i.e., row coordinate, column coordinate, road direction, and change in direction) constitute the state vector $x(t)$ of the Kalman filter.

In time $t+dt$, the state vector is predicted from the state vector at time $t$ using Equation B-13. Both state vectors are based on all the observations made up to time $t$, which is expressed in Equation B-13 by the indices $t+dt\,|\,t$ and $t\,|\,t$:

$$\hat{x}(t+dt\,|\,t) = \begin{bmatrix} \hat{r}(t+dt\,|\,t) \\ \hat{c}(t+dt\,|\,t) \\ \hat{\phi}(t+dt\,|\,t) \\ \hat{\dot{\phi}}(t+dt\,|\,t) \end{bmatrix} = \begin{bmatrix} \hat{r}(t\,|\,t) + dt\,\cos\!\big(\hat{\phi}(t\,|\,t) + \hat{\dot{\phi}}(t\,|\,t)\,dt/2\big) \\ \hat{c}(t\,|\,t) + dt\,\sin\!\big(\hat{\phi}(t\,|\,t) + \hat{\dot{\phi}}(t\,|\,t)\,dt/2\big) \\ \hat{\phi}(t\,|\,t) + \hat{\dot{\phi}}(t\,|\,t)\,dt \\ \hat{\dot{\phi}}(t\,|\,t) \end{bmatrix} \quad \text{(B-13)}$$

$$P(t+dt\,|\,t) = \Phi(t+dt\,|\,t)\,P(t\,|\,t)\,\Phi^{T}(t+dt\,|\,t) + Q(t+dt\,|\,t) \quad \text{(B-14)}$$

Equation B-14 is the covariance matrix for the predicted state vector of Equation B-13. Here, $\Phi(t+dt\,|\,t)$ contains the coefficients of the linearized time update equation. Thus, if the assumption that the road has constant curvature is not correct, the deviation of the true road shape from the shape modeled in the time update equation, Equation B-13, is considered noise in the system, and the resulting uncertainty is accounted for in the prediction by the covariance matrix $Q(t+dt\,|\,t)$.

After calculation of the time update equation, a profile is extracted from the image at the predicted position of the time step $t+dt\,|\,t$, perpendicular to the direction of the road.

Comparing the profile extracted at the predicted position with the model profile gives the estimated shift $s$ and its variance $\sigma_s^{2}$. This shift is observed indirectly in the Kalman filter through the two-dimensional observation vector $y(t+dt)$ and the covariance matrix $R(t+dt)$, as in Equations B-15 and B-16:

$$y(t+dt) = \begin{bmatrix} r(t+dt) \\ c(t+dt) \end{bmatrix} = \begin{bmatrix} \hat{r}(t+dt\,|\,t) - s\,\sin\hat{\phi}(t+dt\,|\,t) \\ \hat{c}(t+dt\,|\,t) + s\,\cos\hat{\phi}(t+dt\,|\,t) \end{bmatrix} \quad \text{(B-15)}$$

$$R(t+dt) = \sigma_s^{2} \begin{bmatrix} \sin^{2}\hat{\phi}(t+dt\,|\,t) & -\sin\hat{\phi}(t+dt\,|\,t)\cos\hat{\phi}(t+dt\,|\,t) \\ -\sin\hat{\phi}(t+dt\,|\,t)\cos\hat{\phi}(t+dt\,|\,t) & \cos^{2}\hat{\phi}(t+dt\,|\,t) \end{bmatrix} \quad \text{(B-16)}$$

$R(t+dt)$ in Equation B-16 is a singular matrix, and can only be used to adjust the observed road position perpendicular to the road direction. Thus, the observation model reduces to:

$$E\{y(t+dt)\} = A\,x(t+dt), \qquad A = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{bmatrix} \quad \text{(B-17)}$$

This model is then used in the measurement update, to estimate the state vector at time $t+dt$ based on the prediction and the observations at time $t+dt$:

$$\hat{x}(t+dt\,|\,t+dt) = \hat{x}(t+dt\,|\,t) + K(t+dt)\,\big(y(t+dt) - A\,\hat{x}(t+dt\,|\,t)\big) \quad \text{(B-18)}$$

$$K(t+dt) = P(t+dt\,|\,t)\,A^{T}\big(A\,P(t+dt\,|\,t)\,A^{T} + R(t+dt)\big)^{-1}$$

The road traced with this model was defined to have an average width of 10 pixels, at a ground resolution of 1.6 m/pixel; the model profile had a width of 16 pixels, with a step size $dt$ of one pixel. It is proposed that this model could be used to perform the initial steps of identification and selection of road segment points (Section 5.1) in the method of extraction developed in our study, as part of future work to automate the initial step of selection of road points.
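To make the update cycle concrete, the following MATLAB fragment sketches one predict-match-update step of Equations B-13 through B-18. The state, covariances, and matched shift are illustrative numbers only; in an actual tracer, s and sigma_s would come from matching the extracted profile against the model profile, and the final covariance update shown is the standard Kalman form rather than an equation from the text.

dt = 1;                                 % step size of one pixel
x  = [100; 200; pi/6; 0.001];           % state: row, column, direction, curvature rate
P  = diag([1 1 0.01 1e-4]);             % current state covariance (illustrative)
Q  = 1e-4*eye(4);                       % time update noise (illustrative)

phiMid = x(3) + x(4)*dt/2;              % mean direction over the step
xPred = [x(1) + dt*cos(phiMid);         % predicted row          (Equation B-13)
         x(2) + dt*sin(phiMid);         % predicted column
         x(3) + x(4)*dt;                % predicted direction
         x(4)];                         % curvature rate carried forward
Phi = [1 0 -dt*sin(phiMid) -dt^2/2*sin(phiMid);  % linearized time update
       0 1  dt*cos(phiMid)  dt^2/2*cos(phiMid);
       0 0  1               dt;
       0 0  0               1];
PPred = Phi*P*Phi' + Q;                 % predicted covariance   (Equation B-14)

s = 0.4; sigma_s = 0.5;                 % shift and its std from profile matching
phiP = xPred(3);
y = [xPred(1) - s*sin(phiP);            % observed road position (Equation B-15)
     xPred(2) + s*cos(phiP)];
R = sigma_s^2*[ sin(phiP)^2,          -sin(phiP)*cos(phiP);
               -sin(phiP)*cos(phiP),   cos(phiP)^2];        %    (Equation B-16)
A = [1 0 0 0; 0 1 0 0];                 % observation model      (Equation B-17)
K = PPred*A' / (A*PPred*A' + R);        % Kalman gain
x = xPred + K*(y - A*xPred);            % measurement update     (Equation B-18)
P = (eye(4) - K*A)*PPred;               % standard covariance update (assumption)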

LIST OF REFERENCES

Amini, A.A., T.E. Weymouth, and R.C. Jain, 1990. Using dynamic programming for solving variational problems in vision, IEEE Transactions on Pattern Analysis and Machine Intelligence, 12(9): 855-867.

Ballard, D.H., and C.M. Brown, 1982. Computer Vision (P. Rose, editor), Prentice-Hall, Inc., Englewood Cliffs, N.J., pp. 1-12.

Baumgartner, A., W. Eckstein, C. Heipke, S. Hinz, H. Mayer, B. Radig, C. Steger, and C. Weidemann, 1999. T-REX: TUM research on road extraction (C. Heipke and H. Mayer, editors), Festschrift für Prof. Dr.-Ing. Heinrich Ebner zum 60. Geburtstag, Lehrstuhl für Photogrammetrie und Fernerkundung, Technische Universität München, pp. 43-64.

Bebis, G., 2004. Edge contour representation, URL: http://www.cs.unr.edu/~bebis/CS791E/Notes/EdgeContourRepresentation.pdf, University of Nevada, Reno, Nevada (last date accessed: 7 August 2004).

Gruen, A., and H. Li, 1994. Semi-automatic road extraction by dynamic programming, International Archives of Photogrammetry and Remote Sensing, Vol. 30, Part 3/1, pp. 324-332.

Ivins, J., and J. Porrill, 1993. Everything you wanted to know about Snakes, AIVRU Technical Memo #86, Artificial Intelligence Vision Research Unit, University of Sheffield, England S10 2TP.

Kass, M., A. Witkin, and D. Terzopoulos, 1988. Snakes: active contour models, International Journal of Computer Vision, 1(4): 321-331.

Malik, J., and P. Perona, 1990. Scale-space and edge detection using anisotropic diffusion, IEEE Transactions on Pattern Analysis and Machine Intelligence, 12(7): 629-639.

Mayer, H., I. Laptev, and A. Baumgartner, 1998. Multi-scale and Snakes for automatic road extraction, Fifth European Conference on Computer Vision, 2-6 June 1998, Freiburg im Breisgau, Germany.

McKeown, D.M. Jr., 1996. Top ten lessons learned in automated cartography, CMU-CS-TR, School of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213-3890.

McKeown, D.M. Jr., and J.L. Denlinger, 1988. Cooperative methods for road tracking in aerial imagery, Defense Advanced Research Projects Agency, pp. 327-341.

Rianto, Y., S. Kondo, and T. Kim, 2000. Detection of roads from satellite images using optimal search, International Journal of Pattern Recognition and Artificial Intelligence, 14(8): 1009-1023.

Shukla, V., R. Chandrakanth, and R. Ramachandran, 2002. Semi-automatic road extraction algorithm for high resolution aerial images using path following approach, The III Indian Conference on Computer Vision, Graphics and Image Processing, 16-18 December 2002, Ahmedabad, India.

Simon, D., 2001. Kalman filtering, Embedded Systems Programming, 14(1): 72-79.

Trick, M.A., 1997. A Tutorial on Dynamic Programming, URL: http://mat.gsia.cmu.edu/classes/dynamic/dynamic.html, Carnegie Mellon University, Pittsburgh, PA (last date accessed: 9 August 2004).

Turton, I., 1997. Application of Pattern Recognition to Concept Discovery in Geography, M.S. Thesis, The University of Leeds, School of Geography, Leeds, United Kingdom.

Vosselman, G., and J. de Knecht, 1995. Road tracing by profile matching and Kalman filtering, Automatic Extraction of Man-Made Objects from Aerial and Space Images, Birkhäuser Verlag, Basel, Switzerland, pp. 265-274.

Weeratunga, S.K., and C. Kamath, 2002. PDE-based non-linear diffusion techniques for denoising scientific and industrial images: an empirical study, Image Processing: Algorithms and Systems Conference, SPIE Electronic Imaging Symposium, 20-25 January, San Jose, California.

Weickert, J., 1999. Non-linear diffusion filtering, Handbook of Computer Vision and Applications (B. Jähne, H. Haussecker, and P. Geissler, editors), Academic Press, San Diego, C.A., pp. 423-450.

Wolf, P.R., and B.A. Dewitt, 2000. Elements of Photogrammetry with Applications in GIS, McGraw-Hill, New York, 608 pp.

BIOGRAPHICAL SKETCH

Vijayaraghavan Sivaraman was born in Mumbai (Bombay), India, on May 2, 1979. He obtained his bachelor's degree in civil engineering from the Regional Engineering College, Warangal, India, in May 2000. He then pursued a Master of Science degree in civil and coastal engineering at the University of Florida, Gainesville. He loves watching movies, is enthusiastic about cricket, and is an ardent follower of the Indian cricket team.


Full Text












RURAL ROAD FEATURE EXTRACTION FROM AERIAL IMAGES USING
ANISOTROPIC DIFFUSION AND DYNAMIC SNAKES














By

VIJAYARAGHAVAN SIVARAMAN


A THESIS PRESENTED TO THE GRADUATE SCHOOL
OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT
OF THE REQUIREMENTS FOR THE DEGREE OF
MASTER OF SCIENCE

UNIVERSITY OF FLORIDA


2004

































Copyright 2004

By

Vijayaraghavan Sivaraman















ACKNOWLEDGMENTS

I sincerely thank Dr. Bon A. Dewitt for his continuous support and encouragement

throughout the course of this research. He provided much needed technical help and

constructive criticism by taking time out of his busy schedule. Dr Michael C. Nechyba

for getting me started with the right background to do research in the field of image

processing. I would like to thank Dr Grenville Barnes and Dr Dave Gibson for their

patience and support, and their invaluable contribution in supervising my research.

Finally, I wish to express love and respect for my parents, family and friends. They are

always with me.















TABLE OF CONTENTS

ACKNOWLEDGMENTS

LIST OF TABLES

LIST OF FIGURES

ABSTRACT

CHAPTER

1 INTRODUCTION

    1.1 Road-Feature Extraction Objectives and Constraints
    1.2 Feature Extraction from a Geomatics Perspective

2 BACKGROUND

    2.1 Road Characteristics
        2.1.1 Geometric
        2.1.2 Radiometric
        2.1.3 Topologic
        2.1.4 Functional
        2.1.5 Contextual
    2.2 Image-Processing Techniques
        2.2.1 Low-Level Processing
        2.2.2 Medium-Level Processing
        2.2.3 High-Level Processing
    2.3 Approaches to Road Feature Extraction
        2.3.1 Road Extraction Algorithm Using a Path-Following Approach
        2.3.2 Multi-Scale and Snakes Road-Feature Extraction
            2.3.2.1 Module I
            2.3.2.2 Module II
            2.3.2.3 Module III

3 ANISOTROPIC DIFFUSION AND THE PERONA-MALIK ALGORITHM

    3.1 Principles of Isotropic and Anisotropic Diffusion
    3.2 Perona-Malik Algorithm for Road Extraction
        3.2.1 Intra-Region Blurring
        3.2.2 Local Edge Enhancement
    3.3 Anisotropic Diffusion Implementation

4 SNAKES: THEORY AND IMPLEMENTATION

    4.1 Theory
        4.1.1 Internal Energy
        4.1.2 External Energy
        4.1.3 Image (Potential) Energy
            4.1.3.1 Image functional (Eline)
            4.1.3.2 Edge functional (Eedge)
            4.1.3.3 Term functional (Eterm)
        4.2.1 Dynamic Programming for Snake Energy Minimization
        4.2.2 Dynamic Programming
        4.2.3 Dynamic Snake Implementation

5 METHOD OF EXTRACTION

    5.1 Technique Selection
    5.2 Extraction Method
        5.2.1 Selection of Road Segments
        5.2.2 Image Diffusion
        5.2.3 Interpolation of Road Segments
        5.2.4 Diffused Road Segment Subset and Road Point Transformation
        5.2.5 Snake Implementation and Transformation of Extracted Road
    5.3 Evaluation Method
        5.3.1 Goodness of Fit
        5.3.2 F-Test

6 RESULT AND ANALYSIS

    6.1 Results
    6.2 Analysis of Result on Test Images

7 CONCLUSION AND FUTURE WORK

    7.1 Conclusion
    7.2 Future Work

APPENDIX

A MATLAB CODE FOR ROAD FEATURE EXTRACTION

B PROFILE MATCHING AND KALMAN FILTER FOR ROAD EXTRACTION

LIST OF REFERENCES

BIOGRAPHICAL SKETCH
















LIST OF TABLES

2-1 Image pixel subset
2-2 Convolution kernel
2-3 Methods of extraction
2-4 Module of extraction
4-1 Proposals
4-2 Stage 1 computation
4-3 Proposal revenue combination
4-4 Stage 2 computation
5-1 Stages of development
6-1 Summary of evaluation for extracted road features
















LIST OF FIGURES

2-1 Road characteristics
2-2 Gaussian kernel
2-3 Edge detection
2-4 Sobel edge detector
2-5 Hough transform
2-6 Path-following approach
2-7 Road seed selection
2-8 Width estimation
2-9 Cost estimation
2-10 Road traversal at intersection
2-11 Global road-feature extraction
2-12 Salient road
2-13 Nonsalient road
2-14 Salient road-feature extraction
2-15 Nonsalient road-feature extraction
2-16 Road linking
2-17 Network completion hypothesis
2-18 Segment insertion
2-19 Extracted road segments
3-1 Anisotropic diffusion using Perona-Malik algorithm
3-2 Isotropic diffusion using Gaussian
3-3 Nonlinear curve
3-4 Square lattice example
4-1 Snaxel and snakes
4-2 Scale space representation of snake
4-3 Internal energy effect
4-4 Spring force representation
4-5 Dynamic snake movement
5-1 Input image for Hough transform
5-2 Extracted road using Hough transform
5-3 Input image for gradient snake extraction
5-4 Road extracted using gradient snakes
5-5 Road extracted using Gaussian and dynamic snakes
5-6 Perona-Malik algorithm and dynamic snakes
5-7 Process of road-feature extraction
5-8 Selection of road segment
5-9 Perona-Malik algorithm vs. Gaussian
5-10 Interpolated road points
5-11 Road segment subset and its transformed road point
5-12 Extracted road using Perona-Malik and dynamic snake algorithm
5-13 Desired and extracted road edges
6-1 Road extracted using Gaussian and Perona-Malik with dynamic snakes
6-2 Road extracted on test images















Abstract of Thesis Presented to the Graduate School
of the University of Florida in Partial Fulfillment of the
Requirements for the Degree of Master of Science

RURAL ROAD FEATURE EXTRACTION FROM AERIAL IMAGES USING
ANISOTROPIC DIFFUSION AND DYNAMIC SNAKES

By

Vijayaraghavan Sivaraman

December 2004

Chair: Bon A. Dewitt
Major Department: Civil and Coastal Engineering

The advent of information technology has led to the implementation of various engineering applications. The Geographic Information System (GIS) is one such application, used on a large scale in the field of civil engineering. A GIS is used in the tracking and maintenance of roads: graphical representations of roads, together with their attribute information, are stored in a GIS to track and maintain them. The graphical representation of road features is obtained through a process of digitization. Research groups over the past couple of decades have been working toward developing methods of extraction that automate this digitization process. Our study reviewed methods of extraction developed by various research groups, and further developed a method of extraction using a combination of image-processing techniques, evolved over four stages, to extract road features from a rural image. In general, a method of extraction is composed of three steps: pre-processing, edge detection, and feature extraction.

The method of extraction developed in Stage 1 was implemented using a Gaussian, the Sobel edge detector, and the Hough transform. Results obtained using this method were not as desired, because roads were extracted as straight lines while they exist as curvilinear features. Hence, this method was modified in Stage 2 by implementing snakes, using the gradient-descent algorithm. This method yielded better results than Stage 1 by extracting curved as well as straight roads. The resultant extracted road had a jagged appearance, due to the snake's movement to the steepest gradient within the image. This problem was overcome by using dynamic programming in Stage 3 to restrict the movement of the snake to its neighborhood. Results thus obtained in Stage 3 were smooth and continuous. However, these results deviated from the desired road edges at locations with noise. The problem was due to the implementation of Gaussian blurring at the pre-processing stage, because of its isotropic nature. It was overcome in Stage 4 by implementing the Perona-Malik algorithm, an anisotropic diffusion technique, instead of Gaussian blurring.

Results obtained in Stage 4 were better than those of Stage 3 at locations with noise, and overall, Stage 4 performed better than Stage 3 on visual inspection. To support this conclusion, results from Stage 3 and Stage 4 were evaluated over a set of 10 rural road-segment images, based on their goodness of fit and a hypothesis test implemented using the F-test. On both measures, roads extracted in Stage 4 were better than those extracted in Stage 3.














CHAPTER 1
INTRODUCTION

Road networks are essential modes of transportation, and provide a backbone for

human civilization. Hence, it is vital to maintain and restore roads to keep our

transportation network connected. To do this, we must track their existence in both the

temporal and spatial domains. The configuration of a road network depends on human needs: a road may be constructed or abandoned depending on the needs of the neighboring community that the road serves. Spatial representation of roads (along with their

attributes or aspatial information) is managed well in a Geographic Information System

(GIS). A GIS is a graphical representation of geographic features, with attribute

information related or linked to these features. A GIS is used as an analysis and

management tool, allowing the detection of changes over time and space. Spatial

representation of geographic features, such as linear structures (e.g., roads) and point

features (e.g., power poles or manholes) in a GIS is usually maintained in a vector

format, as opposed to a raster. Digitization of desired features in a raster image, leads to

their vector representation. Digitization can be either a manual or an automated process.

However, manual digitization of features is a time-consuming and labor-intensive process.

1.1 Road-Feature Extraction Objectives and Constraints

Ongoing research has led to a gamut of methods that automate the digitization

process. Digitization methods are either automatic or semi-automatic in nature. In the

literature, an automatic method implies a fully automatic process. Theoretically, a fully

automatic approach requires no human intervention, but this is not practical. Our study









considered a method automatic if no human intervention was needed for road feature

extraction at the initial or processing stage. In a semi-automatic method human

intervention is required at the initial stage and at times during the processing stage. In

both methods, human intervention is needed at the post-processing stage. Post-processing

intervention is essential in both methods, to extract undetected but desired features from

the raster image, and to fix incorrectly extracted features. An automatic method scores

over a semi-automatic method due to its ability to automate the operations of the

initiation and processing stages. Road feature extraction from a raster image is a non-

trivial and image-specific process; hence, it is difficult to have one general method to

extract roads from any given raster image.

According to McKeown (1996), roads extracted from one raster image need not be

extracted in the same way from another raster image, as there can be a drastic change in

the value of important parameters based on nature's state, instrument variation, and

photographic orientation. The existence of other features, both cultural (e.g., buildings)

and natural (e.g., trees) and their shadows can occlude road features, thus complicating

the extraction process. This ancillary information provides a context for many of the

approaches developed (Section 2.3.2). Thus, it is necessary to evaluate the extent of

inclusion of other information needed to identify a road. Some extraction cases need

minimal ancillary information; and some need a great deal. These limitations point to a

need to develop a method to evaluate multiple criteria in detecting and extracting roads

from images.

Our study extracted roads solely based on the road characteristics stored in an

implicit manner in a raster image. The parameters used for extraction are the road's shape (geometric









property) and gray-level intensity (radiometric property). These purely image-based

characteristics are affected by external sources as discussed earlier. No contextual

information was used. The method works solely on image characteristics. The method is

semi-automatic, with manual selection of the start and end of road segments in the input

image. Future work is needed to automate the initiation process, that is, the road selection process, using the Kalman filter and profile matching (Appendix B).

1.2 Feature Extraction from a Geomatics Perspective

Feature extraction spans many applications, ranging from the field of medical

imaging to transportation and beyond. In Geomatics and Civil Engineering, the need for

feature extraction is project-oriented. For example, extracting features from an aerial

image is dependent on project needs; the goal may vary from detecting canopies of trees

to detecting manholes. The ability to classify and differentiate the desired features in an

aerial image is a critical step toward automating the extraction process. Difficulties faced

in the implementation of extraction methods are due to the complexity of the varied

information stored in an aerial image. A good extraction technique must be capable of

accurately determining the locations of necessary features in the image. Detection of a

feature object, and its extraction from an image, depends on its geometric, topologic, and

radiometric characteristics (Section 2.2).














CHAPTER 2
BACKGROUND

Road-feature extraction from aerial images has been studied over the past two decades.

Numerous methods have been developed to extract road features from an aerial image.

Road feature extraction from an aerial image depends on characteristics of roads, and

their variations due to external factors (man-made and natural objects). A method of

extraction is broadly classified into three steps: pre-processing, edge-detection, and

feature extraction (initialized by a feature-identification step). The efficiency of a given

method depends on image resolution and the input road characteristics (Section 2.1), and

also on the algorithms used (developed to extract the desired information, using a

combination of appropriate image-processing techniques). The task is to extract identified

road features that are explicit in nature and visually identifiable to a human, from implicit

information stored in the form of a matrix of values representing either gray levels or

color information in a raster image.

Digital raster images are portrayals of scenes, with imperfect renditions of objects

(Wolf and Dewitt, 2000). Imperfections in an image result from the imaging system,

signal noise, atmospheric scatter, and shadows. Thus, the task of identifying and

extracting the desired information or features from a raster image is based on criteria

developed to determine a particular feature (based on its characteristics within any raster

image), while ignoring the presence of other features and imperfections in the image

(Section 2.2).









Methods of extraction developed in past research are broadly classified into Semi-

automatic methods of extraction, or Automatic methods of extraction. Automatic

methods of extraction are more complex than Semi-automatic methods of extraction.

Automatic methods of extraction require ancillary information (Section 1.1), as compared

to Semi-automatic methods that extract roads based on information from the input image.

As part of a literature survey, Section 2.3 explains in detail a Semi-automatic method of extraction, developed by Shukla et al. (2002), and an Automatic method of extraction, developed by Baumgartner et al. (1999), selected from the various methods developed in this field of research.

2.1 Road Characteristics

An aerial image is usually composed of numerous features, both man-made (e.g.,

buildings, roads) and natural (e.g., forests, vegetation) besides roads. Roads in an aerial

image can be represented based on the following characteristics: radiometric, geometric,

topologic, functional, and contextual, as is explained in detail later in this section. Factors

such as intensity of light, weather, and orientation of the camera can affect the

representation of the road features in an image based on the afore-mentioned

characteristics. This in turn affects the road extraction process. Geometric and

radiometric properties of a road are usually used as initial input characteristics in

determining road edge features. Both cultural and natural features can also be used as

contextual information to extract roads, along with external data apart from the

information in the image (geometric and radiometric characteristics). Contextual

information, and information from external sources, can be used to develop topologic,

functional, and contextual characteristics. The Automatic method of extraction implemented by Baumgartner et al. (1999) uses these characteristics, as explained in detail in Section

2.3.2.

Human perceptual ways of recognizing a road come from looking at the geometric,

radiometric, topological, functional, and contextual characteristics of an image. For

example in Figure 2-1, a human will first recognize a road based on its geometric

characteristics, considering a road to be a long, elongated feature with constant width and

uniform radiometric variance along its length. As shown in Figure 2-1, Road 1 and Road

2 have different overall pixel intensities (a radiometric property) and widths (a geometric

property). However, both tend to exist as long continuous features.

Thus, it is up to the discretion of the user to select appropriate roads to be extracted

at the feature-identification step. If the feature-identification step is automated, the

program needs to be trained to select roads based on radiometric variance that varies

depending on the functional characteristics of a road; explained later in this section. As

an example in Figure 2-1, Road 1 and Road 2 have different functional properties and

have different radiometric representations. In the case if a human is unable to locate a

road segment due to occlusion, because of a tree (Figure 2-1) or a car, a human would use

contextual information or topological characteristics. Existence of trees or

buildings/houses in the vicinity is used as contextual information. Where as, topologic

properties of the roads are used to determine the missing segment of the road network.

Thus to automate the process of determining the presence of a road, there is a need to

develop a technique for extracting roads, using cues that humans would use, to give the

system the ability to determine and extract the roads in an aerial image based on the

characteristics of a road described.










Figure 2-1. Road characteristics. This picture illustrates the various characteristics of roads explained in this section: Road 1 and Road 2 differ in width and overall intensity, A marks a completely occluded road segment, and B marks a partially occluded segment.

The road characteristics explained in this section are derived from human behavior

in road identification, based on the above explanation of the human interpretation of

roads in an image. Further discussion explains in detail each of these road characteristics.

Road characteristics are classified into five groups (Vosselman and de Knecht, 1995).

Here follows a brief description of each of these characteristics, a couple of which

(geometric and radiometric characteristics) are used in the Semi-automatic method

explained in Section 2.3.1, and all of which are used in the Automatic method in Section

2.3.2, to identify and extract road features from an aerial image.









2.1.1 Geometric

Roads are elongated, and in high-resolution aerial images they run through the

image as long parallel edges with a constant width and curvature. Constraints on

detection based purely on such characteristics come from the fact that there are other features, like rivers, that may be misclassified as roads if an automated procedure to

identify road segments is implemented in an extraction method. This leads to a

requirement for the use of additional characteristics when extracting roads. In addition,

roads within an image may have different widths, based on their functional classification.

In Figure 2-1, Road 1 and Road 2 have different widths because of their functional characteristics: they are a local road and a highway, respectively. This issue is discussed in

Section 2.1.4. Thus, this characteristic alone cannot be used as a parameter in the

automatic extraction of a road from an aerial image.

2.1.2 Radiometric

A road surface is homogenous and often has a good level of contrast with adjacent

areas. Thus, radiometric properties, or overall intensity values, of a road segment remain

nearly uniform along the road in an image. A road's radiometric properties, used as a parameter in road characterization, identify a road segment as part of a road based

on its overall intensity value when compared to a model, or other road segments forming

the road network in the image. This works well in most cases, with the exception of areas

where buildings or trees occlude the road or the presence of cars affects the feature

detection process using this characteristic. It also varies with the weather and orientation

of the camera at the time of exposure. For example, in Figure 2-1, A illustrates the

complete occlusion of a road segment and B illustrates the partial occlusion of a road

segment due to the trees near the occluded road segment.









A method of extraction based on radiometric properties may not identify segments

A and B (Figure 2-1), due to its inability to match the occluded road segment with the

other road segments in the image based purely on its radiometric property, since the radiometric characteristics of the occluded road segments would be very different from those of the un-occluded road segments in the image. In addition, if the process of

identification is automated, and if the program is not trained to deal with different

pavement types, detection would be affected, since an asphalt road surface may have different characteristics from a tar road. Hence, a group of characteristics used together

would better identify a road segment, as compared to identification based on individual

characteristics.

2.1.3 Topologic

Topologic characteristics of roads are based on the ability of roads to form road

networks with intersections/junctions, and terminations at points of interest (e.g.,

apartments, buildings, agricultural lands). Roads generally tend to connect regions or

centers of activity in an aerial image; they may begin at a building (e.g., house) in Figure

2-1 and terminate at another center of activity, or continue to the edge of the image. Roads tend

to intersect and to connect to the other roads in an image. Topological information, as

explained above, can be used to identify and extract missing segments of roads. As an

example, if we have to extract the roads from the image in Figure 2-1, the radiometric

and geometric characteristics of the road would help to extract all the road segments in

the image. Though, it won't be able to extract certain segments, due to shadow occlusion

A or the presence of cars and buildings B in the vicinity (Figure 2-1). These missing or

occluded road segments could be linked to the extracted segments based on the

topological information of the neighboring segments. This characteristic is used in the









automatic method of extraction developed by Baumgartner et al. (1999) as explained in

detail later in this chapter (Section 2.3.2).

2.1.4 Functional

Roads, as discussed in the previous section, connect regions of interest, such as

residences, industries, and agricultural lands. Therefore, roads may be classified based on

their function as being a local road or a highway. This functional information is relevant

in determining the width of the road and the characteristics of the pavement that would in

turn be used to set the radiometric properties, allowing the road to be identified based on

its functional classification. In Figure 2-1, Road 1 and Road 2 have different widths

(geometric) and overall intensity values (radiometric), since they belong to different

functional classes. However, to support the extraction process by using this characteristic

there would need to be an external source of information characterizing the road, besides

the information stored in the image.

2.1.5 Contextual

With this characteristic we may use additional information, such as shadows,

occlusions due to buildings, trees along the side of the road and GIS maps, to reference

roads using historical and contextual information. This information is generated using a

combination of information deduced from the image and from external sources, such as a

GIS database. In Figure 2-1, the occluded road segment could be extracted by combining

the information about the extent to which the segment is occluded in the image, with the

information stored in the GIS database concerning the corresponding road's history.

Of the various characteristics of roads discussed in this section, only geometric and

radiometric properties are inherent and exist as implicit information in any image.

In contrast, functional, topological, and contextual information can be used both as









information from the image and from an external data source, to develop an intelligent

approach to the identification and extraction of occluded and missing road segments in

the image. The Semi-automatic method explained in section 2.3.1 illustrates the use of

the geometric and radiometric properties of a road as input information for the extraction

of road features technique that was implemented by Shukla et al. (2002). Furthermore, in

Section 2.3.2, the Automatic method implemented by Baumgartner et al. (1999)

illustrates an extraction process, where the initial extraction process is carried out using

the geometric and radiometric characteristics of the road in an image, supported by

extraction using topologic, functional, and contextual characteristics.

Furthermore, this chapter reviews various image-processing techniques that could

be implemented to identify and extract road features from an aerial image. In brief, an

image processing system is composed of three levels of image processing techniques.

These techniques are used in combination to develop methods for road feature extraction

from an aerial image, using characteristics of the features in an image to identify and

extract road features. Section 2.2 introduces the various levels of an image processing

system, with an example to illustrate each level.

2.2 Image-Processing Techniques

According to the classical definition of a three level image processing system

(Ballard and Brown, 1982) and (Turton, 1997), image processing is classified into low-

level, medium-level and high-level processes. Low-level processes operate with

characteristics of pixels like color, texture, and gradient. Medium-level processes are

symbolic representations of sets of geometric features, such as points, lines, and regions.

In high-level processes, semantic knowledge and thematic information is used for feature









extraction. Sections 2.2.1 through 2.2.3 explain the various levels of image processing, with

an illustration from each level, explaining a technique and its implementation.

2.2.1 Low-Level Processing

This step is concerned with cleaning and minimizing noise (i.e., pixels with an

intensity value different from the average intensity value of the relevant region within an

image) in the image, before further operations can be carried out to extract the desired

information from the image. One of the simplest Low-Level processes is to blur an image by

averaging the values of the pixels forming the image: a mean value is calculated for each group of neighboring pixel values, reducing the variation in intensity between the pixels and thereby minimizing noise.

Table 2-1. Image pixel subset
2 3 3 3 2

4 2 3 4 4

5 2 3 4 5

3 6 6 4 4

Image pixel subset represents an image, with red values representing the pixels
considered for convolution using Table 2-2 explained in this section.
Table 2-2. Convolution kernel
1/9 1/9 1/9

1/9 1/9 1/9

1/9 1/9 1/9

Convolution kernel is convolved through the image whose pixel values are represented in
Table 2-1, convolution of Table 2-2 with pixel subset highlighted in red in Table 2-1 is
explained in this section.









For example, given an image, whose subset pixel values are as in Table 2-1, an

average is calculated using a convolution kernel (Table 2-2). This kernel calculates an

average intensity value from the intensity values of the pixels masked by the kernel. The

average intensity value calculated by the kernel is then assigned to the pixel coinciding

with the central cell of the kernel. The kernel, while moving across the image, calculates

and assigns an intensity value for each pixel in a similar fashion. In Table 2-1 (the

numbers in bold), a portion of the image pixel subset is masked by the kernel in Table 2-2. The kernel is a 3x3 window composed of 9 pixel masks; the total of the masked cells is 27, so the average of the 9 pixels is 3. Thus,

the pixel coinciding with the central cell of the kernel is assigned a value of 3. This

process assigns the average pixel value to the pixel coinciding with the central cell of the

convolution kernel, while moving across the image.
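To make the arithmetic of this averaging step explicit, the short program below convolves the pixel subset of Table 2-1 with the 3x3 mean kernel of Table 2-2. It is an illustrative sketch in Python/NumPy (the implementation in this study is the MATLAB code of Appendix A); the border handling by edge replication is an assumption of the sketch.

import numpy as np

def mean_blur(image):
    # Blur a grayscale image with the 3x3 averaging kernel of Table 2-2.
    kernel = np.full((3, 3), 1.0 / 9.0)           # every cell weighted 1/9
    padded = np.pad(image, 1, mode='edge')        # replicate border pixels
    out = np.empty(image.shape, dtype=float)
    for r in range(image.shape[0]):
        for c in range(image.shape[1]):
            window = padded[r:r + 3, c:c + 3]     # pixels masked by the kernel
            out[r, c] = np.sum(window * kernel)   # average goes to center pixel
    return out

subset = np.array([[2, 3, 3, 3, 2],
                   [4, 2, 3, 4, 4],
                   [5, 2, 3, 4, 5],
                   [3, 6, 6, 4, 4]], dtype=float)
print(mean_blur(subset)[1, 1])   # prints 3.0 (up to floating-point rounding),
                                 # matching the worked example above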

Other Low-level image processing techniques include convolution, using various

forms of weighting functions, such as the Gaussian and the Laplacian. Blurring using a Gaussian

as a weighting function, involves generating a Gaussian convolution mask that is then

further convolved with the image to be blurred, in a fashion similar to the averaging by

kernel convolution discussed earlier in this section and using Table 2-1 and Table 2-2.

During Gaussian blurring, the generated mask, when convoluted with the input image,

gives a weighted average value for each pixel relative to the values in its neighborhood,

with more weight assigned to values toward the center pixels. The resultant blurred image

is thus different from averaging or mean blurring, where the average is a uniform

weighted average.










G(x, y) = \frac{1}{2\pi\sigma^{2}} \, e^{-\frac{x^{2} + y^{2}}{2\sigma^{2}}}    (2-1)

The Gaussian function is calculated using Equation 2-1, resulting in a distribution

as shown in Figure 2-2. Here, x and y are the values of the x and y coordinates of the

convolution kernel, and σ is the standard deviation. A convolution kernel is calculated based on its size, with the mean at the center of the kernel and the weights assigned to the kernel cells based on the standard deviation. The kernel for a Gaussian distribution is usually truncated at three standard deviations from its center, because beyond 3σ the Gaussian falls close to zero. Using this kernel, the convolution is performed along the x and y directions, to blur

the whole image.
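A minimal sketch of constructing such a kernel from Equation 2-1 is given below. The truncation at three standard deviations and the normalization of the weights to sum to one are the usual conventions, assumed here for illustration.

import numpy as np

def gaussian_kernel(sigma=1.0):
    # Gaussian weighting kernel from Equation 2-1, truncated at 3*sigma.
    half = int(np.ceil(3 * sigma))                # beyond 3*sigma, G is near zero
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    g = np.exp(-(x**2 + y**2) / (2.0 * sigma**2)) / (2.0 * np.pi * sigma**2)
    return g / g.sum()                            # normalize weights to sum to 1

k = gaussian_kernel(1.0)   # 7x7 kernel; the center weight is the largest
# Convolving an image with k (as in the mean-blur sketch above) gives the
# weighted average described in the text, with more weight near the center.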














Figure 2-2. Gaussian kernel. The Gaussian weighting distribution kernel is analogous to the kernel in Table 2-2, with higher weights assigned to pixels close to the central pixel.

The conventional Gaussian blurring process is isotropic in nature as it blurs the

image in a similar fashion in all directions. This process does not respect the boundaries

between regions in an image, and so it affects edges, moving them from their original









position. Hence, our study implements the Perona-Malik algorithm (Malik and Perona, 1990), an anisotropic diffusion technique, to blur the image in the developed method of extraction, instead of the conventional blurring process using a Gaussian. In

the Perona-Malik algorithm images are blurred within regions, while the edges are kept

intact and enhanced, preserving the boundaries between regions. Chapter 3 introduces the

principle of isotropic and anisotropic diffusion in Section 3.1, and its implementation in

the Perona-Malik algorithm, in Section 3.2.
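Chapter 3 develops the algorithm in detail; as a foretaste, the sketch below shows one Perona-Malik iteration on a 4-neighbor lattice, using the exponential conductance function, one of the two functions proposed by Malik and Perona (1990). The edge threshold k and step size lam are illustrative values, not the settings tuned later in this study.

import numpy as np

def perona_malik_step(img, k=15.0, lam=0.2):
    # One Perona-Malik diffusion step on a 2-D float image:
    # blur within regions, not across edges.
    p = np.pad(img, 1, mode='edge')
    dn = p[:-2, 1:-1] - img                      # difference toward north neighbor
    ds = p[2:, 1:-1] - img                       # south
    de = p[1:-1, 2:] - img                       # east
    dw = p[1:-1, :-2] - img                      # west
    g = lambda d: np.exp(-(d / k) ** 2)          # conductance: ~1 in flat regions,
                                                 # ~0 across strong edges
    return img + lam * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)

# Applying perona_malik_step repeatedly smooths homogeneous regions while the
# large intensity differences at region boundaries are left intact.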

2.2.2 Medium-Level Processing

Medium-level processing is a step toward image classification. Some image

processing techniques at this level classify the image into regions by themselves. One of

the simplest forms of image classification can be performed by thresholding. When

thresholding an image, the pixels within an image are classified based on the threshold

intensity value. For example, consider a gray-scale image with intensity values ranging from 0 to 255. To obtain a binary, or two-class, image based on a set threshold value, all pixel values below the threshold would be assigned an intensity of 0, and all those at or above it would be assigned 1.
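In code, this two-class labeling is a single comparison; the threshold of 128 in the sketch below is only an example value.

import numpy as np

def threshold(image, t=128):
    # Binary (two-class) image: 0 below the threshold, 1 at or above it.
    return (image >= t).astype(np.uint8)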

Other techniques involve detecting the edges within an image that can be further

used to visually identify boundaries between regions and support high-level feature

extraction processes. This level of processing is mostly used to determine edges, or

boundaries between regions, in an image. What follows is an explanation of the principle

of edge detection in an image, and the workings of the Sobel edge detector, a medium-level image processing technique.










Figure 2-3. Edge detection. A) Edge image with bright regions in the center and dark on
the boundary of the image. B) Edge image with dark regions in the center and
bright regions along the boundary of the image. C) Horizontal line profile of
edge image in A. D) Horizontal line profile of edge image in B. E) First
derivative of the edge image A. F) First derivative of edge image B. G)
Second derivative of edge image A. H) Second derivative of edge image B.

An edge in an image represents a significant change in intensity between pixels in

the image. Edges detected in an image are usually used as information concerning the

boundaries between regions in the image, or to allow a shape description of an object in

the image. The concept of edge detection is explained further using the illustration in

Figure 2-3. An edge exists as a ramp in intensity within an image. In Figure 2-3, two edges exist in each of A and B; they delineate a dark region and a bright region, with a bright region at the center of A and a dark region at the center of B.


\left| \nabla f(x, y) \right| = \sqrt{\left(\frac{\partial f(x, y)}{\partial x}\right)^{2} + \left(\frac{\partial f(x, y)}{\partial y}\right)^{2}}    (2-2)

\angle \nabla f(x, y) = \arctan\!\left(\frac{\partial f(x, y)/\partial y}{\partial f(x, y)/\partial x}\right)    (2-3)









If A and B in Figure 2-3 are considered to be continuous along x and y, then f(x, y) represents the image. The derivatives along the x and y directions, ∂f(x, y)/∂x and ∂f(x, y)/∂y, also known as directional derivatives, are calculated from the input image. Edges within the image are determined from Equation 2-2 and Equation 2-3, which are calculated using the directional derivatives: Equation 2-2 gives the magnitude of the gradient, and Equation 2-3 gives the orientation of the gradient.

Simple edge detectors, developed at the medium level, detect edges based on the

gradient information for an input image, obtained using Equations 2-2 and

2-3. In Figure 2-3, C and D show the profiles of pixel intensity across A and B

respectively. In Figure 2-3, E and F give a graphical representation of the gradient

calculated using Equations 2-2 and 2-3. The gradient graph in E and F is a representation

of the change in intensity of pixels across the image. The edges within an image are

detected by determining the local maxima of magnitude of image gradient (Equation 2-

2). The peaks in E and F represent the locations of the edges in the images A and B in

Figure 2-3. Detecting edges using magnitude of gradient (first derivative) gives a region

rather than a specific edge location.

Edges could be better detected using the second derivative, or rate of change of

gradient. In Figure 2-3, G and H give a graphical representation of rate of change of

gradient (second derivative). Here, the second derivative becomes zero when the first

derivative reaches a maximum. Hence, edges can be easily identified by locating the

points at which the second derivative of the image becomes zero, instead of identifying local maxima within an image using the first derivative. Further, this section discusses the









working of a Sobel edge detector, which performs gradient measurement and locates regions

with high gradients that correspond to edges within an image.

The Sobel edge detector is a convolution kernel commonly used in image

processing to determine regions having high spatial gradients, regions in the image where

there is a significant change in gradient from the neighboring pixels. Generally, these

regions are along boundaries within an image, or exist as noise within a homogenous

region. A Sobel edge detector usually consists of two 3x3 kernels, as shown in Figure 2-

4.

In Figure 2-4, a pseudo-convolution kernel, representing the input image, is

convolved along the x and y directions to determine the edges in an image using Gx and

Gy. Here the convolution masks (Gx and Gy), when moved through an image, compute

the gradient along the x and y directions, and respond maximally to edges along x and y.



Figure 2-4. Sobel edge detector. A) Convolution kernel along x, used to compute the gradient Gx. B) Convolution kernel along y, used to compute the gradient Gy. C) Pseudo-convolution kernel from whose pixel values the gradients are determined.

In Figure 2-4, the gradients along the x and y directions are computed by

convolving the Sobel convolution kernels with the Pseudo-convolution kernel, to get the

gradient in the x and y directions, using Equation 2-4 and Equation 2-5.


A) Gx convolution kernel:

    -1   0  +1
    -2   0  +2
    -1   0  +1

B) Gy convolution kernel:

    +1  +2  +1
     0   0   0
    -1  -2  -1

C) Pseudo-convolution kernel (the center cell, c, is the pixel whose gradient is being computed):

    P0  P1  P2
    P7  [c] P3
    P6  P5  P4









G_x = (P_2 + 2P_3 + P_4) - (P_0 + 2P_7 + P_6)    (2-4)

G_y = (P_0 + 2P_1 + P_2) - (P_6 + 2P_5 + P_4)    (2-5)

The magnitude of the gradient is calculated by

|G| = \sqrt{G_x^{2} + G_y^{2}}    (2-6)

The direction of the gradient is the arc-tangent of the ratio of the gradients along the y and x directions, consistent with Equation 2-3:

\theta = \arctan\!\left(G_y / G_x\right)    (2-7)
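The following sketch makes Equations 2-4 through 2-7 concrete: the two masks of Figure 2-4 are applied to the 3x3 neighborhood P0 through P7 of every pixel, and the magnitude and direction of the gradient are derived from Gx and Gy. The quadrant-safe arctan2 is used in place of the plain arc-tangent; everything else follows the equations above.

import numpy as np

SOBEL_GX = np.array([[-1, 0, 1],
                     [-2, 0, 2],
                     [-1, 0, 1]], dtype=float)    # mask A of Figure 2-4
SOBEL_GY = np.array([[ 1,  2,  1],
                     [ 0,  0,  0],
                     [-1, -2, -1]], dtype=float)  # mask B of Figure 2-4

def sobel(image):
    # Per-pixel gradient magnitude (Eq. 2-6) and direction (Eq. 2-7).
    p = np.pad(image.astype(float), 1, mode='edge')
    gx = np.zeros(image.shape)
    gy = np.zeros(image.shape)
    for r in range(image.shape[0]):
        for c in range(image.shape[1]):
            window = p[r:r + 3, c:c + 3]          # P0..P7 plus the center pixel
            gx[r, c] = np.sum(window * SOBEL_GX)  # Eq. 2-4
            gy[r, c] = np.sum(window * SOBEL_GY)  # Eq. 2-5
    mag = np.sqrt(gx**2 + gy**2)                  # Eq. 2-6
    theta = np.arctan2(gy, gx)                    # Eq. 2-7 (quadrant-safe)
    return mag, theta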

The detector then uses the magnitude of the gradient obtained using Equation 2-6,

to respond maximally to regions within an image which have similar spatial gradients to

the convolution masks in A and B (Figure 2-4). Section 2.2.3 introduces High-level

processing techniques in an image processing system that identify and extract desired

objects from an image, based on information obtained through Low and Medium-level

image processing techniques.

2.2.3 High-Level Processing

In this step, information gathered from the Low and Medium-level image

processing techniques is used as input information to identify and extract desired objects

or regions from an image. The simplest form of High-level processing is to label the

desired regions with one value, while leaving the rest of the image at zero, by using a

threshold value on the original image. More complex image processing techniques at this

level involve detecting and extracting shapes within an image. Prominent techniques

from this level of image processing include the Hough transform and Snakes deformablee

contour models) method. During various stages of the development of a method of road

extraction in our study, both these techniques were implemented.









The Hough transform is an image processing technique that is used to extract or

detect features of a particular shape in the image. Hough transform is used to extract

features that can be represented in a parametric form. It detects regular geometric features

such as lines, ellipses, circles, and parabolas. The Hough Transform works best with

large images where the effects of noise and undesired features are minimal. However, it

is difficult to implement for the detection of high-order curves, those with orders greater than

2. An explanation of how the Hough transform works to extract linear features from an

image is presented in the following discussion.

Consider an edge-detected image, with a set of point locations/edge pixels that

represent a boundary in the image, as shown in Figure 2-5. In Figure 2-5, a number of

line segments can connect combinations of points 1, 2, and 3 to represent a linear edge.

The following is a parametric representation of a line that is significant to Hough

transform implementation. Each of the possible segments connecting a set of points can be represented in the form of Equation 2-8 by varying the values of ρ and θ, which uniquely identify a single line.

x \cos\theta + y \sin\theta = \rho    (2-8)

Here θ is the orientation of the line with respect to the origin, and ρ is the length of the

normal of the line from the origin, as in A (Figure 2-5). The objective is to pick the best-

fit line that passes through the maximum number of edge pixels (here, three) as shown in Figure 2-5. Each of these points, or edge pixels, can have many lines passing through it, as shown by the red and bold black lines in A (Figure 2-5). The objective of the Hough transform is to pick the line that passes through the maximum number of edge pixels: the black line in A (Figure 2-5).









Figure 2-5. Hough transform. A) Edge pixels in image space, with candidate lines passing through them. B) Hough accumulator space, in which each line is represented in parametric form defined by cells.

As is shown in A (Figure 2-5), numerous lines pass through each of the points. The lines passing through an edge pixel can be uniquely identified by the values of ρ and θ in the parametric form of Equation 2-8; each (ρ, θ) pair uniquely identifies a cell, and hence a line, in the Hough accumulator space in B (Figure 2-5). The splines in B (Figure 2-5) are the representations of the edge pixels in Hough space; the three curves represent the three edge pixels existing in A (Figure 2-5). As the splines pass through the accumulator cells in Hough space, they increment the count in each cell, so that every cell accumulates the number of edge pixels through which its line passes. Thus, the best-fit line, the one passing through the maximum number of edge pixels in the image, corresponds to the accumulator cell with the highest count, and that line is picked to represent an edge in the original image. In B (Figure 2-5), the cell in which all three splines intersect represents









the cell with the highest count of edge pixels. Hence, it is considered the best-fit line, and the corresponding black line represents the edge in A (Figure 2-5).
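The voting procedure just described fits in a few lines, as the sketch below shows: every edge pixel votes for all discretized (ρ, θ) cells of lines passing through it (Equation 2-8), and the peak cell is returned as the best-fit line. The one-pixel ρ bins and the 180 θ bins are illustrative discretization choices.

import numpy as np

def hough_best_line(edge_pixels, shape, n_theta=180):
    # Accumulator voting for Equation 2-8; returns (rho, theta) of the peak cell.
    rows, cols = shape
    rho_max = int(np.ceil(np.hypot(rows, cols)))      # longest possible normal
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((2 * rho_max + 1, n_theta), dtype=int)  # rho may be negative
    for x, y in edge_pixels:
        for j, t in enumerate(thetas):
            rho = int(round(x * np.cos(t) + y * np.sin(t)))  # Eq. 2-8
            acc[rho + rho_max, j] += 1                       # one vote per cell
    i, j = np.unravel_index(np.argmax(acc), acc.shape)
    return i - rho_max, thetas[j]

# Three collinear edge pixels vote most heavily for one common cell:
rho, theta = hough_best_line([(10, 10), (20, 20), (30, 30)], shape=(64, 64))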

During the initial stage of this research, an implementation of the Hough transform

to extract road features was attempted but abandoned, as road features were extracted from the image as straight lines, whereas roads typically exist as splines or

curvilinear features in an image. This led to the implementation of Snakes (Active

Contour model) to extract roads, as they represented road features better than Hough

lines.

Section 2.3 further introduces various methods of road feature extraction developed

over the past couple of decades. This section discusses in detail a Semi-Automatic and

Automatic approach to road feature extraction.

2.3 Approaches to Road Feature Extraction

There are numerous methods that have been developed to extract road features

from an aerial image. Table 2-3 lists a few of the road extraction methods reviewed here,

as part of literature survey, prior to work beginning on the development of a method of

extraction in our study. Methods of extraction developed by researchers have been

developed using a combination of image processing techniques. Techniques implemented

in the methods of extraction may be common to one or more of the listed methods. Road

extraction methods are broadly classified into Semi-automatic approaches and Automatic

approaches, as was discussed in Section 1.1. The methods of extraction listed in Table 2-

3, include a group of Semi-automatic approaches, and an Automatic approach that was

developed by Baumgartner et al. (1999). According to McKeown (1996), one of the early

researchers involved in developing road feature extraction methods, every image









considered for the extraction of a desired feature is unique. Hence, it is difficult to have a

general method for extracting road features from any image.

Table 2-3. Methods of extraction

- Cooperative methods of road tracking using road follower and correlation tracker (McKeown and Denlinger, 1988)
- Road feature extraction using camera model and snakes (Gruen and Li, 1995)
- Road feature tracing by profile matching and Kalman filter (Vosselman and de Knecht, 1995)
- Multi-scale and snakes for automatic road extraction (Baumgartner et al., 1999)
- Detection of roads from satellite images using optimal search and Hough transform (Rianto et al., 2000)
- Semi-automatic road extraction algorithm for high-resolution images, using a path-following approach (Shukla et al., 2002)

Methods for road feature extraction have been pursued for the past couple of

decades. Methods of extraction developed in the early days of this field of research were

carried out using a manual initialization of the process; these are also known as Semi-automatic extraction approaches. A cooperative method of extraction (McKeown and Denlinger,

1988), one of the early methods of road feature extraction, was a process that was

developed using a combination of image processing techniques; it extracted roads by

edge tracking and texture correlation matching from the input image. These processing

techniques (edge tracking and correlation matching) supported each other in detecting

road features, in case either of them failed during the extraction process. Hence, the

method of extraction is called a cooperative method of extraction. Later, in 1995, a Semi-

automatic approach for road extraction was developed using a digital terrain model, a

type of camera model, along with dynamic programming and Snakes (Gruen and Li,

1995). This approach extracted road edges by concatenating a set of points that









represented road locations. Another Semi-automatic approach was developed around the

same time, and extracted road features using the Kalman filter and profile matching

(Vosselman and de Knecht, 1995). During the evolution of the various methods of road

feature extraction, a research group led by Baumgartner et al. (1999) developed an Automatic approach. Most of the methods developed until that date had similar extraction

steps, but this method tried and tested a different combination of image processing

techniques to work in cooperation with each other in modules. Our study will discuss

further a Semi-automatic method of extraction, a Semi-automatic road extraction

algorithm for high-resolution images using the path following approach (Shukla et al.

2002), and an Automatic method of extraction, the Multi-scale and Snakes road feature

extraction method developed by Baumgartner et al. (1999).

Furthermore, a method of extraction is developed in our study that uses a combination of image processing techniques, evolved over stages guided by cues from past research. An initial attempt was made to extract roads using the Hough transform, based on a concept from the method of extraction developed by Rianto et al. (2000), although the results obtained were not as desired. Hence, many combinations were tested; the final method of extraction implemented in our study, developed at the final stage, Stage 4 (Section 5.1), uses the Perona-Malik algorithm (Malik and Perona, 1990), based on the anisotropic diffusion principle, together with Snakes. As part of our study, an attempt was made to automate the

initialization, or road segment identification, stage prior to extraction (Section 5.2.1)

using the Kalman Filter and profile matching (Vosselman and de Knecht, 1995).

Appendix B of our study gives a detailed explanation of the principle and working of the









Kalman filter, along with its implementation for detecting road segments using profile matching. Furthermore, Sections 2.3.1 and 2.3.2 explain in detail the

methods of extraction that were developed by Shukla et al. (2002) and Baumgartner et al.

(1999), each under the Semi-automatic and Automatic approaches to road feature

extraction respectively.

Prior to discussing and evaluating the approaches that have been developed toward

road feature extraction from an aerial image, some general observations about roads are in order. Roads are generally uniform in width in

high-resolution images, and appear as lines in low-resolution images, depending on the

resolution of the image and functional classification of the road. In the Automatic

approach discussed below, road features are extracted at various resolutions using

contextual information to complete the extraction of roads from an input aerial image. In

both approaches (Automatic and Semi-automatic), there is a need for human intervention

at some point during the extraction process. A Semi-automatic approach requires initial

human intervention, and at times it requires intervention during the extraction process,

whereas an Automatic approach only needs human intervention at the post processing

stage. In the Semi-automatic approach, road detection is initialized manually with points

representing roads, also called seed points. The roads are tracked using these seed points

as an initial estimation of road feature identifiers. In the case of a fully Automatic

approach, the roads are completely extracted without any human intervention. Post

processing is carried out for misidentified and unidentified roads in both approaches.

2.3.1 Road Extraction Algorithm Using a Path-Following Approach

A Semi-automatic method is usually implemented using one of the techniques

below.









* After initialization, the road is mapped using a road tracing algorithm.

* Distribution of a sparse set of points along a road segment which are then
concatenated to extract the desired road segment.

McKeown and Delinger (1988) developed a method by which to track and trace

roads in an aerial image, using an edge detector and texture correlation information

(Table 2-3). Gruen and Li (1995), in contrast, implemented a road tracing technique using a sparse set of points spaced along the road to be mapped, using dynamic programming.

This section explains in detail a Semi-automatic method of extraction using the path-following approach developed by Shukla et al. (2002).

In the method developed using the path-following approach, a road extraction

algorithm extracts roads using the width and variance information of a road segment,

obtained through the pre-processing and edge detection steps, similar to McKeown and

Delinger (1988) and Vosselman and de Knecht (1995). This process, being a Semi-

automatic approach, is initialized by a selection of a minimum of two road seed points.

These seed points are used to determine the center of the road programmatically from the

edge-detected image. Then, after the desired points representing the initial road segment

are obtained, its orientation and width are calculated. The orientation of the initial seed

point is used to determine the three directions along which the next road segment could

exist. From the three directions, the direction having minimum cost, (i.e., having the

minimum variance based on intensity or radiometric information) is considered as the

next road segment. This process is carried out iteratively, for as long as the cost remains within the predefined variance threshold. Below is a detailed systematic explanation of this approach.

Figure 2-6 gives a flow diagram of the extraction process, developed using the path

following approach (Shukla et al. 2002).

Figure 2-6. Path-following approach. Flowchart gives a brief overview of the extraction process using the path-following approach, explained in detail in this section.

Pre-processing (scale space diffusion and edge detection). The original image is

diffused or blurred at this step (Figure 2-6), into a sequence of images at different scales.

The blurring in this step is carried out using Non-Linear Anisotropic coherence diffusion

(Weickert, 1999), as this minimizes variance within the regions in an image. Non-Linear









Anisotropic coherence diffusion helps maintain the homogeneity of regions within an

image. Variance across sections of the road segment is then further used to estimate the

cost, based on which the road is traced. The anisotropic diffusion approach is a non-uniform

blurring technique, as it blurs regions within an image based on pre-defined criteria. This

is different from Gaussian blurring, which blurs in a similar manner across the entire image.

The image diffused using the above diffusion technique is then used to compute the

radiometric variance across the pixels in the image. Edges are then detected from the

diffused image using a Canny edge detector. The edge-detected image is used to calculate

the width of the road across road segments later in the process of extracting road

segments.
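A minimal sketch of this pre-processing step is given below, with two stated assumptions: SciPy's Gaussian filter is used as a stand-in for the Non-Linear Anisotropic coherence diffusion of Weickert (1999), and the image and parameter values are illustrative placeholders rather than data from this study.

    import numpy as np
    from scipy.ndimage import gaussian_filter
    from skimage.feature import canny

    image = np.random.rand(256, 256)        # placeholder for the input aerial image
    # Stand-in for coherence diffusion: an isotropic Gaussian blur; the method in
    # the text diffuses anisotropically to keep regions homogeneous instead.
    diffused = gaussian_filter(image, sigma=2.0)
    variance = diffused.var()               # radiometric variance used in the cost term
    edges = canny(diffused, sigma=1.0)      # boolean edge map, used later for width estimation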









Figure 2-7. Road seed selection. Black line represents the initial seed point selected by the user.

Figure 2-8. Width estimation. Road width and direction of road are estimated from the initial seed point selected as in Figure 2-7.









Selection of initial seed points. As this algorithm is a Semi-automatic approach to

road feature extraction, the process of detecting and extracting road segments is

initialized by manual selection of road seed points. Road seed points, as in Figure 2-7, are two points on or near a road segment in an image that form a line segment, selected by the user, representing the road to be extracted. Figure 2-8 illustrates a road seed with ends a and b; comparing Figure 2-7 and Figure 2-8, a-b corresponds to the end points of the black road seed in Figure 2-7.

Orientation and width estimation. In Figure 2-8, the orientation of the current seed point a-b gives the direction of the road, on the basis of which the rest of the road segments can be determined. The width of the road at the given seed point is estimated

by calculating the distance from the parallel edges, g-h and e-f, to the road seeds a-b as in

(Figure 2-8). At this point the width of the road at the initial seed points is estimated,

along with the orientation of the road. The orientation of the road at the initial seed points

gives a lead as to three directions in which the road could propagate and form the next

road segment.
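The geometry of this step can be sketched in a few lines of Python; the coordinates and the assumption that the seed lies on the road centerline are illustrative, not part of the published algorithm.

    import numpy as np

    def orientation_and_width(a, b, g):
        # a, b: end points of the road seed; g: a point on one road edge.
        # All coordinates are hypothetical (x, y) pairs for illustration.
        a, b, g = (np.asarray(p, dtype=float) for p in (a, b, g))
        d = b - a
        theta = np.arctan2(d[1], d[0])      # orientation of the seed (radians)
        # perpendicular distance of the edge point g from the seed line a-b
        dist = abs(d[0] * (g - a)[1] - d[1] * (g - a)[0]) / np.linalg.norm(d)
        return theta, 2.0 * dist            # seed assumed on the centerline

    theta, width = orientation_and_width((0, 0), (10, 0), (5, 3))   # width = 6.0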

Figure 2-9. Cost estimation. This figure gives possible orientations of the next road segment, based on the information obtained from Figure 2-7 and Figure 2-8 for the initial road segment.

Cost estimation in three directions. As shown in (Figure 2-9), there could be

three directions b-c, b-d, and b-e, along which the road segment could propagate, based

on the current orientation of the seed point a-b. The edges g-g' and h-h' are road edges









parallel to the current road seed a-b. Thus if a-b is the current direction of the road

segment, b-c, b-d or b-e are the possible choices of direction for the next road segment.

As per this algorithm, the minimum of the lengths in the three directions b-c, b-d, and b-e is considered to be the width of the road at the current node b, as in Figure 2-9. Furthermore, each of the three directions b-c, b-d, and b-e is assigned a weight, with the line whose direction is most similar to that of the previous road segment being assigned the minimum weight, b-d in Figure 2-9. After assigning weights to each direction, a cost factor is computed using Equation 2-6:

\mathrm{Cost}_{bd} = \frac{\mathrm{Variance}_{bd} \times \mathrm{Direction}_{bd}}{\mathrm{Length}_{bd}} \qquad (2\text{-}6)

Here,

\mathrm{Variance}_{bd} = \frac{\sum (\mathrm{pixel\ value} - \mathrm{mean}_{bd})^2}{\mathrm{Length}_{bd}} \qquad (2\text{-}7)

Once the cost is estimated in the three directions using Equation 2-6 and Equation

2-7, the path having the minimum cost is considered. The cost value is stored and is used

to determine the road direction in the next target window. This process continues for as long as the cost factor remains within the set values. The approach proceeds by forming consecutive

target windows, and thereby determining the minimal cost of the road direction at each

node. Once all the road points are obtained, the road is traced through the set of points to

extract the minimum cost path.
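A small sketch of this cost computation (Equations 2-6 and 2-7) is shown below; the sampled intensities, direction weights, and lengths are invented values for illustration only.

    import numpy as np

    def direction_cost(pixels, direction_weight, length):
        # Equation 2-7: variance of the intensities along the candidate segment
        variance = np.sum((pixels - pixels.mean()) ** 2) / length
        # Equation 2-6: weighted variance per unit length
        return variance * direction_weight / length

    # three candidate directions from node b; hypothetical samples and weights
    candidates = {
        "b-c": (np.array([90.0, 95.0, 120.0]), 1.2, 3.0),
        "b-d": (np.array([100.0, 101.0, 99.0]), 1.0, 3.0),  # most similar direction
        "b-e": (np.array([80.0, 130.0, 70.0]), 1.2, 3.0),
    }
    best = min(candidates, key=lambda k: direction_cost(*candidates[k]))   # -> "b-d"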

This approach is also called the minimum path following approach, as the path

having the minimum cost is selected until the end of the road is reached, and is connected

to form the final extracted road. While tracing roads the parameters at intersections vary

drastically, as is explained below.
















a b c f




e



Figure 2-10. Road traversal at intersection.

There would be instances, such as junctions or road intersections, where the width

of the road at a point on the junction will suddenly exceed the width at the previous point

that was traversed on the road segment, and will have the same minimum path in all

directions. As seen in (Figure 2-10), at the junction, point c would have a greater width

than the other road segment points, and the paths in all directions would have an equal

minimum cost. This problem is overcome by backtracking: the width at that point is reduced to the width of the predecessor point that was traversed, and the ambiguity among equal minimum path values is resolved by following one path and tracing the remaining paths after the whole extraction process is completed.

An issue associated with this method of extraction, as with any Semi-automatic approach, is its inability to extract road segments occluded by shadows and other obstructions; such segments then need to be initiated manually by the user. Section 2.3.2 illustrates the

working of an Automatic approach to road feature extraction, implemented by

Baumgartner et al. (1999). This method of extraction, as its name suggests, does not need









any initialization or feature identification step; these functions are performed by the

feature extraction method itself. This method of extraction includes some processes that

if implemented as stand-alone processes, would work as a Semi-automatic method of

extraction.

2.3.2 Multi-Scale and Snakes Road-Feature Extraction

The automatic method of extraction developed by Baumgartner et al. (1999),

explained in this section, gives an idea of the working of an Automatic method of

extraction, using information from various sources to extract road features from an aerial

image without any human intervention (Section 2.1).


Figure 2-11. Global road-feature extraction. This picture illustrates the two models used
to extract road features in an image automatically over three modules.









Figure 2-11 illustrates an automatic method of extraction developed by

Baumgartner et al. (1999) to extract road features from aerial images using information

from coarse resolution and fine resolution images. The method of extraction is divided

into two models, a road model (A), and a contextual model (B), as shown in (Figure 2-

11).

The road model extracts the roads from an aerial image, from fine and coarse

resolutions of an input aerial image. At coarse resolution, the roads exist as splines or

long linear features, with intersections and junctions as blobs. At fine resolution, roads

exist as long homogenous regions with uniform radiometric variance. The road model

extracts roads at coarse resolution by assuming that road segments exist as long, bright

linear features. At fine resolution, the road model uses real world (A1) information (e.g.,

road pavement marking, geometry). It also uses material information (A2) determined

based on the width of the road segment and the overall radiometric variance of the road

segment, depending on the pavement type or material (e.g., asphalt or concrete), and image characteristics, such as whether the identified road segment is an elongated bright region. In brief, the road model introduced above extracts roads based on the road

segment's geometric, radiometric, and topologic characteristics (Section 2.1).

The method of extraction developed by Baumgartner et al. (1999) also includes a

context model (B) in (Figure 2-11) that extracts road segments from the input image,

using information about other features that exist near the road segment. The context

model extracts the road from an input image using a global context and a local context.

These contexts support each other in the process of extraction. The global context (B)

assigns an input image to an urban (B1), rural (B2), or forest (B3) context, as in Figure 2-11. The









local context exists within the input image: for example, a tree or building near a road segment whose structure or shadow occludes the road, or individual road segments existing by

themselves. A tree occluding a road segment could occur whether the global context is

urban, rural or forest, whereas a building or its shadow occluding a road segment could

only occur in an urban or a rural area, where buildings such as residences or industrial

infrastructures may exist.

Thus, the global and local context within the context model work together to

extract road segments. This section explains in detail the method of extraction that uses the road model and context model, using an example of rural (global context) road feature extraction. Another significant point is that roads existing in an urban area may not be extractable in the same fashion as those in a rural area,

since they may have different geometric and radiometric characteristics and contextual

information. Thus, the local context within an input image is assigned to a global context,

based on which roads are to be extracted. The model used depends on what information is

needed to extract a road. Salient roads (Figure 2-12) that are clearly visible and are not

occluded or missing sections may be extracted using geometric and radiometric

characteristics, the geometry and material part of the road model.









Figure 2-12. Salient road. The road in gray in this picture is a salient road, as it is not occluded, has no missing sections, and exists as a continuous feature across the image.





















Figure 2-13. Nonsalient road. The road in this picture is a nonsalient road, as it is partially occluded by the shadows of trees, thus affecting the radiometric and geometric properties of the road.

Nonsalient roads (Figure 2-13), i.e., road segments within an aerial image that are occluded by the shadow of a tree or building, may need the use of a context model to extract them from the image.

Table 2-4. Modules of extraction

  Module I                 Module II                  Module III
  (Local Extraction)       (Global Extraction)        (Network Completion)
  Salient road             Low-level processing       Generation of link hypotheses
  Nonsalient road          Fusion                     Verification of hypotheses
  Road junction linking    Graph representation       Insertion of accepted road hypotheses
  --                       Road network generation    --

The extraction is composed of three modules, through which roads in an image are extracted using the road and context models in combination.

As per the strategy of extraction developed by Baumgartner et al. (1999), salient roads are extracted first, followed by the extraction of nonsalient roads. This order is used because extracted salient road segments can help to guide the extraction of non-salient road segments, as explained in detail later in this section. After the extraction of all

roads, a network is generated by connecting salient and non-salient roads, forming a road

network within the input aerial image. The method of extraction developed using the road

model and context model can be broadly classified into three modules, as in Table 2-4.









Module I performs road extraction in a local context, using a high-resolution

image, initialized by extraction of salient road segments, followed by nonsalient road

segment extraction, and the extraction of the junctions or intersections that connect the

extracted road segments. Module II performs extraction in a global context, as a low level

processing step, using a low-resolution image as input. This is followed by the fusion of

the extracted road segments from the local level extraction that was implemented in

Module I, and the first step (low-level processing) implemented in Module II. The final

step of Module II involves the generation of a graph representing the road network from

the road segments generated from the fusion. Road segments obtained through this fusion

represent the edges, and their ends represent the set of vertices of the generated graph.

Module III of the developed method improves the extracted road network obtained

through Module I and II. It does so by the generation of link hypotheses, and their

verification, leading to the insertion of links. This allows complete extraction of the road

segments forming a network, without any broken road segment links. What follows in

this section explains in brief the implementation of each module.

2.3.2.1 Module I

This module uses edge and line information to begin extraction. Hypotheses for the

location of the salient roads in the image are determined from the extracted lines and

edge information in the image. Extracted salient roads, along with local contextual

information, are then used for the extraction of non-salient roads. Then, in the final step

of Module I, the road junctions are constructed geometrically, using the extracted road

information at the end of this module. Information about salient, nonsalient roads and

road junctions is passed on as input to Module II.

























Figure 2-14. Salient road-feature extraction. A) Represents the extracted road centerline in black and edges in white. B) Represents the road quadrilaterals formed from the extracted road edge and centerline information in A. (Picture courtesy of Baumgartner et al. (1999), Figure 5, Page 6.)

Salient road extraction. In this step, roads are extracted at a local level, using edge

and line information extracted from fine resolution input image, and the image at a coarse

resolution. Figure 2-14A represents the road lines, extracted using a coarse resolution

image, in black, and road edges extracted from a fine resolution image in white. The

distance between the pair of extracted edges must be within a certain range. The

minimum and maximum distance depends on the class of road being extracted. For the

extracted edge to be considered as a road edge it must fulfill the following criteria:

* Extracted pairs of edges should be almost parallel.

* The area enclosed by a pair of parallel edges should have homogenous radiometric
variance along the road segment.

* There should be a road centerline extracted along the center of the extracted road
edges. As in A (Figure 2-14), the black road centerlines lie along the middle of the
extracted white road edges.

The edges are selected as road edges by the local fusion of extracted lines and road

edges. Using the road edge information, road segments are constructed as quadrilaterals









(Figure 2-14) that are generated from the parallel road edges. Quadrilaterals sharing

points with neighboring quadrilaterals are connected. The points on their axis, along with

the road width, represent the geometry of the road segments. This road information is

used as semantic information for the extraction of non-salient parts of the road network in

the next step of Module I.

Nonsalient road extraction. Nonsalient road segments cannot be extracted in the same way as salient road segments, since they are occluded by the presence of cultural (e.g., buildings)

or natural (e.g., trees) objects or their shadows. Thus to extract a non-salient road, there is

a need for additional knowledge compared to the information needed for the extraction of

salient roads. This step of Module I extracts non-salient road segments by linking the

extracted salient roads obtained from the previous step, and assuming that the non-salient

road segments are gaps between salient road segments. In addition to the linking of non-

salient roads, incorrect hypotheses for salient road segments are eliminated at this step.

As most of the road segments extracted by the fusion of local edge and line information

in previous step are short, the linking of correct road segments and the elimination of

false road segments is achieved by grouping salient road segments into longer segments.

This process is performed using the following hypothesis and test paradigm that groups

short salient road segments, bridging the gaps as well as extracting the non-salient road

segments.

Hypotheses concerning which road segments should be bridged are generated

based on the comparison of the geometric (width, collinearity and distance) and

radiometric properties (mean gray value, standard deviation) of the new segment and the









segment to be linked. The road segments are verified through three stages, using the

following hypotheses:

* In the first stage, the radiometric properties of the candidate connection are compared to those of the segments to be linked. If the difference between the radiometric properties is not too great, then the connection hypothesis is accepted.

* If the connection hypothesis is not accepted from the first stage, the "ribbon snake"
approach is applied to find an optimum path to connect the salient road segments. If
this also fails, final verification is performed using local context information.

* The final verification is the weakest form of hypotheses testing, at this stage local
contextual information is used to extract the non-salient roads.









Figure 2-15. Nonsalient road-feature extraction. A) Represents an occluded road segment and the extracted salient road edge in white. B) Represents the occluded road segment extracted using the optimal path. C) Represents the road extracted using optimal width information. D) Represents the road extracted based on the constant-width hypothesis. (Picture courtesy of Baumgartner et al. (1999), Figure 8, Page 8.)

Figure 2-15A illustrates an occluded or non-salient road segment, with the

corresponding extracted salient road segment in white; this is used to give the initial

hypothesis. In (Figure 2-15), B represents the road extracted using the optimal path

process, C represents the road extracted by optimal width verification, and D represents

the road extracted by selection of hypothesis on the basis of constant width. As can be









understood from the results, the road extracted by the hypothesis based on the geometric characteristics of the road gives a better result than any other verification stage.













Figure 2-16. Road linking. This figure illustrates the extracted road edges in white, with their links represented in black and junctions as white dots. (Picture courtesy of Baumgartner et al. (1999), Figure 9, Page 8.)

After the extraction of salient and non-salient roads in Module I, the extracted road segments need to be connected. The connection of road segments is performed in the final step of Module I through road junction linking.

Road junction linking. The hypotheses concerning junctions are based on

geometric calculations. In this step of Module I, the extracted road segments are extended

at their unconnected ends. If an extension intersects with an already existing segment,

then a new road segment is constructed that connects the intersection point with the

extended road. The verification of these new road segments is performed in the same

manner as in the case of non-salient road segment extraction in the previous step.

In A (Figure 2-16), the black dotted lines represent the extension of a road segment to

form a new road segment, and B in (Figure 2-16) illustrates the extracted road segments

with junctions as white dots.

Although this approach leads to the extraction of most of the roads in rural images, it

does not tend to work in the same way in urban images and forest images, as the local









context for rural images is different from that in urban and forest images. In the case of

urban images, the network of roads may be denser, and their appearance may also be

different from the road segments existing in a rural image. The road features extracted in

this Module, i.e. Module I, were based on local context, i.e. within an image. Module I

extracted roads using geometric and radiometric properties of the road segment, and

concentrated on local criteria within an image to extract road edges. Module II performs

extraction on a global context, considering the whole image. In Module II, the topological

properties of the roads are used to extract roads, to support the extraction process

implemented in Module I, and improve upon the extracted results. The road network

extracted in Module II has more road segments than Module I as Module II is less

constrained.

2.3.2.2 Module II

An intrinsic topological characteristic of roads is to connect places. Roads are constructed

along paths that provide the shortest and most convenient way to reach places. This

property leads to searching for an optimal connection between places. The method of

determining the best connection between two points is of importance for road extraction.

This approach is feasible on low-resolution satellite images, as roads exist as linear bright

features, forming a network; they do not do so in high-resolution images, as high-

resolution images are more specific concerning individual road segments and their

geometric and radiometric properties. In this module, the approach adopted is modified to

integrate and process road-like features from various input sources, i.e., lines extracted at

different scales. Module II performs extractions over four steps, i.e. Low-level

processing, Fusion, Graph Representation and Road Network generation. The

information obtained from Module I of the extraction process is passed on as input for









Module II. During the initial step of low-level processing, the roads that exist as long

bright features are extracted. These extracted features are further merged with road edges

extracted by local extraction in Module I, in the fusion step of Module II. The Graph

representation step constructs graphs using the fused road segments from the previous

step, with road segments represented by edges and the junctions of road segments as

vertices. The final step in Module II is to use the output of the Graph representation step

to generate the road network. The discussion below briefly explains each step.

Low-level processing. In this step, road segments are extracted by extracting lines from a low-resolution image. This approach returns lines as sets of pixel chains, as well

as junction points, in sub-pixel precision. Some of the extracted lines may represent

actual roads, and some of the roads that are extracted may not necessarily be roads, they

may be other features such as rivers misidentified as roads. In the analysis of roads, the

behavior of several lines attributes is important, but the most significant change in lines is

high curvature. Hence, from the lines extracted at low-resolution, the lines were split into

road segments and non-road segments, based on points of high curvature, as the

probability of road having a very steep curve is low. If some road segments are

misidentified, or not identified at all, then they will be identified in the next step of the

fusion process. Here each extracted line feature that is classified as a road segment is supported by an extended description, based on the following calculated properties:

* Length

* Straightness, i.e. standard deviation of its direction.

* Width (mean width of the extracted line).

* Constant width (standard deviation of the width).









* Constant radiometric value of a road segment (standard deviation of the intensity
value along the segment).

Fusion. In this step, the road segments obtained from the previous step are

combined with the roads extracted from the local extraction performed in Module I. On

fusion, both types of road segments are stored as one set of linear data. Segments in this

linear data set are unified if they happen to lie within a buffer of suitable width and the two segments have a directional difference less than a set threshold; otherwise they are evaluated as intersections. Overall, after the roads are extracted in this

Module, the result is a more complete network than was extracted in Module I. However,

the process may also result in falsely detected road segments. Next, the extracted output

is represented in the form of a network graph.

Graph representation. Once the segments are fused, a graph is constructed, with

the road segments as edges and vertices as points of connectivity. In cases where two or

more segments intersect, only one point/vertex is retained, to preserve the topology of the

road. Attribute values of road segments, assigned in the low-level processing of this

Module, are used to weight the graph, by associating every edge with a single weight. At

this step of the extraction it is difficult to determine whether a road junction between two

segments truly represents a connection of road segments. Thus, an additional hypothesis

is generated to determine the connections between the edges of the graph. The following

are the criteria that are used to measure the quality of the hypotheses:

* Direction difference between adjacent road segments; either collinearity (within a road) or orthogonality (at a T-junction) is assumed as a reference.

* The absolute length of the connection.

* The relative length of a connection compared to the length of the adjacent road
segment with the lower weight.









* An additional constraint that prevents a connection hypothesis from assigning a
higher weight than its adjacent road segments.

A linear fuzzy function is then defined to obtain fuzzy values for the hypothesis on each

of the above criteria; these values are then aggregated into an overall fuzzy value using

the fuzzy AND operation. For example, a fuzzy function is defined for the difference in direction, to determine collinearity within a road or orthogonality at a junction. To prefer either a continuation of the road segment or to support the idea of a possible road junction, a fuzzy function with two peaks is considered, one at 0° and one at 90°; this supports collinearity and junctions respectively. Thus, a road connection may be classed as either a collinear road segment or a T-junction. This classification can be refined using the other parameters used

for evaluating junction hypotheses; for example the length of the connection as compared

to the length of the road segments to be connected, can be used as a weighting function in

the process of determining whether the connection is a junction or a road segment, by

using the above defined fuzzy value. Next the roads are generated using road seed

generation (points or places of interest to which a road connects) as the final step in

Module II.
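A sketch of such a two-peak fuzzy function is given below; the triangular shape, the 20-degree half-width, and the use of the minimum as the fuzzy AND are illustrative assumptions, not parameters reported by Baumgartner et al. (1999).

    def direction_membership(angle_deg, peak_width=20.0):
        # Fuzzy value for the direction difference between adjacent segments:
        # peaks of 1 at 0 deg (collinear continuation) and 90 deg (T-junction).
        d = abs(angle_deg) % 180.0
        to_peak = min(d, abs(d - 90.0), abs(d - 180.0))   # distance to nearest peak
        return max(0.0, 1.0 - to_peak / peak_width)

    def fuzzy_and(*values):
        # Aggregate the per-criterion fuzzy values with the fuzzy AND (minimum).
        return min(values)

    quality = fuzzy_and(direction_membership(12.0),   # direction criterion
                        0.8,                          # absolute-length criterion (assumed)
                        0.9)                          # relative-length criterion (assumed)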

Road network generation. Here, the road seeds are used in extracting roads, by

determining the optimal path between the seeds representing the origin and destination.

The seeds in this step are points of interest, like buildings and industrial areas.

The algorithm for road network generation finds the shortest path using the Dijkstra algorithm on the weights assigned to the road segments. Weights are assigned to road

segments depending on their fuzzy value. The weight (w) assigned to a road segment is based on the fuzzy value, which varies between 0 and 1, and on the true









distance between the vertices. If a segment does not form a link between vertices, then a

fuzzy value of 0 is assigned, leading to an infinite weight on the road segment, and

thereby removing it during calculation of the shortest path for road network generation.

w_{ij} = \begin{cases} \dfrac{d_{ij}}{r_{ij}} & \text{if vertices } i \text{ and } j \text{ are connected in the original graph and } r_{ij} > 0 \\ \infty & \text{otherwise} \end{cases} \qquad (2\text{-}8)

In Equation 2-8, the weight w_{ij} of the connection between vertices i and j is assigned from r_{ij}, the fuzzy value introduced earlier, and d_{ij}, the Euclidean distance between the vertices. The resulting weight, based on the true distance between the vertices, is used below in generating a road network by determining the optimal path, using the weights as inputs for the Dijkstra algorithm.
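The weighting and shortest-path search can be sketched as follows; the tiny graph is hypothetical, and a hand-rolled Dijkstra is used so the example stays self-contained.

    import heapq, math

    # (i, j) -> (euclidean distance d_ij, fuzzy value r_ij); hypothetical data
    edges = {("A", "B"): (4.0, 0.9), ("B", "C"): (3.0, 0.5), ("A", "C"): (9.0, 0.8)}

    graph = {}
    for (i, j), (d, r) in edges.items():
        w = d / r if r > 0 else math.inf          # Equation 2-8
        graph.setdefault(i, []).append((j, w))
        graph.setdefault(j, []).append((i, w))

    def dijkstra(graph, source):
        # Shortest weighted paths from one road seed to all others.
        dist = {v: math.inf for v in graph}
        dist[source] = 0.0
        heap = [(0.0, source)]
        while heap:
            du, u = heapq.heappop(heap)
            if du > dist[u]:
                continue                          # stale heap entry
            for v, w in graph[u]:
                if du + w < dist[v]:
                    dist[v] = du + w
                    heapq.heappush(heap, (du + w, v))
        return dist

    print(dijkstra(graph, "A"))                   # optimal path costs between seeds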

Most of the road segments are extracted from an input image, through the extraction

processes implemented in Module I and Module II. The extracted road segments exist as

fragments of a disconnected road network. Since some road segments were not extracted

in either Module I or II, the resultant road network is further connected to complete the

network using the functional characteristics of a road, along with verification from the

image using a hypothesis generated in Module III. Module III, the final module of the

extraction process, is implemented over three steps: the first step generates link hypotheses, based on which the extracted road segments are connected through the verification and insertion steps.

2.3.2.3 Module III

In this module, the information about the utility of a road network, its topographical

characteristics and various factors such as environmental restrictions and the locations of









cities and industrial areas, is used in the process of linking extracted road segments. The

results obtained up to this step, through Module I and Module II, along with information

on missing segments, are again used and the whole road is reconstructed based on a

hypothesis generated in this module. This hypothesis is then used in completing the

network at this final stage of the three module extraction process developed by

Baumgartner et al. (1999).








Figure 2-17. Network completion hypothesis. This figure is used as an illustration to
explain the process of extraction using the functional and topological
characteristic of the road explained in detail in this section.

Hypotheses for network completion, as they are implemented in this research, work as

follows. A sample network is shown in (Figure 2-17), with four nodes A, B, C, D being

considered.

Among the set of four points or nodes, the shortest path in the network is determined; optimal paths along diagonals are also considered for evaluation. These distances are evaluated for the shortest path, as this is the best means of fast and cheap transport among a

set of options. The network distance nd in Figure 2-17 depends on the actual length and

the road class along which the shortest path is found, whereas the optimal distance od in

Figure 2-17 depends on factors such as topography, land use and environmental









conservation, given that we have this information readily available for the generation of

hypotheses.

Generation of link hypotheses. A preliminary link hypothesis is defined between

each possible pair of points or nodes. A so-called "detour factor" is calculated for each

preliminary hypothesis as per Equation 2-9. In Figure 2-17 the calculation is done for

each possible pair of nodes (AD and AC).

\mathrm{Detour\ factor} = \frac{\mathrm{Network\ distance}\ (nd)}{\mathrm{Optimal\ distance}\ (od)} \qquad (2\text{-}9)
In this step, potentially relevant link hypotheses are selected. The selection is based on a

detour factor, in the sense that links with locally maximum detour factors are of interest,

and that there is no preferred direction within the road network. Here the link hypotheses

that are generated are verified as per their detour factor. If a link with a higher detour

factor is rejected, then the link with the next highest detour factor is considered for

verification. Verification is carried out based on image data: whether a detour is accepted depends on whether the hypothesis actually matches a road network in the

image. Once a link is accepted, it is included in the road network, thus changing the

topology of the road network. Link hypotheses, once rejected, are not considered again in

the iterative process of hypothesis verification.
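As a small worked example of Equation 2-9 and this selection rule, the following sketch ranks hypothetical links by detour factor; the distances are invented for illustration.

    # (node pair) -> (network distance nd, optimal distance od); hypothetical values
    links = {("A", "C"): (14.0, 6.0), ("A", "D"): (9.0, 7.5), ("B", "D"): (12.0, 5.0)}

    detour = {pair: nd / od for pair, (nd, od) in links.items()}   # Equation 2-9
    # examine hypotheses in decreasing order of detour factor; a rejected link is
    # dropped and the next highest factor is tried
    ordered = sorted(detour, key=detour.get, reverse=True)   # [("B","D"), ("A","C"), ("A","D")]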

Verification of hypotheses. The verification of the hypotheses is carried out in

relation to the image data. In the verification stage, the roads extracted from the prior

Modules are used. Here the link hypothesis is verified against the roads extracted using

the road seed generation from Module II. Verification of the link hypotheses is carried

out by determining the optimal path between the road seeds using the weighted graph. If

the graph provides no connection between two end points, the hypothesis is rejected;









otherwise, if a path is found, it is inserted into the road network and then replaced with a geometrically improved link.


Figure 2-18. Segment insertion.

Insertion of accepted road hypotheses. At this stage, if a road connecting two end points has been verified, the corresponding link hypothesis is accepted. The new road is inserted into the whole road network, as shown in Figure 2-18.

Sections of the new road that overlap with already existing road segments (the redundant

part of new link) in (Figure 2-18) are eliminated. In most insertions, a larger portion of

the new road is left that is then inserted into the network by connecting its two ends to the

nearest points on the network (the red dot in (Figure 2-18) could have been connected to

the blue dot). If the verified segment is not the end of the segment, a junction is

introduced as per the process explained in Module I. In instances where a completely new

link segment is eliminated based on the hypothesis, no portion of the segment is introduced into the road network. Figure 2-19 shows a completely extracted road network from Baumgartner et al. (1999).

Figure 2-19. Extracted road segments. (Picture courtesy of Baumgartner et al. (1999), Figure 10, Page 9.)

Figure 2-19 illustrates the complete road network extracted using the three-module, two-model process developed by Baumgartner et al. (1999). In this method, the road model and context model supported each other through the modules of extraction. Many processes implemented within this technique can be used to develop an individual Semi-automatic road extraction method.

As was discussed earlier in this chapter, many of the modules from Automatic

approaches are implemented in Semi-automatic approaches. The extraction results thus

obtained are evaluated further, based on their connectivity and their deviation from the

reference data. The method of extraction discussed above is an example of road feature









extraction for a rural road. In the case of urban road extraction, information from sources such as digital surface models, along with contextual information, is needed to make the approach automatic.

This chapter was an overview of the various characteristics that affect the road extraction

process, and different approaches to road extraction. Chapters 3 and 4 introduce the

Perona-Malik algorithm and Snakes Theory. In our study a Semi-automatic road feature extraction method is developed, using anisotropic diffusion (rather than Gaussian blurring, an isotropic diffusion), implemented through the Perona-Malik algorithm (explained

in Chapter 3). In Chapter 4, the theory and concept of Snakes, and its implementation for

feature extraction, is explained; it will be implemented in our study to extract road

features from diffused image information using dynamic Snakes. The method of road

feature extraction is explained in Chapter 5 that uses the anisotropic diffusion approach

developed by Perona and Malik and Snakes to extract roads. Chapter 6 discusses the

results obtained, followed by an evaluation and analysis of the results. Chapter 7

concludes the thesis with an overview of the method of extraction implemented in our

study, and the future work to be pursued in this research. The automation of the initial

step of feature identification and the selection of road segments is one of the essential

pieces of work to be carried out in the future. Automation of initial identification, using

the Kalman Filter and Profile matching, is explained, as a possibility for the initial road

identification step, prior to the feature extraction method implementation (Appendix B).














CHAPTER 3
ANISOTROPIC DIFFUSION AND THE PERONA-MALIK ALGORITHM

An image is in general a photometric representation of real world features. Objects

or features from the real world are represented as regions composed of a group of pixels,

typically with similar intensity values, in a digital image. Features or objects represented

in an image may have similar pixel intensity values, at least within each feature or object

existing as a region in an image. Ideally such features may be represented as homogenous

regions within an image, for example buildings, trees, or agricultural land within a high-

resolution image, may be represented as regions with similar pixel intensity values

overall. During the capture and development of this information into a digital image,

noise or undesired information is also generated, affecting the representation of the real-

world features in the image. The noise exists as blobs on the image, with pixel intensity

values different from the overall or average pixel intensity values that represent a

particular region or feature.

Many fields use information extracted from an image for varied purposes, such as medical image analysis. During the process of extraction, the existence of noise leads to

misrepresentation or false feature extraction. A feature extraction method usually extracts

the desired features from an image based on shape and feature boundary descriptions,

obtained through the edge detection step of the feature extraction method. The existence

of noise within an image affects the feature extraction step, as noise results in false edges

being detected that may not exist in the real world and should not exist in the

representation of the feature in the image.









To overcome this problem, noise across the image is minimized by implementing

blurring or smoothing operations; this is done at the initial step of pre-processing in the

feature extraction method. In general, smoothing operations assign each pixel within the

input image a new intensity value that is calculated from the intensity of the pixel values

in its neighborhood in the digital image. This process thereby minimizes variation across

pixels, and consequently reduces the noise within an image. The resultant image is a

blurred or smoothed variant of the original input image. The image obtained from the

pre-processing step is thus significant in extraction of desired features. Below, Section

3.1 explains the principles of isotropic and anisotropic diffusion; this is followed by a

discussion on the need and implementation of anisotropic diffusion in the Perona-Malik

algorithm in Section 3.2. Section 3.2.1 explains the process of intra region blurring,

carried out using the Perona-Malik algorithm, while simultaneously performing Local

edge enhancement, as is explained in Section 3.2.2. This chapter concludes with an

illustration of the algorithm's implementation on an image lattice structure (Malik and

Perona, 1990).

3.1 Principles of Isotropic and Anisotropic Diffusion

Conventional smoothing operations implemented at the pre-processing step are

usually performed using Gaussian, Laplacian, or similar operators. Blurring performed using a

Gaussian system blurs the image by assigning each pixel a value based on the weighted

average of the local pixel intensity values that are calculated using a Gaussian

distribution kernel (Section 2.2.1). Conventional smoothing techniques perform well

when used to minimize variation across the image. The process of blurring performed by

a conventional technique, like the Gaussian, is isotropic in nature; such a technique blurs the whole image in a similar fashion in all directions. This isotropic property of









conventional techniques, while achieving the desired minimization of noise and variation across the image, also blurs the boundaries between regions or features in an image, thereby shifting, or even losing, the locations of the actual boundaries between regions when they are sought in the edge-detection step.

In the method of road feature extraction developed in this thesis, the pre-processing

step of blurring the image is carried out using the Perona-Malik algorithm; it is an

anisotropic diffusion method of blurring, and will be used instead of Gaussian blurring,

an isotropic diffusion technique. This anisotropic diffusion approach blurs regions in an

image based on location information, (i.e., the blurring within an image is carried out

depending on a predefined set of criteria that specify the locations where blurring can be

performed). In this algorithm, blurring is carried out within regions in an image, while

blurring across regions within an image is restricted by the criteria; the criteria are

discussed in this chapter. This method thus preserves the boundary information in the

output-blurred image. The blurred image is then used to extract the desired boundaries

between regions or shapes, after edge detection.


K_\sigma(x, y) = \frac{1}{2\pi\sigma^2} \exp\!\left( -\frac{x^2 + y^2}{2\sigma^2} \right) \qquad (3\text{-}1)

The idea behind the use of the diffusion equation in image processing arose from

the use of the Gaussian filter in multi-scale image analysis (Weeratunga and Kamath,

2001). Equation 3-1 illustrates a Gaussian filter K_σ, where σ is the standard deviation

and x and y represent the coordinates of the generated Gaussian mask. The Gaussian

mask or kernel, generated using Equation 3-1 has cell values corresponding to weights

that are used in calculating new pixel intensity values by convolving with the input image









( Section 2.2.1). Through this convolution, the image is blurred, with a weighted average

value for each pixel arising from the distribution.


\frac{\partial I(x, y, t)}{\partial t} = \nabla^2 I(x, y, t) = \frac{\partial^2 I}{\partial x^2} + \frac{\partial^2 I}{\partial y^2} \qquad (3\text{-}2)

Equation 3-1 can also be written in the form of the diffusion equation, illustrated in

Equation 3-2. In Equation 3-2, I(x, y, t) is the two-dimensional image I(x, y) at time t = 0.5σ², where t denotes the variance. Here time t represents the variance; an increment in the value of t corresponds to, or results in, images at coarser resolutions than the original resolution of the image. As an initial condition, the variance is zero, which represents the original image I(x, y).

I_t = \frac{\partial I(x, y, t)}{\partial t} = \nabla \cdot \big( c(x, y, t)\, \nabla I(x, y, t) \big) \qquad (3\text{-}3)

Equation 3-3 represents a more general form of Equation 3-2. Equation 3-3 is used to calculate an output image at any variance t. In Equation 3-3, c(x, y, t) is the diffusion conductance, or diffusivity, of the equation; ∇ and ∇· in Equation 3-3 are the gradient and divergence operators respectively. The general form illustrated in Equation 3-3 reduces to a linear or isotropic diffusion equation, as shown in Equation 3-2, if the diffusivity c(x, y, t) is kept constant and is independent of the location (x, y) within the input image. This leads to smoothing or blurring in a similar fashion in all directions within the image. Gaussian blurring implemented using Equations 3-1 and 3-2 is an example of isotropic diffusion, as it depends only on the standard deviation σ, and not on the location within the image where the blurring is being carried out.
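For contrast, a minimal illustration of the isotropic case is shown below; the image and the value of sigma are placeholders, not data from this study.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    image = np.random.rand(256, 256)     # placeholder input image
    # One sigma for the whole image, independent of location: noise is smoothed,
    # but region boundaries are blurred along with it.
    blurred = gaussian_filter(image, sigma=3.0)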









The ability of a diffusion method to blur regions within an image based on location criteria is known as anisotropic diffusion: the blurring process becomes image dependent, and is not the same in all directions or at all locations within an image. The implementation of anisotropic diffusion in images is derived from the principle of heat diffusion. Just as the temperature distribution in a region is a function of space and time, the distribution of the intensity gradient, or change in intensity values, in an image is also a function of space and

time. The need to restrict diffusion across boundaries between regions in an image, and to

permit diffusion within regions and along boundaries, leads to the development of a

criterion to be implemented in Equation 3-3, based on the diffusion conductance, or

diffusivity, c(x ,y ,t).

I_t = \frac{\partial I(x, y, t)}{\partial t} = \mathrm{div}\big( c(x, y, t)\, \nabla I \big) = c(x, y, t)\, \Delta I + \nabla c \cdot \nabla I \qquad (3\text{-}4)

Equation 3-4 is an anisotropic diffusion equation that evolved from the general

diffusion equation, shown in Equation 3-2. In Equation 3-4, c(x, y, t) is a symmetric

positive-definite tensor that allows diffusion parallel to the gradient and limits any

diffusion perpendicular to the gradient, thereby restricting blurring across edges. Here div is the divergence operator, and ∇ and Δ are the gradient and Laplacian operators respectively.

Malik and Perona (1990) developed an algorithm converting linear diffusion into non-linear, or anisotropic, diffusion that takes place depending on location: diffusion occurs within regions and along boundaries, while it is restricted across edges in an









image. The anisotropic diffusion thus implemented in the Perona-Malik algorithm is

carried out locally at the pixel level, and in its neighborhood, based on the "c" value. In

addition to the diffusivity "c", conductance "K" is also used to perform blurring within

regions while enhancing the local edges; this process is explained later in this chapter.

Section 3.2 explains anisotropic diffusion implementation using "c" and "K" values in

the Perona-Malik algorithm.









Figure 3-1. Anisotropic diffusion using Perona-Malik algorithm. Red block highlights
well defined edge boundary of the intersection.










Figure 3-2. Isotropic diffusion using Gaussian. Green block highlights the blurred and less well defined edge boundary of the same intersection as in Figure 3-1.

As can be seen in Figure 3-1, the Perona-Malik algorithm, an anisotropic diffusion

process preserves, and gives a better representation of, the boundaries of road

intersections as compared to the boundary information in Figure 3-2, obtained through

Gaussian blurring, an isotropic diffusion process. The boundaries of the road intersection are blurred more in Figure 3-2, shown in the green block, than in Figure 3-1, shown in the red block. The road edges extracted from the Perona-Malik algorithm give a more complete









and accurate set of road edge information, than would result from the information

obtained from a Gaussian blurring process. The Perona-Malik algorithm was implemented in the pre-processing stage of the road feature extraction method developed in our study for the following reasons:

* Its ability to implement intra region smoothing, without inter region smoothing.

* Region boundaries are sharp and coincide with meaningful boundaries at that
particular resolution (Malik and Perona, 1990).

Section 3.2 further explains the implementation of the Perona-Malik algorithm

through intra region blurring and local edge enhancement that is performed using the

diffusivity "c" value and conductance "K" value.

3.2 Perona-Malik Algorithm for Road Extraction

From a road feature extraction perspective, this algorithm would help retain the

much needed edge information that is essential in delineating road edges for extraction;

whilst it also preserves the radiometric characteristics of the road across the image, by

preventing blurring across regions. Hence, using a road's uniform radiometric

characteristics, along with semantically meaningful geometric properties representing the

road edges, the initial step of road identification and road seed generation could be

automated, although this process is performed manually in the method developed in this thesis (Section 5.2.1). The identified road segments are further used as inputs for the feature extraction method implemented using Snakes (Chapter 4) in this thesis.

Roads are represented as long network structures, with constant width at fine

resolutions, and as bright lines in low-resolution images. The diffusion process is

implemented in high-resolution images, rather than low resolution, as on blurring a low-

resolution image the roads existing as bright lines would disappear. The process of









obtaining a coarse scale (blurred) image, from the original image, involves convolving

the original image with a blurring kernel. In the case of an image, I(x, y), at a coarse scale

"t", where t represents the variance, the output image is obtained by convolving the input

image with a Gaussian kernel K as was illustrated in Equation 3-1.


I(x, y, t) = I_0(x, y) * K(x, y, t) \qquad (3\text{-}5)

Equation 3-5 represents scale-space blurring, convolving an original image I_0 with a Gaussian kernel whose blurring depends on the variance t. Increasing the time value (variance) leads to the production of coarser

resolution images. The success of blurring within regions and along region boundaries, as

per the principle of the Perona-Malik algorithm, depends on determining the boundaries

between regions in an image; this is done based on the value of c(x, y, t). Blurring is

carried out within regions in an image, depending on the value of the coefficient of

conductance or the value of diffusivity, "c". This can be achieved by assigning the value

of 1 to the diffusivity "c" within regions, and 0 at the boundaries (Perona and Malik,

1990). However, we cannot assign the conduction coefficient value to each pixel or

location within an image, a priori, as the boundaries between regions are not known.

Instead, the location of boundaries between regions is estimated, as explained in further detail in Section 3.2.1, in order to assign diffusivity values and to perform intra region blurring.

3.2.1 Intra Region Blurring

Assume the existence of an original image I(x, y), with the blurring represented by t, the scale to which the image is to be blurred. At a particular scale t, if the location of boundaries between regions were known, the conduction coefficient c(x, y, t), defined in Equation 3-4, could be set to 1 within regions and 0 at the boundaries, as was









discussed earlier. This would result in blurring within regions, whilst the boundaries are

kept sharp and well defined.

The problem is that the boundaries at each scale are not known in advance. The

location of boundaries is instead estimated at the scale of the input image (Malik and

Perona, 1990). The estimation of the location of boundaries is carried out as follows.

Let E(x, y, t) be a potential edge at a particular scale t; it is a vector-valued function with the following properties: the value of the potential edge is set to 0 if the pixel or location lies within a region; otherwise it is assigned a value that is the product of the conductance (K), or local contrast, and a unit vector normal to the edge at the given location (e):

E(x, y, t) = 0 if the pixel is within the region

E(x, y, t) = K e(x, y, t) at the edge point


Figure 3-3. Nonlinear curve. This curve represents the magnitude of gradient used for
estimating boundary locations within an image.

Here, e is a unit vector normal to the edge at a given point, and K is the local contrast (i.e., the difference in image intensities on the left and right of the edge), equivalent to the flux in a heat diffusion equation. Once an estimate of the edge is









available, E(x, y, t), the conduction coefficient c(x, y, t) is set to g(‖E‖), a function of the magnitude of E. The value of g(·) is non-negative and is a monotonically decreasing function, with g(0) = 1, as illustrated in Figure 3-3.

Once the diffusivity has been estimated for all locations within an image, diffusion is carried out in the interior of regions, where E = 0, while diffusion is restricted along boundaries between regions, where the magnitude of E is large, thus preserving the boundaries of the roads at each scale of the image. The remainder of this section explains how the diffusion coefficient, chosen as a local function of the magnitude of the gradient of the brightness function within the image (Malik and Perona, 1990), preserves and sharpens the boundary through the appropriate selection of the g(·) function.

c(x, y, t) = g\big( \lVert \nabla I(x, y, t) \rVert \big) \qquad (3\text{-}6)
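For concreteness, the two conductance functions proposed by Perona and Malik (1990) can be written as the short sketch below; both are non-negative, monotonically decreasing, and satisfy g(0) = 1, with K acting as the contrast parameter.

    import numpy as np

    def g_exp(grad_mag, K):
        # favors high-contrast edges over low-contrast ones
        return np.exp(-(grad_mag / K) ** 2)

    def g_frac(grad_mag, K):
        # favors wide regions over smaller ones
        return 1.0 / (1.0 + (grad_mag / K) ** 2)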

In general, scale space blurring of images is used to obtain coarse resolution images; this helps in filtering out the noise, but also loses a lot of edge information in the process, leading to the problem of blurred edges in the image. In anisotropic

diffusion as implemented by Malik and Perona (1990), the conduction coefficient, also known as diffusion conductance (Equation 3-6), is chosen to be an appropriate function of

the magnitude of the local image gradient. This is chosen as it enhances edges, while

running forward in time/scale, keeping the stability of the diffusion principle (Malik and

Perona, 1990). Section 3.2.2 explains the concept of the edge enhancement process acting locally while the diffusion steps forward in time to derive coarse scale images.









3.2.2 Local Edge Enhancement

This Section explains how the edges in an image are enhanced, during the process

of blurring within regions, from the prior scale or time step. Malik and Perona (1990)

modeled an edge as a step function convolved with a Gaussian mask, as is expressed in

Equation 3-7

\mathrm{div}\big( c(x, y, t)\, \nabla I \big) = \frac{\partial}{\partial x}\big( c(x, y, t)\, I_x \big) \qquad (3\text{-}7)

To explain the concept, it is assumed that the edge is aligned with the y-axis (Malik and Perona, 1990). Here c, the diffusivity or conductance coefficient, is chosen to be a function of the gradient of I, as illustrated in Equation 3-8:

c(x, y, t) = g(I_x(x, y, t)) (3-8)

Let φ(I_x) = g(I_x)·I_x denote the flux of intensity between pixels along x. Thus, the 1-D version of the diffusion equation becomes

I_t = ∂/∂x (φ(I_x)) = φ'(I_x)·I_xx (3-9)

The interest here lies in determining the variation in time of the slope of the edge, ∂/∂t (I_x). If c(.) > 0, the function I(.) is smooth, and the order of differentiation may be inverted:

∂/∂t (I_x) = ∂/∂x (I_t) = φ''(I_x)·I_xx² + φ'(I_x)·I_xxx (3-10)

Instead of differentiating the image with respect to the time step t, the image at a particular scale t is differentiated in space. As is explained in Malik and Perona (1990), if the edge is oriented such that I_x > 0, then at the point of inflection I_xx = 0 and I_xxx << 0, as the point of inflection corresponds to the point of maximum slope (Ivins and Porill, 2000). This has the result that, in the neighborhood of the point of inflection, ∂/∂t (I_x) has a sign opposite to φ'(I_x).

If φ'(I_x) > 0, the slope of the edge will decrease with time, and if φ'(I_x) < 0 the slope of the edge will increase with time. There should not be an increase in the slope of the edge with time, as this would contradict the maximum principle, which states that no new information should be formed in coarse images derived from the original image (Malik and Perona, 1990). Thus a threshold is set, based on the value of K, below which φ(.) is monotonically increasing and above which it is monotonically decreasing, giving the desirable result of blurring small discontinuities whilst enhancing and sharpening strong edges (Malik and Perona, 1990). A later section of this chapter explains the whole process of anisotropic diffusion, carried out on a square lattice as an example.
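As a concrete illustration of this threshold (a worked example added here, using the exponential conductance later given in Equation 3-18), let g(s) = e^(−(s/K)²). The flux is then φ(s) = s·g(s) = s·e^(−(s/K)²), and its derivative is

φ'(s) = e^(−(s/K)²)·(1 − 2s²/K²)

which is positive for s < K/√2 and negative for s > K/√2. Gradients weaker than K/√2 are therefore blurred away, while stronger gradients are enhanced, with K acting as the contrast threshold separating the two behaviors.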

3.3 Anisotropic Diffusion Implementation

This section explains anisotropic diffusion on a square lattice, with brightness values associated with the vertices. The equation for anisotropic diffusion is discretized for a square lattice. In Figure 3-4, the brightness values are associated with the vertices and the conduction coefficients are shown along the arcs. Equations 3-11 and 3-12 are, respectively, the general and discrete representations of anisotropic diffusion for the square lattice shown in Figure 3-4, which represents an image subset.

I_t = div(c(x, y, t)∇I) = c(x, y, t)·ΔI + ∇c·∇I (3-11)

I_{i,j}^{t+1} = I_{i,j}^t + λ[c_N·∇_N I + c_S·∇_S I + c_E·∇_E I + c_W·∇_W I]_{i,j}^t (3-12)









[Figure 3-4: a square lattice with the pixel brightness I_{i,j} at the center, its four neighbors I_N, I_S, I_E and I_W at the adjacent vertices, and the conductances c_N, c_S, c_E and c_W along the connecting arcs.]

Figure 3-4. Square lattice example. This example explains the working of the Perona-Malik algorithm, with the vertices representing the image pixels and the lines representing the conductances.

In the discrete anisotropic diffusion of Equation 3-12, a four-neighbor discretization of the Laplacian operator is used, where 0 ≤ λ ≤ 1/4 and N, S, E and W are subscripts for the vertex locations in each direction; the symbol ∇ here represents the difference between nearest neighbors in the lattice structure, not the gradient:

∇_N I_{i,j} = I_{i-1,j} − I_{i,j}

∇_S I_{i,j} = I_{i+1,j} − I_{i,j}

∇_E I_{i,j} = I_{i,j+1} − I_{i,j}

∇_W I_{i,j} = I_{i,j-1} − I_{i,j} (3-13)








The conduction coefficients, or diffusion conductances, are updated at every iteration as a function of the brightness gradient, as shown in the list of conductances in Equation 3-14:

c_{N_{i,j}}^t = g(||(∇I)_{i+(1/2),j}^t||)

c_{S_{i,j}}^t = g(||(∇I)_{i-(1/2),j}^t||)

c_{E_{i,j}}^t = g(||(∇I)_{i,j+(1/2)}^t||)

c_{W_{i,j}}^t = g(||(∇I)_{i,j-(1/2)}^t||) (3-14)

Perona and Malik, in their paper "Scale-space and edge detection using anisotropic diffusion", proved that image information at the next scale lies between the maximum and minimum values in the neighborhood of the pixel under consideration at the previous time step or scale. Hence, with λ ∈ [0, 1/4] and c ∈ [0, 1], the maximum and minimum of the neighbors of I_{i,j} at iteration t are (I_M)_{i,j}^t = max{(I, I_N, I_S, I_E, I_W)_{i,j}^t} and (I_m)_{i,j}^t = min{(I, I_N, I_S, I_E, I_W)_{i,j}^t}. Thus, the new value at t+1, I_{i,j}^{t+1}, lies between the maximum and minimum values in its neighborhood, as illustrated in Equation 3-15.

(I_m)_{i,j}^t ≤ I_{i,j}^{t+1} ≤ (I_M)_{i,j}^t (3-15)

Hence, it is not possible for local maxima or minima to appear in the interior of the discretized scale space.

Writing out the update of Equation 3-12,

I_{i,j}^{t+1} = I_{i,j}^t + λ[c_N·∇_N I + c_S·∇_S I + c_E·∇_E I + c_W·∇_W I]_{i,j}^t

= I_{i,j}^t (1 − λ(c_N + c_S + c_E + c_W)_{i,j}^t) + λ(c_N·I_N + c_S·I_S + c_E·I_E + c_W·I_W)_{i,j}^t

the new value is bounded above by

I_{i,j}^{t+1} ≤ (1 − λ(c_N + c_S + c_E + c_W)_{i,j}^t)(I_M)_{i,j}^t + λ(c_N + c_S + c_E + c_W)_{i,j}^t (I_M)_{i,j}^t = (I_M)_{i,j}^t (3-16)

Similarly,

I_{i,j}^{t+1} ≥ (1 − λ(c_N + c_S + c_E + c_W)_{i,j}^t)(I_m)_{i,j}^t + λ(c_N + c_S + c_E + c_W)_{i,j}^t (I_m)_{i,j}^t = (I_m)_{i,j}^t (3-17)

The scale space diffused edges can be obtained using either of the following functions for g(.), as used by Perona and Malik in their work to blur images using anisotropic diffusion.

g(∇I) = e^(−(||∇I||/K)²) (3-18)

g(∇I) = 1 / (1 + (||∇I||/K)²) (3-19)
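To make the discrete scheme of Equations 3-12 through 3-14 concrete, the following is a minimal NumPy sketch of one way to implement this diffusion; the function name, the parameter defaults, and the replicated-border handling are illustrative assumptions, not the exact implementation used in this thesis.

import numpy as np

def perona_malik(image, n_iters=20, K=15.0, lam=0.25, option=1):
    # Anisotropic diffusion after Perona and Malik (1990).
    # option=1 uses the exponential g of Equation 3-18 (favors
    # high-contrast edges); option=2 uses the rational g of
    # Equation 3-19 (favors wide regions).
    I = image.astype(np.float64)
    for _ in range(n_iters):
        # Nearest-neighbor differences (Equation 3-13), with
        # replicated borders so the image size is preserved.
        P = np.pad(I, 1, mode="edge")
        dN = P[:-2, 1:-1] - I
        dS = P[2:, 1:-1] - I
        dE = P[1:-1, 2:] - I
        dW = P[1:-1, :-2] - I
        # Conductances updated from the local gradient (Equation 3-14).
        if option == 1:
            g = lambda d: np.exp(-(d / K) ** 2)       # Equation 3-18
        else:
            g = lambda d: 1.0 / (1.0 + (d / K) ** 2)  # Equation 3-19
        cN, cS, cE, cW = g(dN), g(dS), g(dE), g(dW)
        # Discrete update (Equation 3-12); lam <= 1/4 keeps the
        # scheme within the maximum principle of Equation 3-15.
        I = I + lam * (cN * dN + cS * dS + cE * dE + cW * dW)
    return I

Because lam does not exceed 1/4 and g(.) lies in [0, 1], each new pixel value stays between the maximum and minimum of its neighborhood, so no new extrema are created as the scale increases.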

The scale spaces generated by these two functions differ in the edges they favor: the first function (Equation 3-18) prioritizes high contrast edges over low contrast edges, whereas the second function (Equation 3-19) favors wide regions over smaller ones. This chapter has presented an explanation of the Perona-Malik algorithm, and of how it detects edges through the scale-space of an image using anisotropic diffusion. The main reason for implementing this approach in road extraction is to get appropriate edge information at each scale, and to obtain a uniform radiometric variance within the desired features. In this thesis, road edges are detected using information from the diffused image, and then extracted using Snakes (deformable contour models). Snakes, as implemented in this thesis, use the information about an edge, gained from the diffused image around the position of each snaxel, in the process of relocating the snaxels closer to the road edges. A detailed discussion of this process, with an explanation of the concept of dynamic programming and Snakes, is provided in Chapter 4, which introduces the working of a Snake and its implementation using dynamic programming.














CHAPTER 4
SNAKES: THEORY AND IMPLEMENTATION

There are numerous methods to extract road features from edges detected in an aerial image. In this research, road feature extraction is performed using Snakes (Kass et al. 1988) on an image that was pre-processed using the Perona-Malik algorithm (Malik and Perona, 1990), explained in Chapter 3. A Snake is a vector spline representation of a desired boundary that describes the shape of an object or feature in an image, existing as a group of edges detected from a pre-processed image. This vector is obtained by concatenating the snaxels, or points, initially located close to the desired edge of the feature in the image, and then recursively relocating and concatenating them to align to the desired shape in the image. In our study, in working toward the objective of extracting road segment edges from an aerial image, an initial set of road point locations, or snaxels, is generated and used as input to be recursively relocated, aligning the points to the edge over a series of iterations until the desired shape is obtained.

The reason for implementing Snakes on the Perona-Malik processed image is the unique nature of the Perona-Malik algorithm, which blurs the image within regions while preserving boundaries and edges, as was discussed in Chapter 3. This process retains and further defines the boundaries of the road edges in the image, which is significant in the process of extracting road edges, as it is the edge information that is needed for the Snake's implementation. According to Kass et al. (1988), a Snake is defined as an energy minimizing spline, guided by external forces and influenced by image forces that pull the spline toward desired objects, which are defined and predetermined by the user, as is discussed in further detail in this chapter.

Snakes are also called Active Contour Models; 'active' because of their habit of exhibiting dynamic behavior, recursively relocating the snaxels to align the Snake to the desired feature in the image. When implementing a Snake on an image, the first step in the process of extracting the desired object is an initialization, where a set of points is placed close to the desired feature. This set of points, or snaxels, can be generated automatically or semi-automatically. In a semi-automatic approach, the user needs to select the points in or around the vicinity of the desired object; in the case of roads, points are placed randomly, but close to the road edge features in the image. In the case of automatic approaches, the desired features are identified automatically, and this process is followed by the generation of road seeds/points, or snaxels.

Snakes relocate the snaxels from their initial positions recursively; they do this by moving each snaxel individually, to minimize its energy and the overall energy of the snake, so as to get the best possible alignment of the snake to the shape of the desired feature in the image. This set of points, known as snaxels, is iteratively moved closer to the original location of the edge, using either dynamic programming or the gradient descent technique to minimize the overall energy of the snake, as will be explained in detail in Section 4.2.

What follows is a discussion of the theory and concept behind Snakes and their implementation. The basic mathematical explanation of Snakes is based on Euler's theory, as implemented by Kass et al. (1988), and is given in Section 4.1; their implementation, and how they are used in the process of road feature extraction, is explained in Section 4.2.

4.1 Theory

Snakes are splines, or deformable contours, that take different shapes based on a given set of constraints. Various forces act on Snakes to deform them, so as to align them closely to the desired object; in general these forces can be classified as internal forces, image forces and external forces, as is discussed in detail later in this section. The internal force (Section 4.1.1), the energy developed due to bending, serves to impose a smoothing constraint that produces tension and stiffness in the Snake, restricting its behavior so as to fit the desired object using minimal energy. The image forces (Section 4.1.3) push the Snake toward the desired edges or lines. External constraints (Section 4.1.2) are responsible for placing the snake near the desired local minimum; they can be either manually specified by the user or automated.

Geometric curves can be as simple as a circle or a sine curve, represented mathematically as x² + y² = 1 and y = sin(x), respectively. Mathematical representations of splines, or higher order curves, are much more complex in nature than sine and circular curves.

To initialize a Snake, a spline is produced by picking a desired set of points in the image that are in the vicinity of the edge of the desired object. Snakes are also called deformable contours, and they are supposed to pass through points that have similar characteristics. Snaxels, or road points, that form a Snake are located on pixels that have intensity values similar to the desired object and are spread along the road feature. The Snake is started as a contour traced through this set of points that represents the edges of the desired feature in the image.

Figure 4-1. Snaxel and snakes. The Snake (active contour model), in yellow, with snaxels in red, is relocated iteratively through an energy minimization process to align the snake to the road edge.

The initialization process can be manual or automated; automation of the initialization can be done using high-level image processing techniques. Figure 4-1 is a sketch giving a visual illustration of Snake initialization points, or snaxels (red points), and the Snake as a contour (yellow). Here the red points represent the initial Snake points, and the yellow spline is the deformable contour, or Snake, whose shape changes depending on the relocation of the snaxels, also called Snake or road points in our study.

Snakes cannot simply detect road edge features and align themselves to the desired feature's boundary or shape; they first need some high level information (i.e., someone to place them near the desired object). In this research, snaxels, or edge points, are relocated iteratively to deform the Snake, or contour, to align it to the desired feature, by minimizing the total energy of the Snake.

[Figure 4-2: panel A shows the orientation of the snaxels forming a snake; panels B and C plot the snaxel coordinates X(s) and Y(s) against the parameter s.]

Figure 4-2. Scale space representation of Snake. A) Represents the orientation of snaxels forming a snake. B) Represents the position of a snaxel along x based on s. C) Represents the position of a snaxel along y based on s.

The elements of the Snake/contour, its snaxels (i.e., the points forming the Snake), are influenced by space and time parameters, and can be represented on the basis of space and time as follows. Consider each snaxel position (the red points in Figure 4-1) to have coordinates x(s,t) and y(s,t) that depend on the parameters s (space) and t (time/iteration); this is explained in Figure 4-2, where the space parameter s represents the spatial location along an edge in the image, and t represents the time step, or iteration, of the energy minimization process. The contour constructed through these snaxels (Snake elements) is affected by the energy developed from internal and external constraints and image forces; Sections 4.1.1 through 4.1.3 explain these constraints. These forces move the snaxels over time and space to new coordinates, while minimizing the energy of each individual snaxel and of the whole snake.

The objective is to minimize the overall energy so as to align the Snake to lie over the desired edge; the energy minimization process is what allows the Snake to detect the desired edge. Here, the energy possessed by the contour, E_snake, is the sum of three energy terms (i.e., internal, external and image). The image term, also known as the potential energy, is developed by processing the image; it is the force that pulls the Snake toward the desired edge objects and is used to detect lines, edges, and terminations in the image. The total energy of a Snake is the sum of the energies of the snaxels that form the snake, or deformable contour. The position of a snaxel can be parametrically represented as shown in Equation 4-1.

V(s) = (x(s), y(s)) (4-1)

Thus, the contour in A (Figure 4-2) can be represented as:

V(s) = [x(s), y(s)]^T, s ∈ [0, 1] (4-2)

Here the Snake represented by Equation 4-2 is composed of a number of snaxels whose locations (i.e., their x and y coordinates) are indexed by the value of s, which is restricted to fall between 0 and 1. The objective is to align the Snake to the desired object; this can be achieved by minimizing the total energy of the Snake (i.e., the sum of the energies of the individual snaxels forming the Snake, or contour):

E_snake = ∫_0^1 E_element(V(s)) ds (4-3)









Equation 4-3 expresses the total energy of the Snake as the integral of the energy of the individual Snake elements, or snaxels, forming the Snake in Figure 4-1. Thus, the energy of a Snake, or contour, as an integral over the snaxels forming the Snake, with forces affecting the energies of each individual snaxel s, is expressed below in Equation 4-4.

E_snake = ∫_0^1 E_element(V(s)) ds
        = ∫_0^1 E_int(V(s)) ds + ∫_0^1 E_extern(V(s, t)) ds + ∫_0^1 E_image(V(s, t)) ds (4-4)

Here, ∫_0^1 E_int(V(s)) ds is the internal constraint that provides the tension and stiffness, requiring the snake to be smooth and continuous.

∫_0^1 E_extern(V(s, t)) ds is the external constraint, taken from an external operation that imposes an attraction or repulsion on the Snake; such external factors can be human operators or automatic initialization procedures.

∫_0^1 E_image(V(s, t)) ds, also known as the potential energy, is used to drive the contour toward the desired features of interest, in this case the edges of the road in the image.
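In the discrete implementation described in Section 4.2, the integral of Equation 4-4 is approximated by a sum over the n snaxels (a restatement in discrete form, not an additional equation from Kass et al.):

E_snake ≈ Σ_{i=0}^{n-1} [E_int(v_i) + E_extern(v_i) + E_image(v_i)]

where v_i = (x_i, y_i) is the ith snaxel. It is this sum that the dynamic programming procedure of Section 4.2 minimizes, one snaxel at a time.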









[Figure 4-3: two panels, A and B, showing a contour tracing a circle under high and low internal energy.]

Figure 4-3. Internal energy effect. A) Represents the shape of the contour due to high internal energy. B) Represents the shape of the contour due to low internal energy.

4.1.1 Internal Energy

The internal energy of a Snake element is composed of two terms: a first order term controlled by α(s), and a second order term controlled by β(s). The first term makes the Snake act like a membrane or elastic band, by imposing tension on the snake, while the second order term makes the Snake act like a stiff metal plate to resist bending. The relative values of α(s) and β(s) control the membrane and thin plate terms (Kass et al. 1988). Thus, the internal energy of the spline can be expressed as in Equation 4-5:

E_int(V(s)) = (1/2)(α(s)·||V_s(s)||² + β(s)·||V_ss(s)||²) (4-5)

In Figure 4-3, the objective is to trace the edge of the circle using Snakes. If the internal energy is kept high, the Snake remains stiff; A in Figure 4-3 represents the shape of the contour when the energy is high, and B the shape of the contour when the energy is low. Thus, increasing α increases the stiffness of the contour, as it serves as a tension component, while keeping it low leaves the contour more flexible.
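As an illustration of Equation 4-5, the following is a minimal finite-difference sketch of the internal energy of a discrete snake; the function name and the uniform alpha and beta weights are assumptions made for this example.

import numpy as np

def internal_energy(snaxels, alpha=1.0, beta=1.0):
    # Discrete internal energy of a snake (Equation 4-5).
    # snaxels: (n, 2) array of (x, y) snaxel coordinates.
    v = np.asarray(snaxels, dtype=np.float64)
    d1 = np.diff(v, n=1, axis=0)   # first derivative V_s (tension)
    d2 = np.diff(v, n=2, axis=0)   # second derivative V_ss (stiffness)
    tension = np.sum(d1 ** 2)
    stiffness = np.sum(d2 ** 2)
    return 0.5 * (alpha * tension + beta * stiffness)

A straight, evenly spaced run of snaxels keeps the stiffness term at zero, while any bend raises it; raising beta therefore resists bending and raising alpha resists stretching, matching the membrane and thin-plate behavior described above.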









4.1.2 External Energy

This energy is derived from processes initialized either manually or automatically. Either a manual or an automatic process can be used to control the attractive and repulsive forces that move the contour model toward the desired features. The energy generated here is a spring-like force (Kass et al. 1988). One point is considered to be fixed, the prior position of a snaxel, and another point is taken to be free in the image, the estimated current position of a snaxel, where it may be relocated at a given iteration. This energy is developed between the snaxels (the pixel points where the points are located) and another point in the image that is considered fixed. The mathematical representation of this energy is as follows: consider u to be a snake point and v to be a fixed point in the image (Ivins and Porill, 2000); an attractive, spring-like external energy is given by:

E_extern = k·||v − u||² (4-6)

This energy is minimal when u = v, that is, when the image point and the Snake point coincide, and it grows with the square of the separation, scaled by the spring constant k. Along the same lines, a part of the image can be made to repel the contour:

E_extern = k / ||v − u||² (4-7)

This energy becomes infinite when v = u, and falls away as the contour moves farther from the fixed point.

In Figure 4-4, the fixed end represents a point in the image, and the free end is a Snake point. Spring-like forces developed between the Snake point and the fixed point in the image add an external constraint to the Snake that is implemented as an external energy component in the development of the Snake.























Figure 4-4. Spring force representation. This force aligns the snake to the desired edge based on user information, as explained in this section.
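A tiny sketch of the two external energies above; the quadratic attraction and inverse-square repulsion follow the reconstructed Equations 4-6 and 4-7, and the function names are assumptions for this example.

import numpy as np

def spring_attraction(u, v, k=1.0):
    # Attractive external energy (Equation 4-6): minimal when u == v.
    return k * np.sum((np.asarray(v, float) - np.asarray(u, float)) ** 2)

def volcano_repulsion(u, v, k=1.0, eps=1e-9):
    # Repulsive external energy (Equation 4-7): grows without bound
    # as the snake point u approaches the fixed point v.
    return k / (np.sum((np.asarray(v, float) - np.asarray(u, float)) ** 2) + eps)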

4.1.3 Image (Potential) Energy

To make the Snake move toward the desired feature, we need energy functionals: functions that attract the Snake toward edges, lines, and terminations (Kass et al. 1988). Kass et al. (1988) developed three such functionals; they are shown below, along with their weights.

By adjusting the weights of these three terms, the Snake's behavior can be drastically altered. In our study the nearest local minimum of the potential energy is found using dynamic programming, as explained in Section 4.2; dynamic programming is therefore applied for implementing the Snakes that extract road edge features from an aerial image.

E_image = w_line·E_line + w_edge·E_edge + w_term·E_term (4-8)

x -> x + δx (4-9)

Here, the image forces, δx, produced by each of the terms in Equation 4-8, are derived below in Sections 4.1.3.1 to 4.1.3.3.









4.1.3.1 Line-functional (E_line)

This is the simplest image functional of the three terms in Equation 4-8:

E_line = ∫_0^1 I(x(s)) ds (4-10)

If the image intensity at a pixel is taken as E_line, then depending on the sign of w_line in Equation 4-8, the Snake will be attracted either to dark or to light lines. Thus, subject to the other constraints, the Snake will align with the nearest darkest or lightest contour of image intensity in the vicinity (Kass et al. 1988).

The image force δx is proportional to the gradient in the image, as expressed in Equation 4-11:

δx ∝ −∇I(x) (4-11)

Thus, a local minimum near a snaxel can be found by taking small steps in x:

x -> x − τ∇I(x) (4-12)

where τ is the positive time step used to find the local minimum.

4.1.3.2 Edge-functional (E_edge)

Edges in an image can be found using a simple energy functional:

E_edge = −||∇I(x)||² (4-13)

Here the Snake is attracted to contours with large image gradients. Edges can also be found using gradient-based potential energies of the form:

E_edge = −∫_0^1 (1/2)||∇I(x(s))||² ds (4-14)

As an example, if x = (x, y) has potential energy P(x) = −||∇I(x)||², then the image force acting on the element is given by:

δx ∝ −∂P/∂x = ∂/∂x (||∇I(x)||²) = 2∇(∇I(x))·∇I(x) (4-15)

Hence strong edges can be found using Equation 4-16:

x -> x + τ∇(∇I(x))·∇I(x) (4-16)
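A minimal sketch of this edge-attraction step, using finite differences for the image gradient; the unsmoothed gradient, the step size tau, and the function name are assumptions made for illustration.

import numpy as np

def edge_force_step(image, snaxels, tau=0.1):
    # Move snaxels uphill on ||grad I||^2 (Equations 4-15 and 4-16).
    # image: 2-D float array; snaxels: (n, 2) array of (row, col).
    gy, gx = np.gradient(image.astype(np.float64))  # grad I
    mag2 = gx ** 2 + gy ** 2                        # ||grad I||^2
    fy, fx = np.gradient(mag2)                      # grad of ||grad I||^2
    v = np.array(snaxels, dtype=np.float64)
    r = np.clip(np.round(v[:, 0]).astype(int), 0, image.shape[0] - 1)
    c = np.clip(np.round(v[:, 1]).astype(int), 0, image.shape[1] - 1)
    v[:, 0] += tau * fy[r, c]   # step each snaxel toward stronger edges
    v[:, 1] += tau * fx[r, c]
    return v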

4.1.3.3 Term-functional (E_term)

Term functionals are used to find the end points, or terminations, of line segments and corners in an image. To do this, the curvature of the level lines in a slightly smoothed image is used:

C(x, y) = G_σ(x, y) * I(x, y)

Here C(x, y) is the Gaussian-convolved image, with standard deviation σ. The gradient direction/angle is given by:

θ = tan⁻¹(C_y / C_x) (4-17)

where n = (cos θ, sin θ) and n⊥ = (−sin θ, cos θ) are unit vectors along and perpendicular to the gradient direction at (x, y), respectively. Using this information, the curvature of the level contours in C(x, y) is determined using Equation 4-18:

E_term = ∂θ/∂n⊥ = (∂²C/∂n⊥²) / (∂C/∂n) = (C_yy·C_x² − 2·C_xy·C_x·C_y + C_xx·C_y²) / (C_x² + C_y²)^(3/2) (4-18)

Equation 4-18 helps to attract the Snake to corners and terminations.
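For completeness, a short sketch of Equation 4-18 as reconstructed above, computed with NumPy and SciPy; the function name, the default sigma, and the availability of SciPy are assumptions for this example.

import numpy as np
from scipy.ndimage import gaussian_filter

def termination_energy(image, sigma=2.0):
    # Curvature of the level lines of a slightly smoothed image
    # (Equations 4-17 and 4-18).
    C = gaussian_filter(image.astype(np.float64), sigma)
    Cy, Cx = np.gradient(C)        # first derivatives
    Cyy, Cyx = np.gradient(Cy)     # second derivatives
    Cxy, Cxx = np.gradient(Cx)
    num = Cyy * Cx ** 2 - 2.0 * Cxy * Cx * Cy + Cxx * Cy ** 2
    den = (Cx ** 2 + Cy ** 2) ** 1.5 + 1e-12  # avoid division by zero
    return num / den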

4.2 Snakes Implementation









Section 4.1 discussed the theory and working principles of Snakes, based on various energy functions. The objective is to get the desired Snake, or deformable contour, to align with the boundary edge; to do this, the overall energy of the Snake, the sum of the energies of the individual snaxels forming the Snake, must be minimized. So the aim is to optimize the deformable contour model by minimizing this energy function, finding the contour that minimizes the total energy. From the discussion in Section 4.1, the energy E of the active contour model v(s) is:

E(v(s)) = ∫_0^1 P(v(s)) ds + ∫_0^1 (α/2)||v_s(s)||² ds + ∫_0^1 (β/2)||v_ss(s)||² ds (4-19)

In Equation 4-19, the first term is the potential energy, and the second and third terms control the tension and stiffness of the Snake. The objective here is to minimize this energy. Minimization can be performed using the gradient descent algorithm or dynamic programming. In this research, both methods were tried, and dynamic programming was chosen because of its ability to trace the edge better than the gradient descent algorithm; Chapter 5 illustrates and explains the difference in results between the two methods. Dynamic programming does better as it can restrict the detection of local minima to a localized region around the location of each snaxel.

The energy function E(x), as in Equation 4-19, can be minimized by changing the variable by a small amount δx; here x represents a position in the (x, y) coordinate system:

x <- x + δx

By linear approximation, an expression for the new energy is obtained, as expressed in Equation 4-20:

E(x + δx) ≈ E(x) + (∂E/∂x)·δx (4-20)

Hence, choosing the step against the gradient,

δx ∝ −∂E/∂x

reduces the energy. With a positive step size τ, the energy function then changes as follows:

E(x + δx) ≈ E(x) − τ(∂E/∂x)² (4-21)

The second term in Equation 4-21, with its negative sign and squared factor, makes certain that E will decrease upon each iteration, until a minimum is reached. The remainder of Section 4.2 further illustrates and explains the implementation of Snakes using dynamic programming: the principle of dynamic programming is introduced below, illustrated with a capital budgeting problem in Section 4.2.2 as an example of a dynamic programming implementation, and the use of dynamic programming to minimize the energy of a Snake is explained in Section 4.2.3.

4.2.1 Dynamic Programming for Snake Energy Minimization

Dynamic programming determines a minimum by using a search technique within given constraints. The process is a discrete, multi-stage decision process. When dynamic programming is applied to the problem of minimizing the energy of a Snake, or deformable contour, the snaxel locations serve as the stages. Here the decision to relocate a snaxel to a new location, to minimize the energy, is made by restricting the movement of the snaxel to a window around its present location.









This subsection and the next give an understanding of the principle of dynamic programming: Section 4.2.2 explains the concept using an illustration (Trick, 1997), and Section 4.2.3 explains the implementation of dynamic programming in Snakes to minimize the total energy of the Snake, so as to optimally orient the Snake close to the desired edge in the image.

4.2.2 Dynamic Programming

This section explains the principle of dynamic programming, with a capital budgeting problem as an example. In this demonstration, the objective is to maximize the firm's revenue from the allocated funds.

Problem definition. A corporation has $5 million to allocate to its three plants for possible expansion. Each plant has submitted a number of proposals on how it intends to spend the money. Each proposal gives a cost of expansion (c) and the total revenue expected (r). The following table gives the proposals generated.

Table 4-1. Proposals
Proposal    Plant 1 (C1, R1)    Plant 2 (C2, R2)    Plant 3 (C3, R3)
1           0, 0                0, 0                0, 0
2           1, 5                2, 8                1, 4
3           2, 6                3, 9                --
4           --                  4, 12               --

Solution. There is a straightforward enumeration approach to solving this problem, but it quickly becomes computationally infeasible as the problem grows. Here, the dynamic programming approach is used to solve this capital budgeting problem. It is assumed in this problem that any allocated money that is not spent is lost; hence, the objective is to utilize the full allocated amount.









The problem is split into three stages, each stage representing the money allocated to one plant. Thus, stage 1 represents money allocated to Plant 1, and stages 2 and 3 represent money allocated to Plants 2 and 3, respectively. In this approach, the allocation is decided first for Plant 1 and then for Plants 2 and 3 in turn.

Each stage is further divided into states. A state includes the information required to go from one stage to the next. In this case the states for stages 1, 2 and 3 are as follows:

{0, 1, 2, 3, 4, 5}: the amount of money spent on Plant 1, denoted x1,

{0, 1, 2, 3, 4, 5}: the amount of money spent on Plants 1 and 2, denoted x2, and

{5}: the amount of money spent on Plants 1, 2 and 3, denoted x3.

Thus, each stage is associated with a revenue, and to make a decision at Stage 3, only the amount spent on Plants 1 and 2 needs to be known. As can be seen from the states above, in states x1 and x2 there is a set of options for the amount that can be invested, whereas in state x3 the only option is 5: the total amount invested in Plants 1, 2 and 3 must equal $5 million, since no more can be spent, and any allocated amount that is not spent is lost as per the problem definition.
be lost as per problem definition.

Table 4-2. Stage 1 computation
Capital available (x1)    Optimal proposal    Revenue for Stage 1
0                         1                   0
1                         2                   5
2                         3                   6
3                         3                   6
4                         3                   6
5                         3                   6

The following computation at each stage illustrates the working principle of dynamic programming. Table 4-2 lists, for each amount of capital available (x1), the optimal proposal and the revenue from investing that amount in Plant 1, inferred from Table 4-1. The process then evaluates the best solution for Plants 1 and 2 in Stage 2, with the pre-defined options for states represented by x2.

At Stage 2, to calculate the best revenue for a given state x2, the process goes through all the Plant 2 proposals, allocates that amount of funds to Plant 2, and then uses the remainder of the amount optimally for Plant 1, based on the information in Table 4-2. The following example illustrates this: suppose the state is x2 = 4; then in Stage 2, one of the following proposals could be implemented. From Table 4-1, if a particular proposal is selected for Plant 2 in Stage 2, the remainder of the amount from Stage 2 is utilized for Plant 1. Table 4-3 below illustrates the total revenue for each combination of proposals for Plants 1 and 2.

Table 4-3. Proposal revenue combinations (x2 = 4)
Plant 2 proposal    Plant 2 revenue    Funds remaining for Stage 1    Maximum revenue from Stage 1    Total revenue for Stages 1 and 2
1                   0                  4                              6                               6
2                   8                  2                              6                               14
3                   9                  1                              5                               14
4                   12                 0                              0                               12

Thus, for x2 = 4, the best selection would be either proposal 2 for Plant 2 with proposal 3 for Plant 1, returning a revenue of 14, or proposal 3 for Plant 2 with proposal 2 for Plant 1, also returning a revenue of 14 (the combinations shown in Table 4-3). Table 4-4 further illustrates the set of options available for state x2 in Stage 2, with the corresponding optimal proposal for each option and the total revenue returned from Stages 1 and 2. Below, Stage 3 is considered, with only one option for the state, x3 = 5.









Table 4-4. Stage 2 computation
Capital available (x2)    Optimal proposal    Revenue for Stages 1 and 2
0                         1                   0
1                         1                   5
2                         2                   8
3                         2                   13
4                         2 or 3              14
5                         4                   17

Along the same lines, computations are carried out for Stage 3, where the capital available is x3 = 5. Once again, the process goes through all the proposals for this stage, determines the amount of money remaining, and uses the Stage 2 results in Table 4-4 to decide the previous stages. From Table 4-1, for Plant 3, there are only two proposals:

* Proposal 1 gives revenue 0, and leaves 5. From Table 4-4 the previous stages give 17; hence a total revenue of 17 is generated.

* Proposal 2 gives revenue 4, and leaves 4. From Table 4-4 the previous stages give 14; hence a total revenue of 18 is generated.

Hence, the optimal solution is to implement proposal 2 at Plant 3, proposal 2 or 3 at Plant 2, and proposal 3 or 2 (respectively) at Plant 1. Each option gives a revenue of 18.
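The whole computation above can be reproduced in a few lines of code; the sketch below is a generic forward dynamic programming pass over the proposals of Table 4-1, with variable names chosen for this example.

# Proposals from Table 4-1 as (cost, revenue) pairs per plant.
plants = [
    [(0, 0), (1, 5), (2, 6)],           # Plant 1
    [(0, 0), (2, 8), (3, 9), (4, 12)],  # Plant 2
    [(0, 0), (1, 4)],                   # Plant 3
]
budget = 5

# f[x] = best revenue using at most x units of capital so far.
f = [0] * (budget + 1)
for proposals in plants:  # one stage per plant
    g = [0] * (budget + 1)
    for x in range(budget + 1):
        # Pick a proposal for this plant, then use the remaining
        # funds optimally in the earlier stages (Tables 4-2 and 4-4).
        g[x] = max(f[x - c] + r for c, r in proposals if c <= x)
    f = g

print(f[budget])  # prints 18, matching the Stage 3 result above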

Thus, the above example illustrates the recursive procedure of this approach. In this method, at any particular state, all decisions about the future are made independently of how that state was reached. This is the principle of optimality, on which dynamic programming rests (Trick, 1997).

The following formulation is used to perform the dynamic programming calculation. If r(k_j) is the revenue of proposal k_j at Stage j, and c(k_j) the corresponding cost, let f_j(x_j) be the best revenue attainable in state x_j of Stage j. Then

f_1(x_1) = max over {k_1 : c(k_1) <= x_1} of r(k_1)

and

f_j(x_j) = max over {k_j : c(k_j) <= x_j} of [r(k_j) + f_{j-1}(x_j − c(k_j))]

This formula computes the revenue function in a forward procedure; it is also possible to compute it in a backward procedure, which gives the same result. Using the same principle, dynamic programming is implemented in the next section for the energy minimization of Snakes.

4.2.3 Dynamic Snake Implementation

In a way analogous to the illustration in Section 4.2.2, the snaxels, the point locations along the Snake, are relocated in the deformable model based on the energy minimization procedure. This is done in a similar fashion to the stages and states of Section 4.2.2: where the budget there was restricted to a maximum of $5 million, in Snakes the movement of a snaxel is restricted to a search window around its current position. The objective of the process is to minimize the total energy, by minimizing the energy at each stage (i.e., at each snaxel location in the model). Figure 4-5 illustrates the movement of a snaxel within its search window, and the changing orientation of the Snake. Here, each snaxel is analogous to a stage, and the positions in the search window represent the states.

At any snaxel position, the energy is given by the sum of the energies at the preceding position and the current snaxel position. The minimal sum of these energies is retained as the optimal value. The process continues through all the snaxels, and at the end of each iteration of the minimization process the points, or snaxels, move toward the new locations that generated the optimal path in each of their neighborhoods, as in Figure 4-5. Here the optimal path is equivalent to the minimization of the total energy.

[Figure 4-5: a snake with a neighborhood search window drawn around one snaxel, showing the candidate positions to which the snaxel may be relocated.]

Figure 4-5. Dynamic snake movement.

The total energy of the snake is given by:

E(v_0, v_1, ..., v_{n-1}) = E_1(v_1, v_2) + E_2(v_2, v_3) + ... + E_{n-1}(v_{n-2}, v_{n-1}) (4-24)

where each variable v_i, or snaxel, is allowed to take m possible locations, generally corresponding to adjacent locations within a search neighborhood. Each new snaxel location v_i, corresponding to the state variable in the ith decision stage, is obtained by dynamic programming as follows. A sequence of optimal value functions {s_i}, i = 1, ..., n−1, is generated; the function s_i for each stage (i.e., snaxel) is obtained by a minimization performed over v_i. To minimize Equation 4-24 with n = 5, the state variable is minimized at each of the n snaxel locations; this requires minimizing the sum of the energy between the snaxel location under consideration and its preceding location, as in the illustration in Section 4.2.2.

Hence, for n = 5, the energy minimization at each stage is as follows:

s_1(v_2) = min over v_1 {E_1(v_1, v_2)}









s_2(v_3) = min over v_2 {s_1(v_2) + E_2(v_2, v_3)}

...

s_4(v_5) = min over v_4 {s_3(v_4) + E_4(v_4, v_5)}

min over v_1, ..., v_5 of E = min over v_5 {s_4(v_5)}

Thus, in general,

s_k(v_{k+1}) = min over v_k {s_{k-1}(v_k) + E_k(v_k, v_{k+1})} (4-25)

Considering Equation 4-25, with k representing the stage and v_k the states, the recurrence relation used to compute the optimal value function for the deformable contour is given by:

s_k(v_{k+1}) = min over v_k {s_{k-1}(v_k) + E_ext(v_k) + ||v_{k+1} − v_k||²} (4-26)

Assuming that the possible states, or new locations, of a snaxel lie in a 3x3 window around its current location, there are nine possible states per stage (i.e., per snaxel location). The cost associated with each of these possible states is equivalent to the internal energy of the snaxel at that location. The objective is to minimize this energy over the n snaxel points using Equation 4-26. The optimal deformable contour is obtained through an iterative process, repeated until E_min(t) does not change with time.
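A compact sketch of one dynamic programming pass over the snaxels, in the spirit of Equations 4-25 and 4-26; the 3x3 window, the simplified energy terms, and the function name are assumptions made for this example (a full implementation would also include the image energies of Section 4.1).

import itertools
import numpy as np

# Candidate moves: the 3x3 neighborhood window around each snaxel.
OFFSETS = [np.array(o, dtype=np.float64)
           for o in itertools.product((-1, 0, 1), repeat=2)]

def dp_snake_step(snaxels, ext_energy):
    # One dynamic programming pass (Equations 4-25 and 4-26).
    # snaxels: (n, 2) float array; ext_energy: function mapping a
    # position to its external/image energy at that location.
    n, m = len(snaxels), len(OFFSETS)
    s = np.zeros((n, m))                # optimal value table s_k
    back = np.zeros((n, m), dtype=int)  # backpointers to stage k-1
    for k in range(1, n):
        for j, oj in enumerate(OFFSETS):           # state of snaxel k
            vk1 = snaxels[k] + oj
            costs = [s[k - 1, i] + ext_energy(snaxels[k - 1] + oi)
                     + np.sum((vk1 - (snaxels[k - 1] + oi)) ** 2)
                     for i, oi in enumerate(OFFSETS)]
            back[k, j] = int(np.argmin(costs))
            s[k, j] = costs[back[k, j]]
    # Add the last snaxel's own external energy, then trace back.
    final = s[-1] + np.array([ext_energy(snaxels[-1] + o) for o in OFFSETS])
    j = int(np.argmin(final))
    out = np.array(snaxels, dtype=np.float64)
    for k in range(n - 1, -1, -1):
        out[k] = snaxels[k] + OFFSETS[j]
        if k > 0:
            j = back[k, j]
    return out

Repeating this pass until the minimum energy stops changing yields the iterative scheme described above.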

This approach is significant as it enforces constraints on the movement of the Snake; this is not possible in the gradient descent approach to the minimization of Snake energy. Hence, better results are obtained using the dynamic programming approach than with the gradient descent algorithm. Chapter 5 explains the overall extraction process using anisotropic diffusion and dynamic Snakes.














CHAPTER 5
METHOD OF EXTRACTION

Numerous methods exist to extract road features from an aerial image. Most feature extraction methods are implemented using a combination of image processing techniques from various levels of an image processing system. Road representation varies from image to image depending on the resolution of the image, the weather conditions prevailing at the time of the photograph, and the sun's position, as was discussed in Chapter 2. Hence, it is very difficult to have a common method to extract roads from any image. To overcome the hurdle of implementing new methods to identify and extract road features from each aerial image, depending on the nature of the image, research in the recent past has targeted the development of a global method, using a combination of image processing techniques, to extract road features from any aerial image.

Our study has developed a feature extraction method that could be implemented as an initial road extraction step in a global model, or as an independent semi-automatic road extraction method. This method evolved through stages, with implementation of the Perona-Malik algorithm and Snakes (deformable contour models) using dynamic programming. Section 5.3 explains this stage-by-stage evolution of the method in detail, and is followed by a detailed explanation of the implemented method of road feature extraction.









5.1 Technique Selection

A generic feature extraction method is a three-step process involving pre-processing, edge detection and feature extraction. The road-edge feature extraction method developed in our study evolved over stages, and implements a combination of image processing techniques at each stage. At each stage, the method developed during research was inspected and then evaluated, based on its ability to extract road-edge features from an aerial image. The roads extracted at each stage were visually inspected and compared to the desired road edge locations in the image. Methods were developed in stages, using a combination of image processing techniques, until the extracted roads were close to the desired, or actual, road edges in the aerial image, based on visual inspection and comparison of the results.

Table 5-1. Stages of development
Step                 Stage 1           Stage 2          Stage 3         Stage 4
Pre-processing       Gaussian          Gaussian         Gaussian        Perona-Malik Algorithm
Edge detection       Sobel             Sobel            Sobel           Perona-Malik Algorithm
Feature extraction   Hough Transform   Gradient Snake   Dynamic Snake   Dynamic Snake

Table 5-1 lists in brief the various image processing techniques implemented over the stages of developing the method of extraction.

Road edges extracted at Stage 4 (Table 5-1) gave results close to the desired road edge locations in the image. The method developed in Stage 4 involved the implementation of the Perona-Malik algorithm (Chapter 3) and Dynamic Snakes (Chapter 4). Results obtained using Stage 3 and Stage 4 were quite close to the desired road edges upon