Evaluation of Classifiers for Automatic Disease Detection in Citrus Leaves Using Machine Vision

bc9466c2b3a42858db3e7bbfa8b069a3
6666c47db0cabd924281e7115d3a80101e6830df
13250 F20110114_AAADHO pydipati_r_Page_004.QC.jpg
ef647b164be43a6214fb6c80a0d2baf1
1fdc6dffe0ce2ea45e9b60e0bbfdfa3b2663427a
43716 F20110114_AAADIE pydipati_r_Page_018.jpg
0c80e33109cadb85f737f241da0513df
7da2c6737325c446abeebbd0a218c8fe739de29f
108897 F20110114_AAADHP pydipati_r_Page_023.jp2
e0d7bd95f4eabf3f1eff19b9afee066a
f9e9191ae0f3ccb97c8e48ae499d96e0b6d641cf
66131 F20110114_AAADIF pydipati_r_Page_101.jpg
a763a7d1ff8caba2935b99bb0b4de114
64153bd72a73e2a44460354a230d1b31579cf986
5074 F20110114_AAADHQ pydipati_r_Page_108thm.jpg
d5c714ccb032391cdc906afcd71c0beb
f28a38e737c29ff712556b3d7901d809842fffae
5779 F20110114_AAADIG pydipati_r_Page_061thm.jpg
09c8b52e0ceadcdd898f52826c3f84a1
469406d09d867f370a1adfa189f794cadd9650b8
48911 F20110114_AAADHR pydipati_r_Page_021.pro
c2ad45b39279dc55a825b2a88f69bf46
a9eecd7eebc855030e05481da106666c39ea3fc1
19404 F20110114_AAADIH pydipati_r_Page_082.QC.jpg
3d811df56db1f1160ff4e3124dd18107
36b045812049b33f2668a86c31439c864ac12882
42387 F20110114_AAADHS pydipati_r_Page_097.pro
a339c1d11ed4330caa2fe0d63c31a17b
9d83538084113b5566dd36a51f07ad2d857108a5
31336 F20110114_AAADII pydipati_r_Page_069.QC.jpg
fe92a1303db64753892674f10fa3605e
2525ef465f7193d72c448fba7a77f9bbddb8ea03
1374 F20110114_AAADHT pydipati_r_Page_044.txt
f9427e819ba353fa49cf750e1a8ad9fb
133b71f9e9c7d7a4da288ccdf1fc26f91a9704f1
355 F20110114_AAADIJ pydipati_r_Page_077.txt
367199d4a747bc66e34260561c6aeb81
6a71df93922f53df03dbbb548da4c29e850f789f
51464 F20110114_AAADHU pydipati_r_Page_057.pro
3c667cedcaabcfa212458f41ff7d5bf8
9be79309f61f9589a543e9d9ed0c435b556e5e72
578118 F20110114_AAADIK pydipati_r_Page_055.jp2
bbc5bea83f2e18cc0c510525cb6e0552
26aeadf3c7f6de1ce782fcc34f3e6eec4111d0ed
5851 F20110114_AAADHV pydipati_r_Page_020thm.jpg
2f52bb8010e9e5a52456c48ed9adc1ce
523f00ecbb09d4c2e0128afe3f9fb6772b6af209
6450 F20110114_AAADIL pydipati_r_Page_086thm.jpg
c61cf1690efded43f1fc5a93993bb829
29d97a9e2e85afe0cb6cf81427f923bfe1f4a6b0
1627 F20110114_AAADHW pydipati_r_Page_051.txt
edf4a640a8fffe92106411f6db371d8d
abaca1a7b5dd92fdc941349ff342a57fa150b7e8
1519 F20110114_AAADIM pydipati_r_Page_054.txt
53e3226fa1077fad6f2459ded7e0fcfa
5e46bc0d191b6ce6f3e193b96accdecd68663e91
100197 F20110114_AAADHX pydipati_r_Page_033.jp2
7b1ded661ecf7eb2df1c60b797b4f4c7
298c143cbd478f744604534239838c763c2c36f2
F20110114_AAADJA pydipati_r_Page_023thm.jpg
cb406d9598028c4502fff29306fcb722
9beed30497a30bf163c3a8664a7ac87c95c183e0
71402 F20110114_AAADIN pydipati_r_Page_029.jpg
b4da81c8fa09fb8417305eab5e70911d
052da7c41968cc5413803de8ebf7593816aca984
17328 F20110114_AAADHY pydipati_r_Page_084.QC.jpg
dc3c307d0e20b7dfcb2000060edc912c
d6a469b8ed4d7a03d5b86b45ddfd8d6528d4e293
28163 F20110114_AAADJB pydipati_r_Page_044.pro
0e706b0e4ce05085f6e1ecffbde12ba6
7589cb8e6bf26e5054b3d58dce9fe11a37492af2
F20110114_AAADHZ pydipati_r_Page_043.tif
82fa548f7ea01d4fca643a1f8fab9044
72aad244f3aae9c98530fb0c1825337d3c042cab
2018 F20110114_AAADJC pydipati_r_Page_057.txt
c5adad86b774b444295121156bedc2e1
eb0947ff01e1bc446bec599a4baa219286c4d9a5
43914 F20110114_AAADIO pydipati_r_Page_067.jpg
4b3e564cb00c43bb69c15e2097389bf7
41b541db649ec9dd276b0d73058c2d811390939c
1562 F20110114_AAADJD pydipati_r_Page_087.txt
9f1832972e9e44c2356e54876e79ea68
e23104cc13027fd5a51b3185e51e8aa9a1b8797c
4862 F20110114_AAADIP pydipati_r_Page_049thm.jpg
1a5c99053b755c96d9b18f72c957f22a
f38ed64644ff1b781f903158c163ecf3117247ac
34010 F20110114_AAADJE pydipati_r_Page_061.pro
beda97747efa290dce7cbc92a11e5f29
71bfe23f2d794cbcdcbca78ea53598040fcc3be3
1314 F20110114_AAADIQ pydipati_r_Page_046.txt
70aca4307251156f5889bf0af222a717
0da8e6d962a566dba4adea3bf083807d9bbaa543
F20110114_AAADJF pydipati_r_Page_068.tif
454b7bfd2306cfaf45d2595d04f2cc27
6b264b209618145fb01c24a880b500158d0f5e78
500891 F20110114_AAADIR pydipati_r_Page_039.jp2
e9572bab9a30ac6dc58ac65d96a51f52
41e3caf8305ccb427d0b907b0983bdb3b5f59bc8
6855 F20110114_AAADJG pydipati_r_Page_021thm.jpg
f984a4d81360f5ea09f5887aba2783b7
f21f91075db0db6e5fc28d6fd49609c79cc5ef84
24428 F20110114_AAADIS pydipati_r_Page_053.QC.jpg
474340fff3c9f863198df589af872036
c608e0b247e70d09ebdfa5f5b711b0bbfa09dcea
15128 F20110114_AAADJH pydipati_r_Page_044.QC.jpg
27ab6f991fb6932ac552112767fb5e36
dd0fecb4922ba85b711f4e554b6074eb8dc008d1
95000 F20110114_AAADIT pydipati_r_Page_097.jp2
0c5c584a3bb6c427dbad9b0abbe01afb
99ca110cd2c93c158b64aa81c90b06ee23c79ed5
4872 F20110114_AAADJI pydipati_r_Page_072thm.jpg
18590814cee9b06455d02e0551ce06c4
fa262a97b73a97210d7a2bfbbafb838bf8e5decd
17867 F20110114_AAADIU pydipati_r_Page_067.pro
fa5ef975d24e938d1803dff976a196f8
86dfbd2d1b2bc19519deaa90062f578af8fdbc54
28518 F20110114_AAADJJ pydipati_r_Page_078.pro
911d30514af701412e1c1801183891a7
15a0a2acaca17f76971677d374210e4931acde42
51899 F20110114_AAADIV pydipati_r_Page_006.pro
cfbeabaf55538c9c50eb7ffacc3f2741
58dd3717bca61edbe3e43bdef3ef110f0da3d969
13807 F20110114_AAADJK pydipati_r_Page_039.QC.jpg
84709a7922beacd6494b6955dfe9da3d
222942cb5f196235b6cd5253ffe79f09935e8aae
42099 F20110114_AAADIW pydipati_r_Page_094.pro
58618d99213f895bc4a59383a2141dd2
9a20766f0f002d58684219301387944b90268d12
1051972 F20110114_AAADJL pydipati_r_Page_113.jp2
0b8d5c74bbb4961c397cf0753b536fa4
5ab6713b148db36079a63e211e736ad11891e27d
23293 F20110114_AAADIX pydipati_r_Page_105.pro
88b7257415c0cd7807381e56287f7294
f67d5fb2cbb602564239bfc9e6b4e973af0944b8
F20110114_AAADKA pydipati_r_Page_051.tif
06f86e6f764acfdecf55dc0809cdc10a
69c2e028d323edd2c961d0b6f7cefd2ac9ce32a8
63918 F20110114_AAADJM pydipati_r_Page_063.jpg
de2e55fa2b1d096011ebdbbafa40e2b9
d80aaf142e45afbbb5fcfa6b5421d486766447a2
26224 F20110114_AAADIY pydipati_r_Page_084.pro
6b981e01a9f7805748d4696cb00cc6df
fc8c3bdadcf7481784dc901432cd4050495b348c
18297 F20110114_AAADKB pydipati_r_Page_047.QC.jpg
33eff69705377660c3d5ad07ae484f10
b34d701b7f9f3752cb61165c837eaa0e85f5659f
F20110114_AAADJN pydipati_r_Page_101.tif
0a2697ccd4fc0b86c47496afde62ae93
7f9f1752027a5cd5fca14f4b0421f96887fd81a8
39240 F20110114_AAADIZ pydipati_r_Page_086.pro
c489d93df18b347f69fa638c6d0d9fe5
6a7632e2a3d5a28c47aa90f8df967c0cfdbe0966
1732 F20110114_AAADKC pydipati_r_Page_097.txt
1a3392dec5e01a686f7abd74b6657296
e039bbb3f9dad34d1d6b025800dbd0ae02090e8b
563910 F20110114_AAADJO pydipati_r_Page_108.jp2
9541fb74adc4b441c833801781da13b7
c86fb49541d94b8f11cd1db4c3f6ee76e1046159
57138 F20110114_AAADKD pydipati_r_Page_054.jpg
a03b0e997982786aa5325e3d3f0d60c2
4aa450a5d12b06b6558d3952a38ddb47b485c495
21222 F20110114_AAADKE pydipati_r_Page_086.QC.jpg
b2cd8ecf580cfd309139112dc3d82d92
60967a05ad245c811e94bb6a031725a0a6385970
70828 F20110114_AAADJP pydipati_r_Page_027.jpg
5f2d5eb5db7423e99b1c0c3ce46a7e21
05b50986331b701bf698428ac71f8e0c1e956412
6695 F20110114_AAADKF pydipati_r_Page_059thm.jpg
bc879c918dd22adcc9a9f0aa8beb2834
6e47bf56bb88d1650a566871d7a4936fae3fc09c
36822 F20110114_AAADJQ pydipati_r_Page_019.pro
307306d72ca4ede39cc325442df13a03
8882144bff4fe8d290bab642dd8652d0f22b8132
F20110114_AAADKG pydipati_r_Page_076.tif
39449f575f096e8d38527a24f4a2d521
06e3ff8c3cb2cdbfbedbb2abcd872a50e0acb569
15604 F20110114_AAADJR pydipati_r_Page_072.QC.jpg
fae4b29c844f1d50f27427cb3e7412cb
154ace8bcecde63ef8ef658d1ddbcaace51ce4df
23962 F20110114_AAADKH pydipati_r_Page_021.QC.jpg
c326748da5ee5dda75e26d06fec01e43
2831b59632ea54de33e53fb467c4496bdd570ee3
6596 F20110114_AAADJS pydipati_r_Page_027thm.jpg
8f915176cc6fb14f870fb89526ac5863
4f433f754f04270cec0f6f2096e2a15a1dec0953
6900 F20110114_AAADKI pydipati_r_Page_057thm.jpg
d925922fecaf413707be41bac3987cc0
69968268358dbbb14dd7094963631bd38fbefed7
3848 F20110114_AAADJT pydipati_r_Page_098thm.jpg
1cb8bf4f25a6126531762016ea969671
33e2959f7b0bb0b9f7715030c5a55a0d44987603
F20110114_AAADKJ pydipati_r_Page_087.tif
2a464225a0b99a6550c4319d0826d329
fb60312535c80b5ddd9169cf77d003664ffd6b61
4520 F20110114_AAADJU pydipati_r_Page_110thm.jpg
a0f197c060c6b0f6220a0c78405f49f2
5f9dfc04e142bc65ae5437837cc401bd40fc5ce8
70500 F20110114_AAADKK pydipati_r_Page_059.jpg
1656912122ed40cd194ee7e4f72c5bf4
139044e5844e6ef4b8c6794f18d6fd5af8396350
6214 F20110114_AAADJV pydipati_r_Page_066.pro
39bdc4d1fb2e02d203a621c55fb08886
0ae948809ebacd08a992f59cb30c9fee820c6bed
1198 F20110114_AAADKL pydipati_r_Page_064.txt
e539b11b45b469c874cf365fda706f27
e0d4771058eeca460e620623c5a4cd6ff882c449
2061 F20110114_AAADJW pydipati_r_Page_050.txt
5afc522adce0d79c51c8134f2ee99355
13eb0cb0e163122cf85fa11b056e2c50221b45ba
110388 F20110114_AAADLA pydipati_r_Page_057.jp2
75a42cfcd32d778ea1f5163142af187f
a5813bafd3dc77677a5e6da9f1433d7339903150
5388 F20110114_AAADKM pydipati_r_Page_084thm.jpg
4c7bd8bf562e5ef1dedb7a44cf376395
3debc5e1dc4123ee0782066ef40ca1588104f4f5
2000070 F20110114_AAADJX pydipati_r.pdf
368bcd4e39a2cb29db9846a37ccf2ffa
61e57f8efb25be88e241a522634381b25aca2540
F20110114_AAADLB pydipati_r_Page_116.tif
f968f69383cacc017a5323638090de2e
6329874452a636a8a993dc5dd78277c25357821e
5674 F20110114_AAADKN pydipati_r_Page_087thm.jpg
03489658bfb65d050f63f1c90fa32c4c
482378f64fdffbfe334533203ba2854cf6e62ad8
47857 F20110114_AAADJY pydipati_r_Page_041.jpg
ee8ee4b6a6d163fbc5860ffed05deea0
a485dbd9e34da3fa4aca1baa7881f6b780695bb1
107019 F20110114_AAADLC pydipati_r_Page_095.jp2
6ff4542a1011540e86b8502dd88fc73a
ffdb9a0296c1264777fd743ef313c00e3a6715a0
68763 F20110114_AAADKO pydipati_r_Page_043.jp2
8d14e9a2348395e0523e0e17c8602ee6
67d454d1cd1165c4d5000afca2d2517cc78730bb
91424 F20110114_AAADJZ pydipati_r_Page_113.jpg
8b4b62476d965563d93f59bc44989fdd
5ca04cc89a50f6134085f874a87ad8b14cc69a33
593 F20110114_AAADLD pydipati_r_Page_111.txt
489eb766ffa8c861b7ef8a6a577b7d1a
81d12ef73cf9b4a1b579c16182172b74ad982e86
52785 F20110114_AAADKP pydipati_r_Page_005.pro
83541146988ca7922df682d2a864788b
d5d00c0946269b92298e31b2e70de707962443b7
39118 F20110114_AAADLE pydipati_r_Page_010.pro
92ebeefd7c5737f4f96aade0c1235ad9
ef9d55b99c5a3240638563d71a4e8d765596b21e
33732 F20110114_AAADLF pydipati_r_Page_014.pro
0851827056af81f2c8d7e3b31ce351bd
c6b3a4acf404d8c3ddf6611110e981a3c9658df1
632843 F20110114_AAADKQ pydipati_r_Page_056.jp2
7246a976ac6c98318d0849816d154e30
ad61ffec445592e643750ffa80ea6bb699bbfa2f
21455 F20110114_AAADLG pydipati_r_Page_036.QC.jpg
bd969f73edf18ea7869c440f316153f9
e7629c30be135262c37966b690160bfc729201f8
34210 F20110114_AAADKR pydipati_r_Page_042.pro
302cf4a2e2c03a442836c605e752ee6b
2ad118197b8a2ee660474a2d2f6dd3e1d21af3ef
16331 F20110114_AAADLH pydipati_r_Page_116.QC.jpg
a80a06b0ac5abe9fa8fd37e0a1a3602c
7cfea377c767f904c0b52de515c118d3b39a914b
F20110114_AAADKS pydipati_r_Page_055.tif
22b46b37df6443a705f40c0dcdd384b1
09f5ff419354827d06821fa7ce2476187b26a0a2
52999 F20110114_AAADLI pydipati_r_Page_060.pro
00b52681ab4cdab8fca219b2247bf8e7
afbd4385be1ab3f06c144a9a5a6f8d41a51c22ce
526584 F20110114_AAADKT pydipati_r_Page_073.jp2
45fb6446fcad6ea6e2922697fb4d7e43
33e3d2d78f865f4ba01b111b2472d953aa9b999f
4091 F20110114_AAADKU pydipati_r_Page_071thm.jpg
a4343c4016d6021a4011c67a49a4e789
4751ef3d88a43fb786e4c12a309edf86cd9ae9ea
23177 F20110114_AAADLJ pydipati_r_Page_029.QC.jpg
6f00bc19209b1af548d05d663b8dea40
cfaff7181dfedcc7c514e48617e660f8f6ff0f05
F20110114_AAADKV pydipati_r_Page_106.tif
b4819eb3a9eb886fb0c0787a0cc72fa2
1926e8bf391ca205cd7ecd62712d1ec7070314b3
F20110114_AAADLK pydipati_r_Page_015.tif
6638af42068e5bad824ddc584a591ce1
ae3893fd613e751a413de3b122c33f01da539857
F20110114_AAADKW pydipati_r_Page_037.tif
e50bc080f95531ff3614d4d54c04a557
f51f87d74f8ce512412cbb1638a06e1063290e12
65837 F20110114_AAADLL pydipati_r_Page_030.jpg
08898e4b7936062d669effa8e1db8960
13f663553eefe846c873eb79441586c68b112af0
3807 F20110114_AAADKX pydipati_r_Page_066thm.jpg
4276e437168b46441094954ffc143a24
150a33ea03a3850248dcd1e0bfb78d81d95890cd
64143 F20110114_AAADMA pydipati_r_Page_015.jpg
ab442cb3199356a55e94e3b95387f6d4
186cab8d9420286616dc079721bdb1660424d96c
20112 F20110114_AAADLM pydipati_r_Page_070.QC.jpg
0aae63a326cae39399d39194e1c93b2b
a3aa4b7f91d92dd2adf895a99a4c67403c7e7c22
F20110114_AAADKY pydipati_r_Page_049.tif
1eda6dda4f4254af31f1984ab426f821
63b758a643b08578bb333b50ee6ed03cdc040ce6
36586 F20110114_AAADMB pydipati_r_Page_016.jpg
99afd5b7160590d6211d308ae3f571dc
b43d6d923c619c01f7088606c8be68f76ed52df7
66172 F20110114_AAADLN pydipati_r_Page_097.jpg
17358b48dc53e4537fe56df5ba8a1ac6
ae9ee994a43ad313b5eb7f2a5b925a1c46315ffe
6362 F20110114_AAADKZ pydipati_r_Page_076thm.jpg
11aa60f2dd3f2163c1a38e98c97c9c39
6d56607955b08f0f39b7e95cfd22c03e524b8d68
61975 F20110114_AAADMC pydipati_r_Page_017.jpg
85402b88d3162d086dfc1524b426b95e
8f1f7569f89cb437da7d521dde6b541e6a998683
1678 F20110114_AAADLO pydipati_r_Page_017.txt
be85f4920b6c69aabc6a711a4a55bb30
3ea18a7fb6ddeff54acc496a4ca54e5f547c69a7
71897 F20110114_AAADMD pydipati_r_Page_023.jpg
2dec80f9ab004b2a032ec13485319a05
3d018faadaa271bb027a94dd30f9efe961e8bbf7
1376 F20110114_AAADLP pydipati_r_Page_014.txt
12ad51e705cb164c4b91c5ecba36fb97
53e72ce5a3be9592095b2ae129e050097e731cbd
68858 F20110114_AAADME pydipati_r_Page_024.jpg
32c8800bc53b0a55800b8c8597210a0e
70ce7e9c8c24d02c5e37a4ab1d5427428691d7cc
175857 F20110114_AAADLQ UFE0006991_00001.xml
c79ae8d58b79ff1c5a86bb113204ee2b
ef0d151f79ef5d87d27d8f1e2dbdf51677bef2d7
65573 F20110114_AAADMF pydipati_r_Page_025.jpg
f8fc47fc20c032438f1be2364112ee1b
58776eaddcfef9e5b96e4d53b811acba6203bdce
69790 F20110114_AAADMG pydipati_r_Page_026.jpg
e79b4b39cfa167ab2452ff2cbd19dfcc
0ceb234e4e079f6ac7e47956dc8ce21d0facaaf3
35500 F20110114_AAADMH pydipati_r_Page_031.jpg
1fe42c983855b0be264ff443293d8b89
bd0f21ca4a614bc30d946e0d9cf75726f8ea8c63
69027 F20110114_AAADMI pydipati_r_Page_034.jpg
ad501536b90eb852db010bb05b4e3822
8aa6b13fcbc96caf47c67b60a07057ea156db461
10312 F20110114_AAADLT pydipati_r_Page_002.jpg
5cdfcf2736144179dce93e9e2a2951e1
f5fcdfa38f1525a4862eb0f4505f0bbbff26fa10
42149 F20110114_AAADMJ pydipati_r_Page_039.jpg
abdcb57f3ac7e78a0c51bfa1561bf99e
357bb6d49094016954cb69f4cc1612da2a1f2ccb
63057 F20110114_AAADLU pydipati_r_Page_005.jpg
0de2a9edc73e42edde9f563bba0d1893
c5d681636a2fdcace0f8dd686e8bfe8380c5dcf6
43521 F20110114_AAADMK pydipati_r_Page_040.jpg
58c5031d8e9243ddff5da383f032957b
2845ce7e371d0306b8c2f08dc9ede2502b84d707
59121 F20110114_AAADLV pydipati_r_Page_008.jpg
d211f7f3d40854f3a6c7c04c26853991
ec204e06d6fb6e9bf8564ba8e1f038ff1fee15e2
52900 F20110114_AAADML pydipati_r_Page_042.jpg
44a7be354fc83a2c27a70fc9339fcdcc
c9e75ba27f71c2ea1a972c9805da74e958114ee1
18148 F20110114_AAADLW pydipati_r_Page_009.jpg
1899ed023a37034a114cdee0fd962ac5
afd1a5cfd5f670adf23e70099a0fa48247869d96
65700 F20110114_AAADNA pydipati_r_Page_079.jpg
bd4ad1a028da56591c5a549ef519d42c
499a201f5264d0485ff9138591c898997272b0bf
37925 F20110114_AAADMM pydipati_r_Page_046.jpg
43a023708677020518d32bb9e263e215
5d49b1ddc1080a7d052b24aaf4b9806e7c70cdd2
66936 F20110114_AAADLX pydipati_r_Page_011.jpg
dedc1e3a79166df56c274d5d93d95a5c
8912f5506c78eb3464bdeb6e990a792db4f96ff2
73569 F20110114_AAADNB pydipati_r_Page_081.jpg
56ca2d61419abbbacc610fac745a27a9
8e8adc44e64c957448f340e74fc44007d252b0e8
57002 F20110114_AAADMN pydipati_r_Page_047.jpg
28abfc83f52863246b6c4497f7275210
25dc0a8ec1d3d77d44963d7ee8f633c5dd441bab
64764 F20110114_AAADLY pydipati_r_Page_012.jpg
bd633e86de73ee72628f9f9d0bd27926
63700d8fbc2ddc19a3717135f37ef4424ee1ae4b
59088 F20110114_AAADNC pydipati_r_Page_082.jpg
5663d3391ff49ed7e1d91982c5619fbc
a5ab7d6aae728537de06ef26bdaa2fd7d00905b3
17018 F20110114_AAADMO pydipati_r_Page_048.jpg
0165e063b8bb2a7fd2780ebb16800f18
83162f412afab67e8a98312c8bfd1f3660efc04a
70779 F20110114_AAADLZ pydipati_r_Page_013.jpg
f16695866efac376dc5546f481029a04
9cae081290d790e2203cae56239f12fb8d6c16c7
61752 F20110114_AAADND pydipati_r_Page_085.jpg
da01005f56083a7a098a9c3d68279d5b
35a123be95db30181068864ecdd1b402483b52fb
68236 F20110114_AAADMP pydipati_r_Page_052.jpg
325f90aff9fd7bf67cbc08384b4dd4d7
57bdc7aae1f4b8f06c7d4b5bdf21443029051187
56481 F20110114_AAADNE pydipati_r_Page_087.jpg
4ce6c67c6737b53663c012c4e4cca395
b0da65b4c3d34bcf6535dfb832ba26ce98054bc2
46432 F20110114_AAADMQ pydipati_r_Page_055.jpg
7205bf73192a78a449eca4b8105d2521
51b94085b68f081ba09b9a116b6c83db3994331b
59677 F20110114_AAADNF pydipati_r_Page_089.jpg
ebb71431208aa7304d4ee361046129fd
a2bf54dd81360154f235cf868cbbb788fc2fdfd0
73893 F20110114_AAADMR pydipati_r_Page_058.jpg
6d01726d1ed544be4cb6006352c16ed4
04b2e5622540ccfb15eced96e523bfc6e907adde
58924 F20110114_AAADNG pydipati_r_Page_092.jpg
c1fbeb3a27d44600de3d4aab58a19e4d
9f81855981ee67022039a309d1f8f139bd4f8ec4
72611 F20110114_AAADNH pydipati_r_Page_096.jpg
b504445f7e35e32701ce38abf9474709
d08355f58dfc9d7e19907b3a927d20761aed8b87
76603 F20110114_AAADMS pydipati_r_Page_060.jpg
7409469b149dd8aab2bf08b7169b04e0
377627b5c227645005886572154dd4e1c1048a3d
41154 F20110114_AAADNI pydipati_r_Page_098.jpg
af7b0dce601bf05625a16738cb30e535
828c7c41a488566d25719dbf20a407b6a5933d58
79597 F20110114_AAADMT pydipati_r_Page_062.jpg
846d1e811e4b0e702abf908736d37ff5
d66cb7ad0be57b58ab6ac5f3842bc208d5fdf6d5
61634 F20110114_AAADNJ pydipati_r_Page_100.jpg
9e0f8beca168f21a9652a93b9187882c
d6e175b329e4fe628f0a03eb3a3042b3461e004c
47579 F20110114_AAADMU pydipati_r_Page_064.jpg
94ba9375e5411e0280dc7cca3f0dfe89
3c19f4a17821034508f201772f688852a1ce4e6c
42153 F20110114_AAADNK pydipati_r_Page_102.jpg
41d564af668d09353df7751e6004dc68
426fbea8dc58d973caf60699a1c5da9131cfb24f
37523 F20110114_AAADMV pydipati_r_Page_066.jpg
edd77306a0725d85dd6d1f3b2836fe61
a6f4f941b3b0635fb2ef735738c576d00032a362
37192 F20110114_AAADNL pydipati_r_Page_105.jpg
a61efce33152b10d80be7d5c872e89fb
622b8e3ab8b8124ba35ca1c05a392a12364af6c7
122943 F20110114_AAADMW pydipati_r_Page_069.jpg
27bb919f8025f2ce546351ca05812ed5
7b99c775f55aa1a1ed503d5feee3a21e0a450e34
16102 F20110114_AAADNM pydipati_r_Page_106.jpg
697d102c429087f56b2f35fd1b6d9158
7b65b244b40f8f78185c0eed446941c4a008b2bd
62205 F20110114_AAADMX pydipati_r_Page_070.jpg
34693974850e27e8b925d10629b232c8
6bce5249aafbb6f38434a8c6e30a6981ee582612
106915 F20110114_AAADOA pydipati_r_Page_013.jp2
353b35e835d161c6b665471ab5bf3122
f3ac4a6311c4efb7eb4b7bc373c2a3b2dac3a3e6
31746 F20110114_AAADNN pydipati_r_Page_111.jpg
39c07636d271a43a79faeaecab859d34
38d08bf5f84135928d40bc34ad6da37a02e5f0a7
39878 F20110114_AAADMY pydipati_r_Page_071.jpg
b63b1b5ac0f5c82af42cdd172be14823
013dfc00464b253e787588bb5108671e6bd2c3cd
519416 F20110114_AAADOB pydipati_r_Page_016.jp2
7c68ae02fd9da2d4c713a649052f478a
9ab589abf82e007b340f67288a8e62b12e909466
100796 F20110114_AAADNO pydipati_r_Page_114.jpg
09dc9ecab4d9826ab64348be15c117d0
ddbbe19ad2f16cf21bca739c1afaf7546e01aaf0
51066 F20110114_AAADMZ pydipati_r_Page_077.jpg
92cde1118644149efb67f9c6d7bccc73
006dd2c97c294938322acf596bfb6794f361499c
462293 F20110114_AAADOC pydipati_r_Page_018.jp2
a7251b20575bd7753a800ea82615200a
b68b3806d78e94afd489fd089fcb0e74f8727dab
49356 F20110114_AAADNP pydipati_r_Page_116.jpg
55045e1600318d5a5d90f296cfe7fd67
ad5c4372f34667626f67ad54855a447042c55acc
82768 F20110114_AAADOD pydipati_r_Page_019.jp2
16e5051d59713cbf9b424f066c37f827
f47ab9d3207544d6118600f9c39d007739999414
24546 F20110114_AAADNQ pydipati_r_Page_001.jp2
e8de467c024055845e5e5175a22596af
902dd04226b2954e9d2822e54773b7a06d794300
92112 F20110114_AAADOE pydipati_r_Page_020.jp2
3906efb088b55ca4a91cac82263580d6
fc33032cc43fc70ac4d027d40db23e52d26b3e3b
F20110114_AAADNR pydipati_r_Page_002.jp2
eba63d9a67a39bf82d3d2bdee0ebb18a
746bfad79f3c396a05cd1c3ac5d4814b5e878823
109549 F20110114_AAADOF pydipati_r_Page_021.jp2
abf03973a33f4d761546c6eb9033141a
1e04808f41cb689491599d8801aedc46b5733337
11736 F20110114_AAADNS pydipati_r_Page_003.jp2
7b3b4a81492a50164ec2a38b82240c12
ae07d61ee24b242106e2b8d0f970d3ff4d6b4564
103199 F20110114_AAADOG pydipati_r_Page_024.jp2
35b77b10363b92c9ddf514ec92d8bcc4
531b91b4cf20d2d6826bc9497c377ccfe624a821
105071 F20110114_AAADOH pydipati_r_Page_026.jp2
42a0e75dca0f7d4a9cfc40e8991f660a
045a7a044d25dddaf0bcc167ad5aa5c1ba6e4d45
52742 F20110114_AAADNT pydipati_r_Page_004.jp2
0afc6be420b56d892ee611aeb8db0af4
fc35402131cb64f5898d9d73aaf94393558c5fae
106123 F20110114_AAADOI pydipati_r_Page_027.jp2
7e4b956b93b8ced12b21e10a6d987061
ae67e8edda3e8df3478ac7b7dd13a5f837703601
1051978 F20110114_AAADNU pydipati_r_Page_005.jp2
42dccc6c4a22a9b5e9f11bcf34d253e2
10d4b48e1ec16a66affb315d9047cb45a0ae5ffc
108012 F20110114_AAADOJ pydipati_r_Page_029.jp2
950451177c0bcb9693f15012e5238a82
7d222bda6fa4ed5104c6d164361766f094f91a33
1051986 F20110114_AAADNV pydipati_r_Page_006.jp2
d3d51031b7056419962055435db86244
8a05032d7e291623152188b3c93f595e56234154
99392 F20110114_AAADOK pydipati_r_Page_030.jp2
9d760b1aac9c821064bb02e94f3bd19a
f288c1fb3c0f66d7ab54bf9bce925fd3f3b53ac9
747601 F20110114_AAADNW pydipati_r_Page_007.jp2
04e90fc5e43274d539b7cf4983dda166
2d790d49e4c6eadde639702eccdb5d5723092372
99592 F20110114_AAADOL pydipati_r_Page_032.jp2
2d0ad70f7472564ba11f35f83d1121f3
3ee9fcb037522b7c2fe24149c3b5e40cba64adb0
339977 F20110114_AAADNX pydipati_r_Page_009.jp2
0f96054a402c249cf8ee549edd70c893
71dd958071deffd96aeb31dbf6b7fbe6a2b25038
515706 F20110114_AAADPA pydipati_r_Page_071.jp2
65e436b0a45888895a7f8e16c2343a38
3c2019f29a6f2b333bab631b01b8aa8c384afc0e
104301 F20110114_AAADOM pydipati_r_Page_034.jp2
ee8ecc686923b0a26d20dc5633b688f9
d19a4adfc6280ce438a61be1afb116418d02c613
85782 F20110114_AAADNY pydipati_r_Page_010.jp2
9852d70f8e766d605f2907a62e32aaa7
9408a9e9dd287d4b3fe30e9928559ea2b79f9616
688534 F20110114_AAADPB pydipati_r_Page_072.jp2
51b26939c24def9121de2104e47415d4
6723d9657e2784533a5d629c16c30c264cbabca4
99995 F20110114_AAADON pydipati_r_Page_036.jp2
49be279664ed6365f7b3a10f7ee3cb7e
acbffa2bd5b0885d12122a496452fcf6a4615d26
102071 F20110114_AAADNZ pydipati_r_Page_011.jp2
d84f2ccef0055b4f96b7eed5620b535a
47fae0e9985d8fbac083953f91f3f8c90bd1f95e
229907 F20110114_AAADPC pydipati_r_Page_074.jp2
498c39668d6b8a6e3a015cd7014b7912
68ce824a69d5a720dc1d6973c8f69469549a7e40
91569 F20110114_AAADOO pydipati_r_Page_037.jp2
3fa08492b11e7f8c60fe691f47b38df3
dddc3c8221b878233bb9ada0153deb72250c4ce2
690639 F20110114_AAADPD pydipati_r_Page_075.jp2
d524a4afa103c58f4d3cf181d7f52c44
b3a7585fa1b6e490cd3fb7b009985a7aff1fa1b1
60812 F20110114_AAADOP pydipati_r_Page_040.jp2
e58c03a4b8f0e74e37244dee4772bb02
7e95f71b972ad77a245182eadc14564376087616
814461 F20110114_AAADPE pydipati_r_Page_077.jp2
c94be8a7384a612c3b39162470a6316a
27ac3779a7310189aea11f1bece2dc7cb553427f
571753 F20110114_AAADOQ pydipati_r_Page_041.jp2
044fa2e682aea3e667cd8ac578598c38
76cb6def18f0adfda2594e7a255198235498ed34
F20110114_AAADPF pydipati_r_Page_079.jp2
37dc693513120196957f10028ee5f3b7
443f26cd0f6a4f71ff3b8fe21775e2cc1b447f7a
77164 F20110114_AAADOR pydipati_r_Page_042.jp2
bc89c9e0405f0e771d4a561f7f4eb0cb
e8173c67c6068f9b49ba4bbb0163cdc9ff1f7be9
100274 F20110114_AAADPG pydipati_r_Page_080.jp2
8280d75b354a4e0d34a3da2f6bdcd495
faad84f1f15f0bf8bfeffa3ec252e1d9e9e3ca14
52064 F20110114_AAADOS pydipati_r_Page_046.jp2
febf5c06c0162055ba2c8edbe53904c1
a9f1e8f22b27aa7cf17fc6d8ee2693f036e2b3a4
730129 F20110114_AAADPH pydipati_r_Page_082.jp2
c9e9bf3121f2cbae0d103cee7606cb3c
125cffd704af9879cf204dc0dfdf981c47e6da50
1051936 F20110114_AAADOT pydipati_r_Page_053.jp2
e1ee194dfa8ea316e2a4015297a28254
f0b75ffd48799c1078cad8a95b780f18b4c64c67
1051917 F20110114_AAADPI pydipati_r_Page_083.jp2
25b9ec99dd5d0fdf335617e4cbb139cd
e3e033dccdfea44446bee99809d6b5d12a0eb064
92635 F20110114_AAADPJ pydipati_r_Page_085.jp2
3ab2a0aab4991752f2119c9614eff172
86f2cb2de2e5fb552d458de98c21e51259e9561d
107123 F20110114_AAADOU pydipati_r_Page_059.jp2
d44448e5a778829ce2e6a37d349ed859
147bb70c22087a4cb350096396a4d7dfb9f097c6
71917 F20110114_AAADPK pydipati_r_Page_088.jp2
e146bf3863cfacb6ccdd76ae1de4f315
841ed069b7c5043666ed2135af83b21a0335e7a7
848662 F20110114_AAADOV pydipati_r_Page_061.jp2
6b52101505335da6caa5b11bcbbd65ee
b8c3a73295207c509078bb1892305a31a11de493
87401 F20110114_AAADPL pydipati_r_Page_089.jp2
95b38bce03f01aa7ecc6c5ee210d2b21
53e4b42050daa72787c3e180c3ebb5a3ef481634
609633 F20110114_AAADOW pydipati_r_Page_064.jp2
63787e5ef9f9ee3032d8b644ad1111a8
3c6a11cdf52034f12d22d6049ff39f6799704fbe
F20110114_AAADQA pydipati_r_Page_005.tif
8a67047196b3327273b664806ac57a77
0b2654264e63b55e5f908e15d2a9b69d6d0a7599
881434 F20110114_AAADPM pydipati_r_Page_091.jp2
4ed4cd309c3d1f491f90773a2736ef97
09365aeefbad58cbf245e951081df86b499e6fc4
500191 F20110114_AAADOX pydipati_r_Page_067.jp2
478321f33c24c6bb1d5e4fd4371ab220
d243cd7ed6cd32b673e26b33c8b09bf7aa5bf731
F20110114_AAADQB pydipati_r_Page_006.tif
f37532df88f379e76b9b53149686e5c9
034d0735c8fdf1920f09d62c369d4020569e6a6a
96147 F20110114_AAADPN pydipati_r_Page_094.jp2
3418d18b7e7c401d7c0a28288e48b540
892bcbcfd5059ae52b91261935a6c21c1b4ce6f2
179733 F20110114_AAADOY pydipati_r_Page_069.jp2
a673d5b560e05355dc0965e41c06c0a6
47b7e164d60860df6a6f509996ac4d308411c9f2
F20110114_AAADQC pydipati_r_Page_007.tif
48c55901ca5aeef0089c23666a152054
de013f59af3c41eb5f2439a3b59c4d87fbcf6e3d
108239 F20110114_AAADPO pydipati_r_Page_096.jp2
a4bdcbd3cc07915918288b195b44a4f9
c5d2c34f174146922b5d5a026653ce6238e4e303
805564 F20110114_AAADOZ pydipati_r_Page_070.jp2
29e2018c1c4ecaee830fd253cdf93dca
2b0e0579f9b08c4cd9757f53f54c1e57d4d3ff11
F20110114_AAADQD pydipati_r_Page_009.tif
f495fd11232ff27b005c24e8a60623f4
5bca1bd8ecb7897efcb1b5857981f76567e9cd6e
64088 F20110114_AAADPP pydipati_r_Page_102.jp2
71718b043b649d15a86bbffbf373910e
1db5bc8994fc614f3dbc013021d4cea0087f51ac
F20110114_AAADQE pydipati_r_Page_010.tif
27a467535b09aa25676885a92940a93e
b992f837e04101a2d95c4973c162b23b15aefe61
63115 F20110114_AAADPQ pydipati_r_Page_103.jp2
43a25bf21c1742c9f61c6c53a6cd5b09
77ed0357e8c545778ffcfcbc3683cdbe73192a8c
F20110114_AAADQF pydipati_r_Page_011.tif
526ae4dc8c9cc7d3d6db556e06001915
d19e06b0f25c07364fbb2076faabdf9f9fc5a383
73371 F20110114_AAADPR pydipati_r_Page_104.jp2
d027524731d9ebedf29e80ad3e3aa934
c28561567195eb42d86d69491c92785f4ef3d935
F20110114_AAADPS pydipati_r_Page_105.jp2
9dcf05815b8750e52529a0ec7a843027
4375cdc8c1fd126e06927d91634c9da9adc5280e
F20110114_AAADQG pydipati_r_Page_012.tif
679134bc26a0534fffebf1e941988513
3d34c4bece4fae1d5776f91bba48d70c452e5e36
614655 F20110114_AAADPT pydipati_r_Page_109.jp2
9acfa88926fdad151008cc66f2e46b61
883877ffe140660773db9ed17ce646d6690c368e
F20110114_AAADQH pydipati_r_Page_013.tif
7455c0e903bcf5134a3717838ade0d50
dd4586463eb7249f3579d3a751ad449771a98f23
306897 F20110114_AAADPU pydipati_r_Page_111.jp2
6bbb75f232bd8f3081ebb584375f564f
1168c8456b679326e9859ed149b04a62ec8bf69e
F20110114_AAADQI pydipati_r_Page_014.tif
fe7fea96d63a773ec1f644132ed73147
662f356d583212b3b568be73f039089e5111de5d
F20110114_AAADQJ pydipati_r_Page_017.tif
aa7e264e2ef9348fa83cba53d64e9a1e
78ae39d0b2b09fde7cd8a70c6908efd4ac65d090
1051983 F20110114_AAADPV pydipati_r_Page_114.jp2
6dbb9e25b2cb5bcfe8019cdc48c33843
c02ce7887b82d96a9aedacd53b648c70c8eb0711
F20110114_AAADQK pydipati_r_Page_020.tif
106ee2d62ea80084f6858533770dcbcc
5f19f02f96e8cce8ade8e0a74dc40ca8288b189d
68806 F20110114_AAADPW pydipati_r_Page_116.jp2
a0532c763e6f13c2c095b3c5865c69a6
7da3a643468c09502054bfe8ac1d9c818a8e1d86
F20110114_AAADQL pydipati_r_Page_021.tif
c8859c68ef6be8dbd3ae6d097dc57d17
ed71492a1bb30b0aa96f9dcc584be66a794b12e8
F20110114_AAADPX pydipati_r_Page_001.tif
04f23ada12488a1200e3adde7b57f63d
9fb5ed0410d4884d164071250db6b490ab991cef
F20110114_AAADRA pydipati_r_Page_050.tif
c5c58da706fa998bf998af1d2ba6d4d0
f05d75b59e69b3e2fad9ccff1f5a6b1586c1983c
F20110114_AAADQM pydipati_r_Page_022.tif
3d6f4a989eded69bb672027facfafadd
fff45dfef31d5ad0fcc4369c7ac8b525806a9e87
F20110114_AAADPY pydipati_r_Page_002.tif
034d6a0ef5d9a8195d9163c1f728be33
00055584b57c3ef57daa094e9c92a8e21fbebd74
F20110114_AAADRB pydipati_r_Page_052.tif
1e056540d541078987202e95323e1d56
1af40df79aa06bcf3dee0c3a9696997db0c399f4
F20110114_AAADQN pydipati_r_Page_023.tif
b5080ae2e39ba4229ffd80884cc9c4b9
635dce9d0d02aed8a34ffcda7069b6b9997f1a7f
F20110114_AAADPZ pydipati_r_Page_003.tif
ba452e877b47e5a717dfedc69ffcb7db
42635564ba943e5a607247a8130d36f6d39f8981
F20110114_AAADRC pydipati_r_Page_054.tif
48345d0f2927d749dd68d3f9443870bd
985edb3ef6548cd31fb5e389a8cc22f4a3a19875
F20110114_AAADQO pydipati_r_Page_024.tif
aac8b5039fe58036740a25299a7ab6e7
40e691b77fd34d002ae326a9fb90056c810ea442
F20110114_AAADRD pydipati_r_Page_057.tif
803af680658c4c9a0c2a5a0bced55999
d81d9be0e2ca6d5ec9f645a96cdcad7225d5c55d
F20110114_AAADQP pydipati_r_Page_027.tif
8c8dd565eea5a87bb9bf27b293b83f7f
7deae87e20a220edb00bc4f8bd8c926a1ea558a8
F20110114_AAADRE pydipati_r_Page_059.tif
d26f63186a6a9b9c6fa75b5d972e74a3
adaed2be4121cfeef735c8f0cd26e58df20b8d2d
F20110114_AAADQQ pydipati_r_Page_028.tif
f90c48d168c18bfd1d7b9ec628def9f8
54e9a7bae24d14781c9887e4dad36e7e59e2827b
F20110114_AAADRF pydipati_r_Page_062.tif
16ec4075e3f4d1cebf62bf3bcb2ce15c
786322df97ca6bf105bacfddd4fd77fd50313014
F20110114_AAADQR pydipati_r_Page_029.tif
6c70fc1d5b5b984e63f5d8b93cc1c16a
36df416f6c7f0bac0d512c525ae71d014309796f
F20110114_AAADRG pydipati_r_Page_063.tif
02daca2e0b24e7516ac2f26898b3fe70
312e2784e0fc308c140ca4f763832deb7d11db35
F20110114_AAADQS pydipati_r_Page_031.tif
ae7841b28207b718bfb4e6e6d51102ba
8646e278f76a94802c5cc4f615b9292067422077
F20110114_AAADRH pydipati_r_Page_065.tif
4c05f054428fe8816e085c105970bf73
92be2b6e5086799a0ef3e359adadf0e9b6c9e776
F20110114_AAADQT pydipati_r_Page_033.tif
120b2ff8522bcc29aab47d623a9287fe
a87aecb919b596de8a523637c6a1e65ab5ba3ace
F20110114_AAADRI pydipati_r_Page_066.tif
fee489331e8a23afc43c6ac4ed2dcc6f
f9fbb0dd98833bcda2ad06d0b2327268f35159d4
F20110114_AAADQU pydipati_r_Page_034.tif
b622484508ebe7d3766a28a9e5999ea3
73cf280f6a4819df780930fd35b41cbd1664fa30
F20110114_AAADRJ pydipati_r_Page_067.tif
8bf93cf71d53863a5d0886b591fc65cc
b18bae55765d07c9f36bb93edb08fa4d859aa8ff
F20110114_AAADQV pydipati_r_Page_035.tif
e392986542170548955e6bd65678bd63
49f5f8f2c16359a37619472a63fb12102be4dff7
F20110114_AAADRK pydipati_r_Page_070.tif
eff3e001208df06caafdd4288a371dff
b5838fb2f8509b15d60b1610011467731347f76a
F20110114_AAADRL pydipati_r_Page_071.tif
02b9f67e536b7e41012981a92c41ee8b
825b7899636ac7c9acbcf83102eb82ab50ba6d5f
F20110114_AAADQW pydipati_r_Page_038.tif
4698b5234d3b164f490723f19521aec9
85cc005477ba3bdd2a32eeefcca27c34aaa57cda
F20110114_AAADSA pydipati_r_Page_104.tif
a288d0ff6b4099f48f13578d3a37ffae
8b1866e27ca9a28ed9ef367ee381df009dc73af1
F20110114_AAADRM pydipati_r_Page_072.tif
ef4678152535f24da8b37c5fc9bc4652
c30b7a5b2bbafe984062d0cc36ed773e5685c405
F20110114_AAADQX pydipati_r_Page_040.tif
2220035cf7f9bda40e694b92a37a7ead
9b4a63d19d20311db5b2c5dcefc33b76388f8358
F20110114_AAADSB pydipati_r_Page_107.tif
df9bb9d29a4aa966f5bc190411e1e998
a3be8cb38481c16a94dcaaa7fb34d80d3eadb263
F20110114_AAADRN pydipati_r_Page_073.tif
5873e72b75a66434cb250ee8d2c8ff55
2f4cb2b2d71f6d21165ca68269b7bf577c79386f
F20110114_AAADQY pydipati_r_Page_041.tif
aa435df95ea25e0822135684f295285f
1810b9a90c58eb52ed920d7262dc1356d353fabe
F20110114_AAADSC pydipati_r_Page_108.tif
7c1fdd80b5c0474f5966cbdb2221df77
ce0800f49ea98e197d9aa9d26c0c7f377ff3eebc
F20110114_AAADRO pydipati_r_Page_074.tif
c42e490342e6ceaaf7d9783b8f3d6b5d
66b06e69caa115f611f987aee89669e19e07dc0d
F20110114_AAADQZ pydipati_r_Page_047.tif
c7a3ec3b7c896a0ad097ae4f439dc071
7a7f35b1930d76a6f5fa63b5b8bf748ca8675ead
F20110114_AAADSD pydipati_r_Page_112.tif
c6ef2d0bf81d66354e00285c3e98e779
4d914ad7db19d5dce3914dc85cbc22d71aafd7d6
F20110114_AAADRP pydipati_r_Page_077.tif
173ca6705d8f1994e781ac2d8e420c2a
d3a3558341c484fe818291fab8bc37e01f34c939
F20110114_AAADSE pydipati_r_Page_114.tif
b3fad6a109b4953d3e28e9dbbcfd4510
114b3e771b7138020ab8c3b75ef032be2f890fa0
F20110114_AAADRQ pydipati_r_Page_089.tif
dd2a6b7609216ebf002822a2f2de2f07
836955401d2727e6867acddee22204d5170002e6
8443 F20110114_AAADSF pydipati_r_Page_001.pro
25e83a940d918b5757abd335f83ae7ca
4967942b6abac8512aa9ca89b958b82703ec75ed
F20110114_AAADRR pydipati_r_Page_090.tif
cec5f3ae31df70af3612a3fa776e08a6
e4329841d9af9b71a4ec367ac062c461a4b58ebb
1200 F20110114_AAADSG pydipati_r_Page_002.pro
4b8b0c700f379b5839470d3269d098b5
ef0eb8531ba76fde430239afdf92766e02312838
F20110114_AAADRS pydipati_r_Page_092.tif
3a4a85aa9764a833c4d637f33da8cedd
ef37f8fcef1c5a98f38f6682dd87efb98629c70a
4227 F20110114_AAADSH pydipati_r_Page_003.pro
ff70ba5e5bbdafd6addfe1a71b2512e1
7066a7d2d57e76328de1a05ae03673284626efaa
F20110114_AAADRT pydipati_r_Page_094.tif
717beb4b62d7da0be9c3ba23f058e44f
73527d338b48bd365ee1971093237c4e7e4dbe3b
19532 F20110114_AAADSI pydipati_r_Page_007.pro
a2ed76f0bc9e990522d5554c3c0c78c5
b9afaffc44866af38f8b18924d5b4e898264d138
F20110114_AAADRU pydipati_r_Page_097.tif
f84dfdca01ecb4b2a6e3c8fa69dd7a82
5e53e0e3f345cda906277a983b6fba353b9c61f8
45320 F20110114_AAADSJ pydipati_r_Page_008.pro
058e37d0c3c465fdac3d442fdace28e1
f2ce031dae0b9fbd229075ed28931ea8c8ddcc29
F20110114_AAADRV pydipati_r_Page_098.tif
43edf5e5b2edbc5a0be733fee3250f3e
57e7adacb6a8c9c84b19b4d4766270fe9539866a
7883 F20110114_AAADSK pydipati_r_Page_009.pro
0f26785951f3ae6e1cd906cd4398e457
584c99e56ce2781f9abd86b75b6dd4f37654e19d
F20110114_AAADRW pydipati_r_Page_099.tif
cd78cc43d9a90364fda2d1f05d5f97bc
e49b41f8654790578cc1790401287ddf79bb1df2
45773 F20110114_AAADSL pydipati_r_Page_011.pro
8100735856c6eb93fdf8d600979662d6
b22bc7b3fcf4f086ab66f507cf3cd6cea5abb87e
48611 F20110114_AAADSM pydipati_r_Page_013.pro
ee034b1144f3f2318b3353176ea1fa26
d14134568d109a24e6c5fd454780e499be1b33cf
F20110114_AAADRX pydipati_r_Page_100.tif
3787fc5703113976355f1e5c005949f8
6da8a78058b16462db58cbf1b107fcd9ac38f5a9
29799 F20110114_AAADTA pydipati_r_Page_043.pro
e899d1e28176af94bd8a384e50137a05
d352edc4d3db9cd6547461fa5ad2518c62532c24
36174 F20110114_AAADSN pydipati_r_Page_015.pro
92466530b81c81f7bb12a2a75422862b
cf66194f764a949c97acf5ae74406650b04c7488
F20110114_AAADRY pydipati_r_Page_102.tif
4e36ff18784d5ffb6027bd7cc75b2444
369e0a3f56d7f30d0b00a8c76966354db1688413
17430 F20110114_AAADTB pydipati_r_Page_045.pro
8a2c8e3f06ae5247845608318840d24b
d4e026fc996b4ff89c0a0485087cb6bfdade740a
41155 F20110114_AAADSO pydipati_r_Page_020.pro
29c6a336f09c150c64860881e05d8ac9
348ae95f4f0254df878e03e693555bf28f0445d0
F20110114_AAADRZ pydipati_r_Page_103.tif
c6787fd64339c8fd39b0017394f92bf2
f099087664806ecfd22391a335feda5f76c7d4e6
6971 F20110114_AAADTC pydipati_r_Page_048.pro
e1eaaadf161b5325727aef52279f88c6
09bc7dece1cd5771292a6383116db03ef5dcf243
47729 F20110114_AAADSP pydipati_r_Page_024.pro
5650782bfc1a8f2e046db8d81fe4b9a1
61af2294b54ec09fae0bd699d3443e992f605648
23644 F20110114_AAADTD pydipati_r_Page_055.pro
47ee8b22d43032b528b50d5b3b491e8f
3aaa9b6211728bd524959f142f9413a9004fe3f9
44428 F20110114_AAADSQ pydipati_r_Page_025.pro
8992895762979f0ea802b7fcbb11b7ec
093823eb1780789af862648dae0e44eaf969ae68
27007 F20110114_AAADTE pydipati_r_Page_056.pro
34edab0f14b452caabeddab8ea5e989e
109a1fabec12e6c9299ecf927d2ef656b205492e
47213 F20110114_AAADSR pydipati_r_Page_026.pro
17e39143d3622d13d773a249131ef100
80e688b1d97120a346cfceddfc9cfafa5cdfbd99
51325 F20110114_AAADTF pydipati_r_Page_058.pro
d423a607724b0fbfb23e36fb6b89aaa8
ff34e2df428b7cc3acc795162663dba4a0f3e54d
48274 F20110114_AAADSS pydipati_r_Page_027.pro
fda737073049068457d8b4694a4b86b9
a4d327c1401877b69c84c742708226416a0728be
49473 F20110114_AAADTG pydipati_r_Page_062.pro
b072ac78a06eb47b65b05de095f0f000
34067174187fec02ab36296e71e602a93d4364ee
48607 F20110114_AAADST pydipati_r_Page_029.pro
d103a9b5b4388979671ebae175a4ffb8
e0962655079054c664165be1f2f05c7f9d8bf248
31947 F20110114_AAADTH pydipati_r_Page_063.pro
9a44b951570d1a88198fa4cc9d35ff60
d26ffd53aaf1368985e71d86014b4d4f079359da
46084 F20110114_AAADSU pydipati_r_Page_032.pro
6fdb32c13105c142fa52245fd081e155
615d216d7785f434665e8a4c082e015a44a6b6fd
9109 F20110114_AAADTI pydipati_r_Page_065.pro
63bff8d711f600bf5aa7c3614f2ef3b2
8356dfccede2da4f3db5218b97009905f9f7dd55
48480 F20110114_AAADSV pydipati_r_Page_035.pro
8cbc05bb880dc3aa2765cacc014079d6
4f3faa9630672d0b700f0ace6c63901c575f5650
43555 F20110114_AAADTJ pydipati_r_Page_068.pro
6c98f732ce384859cff1dddc493a86be
a6156c8973518787781059b4cd701dc6f66ff38c
45550 F20110114_AAADSW pydipati_r_Page_036.pro
e555393e504ae9763f930a74da556ba6
651e0e57d3372e2ef1d5d7c4629e9c5adcca7994
92440 F20110114_AAADTK pydipati_r_Page_069.pro
60a8ba3949369f3272cdd53986425b40
b9833d7064e773d14f707a616634a7d41673f624
36486 F20110114_AAADSX pydipati_r_Page_038.pro
7279589d9648fca066b8454b4762ac60
77cf5a4ddffeaa21dc9619ae1619eb8996f12142
21545 F20110114_AAADTL pydipati_r_Page_073.pro
ed26bb5922eb5dcf98896d6fcbd0f904
accc5dcc909a707f35dc73e310a651eb277ad596
228 F20110114_AAADUA pydipati_r_Page_003.txt
1f405af997e313be493826cba473357f
f328843bd750bd2acf14f52d6f3e95be45376431
30635 F20110114_AAADTM pydipati_r_Page_083.pro
b24e6b71d7a7c866aeb3c640c9f3fc64
79a836cbee02cdd0e8c59f11be1e108501812f87
20449 F20110114_AAADSY pydipati_r_Page_039.pro
879bcc66b11e838a2bb45f29c9c6076e
d9417dc02533712468577aa29c5f9554d943fc94
943 F20110114_AAADUB pydipati_r_Page_004.txt
2d3debf756a68e3121d901963053275e
5d7738268aa87227cd1e6d51128e12dd79e67b6e
37745 F20110114_AAADTN pydipati_r_Page_087.pro
9ee0222fcf511db45cc79482d35c76c5
a69aff2aa448d1e8fe8ed5e9e56d841acd1b4d43
16173 F20110114_AAADSZ pydipati_r_Page_041.pro
b085093d02a95e1ac27da32dadeef006
06880e518be4e9690936004c6f1984f4c08f8ddf
841 F20110114_AAADUC pydipati_r_Page_007.txt
ae2b1de39cc2a34ce1803a6964a8da88
79297cc7be290d113ac06a88673d132f42cbda59
32171 F20110114_AAADTO pydipati_r_Page_088.pro
f3c00142b9b4044faa76d0717596644d
3ddcd2772dd868ffb01e3b2bfb05dd26ec6037f5
1898 F20110114_AAADUD pydipati_r_Page_008.txt
c3332a5bd1ef8b959304987ede8984df
a0c44bca4db3d6c8cb27de02eaa2920f750dddda
39395 F20110114_AAADTP pydipati_r_Page_089.pro
fcb04b78e8ce06ad370ae5a493d3e7ac
326c479dacd61433c098f2f308c9efd96ff705b8
1729 F20110114_AAADUE pydipati_r_Page_010.txt
3aa7758c8ad008b3c3481c07415d4dff
c710c7bb94174c49f8c9ff08691565e38a827afc
21140 F20110114_AAADTQ pydipati_r_Page_090.pro
c07a7bd2f0a4a46593d0fa94dff44591
c410f35ad9ec3c2e1e9d346e2fc249a7ad2061f8
10382 F20110114_AAAEAA pydipati_r_Page_111.QC.jpg
11e859d2a74e3c6e131e999908fb922c
cc8ba7989b9d89168bb0d81319b6753baa92ddb0
F20110114_AAADUF pydipati_r_Page_012.txt
5692701d7e027f957b409f67a5b4d27c
8b9a9ae9e70a1587c3fce33e8fbee5e8ce050484
50007 F20110114_AAADTR pydipati_r_Page_095.pro
17a626b0ac2ae24e9ee1d3a9f11e8dab
4719d1e3a8d094c37dcc4c0107988544bd0205f8
7734 F20110114_AAAEAB pydipati_r_Page_112.QC.jpg
0cb6fac0beeba04536d02ba9175017cc
bc40a2a3a484bff2087a3854c355f8376cc2ac7c
1956 F20110114_AAADUG pydipati_r_Page_013.txt
280690e157a94a1c4a3e20e22e8d8f21
864e752e176abc7d25128490c34eb62d51aa29b5
41836 F20110114_AAADTS pydipati_r_Page_100.pro
c88d7cf09a2573034ba30654dfe8b7d3
3daf9879f59a8ee084253b02b77750f027d6e6a8



PAGE 1

EVALUATION OF CLASSIFIERS FOR AUTOMATIC DISEASE DETECTION IN CITRUS LEAVES USING MACHINE VISION

By

RAJESH PYDIPATI

A THESIS PRESENTED TO THE GRADUATE SCHOOL OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF ENGINEERING

UNIVERSITY OF FLORIDA

2004


Copyright 2004 by Rajesh Pydipati


I dedicate this document to my parents and my teacher Dr. Burks for their support and friendship. Without them, this work would not have been possible.


ACKNOWLEDGMENTS

I would like to express my gratitude to my parents for their love and support in all stages of my life. My sincere thanks go to my professor, friend and guide, Dr. Thomas F. Burks, whose support and faith in me have always been an inspiration. I would also like to thank Dr. Wonsuk Lee and Dr. Michael C. Nechyba, who agreed to be on my committee and gave valuable advice in the completion of this research work. I offer warm greetings to all my friends in our research group (Agricultural Robotics and Mechatronics, ARMg) and the personnel in the Agricultural and Biological Engineering Department for their friendship. Special thanks go to Ms. Melanie Wilder for proofreading this document. I extend my special thanks to the United States Department of Agriculture (USDA) for providing the necessary funds to carry on this research.


TABLE OF CONTENTS

ACKNOWLEDGMENTS
LIST OF TABLES
LIST OF FIGURES
ABSTRACT

CHAPTER

1 INTRODUCTION
    Motivation
    Citrus Diseases
    Image Processing and Computer Vision Techniques

2 OBJECTIVES

3 LITERATURE REVIEW
    Object Shape Matching Methods
    Color Based Techniques
    Reflectance Based Methods
    Texture Based Methods
    Experiments Based On Other Methods

4 FEATURE EXTRACTION
    Texture Analysis
        Co-occurrence Matrices
        Autocorrelation Based Texture Features
        Geometrical Methods
        Voronoi Tessellation Functions
        Random Field Models
        Signal Processing Methods
    Color Technology
        RGB Space
        HSI Space
    Co-occurrence Methodology for Texture Analysis
    SAS Based Statistical Methods to Reduce Redundancy

5 CLASSIFICATION
    Statistical Classifier Using the Squared Mahalanobis Minimum Distance
    Neural Network Based Classifiers
        Considerations on the Implementation of Back Propagation
        Radial Functions
        Radial Basis Function Networks

6 MATERIALS AND METHODS
    SAS Analysis
    Input Data Preparation
    Classification Using Squared Mahalanobis Distance
    Classification Using Neural Network Based on Back Propagation Algorithm
    Classification Using Neural Network Based on Radial Basis Functions

7 RESULTS
    Generalized Square Distance Classifier from SAS
    Statistical Classifier Based on Mahalanobis Minimum Distance Principle
    Neural Network Classifier Based on Feed Forward Back Propagation Algorithm
    Neural Network Classifier Based on Radial Basis Functions

8 SUMMARY AND CONCLUSIONS

APPENDIX

A MATLAB CODE FILES

B MINIMIZATION OF COST FUNCTION

LIST OF REFERENCES

BIOGRAPHICAL SKETCH


LIST OF TABLES

6-1 Classification models
7-1 Percentage classification results of the test data set from SAS
7-2 Percentage classification results for Mahalanobis distance classifier
7-3 Percentage classification results for neural network using back propagation
7-4 Percentage classification results for neural network using RBF
7-5 Classification results per class for neural network with back propagation
7-6 Comparison of various classifiers for model 1B


LIST OF FIGURES

1-1 Image of citrus leaf infected with greasy spot disease
1-2 Image of a citrus leaf infected with melanose
1-3 Image of a citrus leaf infected with scab
1-4 Image of a normal citrus leaf
4-1 RGB color space and the color cube
4-2 HSI color space and the cylinder
4-3 Nearest neighbor diagram
5-1 A basic neuron
5-2 Multilayer feedforward ANN
5-3 An RBF network with one input
5-4 Gaussian radial basis function
5-5 The traditional radial basis function network
6-1 Image acquisition system
6-2 Full spectrum sunlight
6-3 Cool white fluorescent spectrum
6-4 Spectrum comparison
6-5 Visual representation of an image sensor
6-6 Coreco PC-RGB 24 bit color frame grabber
6-7 Image acquisition and classification flow chart
6-8 Edge detected image of a leaf sample
6-9 Network used in feed forward back propagation algorithm
6-10 Snapshot of the GUI for the neural network toolbox data manager
6-11 Neural network based on radial basis function used in leaf classification


Abstract of Thesis Presented to the Graduate School of the University of Florida in Partial Fulfillment of the Requirements for the Degree of Master of Engineering

EVALUATION OF CLASSIFIERS FOR AUTOMATIC DISEASE DETECTION IN CITRUS LEAVES USING MACHINE VISION

By

Rajesh Pydipati

August 2004

Chair: Thomas F. Burks
Cochair: Wonsuk Lee
Major Department: Agricultural and Biological Engineering

The citrus industry is an important part of Florida's agricultural economy. Citrus fruits, including oranges, grapefruit, tangelos, tangerines, limes, and other specialty fruits, are the state's largest agricultural commodities. The economic impact of the citrus industry on the overall economy of the state of Florida is substantial, and the industry is one of the state's leading producers of jobs. These facts underscore the importance of the citrus industry in the state's economy. As such, several important decisions regarding safe practices for the production and processing of citrus fruits have been made in the recent past. One of the main concerns is proper disease control. Every year, large quantities of chemicals are applied as fungicides to control diseases common to citrus crops, evoking serious concern from environmentalists over deteriorating groundwater quality. Farmers, likewise, are concerned about the large costs involved in these activities and the resulting loss of profit. To remedy this situation, various alternatives are being sought to minimize the application of these hazardous chemicals.

Several key technologies incorporating concepts from image processing and artificial intelligence have been developed by researchers to address this situation. The focus of these applications is to identify disease in the early stages of infection, so that selective application of chemicals in the groves becomes possible using other technologies such as robotics and automated vehicles. As part of this thesis research, a detailed study was conducted to investigate the use of computer vision and image processing techniques in distinguishing diseased citrus leaves from normal citrus leaves. Four classes of citrus leaves were used for this study: greasy spot, melanose, normal, and scab. The image data were collected using a JAI MV90 3-CCD color camera with a 28-90 mm zoom lens. Algorithms based on image processing techniques were designed for feature extraction and classification, and three classification approaches were implemented to test classification accuracy: a statistical classifier using the Mahalanobis minimum distance method, a neural network classifier using the back propagation algorithm, and a neural network classifier using radial basis functions. The analyses showed that such methods can be used for citrus leaf classification. The statistical classifier gave good results, averaging above 95% overall classification accuracy, and the neural network classifiers achieved comparable results.
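As a concrete orientation for the first of these approaches, the sketch below shows one minimal way a squared Mahalanobis minimum distance classifier can be written in MATLAB, the language of the thesis code in Appendix A. It is an editorial illustration only, not the thesis implementation: the function name, the variable names, and the use of per-class sample covariances are assumptions made for this example.

function labels = mahalanobisClassify(Xtrain, ytrain, Xtest)
% Minimal sketch of a squared Mahalanobis minimum distance classifier.
% Illustrative only; names and the per-class covariance estimates are
% assumptions, not the code listed in Appendix A.
% Xtrain: n-by-d training feature matrix (one sample per row)
% ytrain: n-by-1 vector of class labels
% Xtest : m-by-d matrix of feature vectors to classify
classes = unique(ytrain);
k = numel(classes);
mu = cell(k, 1);
Sinv = cell(k, 1);
for c = 1:k
    Xc = Xtrain(ytrain == classes(c), :);   % training samples of class c
    mu{c} = mean(Xc, 1);                    % class mean (row vector)
    Sinv{c} = inv(cov(Xc));                 % inverse class covariance
end
m = size(Xtest, 1);
labels = zeros(m, 1);
for i = 1:m
    d2 = zeros(k, 1);
    for c = 1:k
        v = Xtest(i, :) - mu{c};            % deviation from class mean
        d2(c) = v * Sinv{c} * v';           % squared Mahalanobis distance
    end
    [dmin, idx] = min(d2);                  % nearest class wins
    labels(i) = classes(idx);
end

Each test feature vector is assigned to the class whose mean is nearest in the squared Mahalanobis sense, with distances weighted by the inverse of that class's covariance matrix.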


CHAPTER 1 INTRODUCTION Motivation The citrus industry is an important constituent of Floridas overall agricultural economy. Hodges et al. (2001) present statistical highlights emphasizing the impact of citrus industry in Florida. According to them, citrus fruits, including oranges, grapefruit, tangelos, tangerines, limes, and other specialty fruits, are the states largest agricultural commodities. Florida is the worlds leading producing region for grapefruit and is second only to Brazil in orange production. The state produces over 80 percent of the United States supply of citrus. In the 1999-2000 season, a total of 298 million boxes of citrus fruit were produced in Florida from 107 million bearing citrus trees growing on 832,000 acres. The farm-level value of citrus fruit sold to packing houses and processing plants amounted to $1.73 billion. Total economic impacts associated with the citrus industry were estimated at $9.13 billion in industry output, $4.18 billion in value added, and 89,700 jobs. These facts prove that citrus industry is a major boost to the economy of the state. Proper disease control measures must be undertaken so that, crop yield losses may be minimized and excessive application of fungicides may be avoided which is a major contributor for environmental pollution as well as a major source of spending. Many key enabling technologies have been developed so that automatic identification of disease symptoms may be achieved using concepts of image processing and computer vision. The design and implementation of these technologies will greatly aid in selective 1


Citrus Diseases

Citrus trees can exhibit a host of symptoms reflecting various disorders that can adversely influence their health, vigor, and productivity to varying degrees. Identifying disease symptoms is essential, as inappropriate actions may prove costly and detrimental to the yield. The disease symptoms addressed in this thesis are an important aspect of commercial citrus production programs. Proper disease control actions or remedial measures can be undertaken if the symptoms are identified early. The common types of disease symptoms observed in commercial citrus crop production are discussed in the following paragraphs. These descriptions were extracted from a publication titled A Guide to Citrus Disease Identification, released by the Institute of Food and Agricultural Sciences at the University of Florida.

Greasy spot (Mycosphaerella citri). Greasy spot is caused by Mycosphaerella citri. Management of this disease must be considered in groves intended for processing or for the fresh fruit market. Greasy spot is usually more severe on leaves of grapefruit, Pineapples, Hamlins, and tangelos than on Valencias, Temples, Murcotts, and most tangerines and their hybrids. Infection by greasy spot produces a swelling on the lower leaf surface, and a yellow mottle appears at the corresponding point on the upper leaf surface. The swollen tissue starts to collapse and turn brown, and eventually the brown or black symptoms become clearly visible. Airborne ascospores produced in decomposing leaf litter on the grove floor are the primary source of inoculum for greasy spot. These spores germinate on the underside of the leaves, and the fungus grows for a time on the surface before penetrating through the stomata (natural openings of the lower leaf surface).


Internal growth is slow, and symptoms do not appear for several months. Warm humid nights and high rainfall, typical of Florida summers, favor infection and disease development. Major ascospore release usually occurs from April to July, with favorable conditions for infection occurring from June through September. Leaves are susceptible once they are fully expanded and remain susceptible throughout their life.

Figure 1-1. Image of citrus leaf infected with greasy spot disease

Melanose (Diaporthe citri). Control of melanose, caused by Diaporthe citri, is often necessary in mature groves where fruit is intended for the fresh market, particularly if recently killed twigs and wood are present as a result of freezes or other causes. Grapefruit is very susceptible to melanose, but the disease may damage all other citrus. On foliage, melanose first appears on the young leaves as minute, dark, circular depressions with yellowish margins. Later they become raised and rough, turn brown, and the yellow margins disappear. Leaves infected when very young may become distorted. Infested leaves do not serve as an inoculum source. Young green twigs can also be infected.


Star melanose. Star melanose occurs when copper is applied late during hot, dry weather and is due to copper damage to leaves. It has no relationship to melanose but may resemble the symptoms of that disease. Copper causes the developing tissues to become more corky and darker than normal, and the shape of the lesion often resembles a star.

Figure 1-2. Image of a citrus leaf infected with melanose

Citrus scab (Elsinoe fawcettii). Citrus scab, caused by Elsinoe fawcettii, affects grapefruit, Temples, Murcotts, tangelos, and some other tangerine hybrids. Small, pale orange, somewhat circular, elevated spots on leaves and fruit are the first evidence of the disease. As the leaves develop, the infection becomes well defined, with wart-like structures or protuberances on one side of the leaf, often with a conical depression on the opposite side. The crests of the wart-like growths usually become covered with a corky pale tissue and become somewhat flattened as the fruit matures, especially on grapefruit. The pustules may run together, covering large areas of the fruit or leaves. Badly infected leaves become very crinkled, distorted, and stunted. Fruit severely attacked when very small often becomes misshapen. Scab can be particularly severe on Temples and lemons, and is often troublesome on Murcotts, Minneola tangelos, and grapefruit.


Figure 1-3. Image of a citrus leaf infected with scab

Figure 1-4. Image of a normal citrus leaf

In this research, the focus will be on these diseases, since they are the most common among citrus trees.

Image Processing and Computer Vision Techniques

Computer vision techniques are used for agricultural applications such as detection of weeds in a field and sorting of fruit on a conveyor belt in the fruit processing industry. The underlying approach for all of these techniques is the same.


First, digital images are acquired from the environment around the sensor using a digital camera. Then image processing techniques are applied to extract useful features that are necessary for further analysis of these images. After that, analytical discriminant techniques, such as statistical, Bayesian, or neural network classifiers, are used to classify the images according to the specific problem at hand. This constitutes the overall framework for any vision-related algorithm. Figure 1-5 depicts the basic procedure that any vision-based detection algorithm would use. The first phase is image acquisition. In this step, images of the various leaves to be classified are taken using an analog CCD camera interfaced with a computer containing a frame grabber board. In the second phase, image preprocessing is completed. The images obtained from the first phase are usually not suited for classification because of factors such as noise and lighting variations, so they are preprocessed with filters to remove unwanted features. In the third phase, edge detection is completed to discover the actual boundary of the leaf in the image. Later on, feature extraction is completed based on specific properties among the pixels in the image or their texture. After this step, statistical analysis tasks are completed to choose the best features that represent the given image, thus minimizing feature redundancy. Finally, classification is completed using various detection algorithms.


Figure 1-5. Classification procedure of a general vision-based detection algorithm (stages: image acquisition, image preprocessing, edge detection, feature extraction, statistical analysis, classification)

The figure above outlines the steps involved in a general vision-based classification process; a minimal sketch of this flow is given below. In the following chapters, these steps are discussed in detail.
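To make the flow concrete, the following minimal sketch wires the six stages together in Python. The stage functions are hypothetical placeholders standing in for the concrete algorithms developed later in this thesis, not an actual implementation.

```python
import numpy as np

# Hypothetical placeholder stages for a generic vision-based classifier.
# Each function stands in for a concrete algorithm (e.g., filtering for
# preprocessing, CCM texture statistics for feature extraction).

def preprocess(image):
    return image                      # e.g., denoise, normalize lighting

def detect_edges(image):
    return image                      # e.g., locate the leaf boundary

def extract_features(image):
    return np.zeros(39)               # e.g., 39 CCM texture statistics

def select_features(features, keep):
    return features[keep]             # statistical analysis / redundancy reduction

def classify(features, classifier):
    # classifier is any trained object exposing a predict-style interface
    return classifier.predict([features])[0]

def run_pipeline(image, keep, classifier):
    image = detect_edges(preprocess(image))   # acquisition happens upstream
    return classify(select_features(extract_features(image), keep), classifier)
```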


CHAPTER 2
OBJECTIVES

The main objectives of this research are outlined as follows:

1) To collect image data sets of various common citrus diseases.
2) To evaluate the color co-occurrence method for disease detection in citrus trees.
3) To develop strategies and algorithms for classifying citrus leaves based on the features obtained from the color co-occurrence method.
4) To compare the classification accuracies of the algorithms.

The image data of the leaves selected for this study would be collected, and algorithms based on image processing techniques would be designed for feature extraction and classification. The data sets, in the form of digitized RGB color photographs, would be fed manually for feature extraction and for training the SAS statistical classifier. After training the SAS classifier, the test data sets would be used to analyze classification performance. The whole procedure of analysis would be replicated for three alternate classification approaches: a statistical classifier using the Mahalanobis minimum distance method, a neural network classifier using the back-propagation algorithm, and a neural network classifier using radial basis functions. The results obtained from the three approaches would be compared and the best approach for the problem at hand determined.


CHAPTER 3
LITERATURE REVIEW

In the past decade, agricultural applications using image processing and pattern recognition techniques have been attempted by various researchers. Object shape matching functions, color-based classifiers, reflectance-based classifiers, and texture-based classifiers are some of the common methods that have been tried. The following sections discuss some past work using these methods.

Object Shape Matching Methods

Tian et al. (2000) developed a machine vision system to detect and locate tomato seedlings and weed plants in a commercial agricultural environment. Images acquired in agricultural tomato fields under natural illumination were studied extensively, and an environmentally adaptive segmentation algorithm, which could adapt to changes in natural illumination, was developed. The method used four semantic shape features to distinguish tomato cotyledons from weed leaves, and a whole-plant syntactic algorithm was used to predict the stem location of the whole plant. Using these techniques, accuracies of 65% for detection of tomato plants were reported. Guyer et al. (1993) implemented an algorithm to extract plant/leaf shape features using information gathered from critical points along object borders, such as the locations of angles along the border and/or local maxima and minima from the plant leaf centroid. A library of 17 low-level features was converted into 13 higher-level quantitative shape features. This demonstrated the ability to combine and structure basic explicit data into more subjective shape knowledge.


This knowledge-based pattern recognition system achieved a classification accuracy of 69%. Woebbecke et al. (1995a) developed a vision system using shape features for identifying young weeds. Shape feature analyses were performed on binary images originally obtained from color images of 10 common weeds, along with corn and soybeans. The features included roundness, aspect, perimeter, thickness, elongatedness, and several invariant central moments (ICM). The shape features that best distinguished these plants were aspect and the first invariant central moment, which classified 60 to 90% of dicots from monocots. Various researchers have made additional efforts in the past. Franz et al. (1991) identified plants based on individual leaf shape described by the curvature of the leaf boundary at two growth stages. Ninoyama and Shigemori (1991) analyzed binary images of whole soybean plants viewed from the side. Width, height, projected area, degree of occupancy, and the x and y frequency distributions about the main axis and its centroid were used to describe plant shape as a possible tool for classification. Guyer et al. (1986) identified young corn plants based on spatial features, including the number of leaves and the shape of individual leaves. Thompson et al. (1991) suggested that plant shape features might be necessary to distinguish between monocots and dicots for intermittent or spot spraying. The main disadvantages of methods based on shape matching were occlusion, inadequate description of leaves with variable serrations, and aggregate boundaries of multiple leaves.

Color Based Techniques

Kataoka et al. (2001) developed an automatic detection system for detecting apples ready for harvest, for the application of robotic fruit harvesting. In this system, the color of apples was the main discriminating feature.


The color of apples that were suitable for harvest and of those picked earlier than harvest time were measured and compared using a spectrophotometer, and the two showed differences in color. The harvest-season apple color and the color of apples picked before harvest were well separated in the Munsell color system, the L*a*b* color space, and the XYZ color system. The threshold for detecting harvest-season apples was produced based on the evaluation of these color systems. Slaughter (1987) investigated the use of chrominance and intensity information from natural outdoor scenes as a means of guidance for a robotic manipulator in the harvest of orange fruit. A classification model was developed that discriminated oranges from the natural background of an orange grove using only the color information in a digital color image. A Bayesian form of discriminant analysis correctly classified over 75% of the fruit pixels in the natural scenes that were analyzed. Woebbecke et al. (1995b) developed a vision system using color indices for weed identification under various soil, residue, and lighting conditions. Color slide images of weeds among various soils and residues were digitized and analyzed for red, green, and blue (RGB) color content. It was observed that the red, green, and blue chromatic coordinates of plants were very different from those of background soils and residue. For distinguishing living plant material from a non-plant background, several indices of chromatic coordinates were tried and found successful in identifying weeds. A weed detection system for Kansas wheat was developed using color filters by Zhang and Chaisattapagon (1995). Gray-scale ratios were used to discriminate between weed species common to wheat fields.


Reflectance Based Methods

Hatfield and Pinter (1990) discuss the various techniques in use today in remote sensing for crop protection. Research and technological advances in the field of remote sensing have greatly enhanced the ability to detect and quantify physical and biological stresses that affect the productivity of agricultural crops. Reflected light in specific visible, near- and middle-infrared regions of the electromagnetic spectrum has proved useful in the detection of nutrient deficiencies, disease, and weed and insect infestations. A method to assess damage due to citrus blight disease on citrus plants, using reflectance spectra of the entire tree, was developed by Edwards et al. (1986). Since the spectral quality of light reflected from affected trees is modified as the disease progresses, spectra from trees in different health states were analyzed using a least squares technique to determine whether the health class could be assessed by a computer. The spectrum of a given tree was compared with a set of library spectra representing trees of different health states. The computed solutions were in close agreement with the field observations. Franz et al. (1991) investigated the use of local properties of leaves as an aid for identifying weed seedlings in digital images. Statistical measures were calculated for the reflectance of in situ leaf surfaces in the near-infrared, red, and blue wavebands. Reflectance was quantified by image intensity within a leaf periphery. Mean, variance, and skewness were selected as significant statistical measures. Intensity statistics depended on NIR reflectance, the spatial density of veins, and the visibility of specular reflections. Experiments and analyses indicated that in order to discriminate among individual leaves, the training set must account for leaf orientation with respect to the illumination source.


Texture Based Methods

In many machine vision and image processing algorithms, simplifying assumptions are made about the uniformity of intensities in local image regions. However, images of real objects often do not exhibit regions of uniform intensities. For example, the image of a wooden surface is not uniform but contains variations of intensities which form certain repeated patterns called visual texture. The patterns can be the result of physical surface properties, such as roughness or oriented strands, which often have a tactile quality, or they could be the result of reflectance differences, such as the color on a surface. Coggins (1982) compiled a catalogue of texture definitions in the computer vision literature. Some examples are listed as follows. 1) We may regard texture as what constitutes a macroscopic region. Its structure is simply attributed to the repetitive patterns in which elements or primitives are arranged according to a placement rule. 2) A region in an image has a constant texture if a set of local statistics or other local properties of the picture function are constant, slowly varying, or approximately periodic. Image texture, defined as a function of the spatial variation in pixel intensities (gray values), is useful in a variety of applications and has been a subject of intense study by many researchers. One immediate application of image texture is the recognition of image regions using texture properties. Texture analysis has been extensively used to classify remotely sensed images. Land use classification, in which homogeneous regions with different types of terrain (such as wheat, bodies of water, urban regions, etc.) need to be identified, is an important application. Haralick et al. (1973) used gray-level co-occurrence features to analyze remotely sensed images.


They computed gray-level co-occurrence matrices for a pixel offset equal to one and with four directions (0°, 45°, 90°, and 135°). For a seven-class classification problem, they obtained approximately 80% classification accuracy using texture features. Tang et al. (1999) developed a texture-based weed classification method using Gabor wavelets and neural networks for real-time selective herbicide application. The method comprised a low-level Gabor wavelets-based feature extraction algorithm and a high-level neural network-based pattern recognition algorithm. The model was specifically developed to classify images into broadleaf and grass categories for real-time herbicide application. Their analyses showed that the method is capable of performing texture-based broadleaf and grass classification accurately, with 100 percent classification accuracy. In this model, background features like soil were eliminated in order to extract spatial frequency features from the weeds. The color index used for image segmentation in their research was called modified excess green (ExG), defined by ExG = 2*G - R - B, together with constraints on the green component G.
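A minimal sketch of ExG-based plant/background segmentation follows. The zeroing of negative responses and the threshold value are illustrative assumptions, not the exact constraints used by Tang et al. (1999).

```python
import numpy as np

def excess_green_mask(rgb, threshold=20):
    """Segment plant material with the excess-green index ExG = 2G - R - B.

    Assumes an 8-bit RGB image of shape (H, W, 3). Suppressing negative
    responses and the threshold value are illustrative conventions only.
    """
    rgb = rgb.astype(np.int32)
    exg = 2 * rgb[..., 1] - rgb[..., 0] - rgb[..., 2]
    exg = np.clip(exg, 0, None)      # zero pixels where green does not dominate
    return exg > threshold           # boolean plant/background mask
```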

Burks (2000a) developed a method for classification of weed species using color texture features and discriminant analysis. The image analysis technique used for this method was the color co-occurrence (CCM) method. The method had the ability to discriminate between multiple canopy species and was insensitive to leaf scale and orientation. The use of color features in the visible light spectrum provided additional image characteristic features over a traditional gray-scale representation. The CCM method involved three major mathematical processes: 1) transformation of an RGB color representation of an image to an equivalent HSI color representation; 2) generation of color co-occurrence matrices from the HSI pixels; and 3) generation of texture features from the CCM matrices. In this study, CCM texture feature data models for six classes of ground cover (giant foxtail, crabgrass, velvetleaf, lambsquarter, ivyleaf morning glory, and soil) were developed, and stepwise discriminant analysis techniques were then utilized to identify the combinations of CCM texture feature variables that gave the highest classification accuracy with the least number of texture variables. A discriminant classifier was then trained to identify weeds using the models generated. Classification tests were conducted with each model to determine their potential for classifying weed species. Overall classification accuracies above 93% were achieved when using hue and saturation features alone.

Experiments Based On Other Methods

Ning et al. (2001) demonstrated a computer vision system for objective inspection of bean quality. They used a combination of features based on shape, as well as color, in making their decisions on bean quality. Instead of using a CCD camera, as done by other researchers, they used a high-resolution scanner to acquire the images, testing the implementation of their method with cheaper alternatives.


The procedure involved the following steps: determine the bean image threshold intensity, separate individual kernels via a disconnection algorithm, extract features of interest, and make decisions based on their range selection method. Using this method, they reported accuracies ranging from 53-100%, suggesting that the computation algorithm worked well with some features, such as foreign matter in the bean, small beans, off color, and badly off color, but not as well with others, such as cracks and broken beans. Pearson and Young (2001) developed a system for automated sorting of almonds with embedded shell using laser transmittance imaging. They constructed a prototype device to automatically detect and separate kernels with embedded shell fragments. The device images laser light transmitted through the kernel. Shell fragments block nearly all the transmitted light and appear as a very dark spot in the image. A computer vision algorithm was developed to detect these dark spots and activate an air valve to divert kernels with embedded shell from the process stream. A 3x3 minimum filter was used to eliminate the effect of light diffracting around the edges of the almond kernel and causing camera saturation. They selected two different types of features for their classification algorithm. The first was the number of pixels that fell within a valley in the image intensity map. The second was a set of two-dimensional histogram bin values based on image intensity and gradient. Using a one-pass sorting operation, they reported that the system was able to correctly identify 83% of the kernels with embedded shell fragments.


Yang et al. (2001) developed an infrared imaging and wavelet-based segmentation method for apple defect detection. They proposed that the reflectance spectrum of apple surfaces in the near-infrared (NIR) region provides effective information for a machine vision inspection system. The differences in light reflectance of the apple surfaces caused the corresponding pixels of bruised areas and good areas to appear with different intensities in a NIR apple image. Segmenting the defective areas from the non-defective apple images was a critical step for apple defect detection. In their work, they used a 2-D multiresolution wavelet decomposition to generate wavelet transform vectors for each pixel in the NIR apple images. These vectors were combined and weighted by dynamic modification factors to produce the pixel vectors. A cluster analysis method was then used to classify the pixels according to the pixel vectors. Pixels with similar properties were labeled as one class, to separate the defective areas from the good areas of apples in the NIR image. They reported 100% accuracy in detecting good apples and 94.6% accuracy in detecting defective apples. Chao et al. (2000) assembled a dual-camera system for separating wholesome and unwholesome chicken carcasses. For their machine vision inspection system, object-oriented programming paradigms were utilized to integrate the hardware components. The image was reduced to a size of 256x240 pixels before the carcass was segmented from the background using simple thresholding. A total of 15 horizontal layers were generated from each segmented image. For each layer, a centroid was calculated from the binarized image. Based on these centroids, each layer was divided into several square blocks, for a total of 107 blocks. The averaged intensity of each block was used as the input data to neural network models for classification. Using their vision system, they achieved classification accuracies of 94% for wholesome chicken and 87% for unwholesome chicken.


Kim et al. (2001) designed and developed a laboratory-based hyperspectral imaging system. The system was capable of capturing reflectance and fluorescence images in the 430 to 930 nm region with 1 mm spatial resolution. They tested their system on classifying healthy apples as well as apples with fungal infections, based on their hyperspectral images. The research showed promising results, and it is envisioned that, because of its acquisition and real-time processing speeds, multispectral imaging will become an integral part of food production industries in the near future for automated on-line applications. Clark et al. (2003) used transmission NIR spectroscopy to determine whether sample orientation and degree of browning were significant factors requiring consideration in the design of online detection systems. Their results suggested that single NIR transmission measurements could lead to a worthwhile reduction in the incidence of internal browning disorder in commercial lines containing infected fruit. Burks et al. (2000b) completed an evaluation of neural network classifiers for weed species discrimination. Color co-occurrence texture analysis techniques were used to evaluate three different neural network classifiers for potential use in real-time weed control systems. Texture data from six different classes of weed species were used: foxtail, crabgrass, common lambsquarter, velvetleaf, morning glory, and clear soil surface. The three neural network classifiers evaluated were a back-propagation based classifier, a counter-propagation based classifier, and a radial basis function classifier. It was found that the back-propagation neural network classifier provided the best classification performance and was capable of classification accuracies of 97% with low computational requirements.


Lee and Slaughter (1998) developed a real-time robotic weed control system for tomatoes, which used a hardware-based neural network. A real-time neural network board named ZISC (Zero Instruction Set Computer, IBM Inc.) was used to recognize tomato plants and weeds. With the hardware-based neural network, 38.9% of tomato cotyledons, 37.5% of tomato true leaves, and 85.7% of weeds were correctly classified. Moshou et al. (2002) developed a weed species spectral detector based on neural networks. A new neural network architecture for classification purposes was proposed. The Self-Organizing Map (SOM) neural network was used in a supervised way for a classification task. The neurons of the SOM became associated with local linear mappings (LLM). Error information obtained during training was used in a novel learning algorithm to train the classifier. The method achieved fast convergence and good generalization. The classification method was then applied in a precision farming application: the classification of crops and different kinds of weeds using spectral reflectance measurements. Yang et al. (1998) developed artificial neural networks (ANNs) to distinguish between images of corn plants and seven weed species commonly found in experimental fields. The performance of the neural networks was compared, and the success rate for the identification of corn was observed to be as high as 80 to 100%, while the success rate for weed classification was as high as 60 to 80%.


Nakano (1998) studied the application of neural networks to the color grading of apples. Classification accuracies of over 75% were reported for about 40 defective apples using their neural network. As described in the research summarized above, automation in agriculture is undergoing unique technological innovations that will significantly improve farm productivity as well as food quality. Machine vision technology is an inherent part of all of these methods and is thus an important area of study. The application of machine vision is both an art and a scientific pursuit, requiring the experience and knowledge of the researcher to devise an effective strategy for a specific problem. In the next chapter, the theory employed in this research is discussed.


CHAPTER 4
FEATURE EXTRACTION

In this chapter, the theory involved in feature extraction, the first step in the classification process, is discussed. The method followed for extracting the feature set is called the color co-occurrence method, or CCM method for short. It is a method in which both the color and the texture of an image are taken into account to arrive at unique features that represent the image. It is well known in the image processing research community that classification accuracy is highly dependent on feature set selection; in other words, the classification is only as good as the features selected to represent the images. Therefore, careful consideration must be given to this step. Various researchers have used several methods of feature representation, such as those based on shape, color, texture, wavelet analysis, and reflectance; these were discussed in the literature review. Previous research by Burks (2000a) showed that CCM features can be used effectively in the classification of weed species. The present work is an extension of that research, providing a feasibility analysis of the technology for citrus disease classification. Before further describing the theory of the CCM method, a description of texture analysis and color technology is given.

Texture Analysis

Texture is one of the features that segments images into regions of interest and classifies those regions. It gives information about the spatial arrangement of the colors or intensities in an image. Part of the problem in texture analysis is defining exactly what texture is. There are two main approaches, the structural and the statistical.


Structural approach: texture is a set of primitive texels in some regular or repeated relationship.

Statistical approach: texture is a quantitative measure of the arrangement of intensities in a region.

Jain and Tuceryan (1998) gave a taxonomy of texture models. Identifying the perceived qualities of texture in an image is an important first step towards building mathematical models for texture. The intensity variations in an image that characterize texture are generally due to some underlying physical variation in the scene (such as pebbles on a beach or waves in water). Modeling this physical variation is very difficult, so texture is usually characterized by the two-dimensional variations in the intensities present in the image. This explains why no precise, general definition of texture exists in the computer vision literature. In spite of this, there are a number of intuitive properties of texture that are generally assumed to be true. Texture is a property of areas; the texture of a point is undefined. Texture is therefore a contextual property, and its definition must involve gray values in a spatial neighborhood. The size of this neighborhood depends upon the texture type, or the size of the primitives defining the texture. Texture involves the spatial distribution of gray levels; thus, two-dimensional histograms or co-occurrence matrices are reasonable texture analysis tools. A region is perceived to have texture when the number of primitive objects in the region is large. If only a few primitive objects are present, then a group of countable objects is perceived instead of a textured image. In other words, a texture is perceived when significant individual forms are not present.


Image texture has a number of perceived qualities that play an important role in describing it. Laws (1980) identified the following properties as playing an important role in describing texture: uniformity, density, coarseness, roughness, regularity, linearity, directionality, direction, frequency, and phase. Some of these perceived qualities are not independent. For example, frequency is not independent of density, and the property of direction applies only to directional textures. The fact that the perception of texture has so many different dimensions is an important reason why there is no single method of texture representation that is adequate for a variety of textures. There are various methods for texture analysis; a discussion of these is given next.

Statistical methods. One of the defining qualities of texture is the spatial distribution of gray values. The use of statistical features is therefore one of the early methods proposed in the machine vision literature. Statistical (stochastic) patterns are random and irregular and usually occur naturally.

Co-occurrence Matrices

Statistical methods use second-order statistics to model the relationships between pixels within the region by constructing Spatial Gray Level Dependency (SGLD) matrices. A SGLD matrix is the joint probability of occurrence of gray levels i and j for two pixels with a defined spatial relationship in an image. The spatial relationship is defined in terms of a distance d and an angle θ. If the texture is coarse and the distance d is small compared to the size of the texture elements, the pairs of points at distance d should have similar gray levels. Conversely, for a fine texture, if the distance d is comparable to the texture size, then the gray levels of points separated by distance d should often be quite different, so that the values in the SGLD matrix are spread out relatively uniformly.


Hence, a good way to analyze texture coarseness would be, for various values of the distance d, some measure of the scatter of the SGLD matrix around the main diagonal. Similarly, if the texture has some direction, i.e., is coarser in one direction than another, then the degree of spread of the values about the main diagonal in the SGLD matrix should vary with the angle θ. Texture directionality can thus be analyzed by comparing spread measures of SGLD matrices constructed at various orientations. From SGLD matrices, a variety of features may be extracted. The original investigation into SGLD features was pioneered by Haralick et al. (1973). From each matrix, 14 statistical measures were extracted, including: angular second moment, contrast, correlation, variance, inverse difference moment, sum average, sum variance, sum entropy, difference variance, difference entropy, information measure of correlation I, information measure of correlation II, and maximal correlation coefficient. The measurements average the feature values in all four directions.

Autocorrelation Based Texture Features

The textural character of an image depends on the spatial size of the texture primitives. Large primitives give rise to coarse texture (e.g., a rock surface) and small primitives give fine texture (e.g., a silk surface). An autocorrelation function can be evaluated to measure this coarseness. This function evaluates the linear spatial relationships between primitives. If the primitives are large, the function decreases slowly with increasing distance, whereas it decreases rapidly if the texture consists of small primitives. However, if the primitives are periodic, then the autocorrelation increases and decreases periodically with distance.


Geometrical Methods

The class of texture analysis methods that falls under the heading of geometrical methods is characterized by its definition of texture as being composed of texture elements, or primitives. The method of analysis usually depends upon the geometric properties of these texture elements. Once the texture elements are identified in the image, there are two major approaches to analyzing the texture. One computes statistical properties from the extracted texture elements and utilizes these as texture features. The other tries to extract the placement rule that describes the texture. The latter approach may involve geometric or syntactic methods of analyzing texture.

Voronoi Tessellation Functions

Tuceryan and Jain (1998) proposed the extraction of texture tokens by using the properties of the Voronoi tessellation of the given image. Voronoi tessellation was proposed because of its desirable properties in defining local spatial neighborhoods and because the local spatial distributions of tokens are reflected in the shapes of the Voronoi polygons. First, texture tokens are extracted, and then the tessellation is constructed. Tokens can be as simple as points of high gradient in the image or as complex as structures such as line segments or closed boundaries.

Structural methods. The structural method is usually associated with man-made regular arrangements of lines, circles, squares, etc. The structural models of texture assume that textures are composed of texture primitives. The texture is produced by the placement of these primitives according to certain placement rules. This class of algorithms, in general, is limited in power unless one is dealing with very regular textures.


Structural texture analysis consists of two major steps: (a) extraction of the texture elements, and (b) inference of the placement rule.

Model Based Methods

Model-based texture analysis methods are based on the construction of an image model that can be used not only to describe texture, but also to synthesize it. The model parameters capture the essential perceived qualities of texture.

Random Field Models

Markov random fields (MRFs) have been popular for modeling images. They are able to capture the local (spatial) contextual information in an image. These models assume that the intensity at each pixel in the image depends on the intensities of only the neighboring pixels. MRF models have been applied to various image processing applications such as texture synthesis, texture classification, image segmentation, image restoration, and image compression.

Signal Processing Methods

Psychophysical research has given evidence that the human brain performs a frequency analysis of the image. Texture is especially suited for this type of analysis because of its properties. Most techniques compute certain features from filtered images, which are then used in either classification or segmentation tasks. Techniques in use include spatial domain filters, Fourier domain filters, and Gabor and wavelet models.

Color Technology

According to Zuech (1988), human perception of color involves differentiation based on three independent properties: intensity, hue, and saturation.


Hue corresponds to color, intensity is the lightness value, and saturation is the distance from lightness per hue.

Color Spaces. Color is a perceptual phenomenon related to the human response to different wavelengths in the visible electromagnetic spectrum. Generally, a color is described as a weighted combination of three primary colors that form a natural basis. There are many color spaces currently in use; the three used most often are RGB, normalized RGB, and HSI.

RGB Space

Red-green-blue (RGB) space is one of the most common color spaces, representing each color as an axis. Most color display systems use separate red, green, and blue light sources so that other colors can be represented by a weighted combination of these three components. The set of red, green, and blue can generate the greatest number of colors, even though any other three colors can be combined in varying proportions to generate many different colors. All colors that can be displayed are specified by their red, green, and blue components. One color is represented as one point in a three-dimensional space whose axes are the red, green, and blue colors. As a result, a cube can contain all possible colors. The RGB space and its corresponding color cube can be seen in Figure 4.1. The origin represents black, and the opposite vertex of the cube represents white.


Figure 4-1. RGB color space and the color cube

Any color can be represented as a point in the color cube by (R, G, B). For example, red is (255, 0, 0), green is (0, 255, 0), and blue is (0, 0, 255). The axes represent red, green, and blue with varying brightness. The diagonal from black to white corresponds to different levels of gray; the magnitudes of the three components on this diagonal are equal. The RGB space is discrete in computer applications. Generally, each dimension has 256 levels, numbered 0 to 255. In total, 256^3 (16,777,216) different colors can be represented by (R, G, B), where R, G, and B are the magnitudes of the three elements, respectively. For example, black is shown as (0, 0, 0), while white is shown as (255, 255, 255).


HSI Space

Hue-saturation-intensity (HSI) space is also a popular color space because it is based on human color perception. Electromagnetic radiation in the range of wavelengths of about 400 to 700 nanometers is called visible light because the human visual system is sensitive to this range. Hue is generally related to the wavelength of a light, and intensity shows the amplitude of a light. Lastly, saturation is a component that measures the colorfulness in HSI space. Color spaces can be transformed from one to another easily. A transformation from RGB to HSI can be formulated as below:

Intensity:

$$I = \frac{R + G + B}{3} \qquad (4.1)$$

Saturation:

$$S = 1 - \frac{3 \min(R, G, B)}{R + G + B} \qquad (4.2)$$

Hue:

$$H = \begin{cases} \cos^{-1}\!\left(\dfrac{(R-G)+(R-B)}{2\sqrt{(R-G)^2+(R-B)(G-B)}}\right), & B \le G \\[2ex] 2\pi - \cos^{-1}\!\left(\dfrac{(R-G)+(R-B)}{2\sqrt{(R-G)^2+(R-B)(G-B)}}\right), & \text{otherwise} \end{cases} \qquad (4.3)$$

HSI space can be considered as a cylinder, as represented in Figure 4.2, where the coordinates r, θ, and z are saturation, hue, and intensity, respectively.
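A small sketch of this transformation for an 8-bit RGB image, following equations 4.1 through 4.3; the hue is returned in radians, and the epsilon guard for gray and black pixels is an implementation assumption:

```python
import numpy as np

def rgb_to_hsi(rgb):
    """Convert an RGB image (uint8, shape (H, W, 3)) to HSI per eqs. 4.1-4.3.

    Hue is returned in radians in [0, 2*pi). A small epsilon guards the
    divisions; saturation at pure black is ill-defined by eq. 4.2.
    """
    eps = 1e-10
    r, g, b = [rgb[..., i].astype(np.float64) / 255.0 for i in range(3)]
    intensity = (r + g + b) / 3.0                                  # eq. 4.1
    saturation = 1.0 - 3.0 * np.minimum(np.minimum(r, g), b) / (r + g + b + eps)  # eq. 4.2
    num = (r - g) + (r - b)
    den = 2.0 * np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + eps
    theta = np.arccos(np.clip(num / den, -1.0, 1.0))
    hue = np.where(b <= g, theta, 2.0 * np.pi - theta)             # eq. 4.3
    return hue, saturation, intensity
```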


Figure 4-2. HSI color space and the cylinder

The coordinates r, θ, and z represent saturation, hue, and intensity, respectively. A hue plane is formed by all colors that have the same angle θ.

Co-occurrence Methodology for Texture Analysis

The image analysis technique selected for this study was the CCM method. The use of color image features in the visible light spectrum provides additional image characteristic features over the traditional gray-scale representation. The CCM methodology consists of three major mathematical processes.


First, the RGB images of leaves are converted into an HSI color space representation. Once this process is completed, each pixel map is used to generate a color co-occurrence matrix, resulting in three CCM matrices, one for each of the H, S, and I pixel maps. The color co-occurrence texture analysis method was developed through the use of spatial gray-level dependence matrices, or SGDMs for short. The gray-level co-occurrence methodology is a statistical way to describe shape by statistically sampling the way certain gray levels occur in relation to other gray levels. As explained by Shearer and Holmes (1990), these matrices measure the probability that a pixel at one particular gray level will occur at a distinct distance and orientation from any pixel, given that the pixel has a second particular gray level. For a position operator p, we can define a matrix P_ij that counts the number of times a pixel with gray level i occurs at position p from a pixel with gray level j. For example, if we have four distinct gray levels 0, 1, 2, and 3, then one possible image matrix and its SGDM matrix P(i, j, 1, 0) are

$$I(x, y) = \begin{bmatrix} 3 & 1 & 2 & 1 \\ 3 & 0 & 2 & 3 \\ 2 & 0 & 1 & 2 \\ 1 & 3 & 0 & 0 \end{bmatrix} \qquad P(i, j, 1, 0) = \begin{bmatrix} 2 & 1 & 2 & 2 \\ 1 & 0 & 3 & 2 \\ 2 & 3 & 0 & 1 \\ 2 & 2 & 1 & 0 \end{bmatrix}$$

If we normalize the matrix P by the total number of pixel pairs, so that each element lies between 0 and 1, we get a gray-level co-occurrence matrix C.
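A compact sketch of this counting procedure follows. It uses the symmetric (bidirectional) horizontal convention discussed below, which reproduces the example matrix above; the function name and interface are illustrative:

```python
import numpy as np

def sgdm(image, levels=4):
    """Symmetric SGDM P(i, j, d=1, theta=0): count horizontal neighbor
    pairs in both directions, as in the 4x4 example above."""
    p = np.zeros((levels, levels), dtype=int)
    left, right = image[:, :-1], image[:, 1:]
    for i, j in zip(left.ravel(), right.ravel()):
        p[i, j] += 1
        p[j, i] += 1   # bidirectional counting makes the matrix symmetric
    return p

img = np.array([[3, 1, 2, 1],
                [3, 0, 2, 3],
                [2, 0, 1, 2],
                [1, 3, 0, 0]])
P = sgdm(img)        # matches P(i, j, 1, 0) given above
C = P / P.sum()      # normalized co-occurrence matrix C
```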


Different authors define the co-occurrence matrix in two ways: by defining the relationship operator p by an angle θ and a distance d, or by ignoring the direction of the position operator and considering only the (bidirectional) relative relationship. This second way of defining the co-occurrence matrix makes all such matrices symmetric. The SGDMs are represented by the function P(i, j, d, θ), where i represents the gray level of the location (x, y) in the image I(x, y), and j represents the gray level of the pixel at a distance d from location (x, y) at an orientation angle θ. The nearest neighbor mask is shown in the figure below.

Figure 4-3. Nearest neighbor diagram (neighbors 1 to 8 arranged clockwise around the reference pixel, with orientations 0°, 45°, 90°, and 135° marked)

The reference pixel at image position (x, y) is shown as an asterisk. All the neighbors from 1 to 8 are numbered in a clockwise direction. Neighbors 1 and 5 are located on the same plane at a distance of 1 and an orientation of 0 degrees. An example image matrix and its SGDM were given above. In this research, a one-pixel offset distance and a zero-degree orientation angle were used.


The CCM matrices are then normalized using the equation given below, where P(i, j, 1, 0) represents the intensity co-occurrence matrix:

$$p(i,j) = \frac{P(i,j,1,0)}{\sum_{i=0}^{N_g-1}\sum_{j=0}^{N_g-1} P(i,j,1,0)} \qquad (4.4)$$

where N_g is the total number of intensity levels. The hue, saturation, and intensity CCM matrices are then used to generate the texture features described by Haralick and Shanmugam (1974). Shearer and Holmes (1990) reported a reduction in the 16 gray-scale texture features through elimination of redundant variables. The resulting 13 texture features are defined by Shearer and Holmes (1990) and Burks (1997). The same equations are used for each of the three CCM matrices, producing 13 texture features for each HSI component and thereby a total of 39 CCM texture statistics. These features and the related equations are defined as follows, along with a brief description as pertains to intensity. Similar descriptions would also apply to saturation, as mentioned by Shearer (1986).

Matrix normalization:

$$p(i,j) = \frac{P(i,j,1,0)}{\sum_{i=0}^{N_g-1}\sum_{j=0}^{N_g-1} P(i,j,1,0)} \qquad (4.5)$$

Marginal probability matrix:

$$p_x(i) = \sum_{j=0}^{N_g-1} p(i,j) \qquad (4.6)$$

Sum and difference matrices:

$$p_{x+y}(k) = \sum_{i=0}^{N_g-1} \sum_{j=0}^{N_g-1} p(i,j), \qquad k = i+j;\ k = 0, 1, \ldots, 2(N_g-1) \qquad (4.7)$$

$$p_{x-y}(k) = \sum_{i=0}^{N_g-1} \sum_{j=0}^{N_g-1} p(i,j), \qquad k = |i-j|;\ k = 0, 1, \ldots, N_g-1 \qquad (4.8)$$

where P(i, j) is the image attribute matrix and N_g is the total number of attribute levels.

Texture features: The angular second moment (I1) is a measure of image homogeneity.

$$I_1 = \sum_{i=0}^{N_g-1} \sum_{j=0}^{N_g-1} \left[p(i,j)\right]^2 \qquad (4.9)$$

The mean intensity level (I2) is a measure of image brightness derived from the co-occurrence matrix.

$$I_2 = \sum_{i=0}^{N_g-1} i \, p_x(i) \qquad (4.10)$$

Variation of image intensity is identified by the variance textural feature (I3).

$$I_3 = \sum_{i=0}^{N_g-1} (i - I_2)^2 \, p_x(i) \qquad (4.11)$$

Correlation (I4) is a measure of the intensity linear dependence in the image.

$$I_4 = \frac{\sum_{i=0}^{N_g-1} \sum_{j=0}^{N_g-1} i j \, p(i,j) - I_2^2}{I_3} \qquad (4.12)$$

The product moment (I5) is analogous to the covariance of the intensity co-occurrence matrix.

$$I_5 = \sum_{i=0}^{N_g-1} \sum_{j=0}^{N_g-1} (i - I_2)(j - I_2) \, p(i,j) \qquad (4.13)$$

Contrast of an image can be measured by the inverse difference moment (I6).

$$I_6 = \sum_{i=0}^{N_g-1} \sum_{j=0}^{N_g-1} \frac{p(i,j)}{1 + (i-j)^2} \qquad (4.14)$$

The entropy feature (I7) is a measure of the amount of order in an image.

$$I_7 = -\sum_{i=0}^{N_g-1} \sum_{j=0}^{N_g-1} p(i,j) \ln p(i,j) \qquad (4.15)$$

The sum and difference entropies (I8 and I9) are not easily interpreted, yet low entropies indicate high levels of order.

$$I_8 = -\sum_{k=0}^{2(N_g-1)} p_{x+y}(k) \ln p_{x+y}(k) \qquad (4.16)$$

$$I_9 = -\sum_{k=0}^{N_g-1} p_{x-y}(k) \ln p_{x-y}(k) \qquad (4.17)$$

The information measures of correlation (I10 and I11) do not exhibit any apparent physical interpretation.

$$I_{10} = \frac{I_7 - HXY1}{HX} \qquad (4.18)$$

$$I_{11} = \left[1 - e^{-2(HXY2 - I_7)}\right]^{1/2} \qquad (4.19)$$

where

$$HX = -\sum_{i=0}^{N_g-1} p_x(i) \ln p_x(i) \qquad (4.20)$$

$$HXY1 = -\sum_{i=0}^{N_g-1} \sum_{j=0}^{N_g-1} p(i,j) \ln\left[p_x(i) p_x(j)\right] \qquad (4.21)$$

$$HXY2 = -\sum_{i=0}^{N_g-1} \sum_{j=0}^{N_g-1} p_x(i) p_x(j) \ln\left[p_x(i) p_x(j)\right] \qquad (4.22)$$
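As a concrete rendering of a subset of these equations (4.9, 4.10, 4.11, 4.14, and 4.15), the sketch below computes five of the 13 features from an un-normalized co-occurrence matrix; it is an illustration of the formulas, not the implementation used in this research:

```python
import numpy as np

def ccm_features(P):
    """Compute a subset of the CCM texture features from an un-normalized
    co-occurrence matrix P (e.g., the output of sgdm above)."""
    p = P / P.sum()                              # normalization, eq. 4.4/4.5
    ng = p.shape[0]
    i, j = np.indices((ng, ng))
    px = p.sum(axis=1)                           # marginal probability, eq. 4.6
    I1 = np.sum(p ** 2)                          # angular second moment, eq. 4.9
    I2 = np.sum(np.arange(ng) * px)              # mean intensity, eq. 4.10
    I3 = np.sum((np.arange(ng) - I2) ** 2 * px)  # variance, eq. 4.11
    I6 = np.sum(p / (1.0 + (i - j) ** 2))        # inverse difference moment, eq. 4.14
    nz = p > 0                                   # skip zero entries to avoid log(0)
    I7 = -np.sum(p[nz] * np.log(p[nz]))          # entropy, eq. 4.15
    return I1, I2, I3, I6, I7
```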


SAS Based Statistical Methods to Reduce Redundancy

The 13 texture features from the hue, saturation, and intensity CCM matrices provide a set of 39 characteristic features for each image. It is, however, desirable to reduce the number of texture features in order to minimize redundancy and to reduce the computational complexity of classification and image representation. Burks (2000a) implemented a reduction technique using SAS (the Statistical Analysis System), which offers a procedure for accomplishing these tasks, referred to as PROC STEPDISC. In this research, PROC STEPDISC was used to create various models of the data using various combinations of the HSI/CCM texture statistic data sets. PROC STEPDISC reduces the number of texture features by a stepwise process of selection. The assumption made is that all the classes of data included in the data set are multivariate normal distributions with a common covariance matrix. The stepwise procedure begins with no entries in the model. At each step, if the variable within the model that contributes least to the model, as determined by the Wilks' lambda method, does not pass the test to stay, it is removed from the model. The variable outside the model that contributes most to the model and passes the test to be admitted is then added. When all the steps are exhausted, the model is reduced to its final form.
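PROC STEPDISC itself is a SAS procedure. As a rough Python analogue, offered only for illustration, a greedy forward selection can rank features by how much each improves a discriminant criterion; here, cross-validated LDA accuracy stands in for the Wilks' lambda test used by SAS:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

def forward_select(X, y, max_features=10):
    """Greedy forward selection using CV accuracy of an LDA classifier
    as a stand-in for the Wilks' lambda criterion in PROC STEPDISC."""
    selected, remaining = [], list(range(X.shape[1]))
    best_score = 0.0
    while remaining and len(selected) < max_features:
        scores = [(cross_val_score(LinearDiscriminantAnalysis(),
                                   X[:, selected + [f]], y, cv=5).mean(), f)
                  for f in remaining]
        score, feat = max(scores)
        if score <= best_score:   # stop when no candidate improves the model
            break
        best_score = score
        selected.append(feat)
        remaining.remove(feat)
    return selected, best_score
```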


CHAPTER 5
CLASSIFICATION

Image classification is the final step in any pattern recognition problem. It is of two types: supervised classification and unsupervised classification. In supervised classification, a priori knowledge of the images to be classified is available, and classification is simply a process of testing whether the computed classification agrees with that a priori knowledge. In unsupervised learning, there is no a priori knowledge of the images to be classified, so classification is more difficult, since there is no prior knowledge of the data classes involved. There are various classification techniques. In this research, two approaches based on supervised classification are implemented:

1) a statistical classifier using the squared Mahalanobis minimum distance, and
2) neural network classifiers: i) a multilayer feed-forward neural network with back propagation, and ii) a radial basis function neural network.

Earlier research by Burks (2000a and 2000b) had shown good results for the application of these techniques to weed detection in wheat fields, thus favoring their choice in this research.


Statistical Classifier Using the Squared Mahalanobis Minimum Distance

The Mahalanobis distance is a very useful way of determining the similarity of a set of values from an unknown sample to a set of values measured from a collection of known samples. The mathematics of the Mahalanobis distance calculation has been known for some time, and the method has been applied successfully for spectral discrimination in a number of cases. One of the main reasons the Mahalanobis distance method is used is that it is very sensitive to inter-variable changes in the training data. In addition, since the Mahalanobis distance is measured in terms of standard deviations from the mean of the training samples, the reported matching values give a statistical measure of how well the spectrum of the unknown sample matches (or does not match) the original training spectra. This method belongs to the class of supervised classification. Since this research is a feasibility study to analyze whether such techniques give results accurate enough for the technology to be viable in an autonomous harvester, supervised classification is a good approach to test the efficacy of the method. The underlying distribution for the complete training data set, consisting of the four classes of leaves, was a mixture-of-Gaussians model. Earlier research by Shearer et al. (1986) had shown that plant canopy texture features could be represented by a multivariate normal distribution. Each of the 39 texture features was represented by a normal (Gaussian) distribution. Thus, if all the features are considered, the feature space can be approximated as a mixture-of-Gaussians model containing a combination of 39 univariate normal distributions. For other models (having a reduced number of features), the feature space is a mixture-of-Gaussians model containing a combination of N univariate normal distributions, where N is the number of texture features in the model.


Since the feature space of the various classes of leaves is a mixture-of-Gaussians model, the next step is to calculate the statistics representing those classes. Four parameter sets (μ, Σ) (mean and covariance), representing the classes of diseased and normal leaves, namely greasy spot, melanose, normal, and scab, were calculated using the training images. The procedure up to this stage represents the training phase, wherein the necessary statistical features representing the various classes of leaves are calculated. After the parameter sets were obtained, the classifier was tested on the test images for each class; this constitutes the testing phase. The classifier was based on the squared Mahalanobis distance from the feature vector representing the test image to the parameter sets of the various classes, using the nearest-neighbor principle. The formula for the squared Mahalanobis distance metric is

$$r^2 = (\mathbf{x} - \boldsymbol{\mu})^{\mathrm{T}} \boldsymbol{\Sigma}^{-1} (\mathbf{x} - \boldsymbol{\mu}) \qquad (5.0)$$

where x is the N-dimensional test feature vector (N is the number of features considered), μ is the N-dimensional mean vector for a particular class of leaves, and Σ is the N x N covariance matrix for that class. During the testing phase, the squared Mahalanobis distance for a particular test vector representing a leaf is calculated with respect to all the classes of leaves in this problem, and the test image is then classified using the minimum distance principle.
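A minimal sketch of this training and testing procedure, assuming per-class feature matrices; the names and interfaces are illustrative:

```python
import numpy as np

def fit_class_stats(features_by_class):
    """Training phase: per-class mean vector and covariance matrix.
    features_by_class maps a class name to an (n_samples, N) array."""
    return {c: (f.mean(axis=0), np.cov(f, rowvar=False))
            for c, f in features_by_class.items()}

def classify(x, stats):
    """Testing phase: assign x to the class with the smallest squared
    Mahalanobis distance r^2 = (x - mu)^T Sigma^{-1} (x - mu), eq. 5.0."""
    best, best_r2 = None, np.inf
    for c, (mu, sigma) in stats.items():
        d = x - mu
        r2 = d @ np.linalg.solve(sigma, d)   # avoids forming Sigma^{-1} explicitly
        if r2 < best_r2:
            best, best_r2 = c, r2
    return best
```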


That is, the test image is assigned to the class for which its squared Mahalanobis distance is the minimum among the calculated distances.

Neural Network Based Classifiers

Artificial Neural Networks (ANNs) are computational systems whose architecture and operation are inspired by knowledge about biological neural cells (neurons) in the brain. According to Ampazis (1999), ANNs can be described either as mathematical and computational models for non-linear function approximation, data classification, clustering, and non-parametric regression, or as simulations of the behavior of collections of model biological neurons. These are not simulations of real neurons, in the sense that they do not model the biology, chemistry, or physics of a real neuron. They do, however, model several aspects of the information combining and pattern recognition behavior of real neurons in a simple yet meaningful way. Neural modeling has shown considerable capability for emulation, analysis, prediction, and association. ANNs can be used in a variety of powerful ways: to learn and reproduce rules or operations from given examples; to analyze and generalize from sample facts and make predictions from these; or to memorize characteristics and features of given data and to match or make associations from new data to the old data. An ANN is an information processing paradigm inspired by the way biological nervous systems, such as the brain, process information. The key element of this paradigm is the novel structure of the information processing system: it is composed of a large number of highly interconnected processing elements (neurons) working in unison to solve specific problems. ANNs, like people, learn by example.


An ANN is configured for a specific application, such as pattern recognition or data classification, through a learning process. Learning in biological systems involves adjustments to the synaptic connections that exist between the neurons; this is true of ANNs as well. Various authors have given various definitions of a neural network. The following are some of them.

According to the DARPA Neural Network Study (1988, AFCEA International Press, p. 60): a neural network is a system composed of many simple processing elements operating in parallel whose function is determined by network structure, connection strengths, and the processing performed at computing elements or nodes.

According to Haykin (1994, Neural Networks: A Comprehensive Foundation, NY: Macmillan, p. 2): a neural network is a massively parallel-distributed processor that has a natural propensity for storing experiential knowledge and making it available for use. It resembles the brain in two respects: 1. Knowledge is acquired by the network through a learning process. 2. Interneuron connection strengths, known as synaptic weights, are used to store the knowledge.

ANNs have been applied to an increasing number of real-world problems of considerable complexity. Their most important advantage is in solving problems that are too complex for conventional technologies: problems that do not have an algorithmic solution, or for which an algorithmic solution is too complex to be found. In general, because of their abstraction from the biological brain, ANNs are well suited to problems that people are good at solving but for which computers are not. These problems include pattern recognition and forecasting (which requires the recognition of trends in data). Ampazis (1999) introduced ANNs in his paper Introduction to Neural Networks. The following discussion is quoted from his paper:

ANNs are able to solve difficult problems in a way that resembles human intelligence. What is unique about neural networks is their ability to learn by example. Traditional artificial intelligence (AI) solutions rely on symbolic processing of the data, an approach which requires a priori human knowledge about the problem. Neural network techniques also have an advantage over statistical methods of data classification because they are distribution-free and require no a priori knowledge about the statistical distributions of the classes in the data sources in order to classify them. Unlike these two approaches, ANNs are able to solve problems without any a priori assumptions. As long as enough data is available, a neural network will extract any regularity and form a solution. Since ANNs are models that resemble biological neuronal structures, the starting point for any ANN is a basic neural element whose behavior follows closely that of real neurons. Each computing unit in an ANN is based on the concept of an idealized neuron. An ideal neuron is assumed to respond optimally to the applied inputs. A neural network is a collective set of such neural units, in which the individual neurons are connected through complex synaptic connections characterized by weight coefficients. Every single neuron makes its contribution towards the computational properties of the whole system. Figure 5-1 shows a basic neuron.

Figure 5-1. A basic neuron

The neuron has several input lines and a single output. Each input signal is weighted; that is, it is multiplied by the weight value of the corresponding input line (by analogy to the synaptic strength of the connections of real neurons). The neuron combines these weighted inputs by forming their sum and, with reference to a threshold value and an activation function, determines its output. In mathematical terms, the neuron is described by the following pair of equations:

u = \sum_{i=1}^{N} w_i x_i    (5.1)

y = f(u - \theta)    (5.2)

where x_1, x_2, ..., x_N are the input signals, w_1, w_2, ..., w_N are the synaptic weights, u is the activation potential of the neuron, \theta is the threshold, y is the output signal of the neuron, and f(.) is the activation function.
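As a concrete illustration, the following MATLAB lines evaluate equations (5.1) and (5.2) for a single neuron. This is a minimal sketch, not code from this study; the input, weight, and threshold values are arbitrary, and the logistic sigmoid introduced in equation (5.5) below is assumed as the activation function.

% Single neuron: weighted sum of inputs passed through an activation function
x = [0.5; 0.2; 0.9];          % input signals x1..xN (arbitrary example values)
w = [0.4; -0.7; 0.3];         % synaptic weights w1..wN (arbitrary example values)
theta = 0.1;                  % threshold
f = @(u) 1 ./ (1 + exp(-u));  % logistic sigmoid activation, slope a = 1
u = w' * x;                   % activation potential, equation (5.1)
y = f(u - theta);             % neuron output, equation (5.2)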

For notational convenience, the above equations may be reformulated by letting w_0 = \theta and setting x_0 = -1. Then

u = \sum_{i=0}^{N} w_i x_i = \sum_{i=1}^{N} w_i x_i - \theta    (5.3)

and

y = f( \sum_{i=0}^{N} w_i x_i )    (5.4)

The combination of a fixed input x_0 = -1 and of an extra input weight w_0 accounts for what is known as a bias input. Note that the new notation has augmented any input vector to the vector (-1, x_1, ..., x_N) and also the weight vector of the neuron to the vector (w_0, w_1, ..., w_N). The activation function, denoted by f(.), defines the output of the neuron in terms of the activity level at its input. The most common form of activation function used in the construction of ANNs is the sigmoid function. An example of the sigmoid is the logistic function, defined by

f(u) = 1 / (1 + exp(-a u))    (5.5)

where a is the slope parameter of the sigmoid function. By varying the parameter a we can obtain sigmoid functions of different slopes. In the limit, as the slope parameter approaches infinity, the sigmoid function becomes simply a threshold function.

The threshold function, however, can take only the values 0 or 1, whereas a sigmoid function assumes a continuous range of values from 0 to 1. The sigmoid function is also differentiable, whereas the threshold function is not. Differentiability is an important feature of neural network theory since it plays a fundamental role in the learning process in ANNs. Ampazis (1999) gave a good historical perspective regarding the development of neural networks. The first model of a neuron was proposed in 1943 by McCulloch and Pitts when they described a logical calculus of neural networks. In this model, the activation function used was the threshold function. The McCulloch-Pitts neuron models, connected in a simple fashion (forming a single layer), were given the name "perceptrons" by Frank Rosenblatt in 1962. In his book "Principles of Neurodynamics" he described the properties of these neurons and, more importantly, presented a method by which perceptrons could be trained to perform simple pattern recognition tasks. He also provided a theorem called the perceptron convergence theorem, which guarantees that if the learning task is linearly separable (that is, if the data classes can be separated by a straight line in input space) then the perceptron will yield a solution in a finite number of steps. Perceptrons, however, are unable to solve problems that are not linearly separable. It was the exposure of this limitation of perceptrons in 1969 by Minsky and Papert in their famous book "Perceptrons" (using elegant mathematical analysis to demonstrate that there are fundamental limits on what one-layer perceptrons can compute), together with their pessimism about the prospects of discovering efficient algorithms for training multilayer perceptrons (which can solve non-linearly separable problems), that led to the decline of the subject of neural computing for more than a decade.

The development, however, of the back propagation algorithm in 1986 by Rumelhart, Hinton, and Williams, and the subsequent publication of the book "Parallel Distributed Processing: Explorations in the Microstructures of Cognition" by Rumelhart and McClelland, answered Minsky and Papert's challenge (in the sense that it was proved that there can indeed exist algorithms for the training of multilayer perceptrons) and resulted in the current resurgence of interest in neural computing. There is a fair understanding of how an individual neuron works. However, there is still a great deal of research needed to decipher the way real neurons organize themselves and the mechanisms used by arrays of neurons to adapt their behavior to external stimuli. There are a large number of experimental ANN structures currently in use, reflecting this state of continuing research. Among the many interesting properties of all these structures, the property of primary significance is the ability of the networks to learn from their environment and to improve their performance through learning. ANNs learn about their environment through an iterative process of adjustments applied to their free parameters, which are the synaptic weights and thresholds. The type of learning is determined by the manner in which the parameter changes take place. There are three basic types of learning paradigms: supervised learning, reinforcement learning, and self-organized (unsupervised) learning. As its name implies, supervised learning is performed under the supervision of an external "supervisor". The supervisor provides the network with a desired or target response for any input vector. The actual response of the network to each input vector is then compared by the supervisor with the desired response for that vector, and the network parameters are adjusted in accordance with an error signal, which is defined as the difference between the desired response and the actual response.

The adjustment is carried out iteratively in a step-by-step fashion with the aim of eventually making the error signal for all input vectors as small as possible. When this has been achieved, the network is believed to have built internal representations of the data set by detecting its basic features and, hence, to be able to deal with data that it has not encountered during the learning process. That is, it can generalize its "knowledge". Supervised learning is by far the most widely used learning technique in ANNs because of the development of the back propagation algorithm, which allows for the training of multilayer ANNs. In the next section, the mathematics of the derivation of the algorithm is considered and the factors behind its wide acceptance as the standard training algorithm for multilayer ANNs are examined. The material discussed above gave a general introduction to neural networks and their operation. To make these networks functional on a variety of applications, however, several algorithms and techniques were developed. The back propagation algorithm is one such method.
The Back Propagation Algorithm. Ampazis (1999) also gave an articulate description of the back propagation algorithm and its advantages and disadvantages. The following material is quoted from his paper. The Error Back Propagation (or simply, back propagation) algorithm is the most important algorithm for the supervised training of multilayer feed-forward ANNs. It derives its name from the fact that error signals are propagated backward through the network on a layer-by-layer basis. As a tool for scientific computing and engineering applications, the morphology of static multilayer feed-forward neural networks (MFNNs) consists of many interconnected signal processing elements called neurons.

The MFNN is one of the main classes of static neural networks, and it plays an important role in many types of problems such as system identification, control, channel equalization, and pattern recognition. From the morphological point of view, an MFNN has only feed-forward information transmission from the lower neural layers to higher neural layers. Moreover, an MFNN is a static neural model, in the sense that its input-output relationship may be described by an algebraic nonlinear mapping function. The most widely used static neural networks are characterized by nonlinear equations that are memoryless; that is, their outputs are a function of only the current inputs. An obvious characteristic of an MFNN is its capability for implementing a nonlinear mapping from many neural inputs to many neural outputs. The back propagation (BP) algorithm is a basic and highly effective weight updating method for MFNNs performing specific computing tasks. The BP algorithm was originally developed using the gradient descent algorithm to train multilayer neural networks for performing desired tasks. The back propagation algorithm is based on the selection of a suitable error function or cost function, whose values are determined by the actual and desired outputs of the network. The algorithm also depends on the network parameters such as the weights and the thresholds. The basic idea is that the cost function has a particular surface over the weight space, and therefore an iterative process such as the gradient descent method can be used for its minimization. The method of gradient descent is based on the fact that the gradient of a function always points in the direction of maximum increase of the function; moving in the direction of the negative gradient therefore induces a maximal "downhill" movement that will eventually reach the minimum of the function surface over its parameter space.

This is a rigorous and well-established technique for the minimization of functions and has probably been the main factor behind the success of back propagation. However, as shall be seen in the next section, the method does not guarantee that it will always converge to the minimum of the error surface, as the network can be trapped in various types of minima.

Figure 5-2. Multilayer feedforward ANN

A typical multilayer feed-forward ANN is shown in figure 5-2. This type of network is also known as a Multilayer Perceptron (MLP). The units (or nodes) of the network are nonlinear threshold units described by equations (5.3) and (5.4), with their activation function given by equation (5.5). The units are arranged in layers, and each unit in a layer has all of its inputs connected to the units of the preceding layer (or to the inputs from the external world in the case of the units in the first layer). However, it does not have any connections to units of the same layer to which it belongs. The layers are arrayed one succeeding the other so that there is an input layer, multiple intermediate layers, and an output layer. Intermediate layers, those that have no inputs or outputs to the external world, are called hidden layers.

Figure 5-2 shows an MLP with only one hidden layer. Back propagation neural networks are usually fully connected. This means that each unit is connected to every output from the preceding layer (or to every input from the external world if the unit is in the first layer), as well as to a bias signal which is common to all the units, according to the convention described earlier. Correspondingly, each unit has its output connected to every unit in the succeeding layer. Generally, the input layer is considered as just a distributor of the signals from the external world and is therefore not counted as a layer. This convention is retained in this analysis, and hence, in the case of figure 5-2, the hidden layer is the first layer of the network. Back propagation training consists of two passes of computation: a forward pass and a backward pass. In the forward pass, an input pattern vector is applied to the sensory nodes of the network, that is, to the units in the input layer. The signals from the input layer propagate to the units in the first layer, and each unit produces an output according to equation (5.4). The outputs of these units are propagated to units in subsequent layers, and this process continues until, finally, the signals reach the output layer, where the actual response of the network to the input vector is obtained. During the forward pass the synaptic weights of the network are fixed. During the backward pass, on the other hand, the synaptic weights are all adjusted in accordance with an error signal which is propagated backward through the network against the direction of the synaptic connections. The mathematical analysis of the algorithm is as follows. In the forward pass of computation, given an input pattern vector x, each hidden node j receives a net input

h_j = \sum_k w_{jk} x_k    (5.6)

where w_{jk} represents the weight between hidden node j and input node k. Thus, node j produces an output

V_j = f(h_j) = f( \sum_k w_{jk} x_k )    (5.7)

Each output node i thus receives

u_i = \sum_j W_{ij} V_j    (5.8)

where W_{ij} represents the weight between output node i and hidden node j. Hence, it produces the final output as shown in equation (5.9):

y_i = f(u_i) = f( \sum_j W_{ij} V_j )    (5.9)

The back propagation algorithm can be implemented in two different modes: on-line mode and batch mode. In the on-line mode, the error function is calculated after the presentation of each input pattern, and the error signal is propagated back through the network, modifying the weights before the presentation of the next pattern. This error function is usually the Mean Square Error (MSE) of the difference between the desired and the actual responses of the network over all the output units. Then the new weights remain fixed, a new pattern is presented to the network, and this process continues until all the patterns have been presented to the network. The presentation of all the patterns is usually called one epoch or one iteration. In practice, many epochs are needed before the error becomes acceptably small. In the batch mode, the error signal is again calculated for each input pattern, but the weights are modified only when all input patterns have been presented.

Then the error function is calculated as the sum of the individual MSE errors for each pattern, and the weights are modified accordingly (all in a single step for all the patterns) before the next iteration. Thus, in the batch mode, the error or cost function, calculated as the MSE over all output units i and over all patterns p, is given by

E = (1/2) \sum_p \sum_i [ d_i^p - y_i^p ]^2    (5.10)

where d_i^p is the desired response of output unit i for pattern p. Clearly, E is a differentiable function of all the weights (and thresholds, according to the bias convention given earlier), and therefore we can apply the method of gradient descent. For the hidden-to-output connections the gradient descent rule gives

\Delta W_{ij} = -\eta \, \partial E / \partial W_{ij}    (5.11)

where \eta is a constant that determines the rate of learning; it is called the learning rate of the back propagation algorithm. Using the chain rule, we have

\partial E / \partial W_{ij} = \sum_p [ \partial E / \partial y_i^p ] [ \partial y_i^p / \partial W_{ij} ]    (5.12)

giving

\partial E / \partial W_{ij} = -\sum_p [ d_i^p - y_i^p ] f'(u_i^p) V_j^p    (5.13)

Thus

\Delta W_{ij} = \eta \sum_p \delta_i^p V_j^p    (5.14)

where

\delta_i^p = f'(u_i^p) [ d_i^p - y_i^p ]    (5.15)

For the input-to-hidden connections the gradient descent rule gives

\Delta w_{jk} = -\eta \, \partial E / \partial w_{jk}    (5.16)

which shows that we must differentiate with respect to the w_{jk}'s, which are more deeply embedded in equation (5.10). Using the chain rule, we obtain

\partial E / \partial w_{jk} = \sum_p [ \partial E / \partial V_j^p ] [ \partial V_j^p / \partial w_{jk} ]    (5.17)

Equation (5.17) leads to equation (5.18):

\partial E / \partial V_j^p = -\sum_i [ d_i^p - y_i^p ] f'(u_i^p) W_{ij}    (5.18)

Now for \partial V_j^p / \partial w_{jk} we have (using the chain rule)

\partial V_j^p / \partial w_{jk} = f'(h_j^p) x_k^p    (5.19)

Thus

\partial E / \partial w_{jk} = -\sum_p \sum_i [ d_i^p - y_i^p ] f'(u_i^p) W_{ij} f'(h_j^p) x_k^p    (5.20)

Equation (5.20) leads to equation (5.21):

\Delta w_{jk} = \eta \sum_p \delta_j^p x_k^p    (5.21)

with

\delta_j^p = f'(h_j^p) \sum_i W_{ij} \delta_i^p    (5.22)

From equations (5.13) and (5.20) it is observed that if the activation function were not differentiable, it would be impossible to implement the gradient descent rule, as it would be impossible to calculate the partial derivatives of E with respect to the weights. It is for this reason that the differentiability of the activation function is so important in back propagation learning. Note also that the derivative of the sigmoid function is very easy to compute, since

f'(u) = a f(u) [ 1 - f(u) ]    (5.23)

Therefore, it is not necessary to compute f'(u) separately once f(u) has been found. This is one of the advantages of the sigmoid function, as computation time can be reduced significantly. Although we have written the update rules for the batch mode of training, it is clear that the weight updates in the on-line case would be given again by equations (5.14) and (5.21), without, of course, the summation over the training patterns. Note that equation (5.21) has the same form as equation (5.14) but with a different definition of the \delta's. In general, with an arbitrary number of layers, the back propagation update rule always has the form

Weight correction (\Delta w) = [learning rate (\eta)] \times [local gradient (\delta)] \times [input signal of node (x)]    (5.24)

The general rule of equation (5.24), for the adaptation of the weights, is also known as the generalized delta rule. An important aspect of back propagation training is the proper initialization of the network. Improper initialization may cause the network to require a very long training time before it converges to a solution, and even then there is a high probability of converging to non-optimum solutions.
Considerations on the Implementation of Back Propagation
Ampazis (1999) discussed the key features in the implementation of the back propagation algorithm. The following discussion is quoted from his paper. The initial step in back propagation learning is the initialization of the network. A good choice for the initial values of the free parameters (i.e., synaptic weights and thresholds) of the network can significantly accelerate learning.

It is also important to note that if all the weights start out with equal values and the solution requires that unequal weights be developed, the system can never learn. This is because the error is propagated back through the weights in proportion to the values of the weights. This means that all hidden units connected directly to the output units will get identical error signals, and, since the weight changes depend on the error signals, the weights from those units to the output units must always remain the same. This problem is known as the symmetry breaking problem. Internal symmetries of this kind also give the cost-function landscape periodicities, multiple minima, (almost) flat valleys, and (almost) flat plateaus or temporary minima (Murray, 1991a). The last are the most troublesome, because the system can get stuck on such a plateau during training and take an immense time to find its way down the cost function surface. Without modifications to the training set or learning algorithm, the network may escape from this type of "minimum", but performance improvement in these temporary minima drops to a very low, though non-zero, level because of the very low gradient of the cost function. In the MSE versus training time curve, a temporary minimum can be recognized as a phase in which the MSE is virtually constant for a long training time after initial learning. After a generally long training time, the approximately flat part in the energy landscape is abandoned, resulting in a significant and sudden drop in the MSE curve (Murray, 1991a; Woods, 1988). The problem of development of unequal weights can be counteracted by starting the system with random weights. However, as learning continues, internal symmetries develop and the network encounters temporary minima again. The customary practice is to set all the free parameters of the network to random numbers that are uniformly distributed inside a small range of values. This is because if the weights are too large the sigmoids will saturate from the very beginning of training and the system will become stuck in a kind of saddle point near the starting point (Haykin, 1994). This phenomenon is known as premature saturation (Lee et al., 1991). Premature saturation is avoided by choosing the initial values of the weights and threshold levels of the network to be uniformly distributed inside a small range of values, because when the weights are small the units operate in their linear regions and consequently the activation function cannot saturate. Gradient descent can also become stuck in local minima of the cost function. These are isolated valleys of the cost function surface in which the system may get stuck before it reaches the global minimum. In these valleys, every change in the weight values causes the cost function to increase, and hence the network is unable to escape. Local minima are fundamentally different from temporary minima, as they cause the performance improvement of the classification to drop to zero, and hence the learning process terminates even though the minimum may be located far above the global minimum. Local minima may be abandoned by including a momentum term in the weight updates or by adding "noise" using on-line mode training, which is a stochastic learning algorithm in nature. The momentum term can also significantly reduce the training time that is spent in a temporary minimum, as it causes the weights to change at a faster rate.
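To make the update rules concrete, the following MATLAB sketch applies equations (5.14), (5.15), (5.21), (5.22), and (5.23) for one on-line training step of a single-hidden-layer network. This is a minimal illustration only, not the routine used in this study: the dimensions and data are arbitrary, the slope parameter a is taken as 1, small random initialization is used as discussed above, and bias inputs are omitted for brevity.

% One on-line back propagation step for a network with one hidden layer
f   = @(u) 1 ./ (1 + exp(-u));     % logistic sigmoid, slope a = 1
n = 4; m = 3; K = 2; eta = 0.1;    % inputs, hidden units, outputs, learning rate
w   = 0.1*randn(m, n);             % input-to-hidden weights w_jk (small random init)
W   = 0.1*randn(K, m);             % hidden-to-output weights W_ij
x   = rand(n, 1); d = rand(K, 1);  % one training pattern and its desired response
% Forward pass, equations (5.6) through (5.9)
h = w*x;  V = f(h);                % hidden-node net inputs and outputs
u = W*V;  y = f(u);                % output-node net inputs and network outputs
% Backward pass; f'(u) = f(u).*(1 - f(u)) from equation (5.23)
deltaO = (y .* (1 - y)) .* (d - y);        % output deltas, equation (5.15)
deltaH = (V .* (1 - V)) .* (W' * deltaO);  % hidden deltas, equation (5.22)
W = W + eta * (deltaO * V');       % hidden-to-output update, equation (5.14)
w = w + eta * (deltaH * x');       % input-to-hidden update, equation (5.21)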

Radial basis function networks. After feed-forward networks, the radial basis function network (RBFN) is one of the most commonly used network models. A radial basis function network is a neural network approached by viewing the design as a curve-fitting (approximation) problem in a high-dimensional space. Learning is equivalent to finding a multidimensional function that provides a best fit to the training data, with the criterion for best fit being measured in some statistical sense. Figure 5-3 illustrates an RBF network with inputs x_1, ..., x_n and output f(x). The arrows in the figure symbolize parameters in the network. The RBF network consists of one hidden layer of basis functions, or neurons. At the input of each neuron, the distance between the neuron center and the input vector is calculated. The output of the neuron is then formed by applying the basis function to this distance. The RBF network output is formed by a weighted sum of the neuron outputs and the unity bias shown.

Figure 5-3. An RBF network with one input

Radial Functions
Mark Orr (1996) described radial basis functions in his paper "Introduction to radial basis function networks". The material quoted in this chapter is taken from his paper.

Radial functions are a special class of function. Their characteristic feature is that their response decreases (or increases) monotonically with distance from a central point. The centre, the distance scale, and the precise shape of the radial function are parameters of the model, all fixed if it is linear. A typical radial function is the Gaussian which, in the case of a scalar input, is

h(x) = exp( -(x - c)^2 / r^2 )    (5.25)

Its parameters are its centre c and its radius r. Figure 5-4 illustrates a Gaussian RBF with centre c = 0 and radius r = 1.

Figure 5-4. Gaussian radial basis function

Radial Basis Function Networks
Radial functions are simply a class of functions. In principle, they could be employed in any sort of model (linear or nonlinear) and any sort of network (single-layer or multi-layer).
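The following MATLAB lines evaluate the Gaussian of equation (5.25); a minimal sketch using the same centre and radius as figure 5-4, not code from this study.

% Gaussian radial basis function, equation (5.25)
c = 0; r = 1;                      % centre and radius as in figure 5-4
h = @(x) exp(-(x - c).^2 / r^2);   % response decreases monotonically with |x - c|
x = linspace(-3, 3, 201);
plot(x, h(x));                     % reproduces the bell-shaped curve of figure 5-4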

Radial basis function networks (RBF networks) have traditionally been associated with radial functions in a single-layer network, such as shown in figure 5-5.

Figure 5-5. The traditional radial basis function network

In figure 5-5, each of the n components of the input vector x feeds forward to m basis functions whose outputs are linearly combined with weights {w_j}_{j=1}^{m} into the network output f(x). When applied to supervised learning with linear models, the least squares principle leads to a particularly easy optimisation problem. If the model is

f(x) = \sum_{j=1}^{m} w_j h_j(x)    (5.26)

and the training set is {(x_i, \hat{y}_i)}_{i=1}^{p}, then the least squares recipe is to minimise the sum-squared-error, which is given as shown in equation (5.27),

S = \sum_{i=1}^{p} ( \hat{y}_i - f(x_i) )^2    (5.27)

with respect to the weights of the model. If a weight penalty term is added to the sum-squared-error, as is the case with ridge regression, then the following cost function is minimised:

C = \sum_{i=1}^{p} ( \hat{y}_i - f(x_i) )^2 + \sum_{j=1}^{m} \lambda_j w_j^2    (5.28)

where {\lambda_j}_{j=1}^{m} are the regularisation parameters. The minimisation of the cost function (shown in appendix B) leads to a set of m simultaneous linear equations in the m unknown weights, and the linear equations can be written more conveniently as the matrix equation

A \hat{w} = H^T \hat{y}    (5.29)

where the design matrix H is

H =
[ h_1(x_1)  h_2(x_1)  ...  h_m(x_1)
  h_1(x_2)  h_2(x_2)  ...  h_m(x_2)
  ...
  h_1(x_p)  h_2(x_p)  ...  h_m(x_p) ]    (5.30)

the variance matrix A^{-1} is

A^{-1} = (H^T H + \Lambda)^{-1}    (5.31)

where the elements of the matrix \Lambda are all zero except for the regularisation parameters along its diagonal, and \hat{y} = [\hat{y}_1, \hat{y}_2, ..., \hat{y}_p]^T is the vector of training set outputs. The solution is the so-called normal equation,

\hat{w} = A^{-1} H^T \hat{y} = (H^T H + \Lambda)^{-1} H^T \hat{y}    (5.32)

and \hat{w} = [\hat{w}_1, \hat{w}_2, ..., \hat{w}_m]^T is the vector of weights which minimises the cost function.
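As an illustration of equations (5.26) through (5.32), the following MATLAB sketch fits the weights of a Gaussian RBF model by ridge regression. It is a minimal sketch under assumed data: the training set, centres, radius, and regularisation values are all arbitrary, and this is not the network construction used later in this study (which relied on the Neural Network Toolbox).

% Ridge-regression fit of an RBF model, equations (5.26) through (5.32)
p = 50; x = linspace(0, 1, p)';         % p scalar training inputs
yhat = sin(2*pi*x) + 0.1*randn(p, 1);   % noisy training outputs (assumed data)
c = linspace(0, 1, 10); m = numel(c);   % m = 10 Gaussian centres
r = 0.2;                                % common radius for all basis functions
D = repmat(x, 1, m) - repmat(c, p, 1);  % distances of each input to each centre
H = exp(-(D.^2) / r^2);                 % p x m design matrix, H(i,j) = h_j(x_i), eq. (5.30)
Lambda = 1e-3 * eye(m);                 % regularisation parameters on the diagonal
w = (H'*H + Lambda) \ (H'*yhat);        % normal equation solution, equation (5.32)
f = H*w;                                % fitted model outputs, equation (5.26)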

CHAPTER 6
MATERIALS AND METHODS
In this chapter, the image acquisition system will be discussed along with the strategies that were followed to achieve the classification results. The figure below shows the image acquisition system.

Figure 6-1. Image acquisition system

The system consisted of the following components:
1) Four 16W cool white fluorescent bulbs (4500 K) with natural light filters and reflectors
2) JAI MV90 3 CCD color camera with 28-90 mm zoom lens
3) Coreco PC-RGB 24 bit color frame grabber with 480 by 640 pixel image size
4) MV Tools image capture software

5) MATLAB Image Processing Toolbox and Neural Network Toolbox
6) SAS statistical analysis package
7) Windows-based computer
It was decided for initial experiments that images would be taken indoors of samples collected in an outdoor environment, in order to minimize the detrimental effects of variation in lighting during daytime conditions. Although a real autonomous harvester or sprayer would have to operate in an outdoor environment, that endeavor would need further research to make the imaging and analysis procedure invariant to lighting conditions. Since the effort in this research was to test the efficacy of the method, it was decided to perform the imaging in an indoor environment inside a laboratory. However, the conditions in which the images were taken were simulated to be similar to an outdoor environment. To accomplish this, four 16W cool white fluorescent bulbs (4500 K) with natural light filters and reflectors were used. The lighting setup is shown in figure 6-1 above. The choice of cool white fluorescent light for the imaging system can best be explained by comparing the spectrum of natural light with those of the cool white bulbs with natural light filters, which are shown in figures 6-2 to 6-4. It is evident that the natural light filters mimic the spectrum of sunlight. The camera used for image acquisition was a JAI MV90 3 CCD color camera with 28-90 mm zoom lens. The camera was interfaced with the computer using a frame grabber, which converted the analog signals from the camera into digital RGB images. The frame grabber was a Coreco PC-RGB 24 bit color frame grabber with a 480 by 640 pixel image size.

Figure 6-2. Full spectrum sunlight

Figure 6-3. Cool white fluorescent spectrum

Figure 6-4. Spectrum comparison

CCD camera image sensors (analog and digital) use photodiodes to convert photons to electrons. CCD sensors create high quality, low-noise images.

CCD sensors have been mass produced for a long period of time, so they are a mature technology. They tend to have higher quality pixels, and more of them. An image sensor consists of a rectangular array (imagers) or a single line (line-scan) of equi-spaced, discrete light-sensing elements, called photo-sites. The charges that build up on all of the array photo-sites are linked or "coupled" together so that they can be transferred out of the array directly (digital output) or processed into a time-varying video signal (analog output). A visual representation of an image sensor is given below.

Figure 6-5. Visual representation of an image sensor

As shown in the figure above, the image sensor can be represented as an array of buckets (photodiodes). Whenever the buckets are filled with a sufficient amount of light, the accumulated charges are transferred out. The camera in this experiment was an analog camera. The final output from an analog camera is an analog signal, which needs to be processed further so that a digital image can be captured.

This was accomplished using a frame grabber. The frame grabber used in this research was a Coreco PC-RGB 24 bit color frame grabber with a 480x640 pixel image size.

Figure 6-6. Coreco PC-RGB 24 bit color frame grabber

PC-RGB is a high performance, low-cost PCI-bus image capture board supplying the industry's fastest color or monochrome video data transfer to host computer memory. PC-RGB combined a high performance frame grabber, 4MB of onboard memory to speed image transfers, unique onboard circuitry for precise interrupt control, OPTO-22 compatible digital I/O for controlling external events, and support for a wide range of cameras. PC-RGB could acquire monochrome (up to 5 simultaneous input channels with 6 total inputs) or RGB color data (2 RGB channels) from RS-170, CCIR, progressive scan, and variable scan cameras. PC-RGB used a special hardware scatter-gather technique to provide continuous high-speed image transfers across the PCI-bus at rates up to 120MB/sec sustained with almost no host CPU involvement.

PC-RGB could transfer images and selected Areas of Interest (AOIs, up to 4K x 4K) out of memory at full speed without the need to write any special code. Additionally, onboard pixel packing circuitry enabled nondestructive graphic overlays, such as annotation, windows, and cross-hairs on live images. PC-RGB provided additional advanced features including padding (sending 8-bit data to a 16-bit VGA color surface), clipping (allowing data to be bus mastered directly to a VGA target without seeing "color dots", Windows reserved values), and programmable X/Y zooming (x2, x4). Key features of the PC-RGB are as follows:
1) PCI-bus moves images to the host PC in less than 4 ms, providing an additional 29 ms of free processing time
2) Hardware scatter-gather bus mastering feature for high-speed simultaneous grab and transfer to the host with minimal CPU involvement
3) Image modification on transfer to host memory with no additional CPU involvement (flip, rotate, zoom, etc.)
4) Image deinterlacing on image transfers with no CPU involvement
5) Supports programmable OPTO-22 compatible I/O, trigger, strobe, and frame reset for developers with "real world" applications and demanding cycle times
6) DMA of Areas of Interest (AOIs) minimizes transfer times
Also, software for capturing images of leaves was used. The software was GUI-based with easy-to-use features for image acquisition. Algorithms for feature extraction and classification were written in MATLAB. A statistical package called SAS was also used for feature reduction. The technicalities of feature extraction and classification were explained in chapters 4 and 5.

The leaf samples that were used in this research were collected in a citrus grove located in Bowling Green, Florida. The main reason to get samples in the field was to prove the concept that color texture analysis with classification algorithms could be used to attain the objective of leaf classification. Ambient light variability was nullified by performing the analyses in controlled lab conditions. The image data for this analysis was obtained from leaf samples collected from a grapefruit grove. Grapefruit has diseases that can easily be identified by visual inspection. The samples were collected from a number of different grapefruit trees for each diseased condition. The leaf samples within arm's reach were pulled off with leaf stems intact and then sealed in Ziploc bags to maintain the moisture level of the leaves. Forty samples were collected for each of the four classes of leaves. The samples were brought to the laboratory and then lightly rinsed and soaped to remove any nonuniform distribution of dust. This was done so that the image data would have similar surface conditions for all classes of leaves. The leaf samples were then sealed in new bags with appropriate labels and put in environmental control chambers maintained at 40 °F. The leaf samples were then taken to the imaging station, and images of all the leaf samples were acquired, of both the front portion and the back portion. Forty images from each leaf class were stored in uncompressed JPEG format. The 40 images per class were divided into 20 samples each for the training dataset and the test dataset. The selection method was based on alternate selection with sequential image capture. The camera used for image acquisition was calibrated under the artificial light source using a calibration grey card. An RGB digital image was taken of the grey card, and each color channel was evaluated using histograms, mean, and standard deviation statistics.

Red and green channel gains were adjusted until the grey-card images had similar means in R, G, and B, approximately equal to 128, which is mid-range for a scale from 0 to 255. The standard deviation of calibrated pixel values was approximately equal to 5.0. The detailed step-by-step account of the image acquisition and classification process is illustrated in the following flowchart, which proceeds from image acquisition (480x640 pixel RGB image) through leaf edge detection, background noise reduction (pixels outside the leaf zeroed), image size reduction (to 240x320), RGB to HSI translation (hue, saturation, and intensity matrices), SGDM matrix generation, and finally texture statistics computation.

Figure 6-7. Image acquisition and classification flow chart

In the initial step, the RGB images of all the leaf samples were obtained. For each image in the data set the subsequent steps were repeated. Edge detection of the leaf was done on each image of the leaf sample using Canny's edge detector. A sample edge-detected image of a leaf sample is shown in figure 6-8.

Figure 6-8. Edge detected image of a leaf sample

Once the edge was detected, the image was scanned from left to right for each row in the pixel map, and the area outside the leaf was zeroed to remove any background noise. In the next step, the image size was reduced from 480x640 pixel resolution to 240x320 pixel resolution. The reduced images were then converted from RGB format to HSI format. The SGDM (spatial gray-level dependence matrix) matrices were then generated for each pixel map of the image. The SGDM is a measure of the probability that a given pixel at one particular gray level will occur at a distinct distance and orientation angle from another pixel, given that pixel has a second particular gray level. From the SGDM matrices, the texture statistics for each image were generated; a sketch of this step is given below.
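As an illustration of this step, the following MATLAB sketch computes a gray-level co-occurrence matrix and a few texture statistics for the hue plane of an image. It is a minimal sketch of the general technique, not the exact routine used in this study: it assumes a hypothetical image file leaf.jpg, uses the Image Processing Toolbox functions graycomatrix and graycoprops (with HSV standing in for HSI), and the four graycoprops statistics are only a subset of the 13 texture features described in chapter 4.

% Co-occurrence texture statistics for the hue plane (illustrative sketch)
rgb = imread('leaf.jpg');                 % hypothetical leaf image
hsi = rgb2hsv(rgb);                       % HSV used here as a stand-in for HSI
hue = hsi(:, :, 1);                       % hue pixel map
glcm = graycomatrix(hue, 'NumLevels', 64, 'Offset', [0 1]);  % SGDM, 0 degrees, distance 1
stats = graycoprops(glcm, {'Contrast', 'Correlation', 'Energy', 'Homogeneity'});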

SAS Analysis
Once the texture statistics were generated for each image, statistical analyses were conducted using the SAS statistical analysis package to reduce redundancy in the texture feature set, which acted as the input data to the classifiers. Based on these analyses, several models of data sets were created, as shown below:

MODEL  LEAF   COLOR  STEPDISC variable sets
1B     Back   HS     S5, S2, H7, S6, S9, H8, H11, S12, H1, H12
2B     Back   I      I2, I13, I8, I7, I6, I3
3B     Back   HSI    I2, S5, I10, H11, S1, I13, S13
4B     Back   HSI    All variables
1F     Front  HS     S2, H10, H6, H2, H8, S9, S4
2F     Front  I      I2, I5, I4, I12, I13
3F     Front  HSI    I2, I5, I4, S2, H11, S4, H4
4F     Front  HSI    All variables
Table 6-1. Classification models

The variables listed correspond to the color feature set and the particular Haralick texture feature. For instance, S2 corresponds to a saturation texture feature and I2 corresponds to an intensity texture feature. A detailed description can be found in the section on co-occurrence methodology for texture extraction in chapter 4. In the table above, H represents hue, S represents saturation, and I represents intensity. In the classification process, only data models from image sets of the back portion of the leaves (1B, 2B, 3B, 4B) were used. Earlier research by Burks (2000) had revealed that the classification accuracies for both front portions and back portions matched closely.

Due to the unavailability of a sufficient number of datasets for the front portions of leaves, it was decided that only back portions of leaf images would be used for the research.
Input Data Preparation
Once the feature extraction was complete, two text files were obtained:
1) Training texture feature data (with all 39 texture features)
2) Test texture feature data (with all 39 texture features)
The files had 80 rows each, representing 20 samples from each of the four classes of leaves as discussed earlier. Each row had 39 columns representing the 39 texture features extracted for a particular sample image. Each row also had a unique number (1, 2, 3, or 4) which represented the class of that particular row of data: 1 represented a greasy spot disease infected leaf, 2 represented a melanose disease infected leaf, 3 represented a normal leaf, and 4 represented a scab disease infected leaf. Once the basic data files were obtained as explained above, training and test files for each of the models listed in Table 6-1 were obtained by selecting only the texture features needed in that particular model from the total 39 texture features in the original data files.
Classification Using Squared Mahalanobis Distance
A software routine was written in MATLAB that would take in text files representing the training and test data, train the classifier using the training files, and then use the test file to perform the classification task on the test data. The train files were as follows:

1) g1.txt: contains a 20 x n matrix where each row represents the n selected texture features of one of the 20 training images of greasy spot leaves
2) m1.txt: contains a 20 x n matrix where each row represents the n selected texture features of one of the 20 training images of melanose leaves
3) n1.txt: contains a 20 x n matrix where each row represents the n selected texture features of one of the 20 training images of normal leaves
4) s1.txt: contains a 20 x n matrix where each row represents the n selected texture features of one of the 20 training images of scab leaves
Here n is dependent on the model selected, as discussed in Table 6-1 above. The test file contained a 20 x n data matrix representing one class of test data, and therefore the classification task had to be done in 4 iterations, one for each class of data; a sketch of the distance computation is given below.
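The core of the routine is the squared Mahalanobis distance d^2 = (x - m)' C^{-1} (x - m) of a test feature vector x to each class, where m and C are the class mean and covariance estimated from the training file. The following MATLAB lines show this computation for one test sample against one class; it is a minimal illustration only (the full routine appears in appendix A, which normalizes the covariance slightly differently).

% Squared Mahalanobis distance of one test sample to one class (illustrative sketch)
train = load('g1.txt');           % 20 x n training matrix for one class
m  = mean(train, 1)';             % n x 1 class mean vector
C  = cov(train);                  % n x n class covariance matrix
tes = load('testinputdata.txt');  % 20 x n test matrix, as described in appendix A
x  = tes(1, :)';                  % first test sample as an n x 1 vector
d2 = (x - m)' * (C \ (x - m));    % squared Mahalanobis distance to this class
% Repeating this for all four classes, the sample is assigned to the class
% with the minimum d2.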

Classification Using Neural Network Based on Back Propagation Algorithm
The network used in the analysis is as follows:

Figure 6-9. Network used in feed forward back propagation algorithm

As shown in figure 6-9, the network has three layers. The hidden layers had 10 processing elements or neurons in each layer. The output layer had four neurons. The inputs were the texture features, as discussed earlier. Based on the model selected, the appropriate texture features were used as input. The transfer function used at the hidden layers and the output layer was a TANSIG function. A MATLAB routine would load all the data files (training and test data files) and modify the data according to the model chosen. The routine is included at the end of this thesis in appendix A. The ideal outputs (target vectors) of the various leaf samples were represented as follows:
Greasy spot: [1; 0; 0; 0]^T
Melanose:    [0; 1; 0; 0]^T
Normal:      [0; 0; 1; 0]^T
Scab:        [0; 0; 0; 1]^T
In the target matrix tt1, the first 20 columns represented greasy spot, columns 21 through 40 represented melanose, columns 41 through 60 represented normal, and the final 20 columns, 61 through 80, represented scab. After importing the training matrix and the target matrix into the MATLAB workspace, a network was constructed using the command newff, which is part of the MATLAB Neural Network Toolbox. After creating the network, it was trained using the function train. After training, the test data was simulated using the function sim. The MATLAB technical literature gives the following explanation for the function newff: NEWFF creates a feed-forward back propagation network and returns an N-layer feed-forward back propagation network. The syntax for the function is as follows:
net = newff(PR, [S1 S2...SNl], {TF1 TF2...TFNl}, BTF, BLF, PF)
Description: NEWFF(PR, [S1 S2...SNl], {TF1 TF2...TFNl}, BTF, BLF, PF) takes,
PR  R x 2 matrix of min and max values for R input elements.

Si  Size of ith layer, for Nl layers.
TFi  Transfer function of ith layer, default = 'tansig'.
BTF  Backprop network training function, default = 'trainlm'.
BLF  Backprop weight/bias learning function, default = 'learngdm'.
PF  Performance function, default = 'mse'.
The transfer functions TFi can be any differentiable transfer function such as TANSIG, LOGSIG, or PURELIN. The training function BTF can be any of the back propagation training functions such as TRAINLM, TRAINBFG, TRAINRP, TRAINGD, etc. The learning function BLF can be either of the back propagation learning functions LEARNGD or LEARNGDM. The performance function can be any of the differentiable performance functions such as MSE or MSEREG. The architecture of the network used in this study was as follows:
1) Number of hidden layers: 3
2) Number of inputs to the input layer: n (the number of texture features selected, which depended on the model used)
3) Number of outputs in the output layer: 4
4) The parameters of the network were as follows:
Network: feed forward back propagation
Training function: TRAINLM
Adaptation learning function: LEARNGDM
Performance function: MSE
Epochs: 3000
Goal: 0.0000001
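Putting these calls together, a construction and training sequence along the lines described above might look as follows in the Neural Network Toolbox of that era. This is a hedged sketch, not the exact routine from appendix A: the training matrix P (n x 80), the target matrix tt1 (4 x 80), and the test matrix Ptest are assumed to be already in the workspace, and since the exact layer count is ambiguous in the description above, the sketch assumes two hidden layers of 10 TANSIG neurons followed by an output layer of 4.

% Construct, train, and test the feed-forward back propagation network (sketch)
net = newff(minmax(P), [10 10 4], {'tansig' 'tansig' 'tansig'}, 'trainlm');
net.trainParam.epochs = 3000;       % maximum number of training epochs
net.trainParam.goal   = 0.0000001;  % MSE performance goal
net = train(net, P, tt1);           % supervised training on the 80 samples
Y = sim(net, Ptest);                % simulate the trained network on the test data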

The number of epochs and the performance goal were specified in the MATLAB routine as shown below:
net.trainParam.epochs = 3000
net.trainParam.goal = 0.0000001
With these parameters, the network was trained. Once the training was complete, the test data for each class of leaves was tested. The results of the classification task are given in the next chapter.
Classification Using Neural Network Based on Radial Basis Functions
For classification using neural networks, the Neural Network Toolbox of MATLAB version 6.12 was used. The Neural Network Toolbox provides a GUI-based tool which allows the user to load data files, specify the parameters, and specify the kind of training to be used. The GUI is very user friendly. When using it, the user can input data, specify the type of network to be built, choose an appropriate training method, and specify the parameters for the constructed network. It also allows the user to simulate the trained network using the test data. To invoke the toolbox, the command nntool has to be typed at the MATLAB command prompt. This command opens up the graphical user interface (GUI). The GUI is a convenient way of constructing any type of neural network and specifying the parameters of that network. It has a number of buttons which add the necessary functionality for inputting data, constructing the network, training the network, and simulating it on test data. The GUI that appears after invoking the nntool command is shown in figure 6-10.

Figure 6-10. Snapshot of the GUI for the neural network toolbox data manager

Using the Import button, the training and testing data files could be loaded into the MATLAB workspace. Once the data files were loaded, a neural network was designed using the New Network button as shown. The dialog box that pops up allows the user to specify the number of layers and the type of training to be used. Detailed literature regarding the creation and training of neural networks can be found in the MATLAB technical documentation. The network used for this technique is as shown in figure 6-11.

Figure 6-11. Neural network based on radial basis function used in leaf classification

The network used 80 basis functions, as shown in figure 6-11; with an exact-fit design, one basis function is placed on each of the 80 input vectors used for training. The output is a 2 x 1 column vector. The outputs of the RBF network are fuzzy outputs giving a measure of strength. In this analysis, the ideal target output vectors used for training the various classes of leaves were as follows:
Greasy spot: [0 0]; Normal: [1 0]
Melanose: [0 1]; Scab: [1 1]
The level of fuzziness was determined as follows:
1) Any value < 0.5 was taken to be equivalent to 0
2) Any value > 0.5 was taken to be equivalent to 1
The parameters used in building this network were as follows:
1) Function used: radial basis function (exact fit)
2) Spread constants used: 3.3822e+003, 956.2873, 629.2455, 1.6532e+003
3) Input consisted of the various models of data as discussed earlier. The input was normalized before being fed into the network.
After the network was built, test data for each class was fed to the network and the classification task was completed based on the target vectors and fuzzy criterion described above.
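In the toolbox of that era, the exact-fit radial basis network built through the GUI corresponds to the function newrbe. The following is a hedged sketch of the equivalent command-line steps, not the GUI session actually used in this study: P, T, and Ptest are assumed to be already loaded and normalized, and the spread value shown is one of the constants listed above.

% Exact-fit radial basis network: one neuron per training vector (sketch)
% P: n x 80 normalized training features; T: 2 x 80 fuzzy target vectors
net = newrbe(P, T, 956.2873);     % exact-fit RBF with an assumed spread constant
Y = sim(net, Ptest);              % fuzzy 2 x 1 output per test sample
labels = Y > 0.5;                 % apply the 0.5 fuzziness threshold described above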

CHAPTER 7
RESULTS
The results for the various classification strategies that were used are given below.
Generalized Square Distance Classifier from SAS

Model  Color Feature  Greasy spot  Melanose  Normal  Scab  Overall
1B     HS             100          100       90      95    96.3
2B     I              100          100       95      100   98.8
3B     HSI            100          100       100     100   100
4B     ALL            100          100       100     100   100
Table 7-1. Percentage classification results of the test data set from SAS

The results shown in Table 7-1 were obtained using a generalized square distance classifier in SAS (PROC DISCRIM). The results were part of preliminary investigations by Burks (2002). The classifier achieved good classification accuracies for all the data models. In particular, models 3B and 4B achieved perfect overall classification rates. Model 1B achieved an overall accuracy of 96.3% and model 2B an accuracy of 98.8%. However, it should be noted that models 2B, 3B, and 4B involve the calculation of intensity texture features, which is disadvantageous in terms of computational complexity. Therefore, model 1B is judged the overall best model for this classifier. One more advantage of using 1B is the decrease in computational time for training and classification, because of the elimination of intensity features and the smaller number of features present in the model.

Statistical Classifier Based on Mahalanobis Minimum Distance Principle

Model  Color Feature  Greasy spot  Melanose  Normal  Scab  Overall
1B     HS             100          100       100     95    98.75
2B     I              100          95        95      100   97.5
3B     HSI            0            100       100     100   75
4B     ALL            90           100       80      60    85
Table 7-2. Percentage classification results for Mahalanobis distance classifier

The results shown in Table 7-2 were obtained using the Mahalanobis minimum distance classifier. In particular, models 1B and 2B achieved the better overall classification rates. Model 1B achieved an overall accuracy of 98.75% and model 2B an accuracy of 97.5%. However, it should be noted that model 2B involves the calculation of intensity texture features, as already explained above. Therefore, model 1B is the overall best model for this classifier. In general, Tables 7-1 and 7-2 show that the results for both classifiers based on statistical classification agree closely, especially concerning models 1B and 2B. Although model 2B performed well in both cases, it is not useful in real-world applications, since choosing only intensity may be detrimental to the classification task due to inherent intensity variations in an outdoor lighting environment. Hence, model 1B emerges as the best model for the classifiers based on statistical classification.
Neural Network Classifier Based on Feed Forward Back Propagation Algorithm

Model  Color Feature  Greasy spot  Melanose  Normal  Scab  Overall
1B     HS             100          90        95      95    95
2B     I              95           95        15      100   76.25
3B     HSI            100          90        95      95    95
4B     ALL            100          95        100     100   98.75
Table 7-3. Percentage classification results for neural network using back propagation

The results shown in Table 7-3 were obtained using the neural network based on the back propagation principle. In particular, models 1B, 3B, and 4B achieved the better overall classification rates. Models 1B and 3B achieved an overall accuracy of 95% and model 4B an accuracy of 98.75%. Based on the explanation already given, model 1B emerged as the best model in this classification strategy.
Neural Network Classifier Based on Radial Basis Functions

Model  Color Feature  Greasy spot  Melanose  Normal  Scab  Overall
1B     HS             100          100       85      60    86.25
2B     I              100          20        75      0     48.75
3B     HSI            95           80        85      85    86.25
4B     ALL            100          95        100     95    97.5
Table 7-4. Percentage classification results for neural network using RBF

The results shown in Table 7-4 were obtained using the neural network based on radial basis functions (RBF). In particular, models 1B, 3B, and 4B achieved the better overall classification rates. Models 1B and 3B achieved an overall accuracy of 86.25% and model 4B an accuracy of 97.5%. Model 4B achieved the highest overall classification accuracy. This is justifiable since model 4B consists of more texture features, which is beneficial in neural network applications, which are curve-fitting problems. However, in an outdoor application model 4B may be disadvantageous due to the large CPU time required to calculate the texture features. Therefore, from the experiments, it is concluded that model 1B, consisting of features from hue and saturation, is the best model for the task of citrus leaf classification. Elimination of intensity in the texture feature calculation is the major advantage, since it nullifies the effect of light variability.

In addition, the algorithms would be faster due to the smaller number of texture features (10) required.
Discussion. Burks (2002) completed preliminary investigations for the task of citrus leaf classification. In that study, the data sets consisted of both the front portions as well as the back portions of leaves. From those data sets, the texture features for the various models described earlier were obtained. The classification results using the SAS generalized square distance classifier gave similar overall accuracies for both the front and the back data models. However, in the case of the back portion the accuracies were slightly higher. Hence, for this study it was decided to use the back portion of leaves for feature extraction of the various data models. It is evident from Table 7-2 that, for models 3B and 4B, the classification accuracies for some classes of leaves were inconsistent with the excellent results that were obtained for other models. The Mahalanobis distance method determines the similarity of a set of values from an unknown sample (test data) to a set of values measured from a collection of known samples (training data). For models 3B and 4B, it may be the case that the distributions for training and test were dissimilar for the particular choice of texture features in those models. Since the classification was predominantly based on the minimum distance, even a slight deviation may significantly affect the test accuracies. Therefore, the choice of the model, and hence the texture features selected, significantly affects the classification results. Similarly, from Tables 7-3 and 7-4, the results for the neural network classifiers also show some discrepancies in terms of accuracies for some models. In the case of the neural network with back propagation as well as with radial basis functions, model 2B (with only intensity features) performs poorly.

This can be attributed to the fact that the network did not have enough information (in other words, there was overlapping of data clusters) to form a perfect classification hyperplane separating the data clusters belonging to the various classes of leaves. The intensity of the leaves, when considered as a whole, may not have incorporated enough information for the network to make correct classification decisions. This shows that, for neural network classifiers, using only intensity texture features will not yield good classification. One significant point to be noted with neural network classifiers is that the results may not be consistent across several trials using the same input and parameters. This is because the weight initialization in the network is random, and hence the outputs vary. The results for the neural network classifiers shown in this research were the average of the outputs (classification accuracies) of three successive trials. Model 1B emerged as the best model among the various models. It was noted earlier that this was in part because of the elimination of the intensity texture features. Elimination of intensity is advantageous in this study because it nullifies the effect of intensity variations. Moreover, it reduces the computational complexity by foregoing the need to calculate the CCM matrices and texture statistics for the intensity pixel map. However, in an outdoor application, eliminating intensity altogether may have an effect on the classification, since the ambient variability in outdoor lighting is not taken into consideration. Hence, future research for outdoor applications should consider outdoor lighting variability. Table 7-5 shows the number of leaf samples classified into each category for the case of the neural network classifier with the back propagation algorithm using model 1B.

The results show that a few samples from the melanose, normal, and scab leaves were misclassified. For the case of melanose infected leaves, two test images were misclassified: one leaf sample was misclassified as belonging to the normal leaf class and the other as a scab infected leaf. Similarly, in the case of the normal and scab images, one test image from each class was misclassified as belonging to the other class. In general, it was observed in various trials that misclassifications mainly occurred in three classes, namely melanose, normal, and scab.

               From species:
Classified as  Greasy spot  Melanose  Normal  Scab
Greasy spot    20           0         0       0
Melanose       0            18        0       0
Normal         0            1         19      1
Scab           0            1         1       19
Accuracy (%)   100          90        95      95
Table 7-5. Classification results per class for neural network with back propagation

Figures 1-1 through 1-4 show the leaf samples belonging to the various classes. It is obvious that leaves belonging to the melanose, normal, and scab classes showed significant differences from greasy spot leaves in terms of color and texture. The leaves belonging to these three classes had only minute differences discernible to the human eye, which may explain the misclassifications shown in Table 7-5. The false positives observed for these leaves may have occurred because, for some test images, the feature set for the chosen model overlapped with the feature sets of leaves belonging to other classes. Table 7-6 lists the overall classification results of the various classifiers for the particular case of model 1B. The classification accuracies for each of the four classes of leaves, using the various classifiers, are shown in the table.

Classifier   Greasy spot  Melanose  Normal  Scab  Overall
SAS          100          100       90      95    96.3
Mahalanobis  100          100       100     95    98.75
NNBP         100          90        95      95    95
RBF          100          100       85      60    86.25
Table 7-6. Comparison of various classifiers for model 1B

The table serves as a benchmark for comparing the efficacy of the various classifiers. The overall accuracies are well above 95% for the statistical classifiers as well as for the neural network classifier using the back propagation algorithm. The radial basis function (RBF) network achieved an overall accuracy of 86.25%. Since the RBF network treats classification as a curve-fitting problem in a higher dimensional space, the lower accuracy may be attributed to overlap among the various data clusters, which may have produced false positives as well as false negatives, thus affecting the overall accuracy.

CHAPTER 8
SUMMARY AND CONCLUSIONS
A detailed study was completed to investigate the use of computer vision and image processing techniques in agricultural applications. The task of citrus leaf disease classification using the above mentioned techniques was successfully implemented. Three different classes of citrus diseases, greasy spot, melanose, and scab, were used for this study. The image data of the leaves selected for this study was collected using a JAI MV90 3 CCD color camera with 28-90 mm zoom lens. Algorithms for feature extraction and classification based on image processing techniques were designed. The feature extraction process used the color co-occurrence methodology (CCM method), in which both the color and texture of an image are taken into account to arrive at unique features which represent that image. The datasets, in the form of digitized RGB color photographs, were fed manually for feature extraction and for training the SAS statistical classifier. After training the SAS classifier, the test data sets were used to evaluate the classification accuracy. The whole procedure of analysis was replicated for three alternate classification approaches: a statistical classifier using the Mahalanobis minimum distance method, a neural network based classifier using the back propagation algorithm, and a neural network based classifier using radial basis functions. The analyses prove that such methods can be used for agricultural applications in areas such as precision farming. Model 1B emerged as the best data model for the task of citrus leaf classification. The statistical classifiers gave good results, averaging above 95% overall classification accuracy.

PAGE 100

89 95% overall classification accuracy. Similarly, neural network classifiers also achieved comparable results. This research was a feasibility analysis to see whether the techniques investigated in this research can be implemented in outdoor applications. The results that were obtained prove that these methods can indeed be used for such applications. However, it should be kept in mind that all the analyses in this study were done in controlled laboratory conditions. The real world conditions are much more different due to the inherent variability in natural outdoor lighting and tree structure. That would be a major challenge to overcome in future implementations so as to make the research portable for real time leaf classification. Future work. The future implementations of the present research would include analyzing disease conditions of the citrus trees in an outdoor environment. Moreover, the tree canopy as a whole could be taken into consideration instead of just leaves. Specifically, the research would include the study of tree canopies to determine the presence of any disease condition. The whole technology could be incorporated onboard an autonomous vehicle with GPS. This would not only identify the diseased trees but also map out their positions in the grove so that selective chemical application is possible. However, the above task is complex in terms of implementation due to the high variability in outdoor conditions. The present research should be modified, to make it feasible for outdoor applications, before such a system could be implemented.


APPENDIX A
MATLAB CODE FILES

1) Mahalanobis minimum distance classifier to identify diseased citrus leaves from normal leaves.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% Citrus Disease Identification Project
%% Agricultural & Biological Engineering Department
%% UNIVERSITY OF FLORIDA
%% Matlab program developed by Rajesh Pydipati, RESEARCH ASSISTANT (Ag & Bio Eng)
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% g1.txt contains a 20 x n matrix where each row represents the n HSI features
%% of the 20 training images of greasy spot leaves. m1.txt, n1.txt and s1.txt
%% contain the corresponding 20 x n matrices for the melanose, normal and scab
%% training images. In the 'testinputdata.txt' file, the 20 x n matrix of the
%% test dataset should be input; each row represents the n features of an image.

a = load('g1.txt');        % greasy spot features are input here
x = a';                    % take the transpose to ease calculations
s = size(x);
m1 = sum(x,2)/s(2);        % mean vector (39x1) for greasy spot
for i = 1:20
    x1(:,i) = x(:,i) - m1;
end
w = zeros(39,39);
for i = 1:20
    w = w + (x1(:,i)*x1(:,i)');
end
cv1 = w/s(2);              % 39x39 covariance matrix for greasy spot

b = load('m1.txt');        % melanose features are input here
y = b';
t = size(y);
m2 = sum(y,2)/t(2);        % mean vector (39x1) for melanose
for i = 1:20
    y1(:,i) = y(:,i) - m2;
end
w1 = zeros(39,39);
for i = 1:20
    w1 = w1 + (y1(:,i)*y1(:,i)');
end
cv2 = w1/t(2);             % covariance for melanose (39x39)

c = load('n1.txt');        % normal leaf features are input here
z = c';
u = size(z);
m3 = sum(z,2)/u(2);        % 39x1 mean vector for the normal leaf class
for i = 1:20
    z1(:,i) = z(:,i) - m3;
end
w2 = zeros(39,39);
for i = 1:20
    w2 = w2 + (z1(:,i)*z1(:,i)');
end
cv3 = w2/u(2);             % covariance (39x39) for the normal leaf class

d = load('s1.txt');        % scab features are input here
v = d';
o = size(v);
m4 = sum(v,2)/o(2);        % mean vector (39x1) for scab
for i = 1:20
    v1(:,i) = v(:,i) - m4;
end
w3 = zeros(39,39);
for i = 1:20
    w3 = w3 + (v1(:,i)*v1(:,i)');
end
cv4 = w3/o(2);             % covariance for scab

count1 = 0; count2 = 0; count3 = 0; count4 = 0;
greasyspot = 0; melanose = 0; normal = 0; scab = 0;

tes = load('testinputdata.txt');  % test features for one class, as a 20x39 matrix
test = tes';
for i = 1:20
    warning off
    ma1i = abs(((test(:,i)-m1)')*(inv(cv1))*(test(:,i)-m1));  % Mahalanobis distance to greasy spot class
    ma2i = abs(((test(:,i)-m2)')*(inv(cv2))*(test(:,i)-m2));  % Mahalanobis distance to melanose class
    ma3i = abs(((test(:,i)-m3)')*(inv(cv3))*(test(:,i)-m3));  % Mahalanobis distance to normal leaf class
    ma4i = abs(((test(:,i)-m4)')*(inv(cv4))*(test(:,i)-m4));  % Mahalanobis distance to scab class
    ns = [ma1i ma2i ma3i ma4i];
    mp = min(ns);          % classify to the class at minimum Mahalanobis distance
    if mp == ma1i
        class = 1; count1 = count1 + 1;
    elseif mp == ma2i
        class = 2; count2 = count2 + 1;
    elseif mp == ma3i
        class = 3; count3 = count3 + 1;
    else
        class = 4; count4 = count4 + 1;
    end
end
greasyspot = count1    % number of test images classified as greasy spot diseased leaf
melanose = count2      % number of test images classified as melanose diseased leaf
normal = count3        % number of test images classified as normal leaf
scab = count4          % number of test images classified as scab diseased leaf


2) Neural network classifier based on the back propagation algorithm

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% Citrus Disease Identification Project
%% Agricultural & Biological Engineering Department
%% UNIVERSITY OF FLORIDA
%% Matlab program developed by Rajesh Pydipati, RESEARCH ASSISTANT (Ag & Bio Eng)
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% Model studied here is model 4B

clear
a = load('train.txt');        % training feature vectors
b = load('test.txt');         % test feature vectors
d = load('dd.txt');           % target vectors for training
tp1 = flipud(rot90(a));
tp2 = flipud(rot90(b));
tt1 = d';
h1 = [20 15 10 8];            % hidden layer sizes tried in the four trials
h11 = 15;
h12 = 15;
input_size = size(tp1,1);
for i = 1:input_size          % scale each input feature to a maximum of 1
    ip1(i,:) = tp1(i,:)/max(tp1(i,:));
end
clear tp1;
for i = 1:input_size
    ip2(i,:) = tp2(i,:)/max(tp2(i,:));
end
clear tp2;

for i = 1:4
    sprintf('Trial %d', i)
    h11 = h1(i);
    h12 = h1(i);
    net = newff(minmax(ip1), [h11 h12 4], {'tansig' 'tansig' 'tansig'});
    net.trainParam.epochs = 3000;
    net.trainParam.goal = 0.0000001;
    [net,tr] = train(net, ip1, tt1);
    aout = sim(net, ip2);         % network outputs for the test set
    class1 = aout(:,1:20);        % columns 1-20: greasy spot test images
    class2 = aout(:,21:40);       % columns 21-40: melanose test images
    class3 = aout(:,41:60);       % columns 41-60: normal test images
    class4 = aout(:,61:80);       % columns 61-80: scab test images

    [max1,win1] = max(class1);    % index of the winning output node per image
    specie1 = zeros(4,1);         % column vector of per-class counts
    for j = 1:20
        if win1(j) == 1
            specie1(1) = specie1(1) + 1;
        elseif win1(j) == 2
            specie1(2) = specie1(2) + 1;
        elseif win1(j) == 3
            specie1(3) = specie1(3) + 1;
        elseif win1(j) == 4
            specie1(4) = specie1(4) + 1;
        end
    end
    specie1
    grespotacc = specie1(1)/20*100

    [max2,win2] = max(class2);
    specie2 = zeros(4,1);
    for j = 1:20
        if win2(j) == 1
            specie2(1) = specie2(1) + 1;
        elseif win2(j) == 2
            specie2(2) = specie2(2) + 1;
        elseif win2(j) == 3
            specie2(3) = specie2(3) + 1;
        elseif win2(j) == 4
            specie2(4) = specie2(4) + 1;
        end
    end
    specie2
    melacc = specie2(2)/20*100

    [max3,win3] = max(class3);
    specie3 = zeros(4,1);
    for j = 1:20
        if win3(j) == 1
            specie3(1) = specie3(1) + 1;
        elseif win3(j) == 2
            specie3(2) = specie3(2) + 1;
        elseif win3(j) == 3
            specie3(3) = specie3(3) + 1;
        elseif win3(j) == 4
            specie3(4) = specie3(4) + 1;
        end
    end
    specie3
    noracc = specie3(3)/20*100

    [max4,win4] = max(class4);
    specie4 = zeros(4,1);
    for j = 1:20
        if win4(j) == 1
            specie4(1) = specie4(1) + 1;
        elseif win4(j) == 2
            specie4(2) = specie4(2) + 1;
        elseif win4(j) == 3
            specie4(3) = specie4(3) + 1;
        elseif win4(j) == 4
            specie4(4) = specie4(4) + 1;
        end
    end
    specie4
    scabacc = specie4(4)/20*100
end


APPENDIX B
MINIMIZATION OF COST FUNCTION

As is well known from elementary calculus, to find an extremum of a function the procedure is:

1. differentiate the function with respect to the free variable(s),
2. equate the result(s) with zero, and
3. solve the resulting equation(s).

In the case of least squares applied to supervised learning with a linear model, the function to be minimised is the sum-squared-error

    S = \sum_{i=1}^{p} ( \hat{y}_i - f(x_i) )^2    (b.1)

where

    f(x) = \sum_{j=1}^{m} w_j h_j(x)    (b.2)

and the free variables are the weights w_j. The minimum of the cost function

    C = \sum_{i=1}^{p} ( \hat{y}_i - f(x_i) )^2 + \sum_{j=1}^{m} \lambda_j w_j^2    (b.3)

used in ridge regression is derived in the paragraphs below. The cost function includes an additional weight penalty term controlled by the values of the non-negative regularisation parameters \lambda_j. To get back to ordinary least squares (without any weight penalty) is simply a matter of setting all the regularisation parameters to zero. The optimisation for the j-th weight involves the following steps. First, differentiating the cost function yields

    \frac{\partial C}{\partial w_j} = 2 \sum_{i=1}^{p} ( f(x_i) - \hat{y}_i ) \frac{\partial f(x_i)}{\partial w_j} + 2 \lambda_j w_j    (b.4)

The above equation involves the derivative of the model which, because the model is linear, is particularly simple and is given as

    \frac{\partial f}{\partial w_j} = h_j(x)    (b.5)

Substituting this into the derivative of the cost function and equating the result to zero leads to the equation

    \sum_{i=1}^{p} f(x_i) h_j(x_i) + \lambda_j \hat{w}_j = \sum_{i=1}^{p} \hat{y}_i h_j(x_i)    (b.6)


There are m such equations, one for each value of j, each representing one constraint on the solution. Since there are exactly as many constraints as there are unknowns, the system of equations has, except under certain pathological conditions, a unique solution. To find that unique solution we employ the language of matrices and vectors: linear algebra. These are invaluable for the representation and analysis of systems of linear equations like the one above, which can be rewritten in vector notation as follows:

    h_j^\top f + \lambda_j \hat{w}_j = h_j^\top \hat{y}    (b.7)

where

    h_j = [ h_j(x_1)  h_j(x_2)  \cdots  h_j(x_p) ]^\top,
    f = [ f(x_1)  f(x_2)  \cdots  f(x_p) ]^\top,
    \hat{y} = [ \hat{y}_1  \hat{y}_2  \cdots  \hat{y}_p ]^\top    (b.8)

Since there is one of these equations (each relating one scalar quantity to another) for each value of j from 1 up to m, we can stack them, one on top of another, to create a relation between two vector quantities:

    \begin{bmatrix} h_1^\top f + \lambda_1 \hat{w}_1 \\ \vdots \\ h_m^\top f + \lambda_m \hat{w}_m \end{bmatrix} = \begin{bmatrix} h_1^\top \hat{y} \\ \vdots \\ h_m^\top \hat{y} \end{bmatrix}    (b.9)


However, using the laws of matrix multiplication, this is just equivalent to

    H^\top f + \Lambda \hat{w} = H^\top \hat{y}    (b.10)

where

    \Lambda = \mathrm{diag}( \lambda_1, \lambda_2, \ldots, \lambda_m ),
    \hat{w} = [ \hat{w}_1  \hat{w}_2  \cdots  \hat{w}_m ]^\top    (b.11)

and where H, which is called the design matrix, has the vectors h_j as its columns and has p rows, one for each pattern in the training set. Written out in full it is

    H = \begin{bmatrix} h_1(x_1) & h_2(x_1) & \cdots & h_m(x_1) \\ h_1(x_2) & h_2(x_2) & \cdots & h_m(x_2) \\ \vdots & \vdots & \ddots & \vdots \\ h_1(x_p) & h_2(x_p) & \cdots & h_m(x_p) \end{bmatrix}    (b.12)

The vector f can be decomposed into the product of two terms, the design matrix and the weight vector, since each of its components is a dot-product between two m-dimensional vectors. For example, the i-th component of f when the weights are at their optimal values is

    f(x_i) = \sum_{j=1}^{m} \hat{w}_j h_j(x_i) = h(x_i)^\top \hat{w}    (b.13)


where h(x_i) = [ h_1(x_i)  h_2(x_i)  \cdots  h_m(x_i) ]^\top. Note that while h_j is one of the columns of H, h(x_i)^\top is one of its rows. f is the result of stacking the f(x_i) one on top of the other, or

    f = H \hat{w}    (b.14)

Finally, substituting this expression for f into equation (b.10) gives the solution for the optimal weight vector, which is

    \hat{w} = ( H^\top H + \Lambda )^{-1} H^\top \hat{y}    (b.15)

which is where the normal equation comes from.


The latter equation is the most general form of the normal equation which we deal with here. There are two special cases. In standard ridge regression all the regularisation parameters share a single value \lambda, so \Lambda = \lambda I_m and

    \hat{w} = ( H^\top H + \lambda I_m )^{-1} H^\top \hat{y}

Ordinary least squares, where there is no weight penalty, is obtained by setting all the regularisation parameters to zero, so

    \hat{w} = ( H^\top H )^{-1} H^\top \hat{y}
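The normal equation translates directly into a few lines of MATLAB. The sketch below is a minimal illustration (it is not part of the thesis code; the sizes, the random placeholder data and the single regularisation parameter lam are assumptions) of computing the ridge regression weights of equation (b.15) for the standard case \Lambda = \lambda I_m:

% Ridge regression weights via the normal equation (b.15), standard case.
% H    : p x m design matrix (one row per training pattern)
% yhat : p x 1 vector of training targets
% lam  : non-negative scalar regularisation parameter
p = 50; m = 10;                        % example sizes (assumed for illustration)
H = rand(p, m); yhat = rand(p, 1);     % placeholder data (assumed)
lam = 0.1;
w = (H'*H + lam*eye(m)) \ (H'*yhat);   % backslash solves the linear system
f = H*w;                               % fitted values at the training points
% Setting lam = 0 recovers the ordinary least squares solution.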


LIST OF REFERENCES

Ampazis, N. 1999. Introduction to neural networks. Artificial Neural Networks Laboratory, Greece. http://www.iit.demokritos.gr/neural/intro (Accessed July 14, 2004).

Burks, T.F. 2002. Early detection of citrus diseases using machine vision. Presentation at the ASAE Conference, Chicago, USA, 2002.

Burks, T.F., S.A. Shearer, and F.A. Payne. 2000a. Classification of weed species using color texture features and discriminant analysis. Transactions of the ASAE, 43(2), 441-448.

Burks, T.F., S.A. Shearer, R.S. Gates, and K.D. Donohue. 2000b. Backpropagation neural network design and evaluation for classifying weed species using color image texture. Transactions of the ASAE, 43(4), 1029-1037.

Chao, K., Y.R. Chen, and M.S. Kim. 2002. Machine vision technology for agricultural applications. Computers and Electronics in Agriculture, 36(2-3), 173-191.

Clark, C.J., V.A. McGlone, and R.B. Jordan. 2003. Detection of brownheart in 'Braeburn' apple by transmission NIR spectroscopy. Postharvest Biology and Technology, 28(1), 87-96.

Coggins, J.M. 1982. A framework for texture analysis based on spatial filtering. Ph.D. dissertation, Computer Science Department, Michigan State University, East Lansing, Michigan.

Edwards, J.G., and C.H. Sweet. 1986. Citrus blight assessment using a microcomputer: quantifying damage using an Apple computer to solve reflectance spectra of entire trees. Florida Scientist, 49(1), 48-54.

Franz, E., M.R. Gebhardt, and K.B. Unklesbay. 1991. The use of local spectral properties of leaves as an aid for identifying weed seedlings in digital images. Transactions of the ASAE, 34(2), 682-687.

Guyer, D.E., G.E. Miles, D.L. Gaultney, and M.M. Schreiber. 1993. Application of machine vision to shape analysis in leaf and plant identification. Transactions of the ASAE, 36(1), 163-171.

Guyer, D.E., G.E. Miles, M.M. Schreiber, O.R. Mitchell, and V.C. Vanderbilt. 1986. Machine vision and image processing for plant identification. Transactions of the ASAE, 29(6), 1500-1507.

Hatfield, J.L., and P.J. Pinter, Jr. 1993. Remote sensing for crop protection. Crop Protection, 12(6), 403-414.


Haralick, R.M., K. Shanmugam, and I. Dinstein. 1973. Textural features for image classification. IEEE Transactions on Systems, Man, and Cybernetics, 3(6), 610-621.

Hodges, A., E. Philippakos, D. Mulkey, T. Spreen, and R. Muraro. 2001. Economic impact of Florida's citrus industry. University of Florida, Gainesville, IFAS Publications, Economic Information Report 01-2.

Jain, A.K., and M. Tuceryan. 1998. Texture analysis. In The Handbook of Pattern Recognition and Computer Vision (2nd edition), C.H. Chen, L.F. Pau, and P.S.P. Wang (eds.), 207-248. World Scientific Publishing Co., Singapore.

Kataoka, T., O. Hiroshi, and H. Shun-ichi. 2001. Automatic detecting system of apple harvest season for robotic apple harvesting. Presented at the 2001 ASAE Annual International Meeting, Sacramento, California. Paper No. 01-3132.

Kim, M.S., Y.R. Chen, and P.M. Mehl. 2001. Hyperspectral reflectance and fluorescence imaging system for food quality and safety. Transactions of the ASAE, 44(3), 721-729.

Laws, K.I. 1980. Textured image segmentation. Ph.D. dissertation, University of Southern California.

Lee, W.S., and D. Slaughter. 1998. Plant recognition using hardware-based neural network. Presented at the 1998 ASAE Annual International Meeting, Orlando, Florida. Paper No. 983040.

Moshou, D., H. Ramon, and J. De Baerdemaeker. 2002. A weed species spectral detector based on neural networks. Precision Agriculture, 3(3), 209-223.

Nakano, K. 1998. Application of neural networks to the color grading of apples. Computers and Electronics in Agriculture, 14, 105-116.

Ninomiya, S., and I. Shigemori. 1991. Quantitative evaluation of soybean plant shape by image analysis. Japanese Journal of Breeding, 41, 485-497.

Ning, K., M. Zhang, R. Ruan, and P.L. Chen. 2001. Computer vision for objective inspection of beans quality. Presented at the 2001 ASAE Annual International Meeting, Sacramento, California. Paper No. 01-3059.

Orr, M.J.L. 1996. Introduction to radial basis function networks. Centre for Cognitive Science, Edinburgh, Scotland. http://www.anc.ed.ac.uk/~mjo/intro/intro.html (Accessed July 14, 2004).


Pearson, T., and R. Young. 2001. Automated sorting of almonds with embedded shell by laser transmittance imaging. ASAE Annual International Meeting, Sacramento, California. Paper No. 01-013099.

Slaughter, D.C. 1987. Color vision for robotic orange harvesting. Ph.D. dissertation, University of Florida, Gainesville.

Tang, L., L.F. Tian, B.L. Steward, and J.F. Reid. 1999. Texture based weed classification using Gabor wavelets and neural networks for real time selective herbicide applications. ASAE/CSAE-SCGR Annual International Meeting, Toronto, Canada. Paper No. 993036.

Thompson, J.F., J.V. Stafford, and P.C.H. Miller. 1991. Potential for automatic weed detection and selective herbicide application. Crop Protection, 10, 254-259.

Tian, L., D.C. Slaughter, and R.F. Norris. 2000. Machine vision identification of tomato seedlings for automated weed control. Transactions of the ASAE, 40(6), 1761-1768.

Woebbecke, D.M., G.E. Meyer, K. Von Bargen, and D.A. Mortensen. 1995a. Shape features for identifying young weeds using image analysis. Transactions of the ASAE, 38(1), 271-281.

Woebbecke, D.M., G.E. Meyer, K. Von Bargen, and D.A. Mortensen. 1995b. Color indices for weed identification under various soil, residue, and lighting conditions. Transactions of the ASAE, 38(1), 259-269.

Yang, C., S. Prasher, and J. Landry. 1998. Application of artificial neural networks to image recognition in precision farming. Presented at the 1998 ASAE Annual Meeting, Orlando, Florida. Paper No. 983039.

Yang, T., Y.R. Chen, and X. Cheng. 2001. Infrared imaging and wavelet-based segmentation method for apple defect inspection. ASAE Annual International Meeting, Sacramento, California. Paper No. 01-3109.

Zeuch, N. 1988. Applying Machine Vision. John Wiley & Sons, New York, NY.

Zhang, N., and C. Chaisattapagon. 1995. Effective criteria for weed identification in wheat fields using machine vision. Transactions of the ASAE, 38(3), 965-974.


BIOGRAPHICAL SKETCH

Rajesh Pydipati was born in the state of Andhra Pradesh, India. He graduated with honors from the College of Engineering at Sri Krishna Devaraya University in India with a bachelor's degree in electrical and electronics engineering, and was named to the dean's list, having topped the university in the engineering stream. He then began his graduate studies at the University of Florida, working toward a dual master's degree in the Departments of Electrical and Computer Engineering and Agricultural and Biological Engineering. He was a member of the Agricultural Robotics and Mechatronics Group (ARMg) in the Department of Agricultural and Biological Engineering, where he worked as a research assistant under the guidance of Dr. Thomas F. Burks. He is a member of Eta Kappa Nu, the honor society of electrical engineers, and Alpha Epsilon, the honor society of agricultural engineers. When not engaged in academic pursuits, he likes to spend his time on the tennis courts; he was a junior tennis champion in India and represented his university in national collegiate competitions.













EVALUATION OF CLASSIFIERS FOR AUTOMATIC DISEASE DETECTION IN
CITRUS LEAVES USING MACHINE VISION















By

RAJESH PYDIPATI


A THESIS PRESENTED TO THE GRADUATE SCHOOL
OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT
OF THE REQUIREMENTS FOR THE DEGREE OF
MASTER OF ENGINEERING

UNIVERSITY OF FLORIDA


2004
Copyright 2004

by

Rajesh Pydipati
I dedicate this document to my parents and to my teacher, Dr. Burks, for their support
and friendship. Without them, this work would not have been possible.
ACKNOWLEDGMENTS

I would like to express my gratitude to my parents for their love and support in all

stages of my life. My sincere thanks go to my professor, friend and guide, Dr. Thomas F.

Burks, whose support and faith in me have always been an inspiration. I would also like

to thank Dr. Wonsuk Lee and Dr. Michael C. Nechyba, who agreed to be on my

committee and gave valuable advice in the completion of this research work. My warm

greetings to all my friends in our research group (Agricultural Robotics and

Mechatronics, ARMg) and the personnel in the Agricultural and Biological Engineering

Department for their friendship. Special thanks go to Ms. Melanie Wilder for proof-

reading this document. I extend my special thanks to the United States Department of

Agriculture (USDA) for providing the necessary funds to carry on this research.
TABLE OF CONTENTS


ACKNOWLEDGMENTS

LIST OF TABLES

LIST OF FIGURES

ABSTRACT

CHAPTER

1 INTRODUCTION
    Motivation
    Citrus Diseases
    Image Processing and Computer Vision Techniques

2 OBJECTIVES

3 LITERATURE REVIEW
    Object Shape Matching Methods
    Color Based Techniques
    Reflectance Based Methods
    Texture Based Methods
    Experiments Based On Other Methods

4 FEATURE EXTRACTION
    Texture Analysis
    Co-occurrence Matrices
    Autocorrelation Based Texture Features
    Geometrical Methods
    Voronoi Tessellation Functions
    Random Field Models
    Signal Processing Methods
    Color Technology
    RGB Space
    HSI Space
    Co-occurrence Methodology for Texture Analysis
    SAS Based Statistical Methods to Reduce Redundancy

5 CLASSIFICATION
    Statistical Classifier Using the Squared Mahalanobis Minimum Distance
    Neural Network Based Classifiers
    Considerations on the Implementation of Back Propagation
    Radial Functions
    Radial Basis Function Networks

6 MATERIALS AND METHODS
    SAS Analysis
    Input Data Preparation
    Classification Using Squared Mahalanobis Distance
    Classification Using Neural Network Based on Back Propagation Algorithm
    Classification Using Neural Network Based on Radial Basis Functions

7 RESULTS
    Generalized Square Distance Classifier from SAS
    Statistical Classifier Based on Mahalanobis Minimum Distance Principle
    Neural Network Classifier Based on Feed Forward Back Propagation Algorithm
    Neural Network Classifier Based on Radial Basis Functions

8 SUMMARY AND CONCLUSIONS

APPENDIX

A MATLAB CODE FILES

B MINIMIZATION OF COST FUNCTION

LIST OF REFERENCES

BIOGRAPHICAL SKETCH

LIST OF TABLES


6-1 Classification models
7-1 Percentage classification results of the test data set from SAS
7-2 Percentage classification results for the Mahalanobis distance classifier
7-3 Percentage classification results for the neural network using back propagation
7-4 Percentage classification results for the neural network using RBF
7-5 Classification results per class for the neural network with back propagation
7-6 Comparison of various classifiers for model 1B

LIST OF FIGURES


1-1 Image of a citrus leaf infected with greasy spot disease
1-2 Image of a citrus leaf infected with melanose
1-3 Image of a citrus leaf infected with scab
1-4 Image of a normal citrus leaf
1-5 Classification procedure of a general vision based detection algorithm
4-1 RGB color space and the color cube
4-2 HSI color space and the cylinder
4-3 Nearest neighbor diagram
5-1 A basic neuron
5-2 Multilayer feedforward ANN
5-3 An RBF network with one input
5-4 Gaussian radial basis function
5-5 The traditional radial basis function network
6-1 Image acquisition system
6-2 Full spectrum sunlight
6-3 Cool white fluorescent spectrum
6-4 Spectrum comparison
6-5 Visual representation of an image sensor
6-6 Coreco PC-RGB 24 bit color frame grabber
6-7 Image acquisition and classification flow chart
6-8 Edge detected image of a leaf sample
6-9 Network used in feed forward back propagation algorithm
6-10 Snapshot of the GUI for the neural network toolbox data manager
6-11 Neural network based on radial basis function used in leaf classification

Abstract of Thesis Presented to the Graduate School
of the University of Florida in Partial Fulfillment of the
Requirements for the Degree of Master of Engineering

EVALUATION OF CLASSIFIERS FOR AUTOMATIC DISEASE DETECTION IN
CITRUS LEAVES USING MACHINE VISION


By

Rajesh Pydipati

August 2004

Chair: Thomas F. Burks
Cochair: Wonsuk Lee
Major Department: Agricultural and Biological Engineering

The citrus industry is an important part of Florida's agricultural economy. Citrus

fruits, including oranges, grapefruit, tangelos, tangerines, limes, and other specialty fruits,

are the state's largest agricultural commodities. The economic impact of the citrus
industry on the overall economy of the state of Florida is substantial, and the industry
is one of the leading sources of jobs in the state. As such, several important decisions

regarding safe practices for the production and processing of citrus fruits have been made

in the recent past. One of the main concerns is proper disease control. Every year, large

quantities of chemicals are used as fungicides to control various diseases common to

citrus crops, thus evoking serious concern from environmentalists over deteriorating

groundwater quality. Likewise, farmers are also concerned about the huge costs involved









in these activities and the resulting loss of profit. To remedy this situation, various
alternatives are being sought to minimize the application of these hazardous chemicals. Several key

technologies incorporating concepts from image processing and artificial intelligence

were developed by various researchers in the past to tackle this situation. The focus of

these applications was to identify the disease in the early stages of infection so that

selective application of the chemicals in the groves was possible using other technologies

like robotics and automated vehicles.

As part of this thesis research, a detailed study was implemented to investigate the

use of computer vision and image processing techniques in the classification of diseased

citrus leaves from normal citrus leaves. Four different classes of citrus leaves, greasy

spot, melanose, normal and scab, were used for this study. The image data of the leaves

selected for this study were collected using a JAI MV90, 3 CCD color camera with 28-90

mm zoom lens. Algorithms based on image processing techniques for feature extraction

and classification were designed. Various classification procedures were implemented to

test the classification accuracies. The classification approaches that were used are

statistical classifier using the Mahalanobis minimum distance method, neural network

based classifier using the back propagation algorithm and neural network based classifier

using radial basis functions.

The analyses proved that such methods could be used for citrus leaf classification.

The statistical classifiers gave good results averaging above 95% overall classification

accuracy. Similarly, neural network classifiers also achieved comparable results.















CHAPTER 1
INTRODUCTION

Motivation

The citrus industry is an important constituent of Florida's overall agricultural

economy. Hodges et al. (2001) present statistical highlights emphasizing the impact of

citrus industry in Florida. According to them, citrus fruits, including oranges, grapefruit,

tangelos, tangerines, limes, and other specialty fruits, are the state's largest agricultural

commodities. Florida is the world's leading producing region for grapefruit and is second

only to Brazil in orange production. The state produces over 80 percent of the United

States' supply of citrus. In the 1999-2000 season, a total of 298 million boxes of citrus

fruit were produced in Florida from 107 million bearing citrus trees growing on 832,000

acres. The farm-level value of citrus fruit sold to packing houses and processing plants

amounted to $1.73 billion. Total economic impacts associated with the citrus industry

were estimated at $9.13 billion in industry output, $4.18 billion in value added, and

89,700 jobs. These facts show that the citrus industry is a major boost to the economy of the

state.

Proper disease control measures must be undertaken to minimize crop yield losses
and to avoid excessive application of fungicides, which is a major contributor to
environmental pollution as well as a major source of spending. Many key

enabling technologies have been developed so that automatic identification of disease

symptoms may be achieved using concepts of image processing and computer vision.

The design and implementation of these technologies will greatly aid in selective









chemical application, reducing costs and thus leading to improved productivity, as well as

improved produce.

Citrus Diseases

Citrus trees can exhibit a host of symptoms reflecting various disorders that can

adversely influence their health, vigor and productivity to varying degrees. Identifying

disease symptoms is essential as inappropriate actions may sometimes prove to be costly

and detrimental to the yield.

The disease symptoms that will be addressed in this thesis are an important aspect

of commercial citrus production programs. Proper disease control actions or remedial

measures can be undertaken if the symptoms are identified early. The common types of

disease symptoms observed in commercial citrus crop production will be discussed in the

following paragraphs. These descriptions were extracted from a publication titled A

Guide to Citrus Disease Identification, released by the Institute of Food and Agricultural

Sciences at the University of Florida.

Greasy spot (Mycosphaerella citri). Greasy spot is caused by Mycosphaerella

citri. Management of this disease must be considered in groves intended for processing or

for fresh fruit market. Greasy spot is usually more severe on leaves of grapefruit,

pineapple, hamlins and tangelos than on valencias, temples, murcotts, and most

tangerines and their hybrids. Infection by greasy spot produces a swelling on the lower

leaf surface. A yellow mottle appears at the corresponding point on the upper leaf

surface. The swollen tissue starts to collapse and turn brown and eventually the brown or

black symptoms become clearly visible. Airborne ascospores produced in decomposing

leaf litter on the grove floor are the primary source of inoculum for greasy spot. These

spores germinate on the underside of the leaves and the fungus grows for a time on the










surface before penetrating through the stomates (natural openings of the lower leaf

surface). Internal growth is slow and does not appear for several months. Warm humid

nights and high rainfall, typical of Florida summers, favor infections and disease

development. Major ascospore release usually occurs from April to July, with favorable

conditions for infection occurring from June through September. Leaves are susceptible

once they are fully expanded and remain susceptible throughout their life.

Figure 1-1. Image of a citrus leaf infected with greasy spot disease

Melanose (Diaporthe citri). Control of melanose, caused by Diaporthe citri, is

often necessary on mature groves where fruit is intended for fresh market, particularly if

recently killed twigs and wood are present as a result of freezes or other causes.

Grapefruit is very susceptible to melanose, but the disease may damage all other citrus.

On foliage, melanose first appears on the young leaves as minute, dark circular

depressions with yellowish margins. Later they become raised, are rough, brown in color,

and the yellow margins disappear. Leaves infected when very young may become

distorted. Infested leaves do not serve as an inoculum source. Young green twigs can also

be infected.










Star Melanose. Star melanose occurs when copper is applied late during hot, dry

weather, and is due to copper damage to leaves. It has no relationship to melanose but

may resemble symptoms of that disease. Copper causes the developing tissues to become

more corky and darker than normal, and the shape of the lesion often resembles a star.

Figure 1-2. Image of a citrus leaf infected with melanose

Citrus scab (Elsinoe fawcettii). Citrus scab, caused by Elsinoe fawcettii, affects

grapefruit, temples, murcotts, tangelos, and some other tangerine hybrids. Small, pale

orange, somewhat circular, elevated spots on leaves and fruit are the first evidence of the

disease. As the leaves develop, the infection becomes well defined, with wart-like

structures or protuberances on one side of the leaf, often with a conical depression on the

opposite side. The crests of the wart-like growths usually become covered with a corky

pale tissue and become somewhat flattened as the fruit matures especially on grapefruit.

The pustules may run together, covering large areas of the fruit or leaves. Badly infected

leaves become very crinkled, distorted, and stunted. Fruit severely attacked when very

small often become misshapen. Scab can be particularly severe on temples and lemons,

and is often troublesome on murcotts, minneola tangelos and grapefruit.

Figure 1-3. Image of a citrus leaf infected with scab

Figure 1-4. Image of a normal citrus leaf

In this research, the focus will be on these diseases since they are among the most
common in citrus trees.

Image Processing and Computer Vision Techniques

Computer vision techniques are used for agricultural applications, such as

detection of weeds in a field, sorting of fruit on a conveyer belt in fruit processing

industry, etc. The underlying approach for all of these techniques is the same. First,










digital images are acquired from environment around the sensor using a digital camera.

Then image-processing techniques are applied to extract useful features that are

necessary for further analysis of these images. After that, several analytical discriminant

techniques, such as statistical, bayesian or neural networks will be used to classify the

images according to the specific problem at hand. This constitutes the overall concept

that is the framework for any vision related algorithm.

Figure 1-5 given below depicts the basic procedure that any vision-based

detection algorithm would use. The first phase is the image acquisition phase. In this step,

the images of the various leaves that are to be classified are taken using an analog CCD

camera interfaced with a computer containing a frame grabber board. In the second phase

image preprocessing is completed. Usually the images that are obtained from the first

phase are not suited for classification purposes because of various factors, such as noise,

lighting variations, etc. So, these images would be preprocessed using certain filters to

remove unwanted features in the images. In the third phase, edge detection is completed

to discover the actual boundary of the leaf in the image. Later on, feature extraction is

completed based on specific properties among pixels in the image or their texture. After

this step, certain statistical analysis tasks are completed to choose the best features that

represent the given image, thus minimizing feature redundancy. Finally, classification is

completed using various detection algorithms.

Figure 1-5. Classification procedure of a general vision based detection algorithm (flow chart: image acquisition, image preprocessing, edge detection, feature extraction, statistical analysis, classification)

The above figure outlines the various steps involved in any kind of general vision based classification process. In the following chapters, those steps would be discussed in detail.
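To make the flow of Figure 1-5 concrete, the sketch below walks through the same phases in MATLAB. It is an illustrative skeleton only: the file name, the filter choice and the placeholder features are assumptions, not the implementation used in this thesis (the CCM feature extraction and the classifiers actually used are described in later chapters).

% Skeleton of a vision-based classification pipeline (illustrative only).
% Assumes the Image Processing Toolbox; 'leaf_sample.jpg' is a hypothetical file.
rgb  = imread('leaf_sample.jpg');    % 1) image acquisition
gray = rgb2gray(rgb);                %    convert for filtering and edge detection
gray = medfilt2(gray, [3 3]);        % 2) preprocessing: median filter suppresses noise
bw   = edge(gray, 'canny');          % 3) edge detection: locate the leaf boundary
feats = [mean2(gray) std2(gray)];    % 4) feature extraction (placeholder statistics;
                                     %    this thesis uses CCM texture features instead)
% 5)-6) statistical feature selection and classification would follow, for
%       example the Mahalanobis minimum distance classifier of Appendix A.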















CHAPTER 2
OBJECTIVES

The main objectives of this research are outlined as follows:

1) To collect image data sets of various common citrus diseases.

2) To evaluate the Color Co-occurrence Method for disease detection in citrus trees.

3) To develop various strategies and algorithms for classification of the citrus leaves

based on the features obtained from the color co-occurrence method.

4) To compare the classification accuracies from the algorithms.

The image data of the leaves selected for this study would be collected.

Algorithms based on image processing techniques for feature extraction and

classification would be designed. Manual feeding of the datasets, in the form of

digitized RGB color photographs would be done for feature extraction and training the

SAS statistical classifier. After training the SAS classifier, the test data sets would be

used to analyze the performance of accurate classification. The whole procedure of

analysis would be replicated for three alternate classification approaches to include;

statistical classifier using the Mahalanobis minimum distance method, neural network

based classifier using the back propagation algorithm and neural network based

classifier using radial basis functions. Comparison of the results obtained from the

three approaches would be completed and the best approach for the problem at hand

would be determined.















CHAPTER 3
LITERATURE REVIEW

In the past decade, agricultural applications using image processing and pattern

recognition techniques have been attempted by various researchers. Object shape

matching functions, color-based classifiers, reflectance-based classifiers and texture-

based classifiers are some of the common methods that have been tried in the past. The

following sections will discuss some past work done using these methods.

Object Shape Matching Methods

Tian et al. (2000) developed a machine vision system to detect and locate tomato

seedlings and weed plants in a commercial agricultural environment. Images acquired in

agricultural tomato fields under natural illumination were studied extensively and an

environmentally adaptive segmentation algorithm, which could adapt to changes in

natural light illumination, was developed. The method used four semantic shape features

to distinguish tomato cotyledons from weed leaves and a whole plant syntactic algorithm

was used to predict stem location of whole plant. Using these techniques, accuracies of

65% for detection of tomato plants were reported.

Guyer et al. (1993) implemented an algorithm to extract plant/leaf shape features

using information gathered from critical points along object borders, such as the location

of angles along the border (and/or) local maxima and minima from the plant leaf

centroid. A library of 17 low level features was converted into 13 higher-level

quantitative shape features. This demonstrated the ability to combine and structure basic










explicit data into more subjective shape knowledge. This knowledge-based
pattern recognition system achieved a classification accuracy of 69%.

Woebbecke et al. (1995a) developed a vision system using shape features for

identifying young weeds. Shape feature analyses were performed on binary images

originally obtained from color images of 10 common weeds, along with corn and

soybeans. The features included were roundness, aspect, perimeter, thickness,

elongatedness and several invariant central moments (ICM). Shape features that best

distinguished these plants were aspect and first invariant central moment, which

classified 60 to 90 % of dicots from monocots.

Various researchers have made additional efforts in the past. Franz et al. (1991)

identified plants based on individual leaf shape described by curvature of the leaf

boundary at two growth stages. Ninomiya and Shigemori (1991) analyzed binary images
of whole soybean plants viewed from the side. Width, height, projected area, degree of
occupancy, and x and y frequency distributions about the main axis and its centroid were used

to describe plant shape as a possible tool for classification. Guyer et al. (1986) identified

young corn plants based on spatial features, including the number of leaves and shape of

individual leaves. Thompson et al. (1991) suggested that plant shape features might be

necessary to distinguish between monocots and dicots for intermittent or spot spraying.

The main disadvantages of methods based on shape matching were occlusion,
inadequate description of leaves with variable leaf serrations, and aggregate boundaries of
multiple leaves.

Color Based Techniques

Kataoka et al. (2001) developed an automatic detection system for detecting

apples ready for harvest, for the application of robotic fruit harvesting. In this system, the









color of apples was the main discriminating feature. The color of apples that were

suitable for harvest and of those picked earlier than harvest time were measured and

compared using a spectrophotometer. Both of these showed some differences in color.

The harvest season's apple color and the color of apples picked before harvest were well-

separated based on the Munsell color system, the L*a*b* color space and the XYZ color system.

The threshold, which detects the harvest season apples, was produced based on the

evaluation of these color systems.

Slaughter (1987), investigated the use of chrominance and intensity information

from natural outdoor scenes as a means of guidance for a robotic manipulator in the

harvest of orange fruit. A classification model was developed which discriminated

oranges from the natural background of an orange grove using only color information in a

digital color image. A Bayesian form of discriminant analysis correctly classified over

75% of the pixels of fruit in the natural scenes that were analyzed.

Woebbecke et al. (1995b) developed a vision system using color indices for weed

identification under various soil, residue and lighting conditions. Color slide images of

weeds among various soils and residues were digitized and analyzed for red, green and

blue (RGB) color content. It was observed that red, green and blue chromatic coordinates

of plants were very different from those of background soils and residue. For

distinguishing living plant material from a non-plant background, several indices of

chromatic coordinates were tried and were found to be successful in identifying weeds.

A weed detection system for Kansas wheat was developed using color filters by

Zhang and Chaisattapagon (1995). Gray scale ratios were used to discriminate between

weed species common to wheat fields.









Reflectance Based Methods

Hatfield and Pinter (1993) discuss the various techniques which are in use today
in remote sensing for crop protection. Research and technological advances in the field of

remote sensing have greatly enhanced the ability to detect and quantify physical and

biological stresses that affect the productivity of agricultural crops. Reflected light in

specific visible, near- and middle-infrared regions of electromagnetic spectrum has

proved useful in detection of nutrient deficiencies, disease, weed and insect infestations.

A method to assess damage due to citrus blight disease on citrus plants, using

reflectance spectra of entire tree, was developed by Edwards et al. (1986). Since the

spectral quality of light reflected from affected trees is modified as the disease

progresses, spectra from trees in different health states were analyzed using a least

squares technique to determine if the health class could be assessed by a computer. The

spectrum of a given tree was compared with a set of library spectra representing trees of

different health states. The computed solutions were in close agreement with the field

observations.

Franz et al. (1991) investigated the use of local properties of leaves as an aid for

identifying weed seedlings in digital images. Statistical measures were calculated for

reflectance of insitu leaf surfaces in the near-infrared, red and blue wavebands.

Reflectance was quantified by image intensity within a leaf periphery. Mean, variance

and skewness were selected as significant statistical measures. Intensity statistics

depended on NIR reflectance, spatial density of veins and visibility of specular

reflections. Experiments and analyses indicated that in order to discriminate among

individual leaves, the training set must account for leaf orientation with respect to

illumination source.









Texture Based Methods

In many machine vision and image processing algorithms, simplifying

assumptions are made about the uniformity of intensities in local image regions.

However, images of real objects often do not exhibit regions of uniform intensities. For

example, the image of a wooden surface is not uniform, but contains variations of

intensities which form certain repeated patterns called visual texture. The patterns can be

the result of physical surface properties such as roughness or oriented strands, which

often have a tactile quality, or they could be the result of reflectance differences such as

the color on a surface.

Coggins (1982) has compiled a catalogue of texture definitions in the computer

vision literature. Some examples are listed as follows.

1) "We may regard texture as what constitutes a macroscopic region. Its structure is

simply attributed to the repetitive patterns in which elements or primitives are arranged

according to a placement rule."

2) "A region in an image has a constant texture if a set of local statistics or other local

properties of the picture function are constant, slowly varying, or approximately

periodic."

Image texture, defined as a function of the spatial variation in pixel intensities

(gray values), is useful in a variety of applications and has been a subj ect of intense study

by many researchers. One immediate application of image texture is the recognition of

image regions using texture properties. Texture analysis has been extensively used to

classify remotely sensed images. Land use classification in which homogeneous regions

with different types of terrains (such as wheat, bodies of water, urban regions, etc.) need

to be identified is an important application. Haralick et al. (1973) used gray level co-









occurrence features to analyze remotely sensed images. They computed gray level co-

occurrence matrices for a pixel offset equal to one and with four directions

(0, 45, 90 and 135 degrees). For a seven-class classification problem, they obtained

approximately 80% classification accuracy using texture features.
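For reference, gray level co-occurrence matrices of this kind can be computed in a few lines of MATLAB. The sketch below is an illustration only (it assumes the Image Processing Toolbox and a stock test image; it is not code from this thesis): it builds the four directional matrices at offset one and derives standard Haralick-style statistics.

% Gray level co-occurrence matrices at offset 1 in four directions
% (0, 45, 90 and 135 degrees); assumes the Image Processing Toolbox.
I = imread('pout.tif');                % stock grayscale test image
offsets = [0 1; -1 1; -1 0; -1 -1];    % row/column offsets for the four directions
glcm = graycomatrix(I, 'Offset', offsets, 'NumLevels', 8, 'Symmetric', true);
stats = graycoprops(glcm, {'Contrast','Correlation','Energy','Homogeneity'});
% Each field of stats holds one value per direction; averaging over the four
% directions gives rotation-tolerant texture features.
contrast = mean(stats.Contrast);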

Tang et al. (1999) developed a texture-based weed classification method using Gabor

wavelets and neural networks for real-time selective herbicide application. The method

comprised a low-level Gabor wavelets-based feature extraction algorithm and a high-

level neural network-based pattern recognition algorithm. The model was specifically

developed to classify images into broadleaf and grass categories for real-time herbicide

application. Their analyses showed that the method is capable of performing texture-

based broadleaf and grass classification accurately with 100 percent classification

accuracy. In this model, background features like soil were eliminated to extract spatial

frequency features from the weeds. The color index used for image segmentation in their

research was called Modified excess green (ExG) defined by:

    ExG = 2*G - R - B

with constraints applied to pixels whose green component is not dominant, where R, G and B are the
normalized red, green and blue intensities of a pixel.
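As an illustration of this kind of color index, the sketch below computes ExG per pixel in MATLAB. It is not the implementation of Tang et al.; in particular, the clamp-at-zero constraint and the stock test image are assumptions made for the sketch, since the exact constraint is abbreviated above.

% Modified excess green index on chromatic (normalized) RGB coordinates.
rgb = im2double(imread('peppers.png'));     % stock RGB test image (assumed)
total = sum(rgb, 3) + eps;                  % guard against division by zero
r = rgb(:,:,1)./total;
g = rgb(:,:,2)./total;
b = rgb(:,:,3)./total;
ExG = 2*g - r - b;
ExG(ExG < 0) = 0;                           % assumed constraint: suppress non-green pixels
mask = ExG > 0;                             % crude vegetation mask for segmentation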

In order to distinguish broadleaf and grass efficiently, a specific filter bank with

proper frequency levels and a suitable filter dimension was determined. Features were

generated based on random convolution points. A three-layer feed forward back

propagation artificial neural network was built for the purpose of classification. This

system achieved 100% classification accuracy.









Burks et al. (2000a) developed a method for classification of weed species using color

texture features and discriminant analysis. The image analysis technique used for this

method was the color co-occurrence method (CCM). The method had the ability to

discriminate between multiple canopy species and was insensitive to leaf scale and

orientation. The use of color features in the visible light spectrum provided additional

image characteristic features over traditional gray-scale representation. The CCM method involved three major mathematical processes:

1) Transformation of an RGB color representation of an image to an equivalent HSI color representation (a conversion sketched below).

2) Generation of color co-occurrence matrices from the HSI pixels.

3) Generation of texture features from the CCM matrices.
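The RGB-to-HSI transformation in step 1 follows the standard geometric conversion. The sketch below is a minimal MATLAB illustration of the textbook formulas (it is not the thesis code, and the stock test image is an assumption):

% Standard RGB -> HSI conversion (textbook formulas, for illustration).
rgb = im2double(imread('peppers.png'));
R = rgb(:,:,1); G = rgb(:,:,2); B = rgb(:,:,3);
I = (R + G + B) / 3;                        % intensity: average of the three channels
S = 1 - min(min(R,G),B) ./ max(I, eps);     % saturation: distance from the gray axis
num = 0.5*((R-G) + (R-B));                  % hue: angle in the chromatic plane
den = sqrt((R-G).^2 + (R-B).*(G-B)) + eps;
H = acos(max(min(num./den, 1), -1));        % clamp the ratio to keep acos real
H(B > G) = 2*pi - H(B > G);                 % reflect the angle when blue exceeds green
H = H / (2*pi);                             % normalize hue to [0, 1]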

In this study CCM texture feature data model for six classes of ground cover (giant

foxtails, crabgrass, velvet leaf, lambs quarter, ivy leaf morning glory and soil) were

developed and then stepwise discriminant analysis techniques were utilized to identify

combinations of CCM texture feature variables which have highest classification

accuracy with the least number of texture variables. Then a discriminant classifier was

trained to identify weeds using the models generated. Classification tests were conducted

with each model to determine their potential for classifying weed species. Overall

classification accuracies above 93% were achieved when using hue and saturation

features alone.

Experiments Based On Other Methods

Ning et al. (2001) demonstrated a computer vision system for objective inspection

of bean quality. They used a combination of features based on shape, as well as color, in

making their decisions on bean quality. Instead of using a CCD camera, as done by other









researchers, they used a high resolution scanner to acquire the images and to test the

implementation of their method using cheaper alternatives. The procedure involved the

following steps: determine bean image threshold intensity, separate individual kernels via

a disconnection algorithm, extract features of interest and make decisions based on their

range selection method. Using this method, they reported accuracies ranging from 53-

100%, suggesting that the computation algorithm worked well with some features, such

as foreign matter in the bean, small beans, off color and badly off color, but not as well

with others, such as cracks and broken beans.

Pearson and Young (2001) developed a system for automated sorting of almonds

with embedded shell using laser transmittance imaging. They constructed a prototype

device to automatically detect and separate kernels with embedded shell fragments. The

device images laser light transmitted through the kernel. Shell fragments block nearly all

the transmitted light and appear as a very dark spot in the image. A computer vision

algorithm was developed to detect these dark spots and activate an air valve to divert

kernels with embedded shell from the process stream. A 3x3 minimum filter was used to

eliminate the effect of light diffracting around the edges of the almond kernel and causing

camera saturation. They selected two different types of features for their classification

algorithm. The first was the number of pixels that were found to fall within a valley in the

image intensity map. The second was a two dimensional histogram bin values based on

the image intensity and gradient. Using a one pass sorting operation, they reported that

the system with vision technique was able to correctly identify 83% of the kernels with

embedded shell fragments.
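The 3x3 minimum filter mentioned above is an order-statistic (gray-scale erosion) operation. A minimal MATLAB sketch of such a filter (an illustration assuming the Image Processing Toolbox, not the authors' code) is:

% 3x3 minimum filter: each pixel is replaced by the darkest pixel in its
% 3x3 neighborhood, suppressing isolated bright diffraction pixels.
I = imread('pout.tif');               % stock grayscale test image (assumed)
Imin = ordfilt2(I, 1, ones(3,3));     % order-statistic filter: rank 1 of 9 = minimum
% Dark spots (candidate shell fragments) survive the filtering and can then
% be detected by thresholding.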










Yang et al. (2001) developed an infrared imaging and wavelet-based segmentation

method for apple defect detection. They proposed that the reflectance spectrum of apple

surfaces in the near-infrared region (NIR) provided effective information for a machine

vision inspection system. The differences in light reflectance of the apple surfaces caused

the corresponding pixels of bruised areas and good areas to appear different in intensities

in a NIR apple image. Segmenting the defective areas from the non-defective apple

images was a critical step for the apple defect detection. In their work, they used a 2-D

multiresolution wavelet decomposition to generate "wavelet transform vectors" for each

pixel in the NIR apple images. These vectors are combined and weighted by dynamic

modification factors to produce the pixel vectors. Then a cluster analysis method is used

to classify the pixels according to the pixel vectors. The pixels with similar properties are

labeled as one class, to separate the defective areas from the good areas of apples in the

NIR image. They reported 100% accuracy of detecting good apples and 94.6% accuracy

of detecting defective apples.

Chao et al. (2002) assembled a dual-camera system for separating wholesome and

unwholesome chicken carcasses. For their machine vision inspection system, object

oriented programming paradigms were utilized to integrate the hardware components.

The image was reduced to a size of 256x240 pixels before the carcass was segmented

from the background using simple thresholding. A total of 15 horizontal layers were
generated from each segmented image. For each layer a centroid was calculated from the

binarized image. Based on these centroids, each layer was divided into several square

blocks for a total of 107 blocks. The averaged intensity of each block was used as the

input data to neural network models for classification. Using their vision system, they









achieved classification accuracies of 94% for wholesome chicken and 87% for

unwholesome chicken.

Kim et al. (2001) designed and developed a laboratory based hyperspectral

imaging system with several features. The system was capable of capturing reflectance

and fluorescence images in the 430 to 930 nm region with 1 mm spatial resolution. They

tested their system on classifying healthy apples as well as apples with fungal

infections, based on their hyperspectral images. The research showed promising results, and it is

envisioned that multispectral imaging will become an integral part of food production

industries in the near future for automated on-line applications, owing to its image

acquisition and real-time processing speeds.

Clark et al. (2003) used transmission NIR spectroscopy to determine whether

sample orientation and degree of browning were significant factors requiring

consideration in the design of online detection systems. Their results suggested that

single NIR transmission measurements could lead to a worthwhile reduction in the

incidence of internal browning disorder in commercial lines containing infected fruit.

Burks et al. (2000b) completed an evaluation of neural network classifiers for

weed species discrimination. Color co-occurrence texture analysis techniques were used

to evaluate three different neural network classifiers for potential use in real time weed

control systems. The texture data from six different classes of weed species was used.

The weed species used were: foxtail, crabgrass, common lambsquarter, velvetleaf,

morning glory and clear soil surface. The three neural network classifiers that were

evaluated were: back-propagation based classifier, counter-propagation based classifier

and radial basis function. It was found that the back-propagation neural network classifier










provided the best classification performance and was capable of classification accuracies

of 97% with low computational requirements.

Lee and Slaughter (1998) developed a real time robotic weed control system for

tomatoes, which used a hardware-based neural network. A real-time neural network

board named ZISC (Zero Instruction Set Computer, IBM Inc) was used to recognize

tomato plants and weeds. With the hardware based neural network, 38.9% of tomato

cotyledons, 37.5 % of tomato true leaves, and 85.7% of weeds were correctly classified.

Moshou et al. (2002) developed a weed species spectral detector based on neural

networks. A new neural network architecture for classification purposes was proposed.

The Self-Organizing Map (SOM) neural network was used in a supervised way for a

classification task. The neurons of the SOM became associated with local linear

mappings (LLM). Error information obtained during training was used in a novel

learning algorithm to train the classifier. The method achieved fast convergence and good

generalization. The classification method was then applied in a precision farming

application, the classification of crops and different kinds of weeds by using spectral

reflectance measurements.

Yang et al. (1998) developed artificial neural networks (ANNs) to distinguish

between images of corn plants and seven different weed species commonly found in

experimental fields. The performance of the neural networks was compared and the

success rate for the identification of corn was observed to be as high as 80 to 100%, while

the success rate for weed classification was as high as 60 to 80%.









Nakano (1998) studied the application of neural networks to the color grading of

apples. Classification accuracies of over 75% were reported for about 40 defective apples

using their neural network.

As the research summarized above shows, automation in agriculture is

undergoing technological innovations that will significantly improve farm

productivity as well as food quality. Machine vision technology is an integral part

of all of these methods and is therefore an important area of study. The application of machine

vision is both an art and a scientific pursuit, requiring the experience and knowledge of

the researcher to devise an effective strategy for a specific problem. In the next

chapter, the theory employed in this research will be discussed.















CHAPTER 4
FEATURE EXTRACTION

In this chapter, the theory involved in feature extraction, which is the first step in

the classification process, will be discussed. The method followed for extracting the

feature set is called the color co-occurrence method, or CCM method for short. It is a

method in which both the color and texture of an image are taken into account to arrive

at unique features that represent the image. It is well known in the image processing

research community that classification accuracies are highly dependent on the feature set

selection. In other words, the classification accuracy is as good as the feature set that is

selected to represent the images. Therefore, careful consideration must be given to this

particular step. Various researchers have used several methods of feature representation,

such as those based on shape, color, texture, wavelet analysis, reflectance, etc. All these

have been discussed in the literature review section. Previous research by Burks (2000a)

proved that CCM features can be used effectively in classification of weed species. The

present work is an extension of that research, providing a feasibility analysis of the

technology in citrus disease classification. Before further describing the theory of the

CCM method, a description of texture analysis and color technology will be given.

Texture Analysis

Texture is one of the features used to segment images into regions of interest and

to classify those regions. It gives information about the spatial arrangement of the colors

or intensities in an image. Part of the problem in texture analysis is defining exactly what

texture is. There are two main approaches, the structural and statistical approaches.









Structural approach. States that texture is a set of primitive texels in some

regular or repeated relationship.

Statistical approach. States that texture is a quantitative measure of the

arrangement of intensities in a region.

Tuceryan and Jain (1998) gave a taxonomy of texture models. Identifying the

perceived qualities of texture in an image is an important first step towards building

mathematical models for texture. The intensity variations in an image, which characterize

texture, are generally due to some underlying physical variation in the scene (such as

pebbles on a beach or waves in water). Modeling this physical variation is very difficult,

so texture is usually characterized by the two-dimensional variations in the intensities

present in the image. This explains the fact that no precise, general definition of texture

exists in the computer vision literature. In spite of this, there are a number of intuitive

properties of texture, which are generally assumed to be true.

*Texture is a property of areas; the texture of a point is undefined. So, texture is a

contextual property and its definition must involve gray values in a spatial neighborhood.

The size of this neighborhood depends upon the texture type, or the size of the primitives

defining the texture.

*Texture involves the spatial distribution of gray levels. Thus, two-dimensional

histograms or co-occurrence matrices are reasonable texture analysis tools.

*A region is perceived to have texture when the number of primitive objects in

the region is large. If only a few primitive objects are present, then a group of countable

objects are perceived, instead of a textured image. In other words, a texture is perceived

when significant individual "forms" are not present.










Image texture has a number of perceived qualities, which play an important role

in describing texture. Laws (1980) identified the following properties as playing an

important role in describing texture: uniformity, density, coarseness, roughness,

regularity, linearity, directionality, direction, frequency, and phase. Some of these

perceived qualities are not independent. For example, frequency is not independent of

density and the property of direction only applies to directional textures. The fact that the

perception of texture has so many different dimensions is an important reason why there

is no single method of texture representation, which is adequate for a variety of textures.

There are various methods for texture analysis. A discussion of those is given next.

Statistical Methods

One of the defining qualities of texture is the spatial distribution of gray values.

The use of statistical features is therefore one of the early methods proposed in the

machine vision literature. Statistical patterns (stochastic) are random and irregular and

usually occur naturally.

Co-occurrence Matrices

Statistical methods use second order statistics to model the relationships between

pixels within the region by constructing Spatial Gray Level Dependency (SGLD)

matrices. An SGLD matrix is the joint probability of occurrence of gray levels 'i' and 'j' for

two pixels with a defined spatial relationship in an image. The spatial relationship is

defined in terms of a distance 'd' and an angle 'θ'. If the texture is coarse and the distance 'd' is

small compared to the size of the texture elements, the pairs of points at distance d should

have similar gray levels. Conversely, for a fine texture, if distance d is comparable to the

texture size, then the gray levels of points separated by distance d should often be quite










different, so that the values in the SGLD matrix should be spread out relatively

uniformly. Hence, a good way to analyze texture coarseness would be, for various values

of distance d, some measure of scatter of the SGLD matrix around the main diagonal.

Similarly, if the texture has some direction, i.e. is coarser in one direction than another,

then the degree of spread of the values about the main diagonal in the SGLD matrix

should vary with the angle θ. Thus, texture directionality can be analyzed by

comparing spread measures of SGLD matrices constructed at various angles θ. From

SGLD matrices, a variety of features may be extracted. The original investigation into

SGLD features was pioneered by Haralick et al. (1973). From each matrix, 14 statistical

measures were extracted including: angular second moment, contrast, correlation,

variance, inverse difference moment, sum average, sum variance, sum entropy, difference

variance, difference entropy, information measure of correlation I, information measure

of correlation II, and maximal correlation coefficient. The measurements average the

feature values in all four directions.

Autocorrelation Based Texture Features

The textural character of an image depends on the spatial size of texture

primitives. Large primitives give rise to coarse texture (e.g. rock surface) and small

primitives give fine texture (e.g. silk surface). An autocorrelation function can be

evaluated to measure this coarseness. This function evaluates the linear spatial

relationships between primitives. If the primitives are large, the function decreases slowly

with increasing distance whereas it decreases rapidly if texture consists of small

primitives. However, if the primitives are periodic, then the autocorrelation increases and

decreases periodically with distance.
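
To make this concrete, a minimal Python sketch of a horizontal autocorrelation profile is given below (an illustration only, not part of the original study; the function name and parameters are hypothetical):

    import numpy as np

    def autocorr_profile(gray, max_shift=32):
        # Normalized spatial autocorrelation of a 2-D gray image along x.
        # Coarse textures (large primitives) decay slowly with the shift d,
        # fine textures decay quickly, and periodic primitives oscillate.
        g = gray.astype(float) - gray.mean()
        denom = (g * g).sum()
        profile = []
        for d in range(1, max_shift + 1):
            num = (g[:, :-d] * g[:, d:]).sum()  # overlap with a copy shifted by d
            profile.append(num / denom)
        return np.array(profile)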









Geometrical Methods

The class of texture analysis methods that falls under the heading of geometrical

methods is characterized by their definition of texture as being composed of "texture

elements" or primitives. The method of analysis usually depends upon the geometric

properties of these texture elements. Once the texture elements are identified in the

image, there are two major approaches to analyzing the texture. One computes statistical

properties from the extracted texture elements and utilizes these as texture features. The

other tries to extract the placement rule that describes the texture. The latter approach

may involve geometric or syntactic methods of analyzing texture.

Voronoi Tessellation Functions

Tuceryan and Jain (1998) proposed the extraction of texture tokens by using the

properties of the Voronoi tessellation of the given image. Voronoi tessellation has been

proposed because of its desirable properties in defining local spatial neighborhoods and

because the local spatial distributions of tokens are reflected in the shapes of the Voronoi

polygons. First, texture tokens are extracted and then the tessellation is constructed.

Tokens can be as simple as points of high gradient in the image or complex structures

such as line segments or closed boundaries.


Structural methods. The structural method is usually associated with man-made

regular arrangements of lines, circles, squares, etc. The structural models of texture

assume that textures are composed of texture primitives. The texture is produced by the

placement of these primitives according to certain placement rules. This class of

algorithms, in general, is limited in power unless one is dealing with very regular










textures. Structural texture analysis consists of two major steps: (a) Extraction of the

texture elements, and (b) Inference of the placement rule.

Model Based Methods. Model based texture analysis methods are based on the

construction of an image model that can be used not only to describe texture, but also to

synthesize it. The model parameters capture the essential perceived qualities of texture.

Random Field Models

Markov random fields (MRFs) have been popular for modeling images. They are

able to capture the local (spatial) contextual information in an image. These models

assume that the intensity at each pixel in the image depends on the intensities of only the

neighboring pixels. MRF models have been applied to various image processing

applications such as, texture synthesis, texture classification, image segmentation, image

restoration, and image compression.

Signal Processing Methods

Psychophysical research has given evidence that the human brain performs a

frequency analysis of the image. Texture is especially suited for this type of analysis

because of its properties. Most techniques try to compute certain features from filtered

images, which are then used in either classification or segmentation tasks. Some of the

techniques that are used are spatial domain filters, Fourier domain filters, Gabor and

wavelet models etc.


Color Technology

According to Zuech (1988), "The human perception of color involves

differentiation based on three independent properties: intensity, hue and saturation. Hue










corresponds to color, intensity is the lightness value, and saturation is the distance from

lightness per hue."

Color Spaces. Color is a perceptual phenomenon related to the human response

to different wavelengths in the visible electromagnetic spectrum. Generally, a color is

described as a weighted combination of three primary colors that form a natural basis.

There are many color spaces currently being used. The three color spaces most often used

are RGB, normalized RGB and HSI spaces.

RGB Space

Red-green-blue (RGB) space is one of the most common color spaces

representing each color as an axis. Most color display systems use separate red, green,

and blue as light sources so that other colors can be represented by a weighted

combination of these three components. The set of red, green, and blue can generate the

greatest number of colors even though any other three colors can be combined in varying

proportions to generate many different colors. All colors that can be displayed are

specified by the red, green, and blue components. One color is presented as one point in a

three-dimensional space whose axes are the red, green, and blue colors. As a result, a

cube can contain all possible colors. The RGB space and its corresponding color cube in

this space can be seen in Figure 4.1. The origin represents black and the opposite vertex

of the cube represents white.















Figure 4-1. RGB color space and the color cube (axes R, G, B; black at the origin, white at the opposite vertex)

Any color can be represented as a point in the color cube by (R, G, B). For

example, red is (255, 0, 0), green is (0, 255, 0), and blue is (0, 0, 255). The axes represent

red, green, and blue with varying brightness. The diagonal from black to white

corresponds to different levels of gray. The magnitudes of the three components on this

diagonal are equal. The RGB space is discrete in computer applications. Generally, each

dimension has 256 levels, numbered 0 to 255. In total, 256 × 256 × 256 (about 16.7 million) different

colors can be represented by (R, G, B), where R, G, and B are the magnitudes of the three elements,

respectively. For example, black is shown as (0, 0, 0), while white is shown as (255, 255,

255).









HSI Space

Hue-saturation-intensity (HSI) space is also a popular color space because it is

based on human color perception. Electromagnetic radiation in the range of wavelengths

of about 400 to 700 nanometers is called visible light because the human visual system is

sensitive to this range. Hue is generally related to the wavelength of a light and intensity

shows the amplitude of a light. Lastly, saturation is a component that measures the

"colorfulness" in HSI space. Color spaces can be transformed from one to another easily.

A transformation from RGB to HSI can be formulated as below:



Intensity:

$$I = \frac{R + G + B}{3} \qquad (4.1)$$

Saturation:

$$S = 1 - \frac{3\min(R, G, B)}{R + G + B} \qquad (4.2)$$

Hue:

$$H = \cos^{-1}\!\left[\frac{\tfrac{1}{2}\left[(R - G) + (R - B)\right]}{\sqrt{(R - G)^2 + (R - B)(G - B)}}\right], \qquad H = 2\pi - H \ \text{if } B > G \qquad (4.3)$$
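
As an illustration only, a minimal Python sketch of this RGB-to-HSI conversion (assuming channel values already scaled to [0, 1]; not the software used in this research) might look like the following:

    import numpy as np

    def rgb_to_hsi(r, g, b):
        # Convert one RGB triple to HSI following equations (4.1)-(4.3).
        eps = 1e-10                                       # guards against division by zero
        i = (r + g + b) / 3.0                             # intensity, eq. (4.1)
        s = 1.0 - 3.0 * min(r, g, b) / (r + g + b + eps)  # saturation, eq. (4.2)
        num = 0.5 * ((r - g) + (r - b))
        den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + eps
        h = np.arccos(np.clip(num / den, -1.0, 1.0))      # hue, eq. (4.3)
        if b > g:                                         # reflect hue into [0, 2*pi)
            h = 2.0 * np.pi - h
        return h, s, i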





HSI space can be considered as a cylinder, as represented in Figure 4-2, where the

coordinates r, θ, and z are saturation, hue, and intensity, respectively.










































Figure 4-2. HSI color space and the cylinder

The coordinates r, θ, and z represent saturation, hue, and intensity, respectively. A plane

of constant hue contains all colors that have the same hue angle.

Co-occurrence Methodology for Texture Analysis

The image analysis technique selected for this study was the CCM method. The

use of color image features in the visible light spectrum provides additional image

characteristic features over the traditional gray-scale representation. The CCM

methodology consists of three major mathematical processes. First, the RGB images of









leaves are converted into HSI color space representation. Once this process is completed,

each pixel map is used to generate a color co-occurrence matrix, resulting in three CCM

matrices, one for each of the H, S and I pixel maps. The color co-occurrence texture

analysis method was developed through the use of spatial gray level dependence matrices

or, in short, SGDMs. The gray level co-occurrence methodology is a statistical way to

describe shape by statistically sampling the way certain grey-levels occur in relation to

other grey-levels. As explained by Shearer and Holmes (1990), these matrices measure

the probability that a pixel at one particular gray level will occur at a distinct distance and

orientation from any pixel given that pixel has a second particular gray level. For a

position operator p, we can define a matrix $P_{ij}$ that counts the number of times a pixel

with grey-level i occurs at position p from a pixel with grey-level j. For example, if we

have four distinct grey-levels 0, 1, 2 and 3, then one possible SGDM matrix P(i, j, 1, 0)

is shown below:

$$I(x, y) = \begin{bmatrix} 0 & 0 & 3 & 1 \\ 2 & 1 & 0 & 2 \\ 3 & 2 & 0 & 3 \\ 1 & 2 & 1 & 3 \end{bmatrix} \qquad P(i, j, 1, 0) = \begin{bmatrix} 2 & 1 & 2 & 2 \\ 1 & 0 & 3 & 2 \\ 2 & 3 & 0 & 1 \\ 2 & 2 & 1 & 0 \end{bmatrix}$$




If we normalize the matrix P by the total number of pixels so that each element is

between 0 and 1, we get a grey-level co-occurrence matrix C.
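
A short Python sketch reproducing this worked example is given below (illustrative only; the helper name is hypothetical). It counts horizontal neighbor pairs in both directions, which yields the symmetric matrix shown above:

    import numpy as np

    def sgdm(image, levels=4, d=1):
        # Symmetric SGDM for offset distance d at 0 degrees orientation.
        P = np.zeros((levels, levels), dtype=int)
        for row in image:
            for i, j in zip(row[:-d], row[d:]):  # horizontal neighbor pairs
                P[i, j] += 1
                P[j, i] += 1                     # count both directions -> symmetric
        return P

    I = np.array([[0, 0, 3, 1],
                  [2, 1, 0, 2],
                  [3, 2, 0, 3],
                  [1, 2, 1, 3]])
    P = sgdm(I)          # reproduces the matrix P(i, j, 1, 0) shown above
    C = P / P.sum()      # normalized grey-level co-occurrence matrix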










Different authors define the co-occurrence matrix in two ways:

* By defining the relationship operator p by an angle θ and distance d, and

* By ignoring the direction of the position operator and considering only the

(bidirectional) relative relationship. This second way of defining the co-occurrence

matrix makes all such matrices symmetric.

The SGDMs are represented by the function P(i, j, d, θ), where 'i' represents the

gray level of the location (x, y) in the image I(x, y), and 'j' represents the gray level of the

pixel at a distance d from location (x, y) at an orientation angle of θ. The nearest

neighbor mask is as shown in the figure below.



Figure 4-3. Nearest neighbor diagram (neighbors 1-8 around the reference pixel *, at orientations of 0, 45, 90, and 135 degrees)

The reference pixel at image position (x, y) is shown as an asterisk. All the

neighbors from 1 to 8 are numbered in a clockwise direction. Neighbors 1 and 5 are

located on the same plane at a distance of 1 and an orientation of 0 degrees. An example

image matrix and its SGDM are already given above. In this research, a one pixel offset

distance and a zero degree orientation angle were used.










The CCM matrices are then normalized using the equation given below, where

P(i, j, 1, 0) represents the intensity co-occurrence matrix:

$$p(i, j) = \frac{P(i, j, 1, 0)}{\sum_{i=0}^{N_g - 1} \sum_{j=0}^{N_g - 1} P(i, j, 1, 0)} \qquad (4.4)$$

where $N_g$ is the total number of intensity levels.

The hue, saturation and intensity CCM matrices are then used to generate the

texture features described by Haralick and Shanmugam (1974). Shearer and Holmes

(1990) reported a reduction in the 16 gray scale texture features through elimination of

redundant variables. The resulting 13 texture features are defined by Shearer and Holmes

(1990) and Burks (1997). The same equations are used for each of the three CCM

matrices, producing 13 texture features for each HSI component and thereby a total of 39

CCM texture statistics. These features and related equations are defined as follows along

with a brief description as pertains to intensity. Similar descriptions would also apply to

saturation as mentioned by Shearer (1986).



Matrix normalization:

$$p(i, j) = \frac{P(i, j, 1, 0)}{\sum_{i=0}^{N_g - 1} \sum_{j=0}^{N_g - 1} P(i, j, 1, 0)} \qquad (4.5)$$


Marginal probability matrix:

$$P_x(i) = \sum_{j=0}^{N_g - 1} p(i, j) \qquad (4.6)$$





Sum and difference matrices:

$$p_{x+y}(k) = \sum_{i=0}^{N_g - 1} \sum_{j=0}^{N_g - 1} p(i, j), \quad i + j = k; \quad k = 0, 1, 2, \ldots, 2(N_g - 1) \qquad (4.7)$$

$$p_{x-y}(k) = \sum_{i=0}^{N_g - 1} \sum_{j=0}^{N_g - 1} p(i, j), \quad |i - j| = k; \quad k = 0, 1, 2, \ldots, N_g - 1 \qquad (4.8)$$

where p(i, j) is the image attribute matrix and $N_g$ is the total number of attribute levels.




Texture features:

The angular second moment ($I_1$) is a measure of the image homogeneity:

$$I_1 = \sum_{i=0}^{N_g - 1} \sum_{j=0}^{N_g - 1} \left[p(i, j)\right]^2 \qquad (4.9)$$

The mean intensity level ($I_2$) is a measure of image brightness derived from the co-occurrence matrix:

$$I_2 = \sum_{i=0}^{N_g - 1} i \, P_x(i) \qquad (4.10)$$


Variation of image intensity is identified by the variance textural feature ($I_3$):

$$I_3 = \sum_{i=0}^{N_g - 1} (i - I_2)^2 \, P_x(i) \qquad (4.11)$$


Correlation ($I_4$) is a measure of the intensity linear dependence in the image:

$$I_4 = \frac{\sum_{i=0}^{N_g - 1} \sum_{j=0}^{N_g - 1} i\,j\,p(i, j) - I_2^2}{I_3} \qquad (4.12)$$

The product moment ($I_5$) is analogous to the covariance of the intensity co-occurrence

matrix:

$$I_5 = \sum_{i=0}^{N_g - 1} \sum_{j=0}^{N_g - 1} (i - I_2)(j - I_2)\,p(i, j) \qquad (4.13)$$


Contrast of an image can be measured by the inverse difference moment ($I_6$):

$$I_6 = \sum_{i=0}^{N_g - 1} \sum_{j=0}^{N_g - 1} \frac{p(i, j)}{1 + (i - j)^2} \qquad (4.14)$$


The entropy feature ($I_7$) is a measure of the amount of order in an image:

$$I_7 = -\sum_{i=0}^{N_g - 1} \sum_{j=0}^{N_g - 1} p(i, j) \ln p(i, j) \qquad (4.15)$$


The sum and difference entropies ($I_8$ and $I_9$) are not easily interpreted, yet low entropies

indicate high levels of order:

$$I_8 = -\sum_{k=0}^{2(N_g - 1)} p_{x+y}(k) \ln p_{x+y}(k) \qquad (4.16)$$

$$I_9 = -\sum_{k=0}^{N_g - 1} p_{x-y}(k) \ln p_{x-y}(k) \qquad (4.17)$$





The information measures of correlation ($I_{10}$ and $I_{11}$) do not exhibit any apparent

physical interpretation:

$$I_{10} = \frac{I_7 - HXY1}{HX} \qquad (4.18)$$

$$I_{11} = \left[1 - e^{-2(HXY2 - I_7)}\right]^{1/2} \qquad (4.19)$$

where

$$HX = -\sum_{i=0}^{N_g - 1} P_x(i) \ln P_x(i) \qquad (4.20)$$

$$HXY1 = -\sum_{i=0}^{N_g - 1} \sum_{j=0}^{N_g - 1} p(i, j) \ln\left[P_x(i) P_x(j)\right] \qquad (4.21)$$

$$HXY2 = -\sum_{i=0}^{N_g - 1} \sum_{j=0}^{N_g - 1} P_x(i) P_x(j) \ln\left[P_x(i) P_x(j)\right] \qquad (4.22)$$
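
For illustration, a brief Python sketch computing a few of these features from a normalized co-occurrence matrix is given below (a sketch only; it assumes the matrix C has already been normalized as in equation 4.5):

    import numpy as np

    def texture_features(C):
        # A few of the 13 CCM texture features, equations (4.9)-(4.15).
        Ng = C.shape[0]
        i, j = np.indices(C.shape)
        eps = 1e-12                                  # avoids log(0) in the entropy
        Px = C.sum(axis=1)                           # marginal probability, eq. (4.6)
        I1 = (C ** 2).sum()                          # angular second moment, eq. (4.9)
        I2 = (np.arange(Ng) * Px).sum()              # mean intensity level, eq. (4.10)
        I3 = ((np.arange(Ng) - I2) ** 2 * Px).sum()  # variance, eq. (4.11)
        I6 = (C / (1.0 + (i - j) ** 2)).sum()        # inverse difference moment, eq. (4.14)
        I7 = -(C * np.log(C + eps)).sum()            # entropy, eq. (4.15)
        return I1, I2, I3, I6, I7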


SAS Based Statistical Methods to Reduce Redundancy

The 13 texture features from the hue, saturation and intensity CCM matrices

provide a set of 39 characteristic features for each image. It is, however, desirable to

reduce the number of texture features, to minimize redundancy and to reduce computational

complexity during classification and image representation. Burks (2000a)

implemented a reduction technique using SAS (Statistical Analysis System) software. SAS offers

a procedure for accomplishing these tasks, referred to as PROC STEPDISC.

In this research, PROC STEPDISC was used to create various models of data

using various combinations of the HSI/CCM texture statistic data sets. PROC STEPDISC

may be used to reduce the number of texture features by a stepwise process of selection.

The assumption made is that all the classes of data included in the data set are

multivariate normal distributions with a common covariance matrix. The stepwise

procedure begins with no entries in the model. At each step in the process, if the variable

within the model that contributes least to the model, as determined by the Wilks'

lambda method, does not pass the test to stay, it is removed from the model. The variable

outside the model that contributes most to it and passes the test to be admitted is then

added. When all the steps are exhausted, the model has reached its final form.
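
PROC STEPDISC itself is a SAS procedure; as a rough analogue only, a greedy forward selection loop driven by cross-validated linear discriminant accuracy (a hypothetical stand-in for the Wilks' lambda criterion, which it does not reproduce exactly) could be sketched in Python as follows:

    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score

    def forward_select(X, y, max_features=10):
        # Greedy forward selection over the columns of the feature array X:
        # at each step, admit the remaining feature that most improves
        # cross-validated LDA accuracy; stop when no candidate improves the
        # model. An informal analogue of stepwise discriminant analysis,
        # not SAS PROC STEPDISC itself.
        remaining = list(range(X.shape[1]))
        chosen, best_score = [], 0.0
        while remaining and len(chosen) < max_features:
            scored = [(cross_val_score(LinearDiscriminantAnalysis(),
                                       X[:, chosen + [f]], y, cv=5).mean(), f)
                      for f in remaining]
            score, f = max(scored)
            if score <= best_score:
                break
            best_score = score
            chosen.append(f)
            remaining.remove(f)
        return chosen, best_score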















CHAPTER 5
CLASSIFICATION

Image classification is the final step in any pattern recognition problem. It is of

two types:

Supervised classification and

Unsupervised classification

In supervised classification, a priori knowledge of the images to be classified is

known. Hence, the classification is simply a process of testing whether the computed

classification agrees with the a priori knowledge. In unsupervised learning, there is no

a priori knowledge of the images to be classified. Hence, the classification is more

difficult, since there is no prior knowledge of the various data classes involved.

There are various classification techniques. In this research, two classification approaches

based on the supervised classification approach are implemented. They are listed below.

1) Statistical classifier using the squared Mahalanobis minimum distance

2) Neural network classifiers

i) Multilayer feed-forward neural network with back propagation

ii) Radial basis function neural network

Earlier research by Burks (2000a and 2000b) had shown good results for the

application of weed detection in wheat fields using the above mentioned techniques, thus

favoring the choice of these methods in this research.









Statistical Classifier Using the Squared Mahalanobis Minimum Distance


The Mahalanobis distance is a very useful way of determining the similarity of a

set of values from an unknown sample to a set of values measured from a collection of

known samples. The actual mathematics of the Mahalanobis distance calculation has

been known for some time. In fact, this method has been applied successfully for spectral

discrimination in a number of cases. One of the main reasons the Mahalanobis distance

method is used is that it is very sensitive to inter-variable changes in the training data. In

addition, since the Mahalanobis distance is measured in terms of standard deviations

from the mean of the training samples, the reported matching values give a statistical

measure of how well the spectrum of the unknown sample matches (or does not match)

the original training spectra. This method belongs to the class of supervised classification.

Since this research is a feasibility study to analyze whether such techniques give accurate

enough results, so that the technology could be viable for an autonomous harvester, supervised

classification is a good approach for testing the efficacy of the method.

The underlying distribution for the complete training data set, consisting of the

four classes of leaves, was a mixture-of-Gaussians model. Earlier research by Shearer et al.

(1986) had shown that plant canopy texture features could be represented by a multi-

variate normal distribution. Each of the 39 texture features represented a normal Gaussian

distribution. Thus, the feature space can be approximated to be a mixture of Gaussians

model containing a combination of 39 univariate normal distributions, if all the features

are considered. For other models (having a reduced number of features), the feature space

is a mixture of Gaussians model containing a combination of 'N' univariate normal

distributions, where 'N' is the number of texture features in the model.










Since the feature space of the various classes of leaves is a mixture-of-Gaussians

model, the next step is to calculate the statistics representing those classes. Four

parameter sets $(\mu_i, C_i)$ (mean and covariance), representing the various classes of

diseased and normal leaves, namely greasy spot, melanose, normal and scab, were

calculated using the training images. The procedure up to this stage represented the

training phase, wherein the necessary statistical features representing the

various classes of leaves. After the parameter sets were obtained, the classifier was tested

on the test images for each class. This constitutes the testing phase. The classifier was

based on the squared Mahalanobis distance from the feature vector representing the test

image to the parameter sets of the various classes. It used the nearest neighbor principle.

The formula for calculating the squared Mahalanobis distance metric is given below:

$$r^2 = (x - \mu)^T C^{-1} (x - \mu) \qquad (5.0)$$

where

'x' is the N-dimensional test feature vector (N is the number of features considered),

'$\mu$' is the N-dimensional mean vector for a particular class of leaves, and

'C' is the N x N dimensional covariance matrix for a particular class of leaves.


During the testing phase of the method, the squared Mahalanobis distance of a

particular test vector representing a leaf is calculated with respect to all the classes of

leaves in this problem. The test image is then classified using the minimum distance

principle: it is assigned to the particular class for which its squared Mahalanobis

distance is minimum among the calculated distances.
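
The following Python sketch illustrates this training and testing procedure (an illustration under stated assumptions, not the code used in this research; in particular, it assumes each class covariance matrix is invertible):

    import numpy as np

    def train(classes):
        # classes: dict mapping label -> (n_samples, n_features) training array.
        # Returns, per class, the mean vector and inverse covariance matrix.
        stats = {}
        for label, X in classes.items():
            mu = X.mean(axis=0)
            C = np.cov(X, rowvar=False)
            stats[label] = (mu, np.linalg.inv(C))
        return stats

    def classify(x, stats):
        # Assign x to the class minimizing the squared Mahalanobis distance
        # r^2 = (x - mu)^T C^-1 (x - mu), as in equation (5.0).
        def r2(item):
            mu, Cinv = item[1]
            d = x - mu
            return d @ Cinv @ d
        return min(stats.items(), key=r2)[0]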


Neural Network Based Classifiers

Artificial Neural Networks (ANNs) are computational systems whose architecture

and operation are inspired by knowledge about biological neural cells (neurons) in the

brain. According to Ampazis (1999), ANNs can be described either as mathematical and

computational models for non-linear function approximation, data classification,

clustering and non-parametric regression, or as simulations of the behavior of collections

of model biological neurons. These are not simulations of real neurons in the sense that

they do not model the biology, chemistry, or physics of a real neuron. They do, however,

model several aspects of the information combining and pattern recognition behavior of

real neurons in a simple yet meaningful way. Neural modeling has shown remarkable

capability for emulation, analysis, prediction, and association. ANNs can be used in a

variety of powerful ways: to learn and reproduce rules or operations from given

examples; to analyze and generalize from sample facts and make predictions from these;

or to memorize characteristics and features of given data and to match or make

associations from new data to the old data.


An Artificial Neural Network (ANN) is an information processing paradigm that is

inspired by the way biological nervous systems, such as the brain, process information.

The key element of this paradigm is the novel structure of the information processing

system. It is composed of a large number of highly interconnected processing elements

(neurons) working in unison to solve specific problems. ANNs, like people, learn by










example. An ANN is configured for a specific application, such as pattern recognition or

data classification, through a learning process. Learning in biological systems involves

adjustments to the synaptic connections that exist between the neurons. This is true of

ANNs as well. Various authors have given various definitions to a neural network. The

following are some of them.


According to the DARPA Neural Network Study (1988, AFCEA International
Press, p. 60):

A neural network is a system composed of many simple processing elements
operating in parallel whose function is determined by network structure, connection
strengths, and the processing performed at computing elements or nodes.

According to Haykin, S. (1994), Neural Networks: A Comprehensive Foundation,
NY: Macmillan, p. 2:

A neural network is a massively parallel distributed processor that has a natural
propensity for storing experiential knowledge and making it available for use. It
resembles the brain in two respects:

1. Knowledge is acquired by the network through a learning process.
2. Interneuron connection strengths known as synaptic weights are used to store
the knowledge.

ANNs have been applied to an increasing number of real-world problems of

considerable complexity. Their most important advantage is in solving problems that are

too complex for conventional technologies -- problems that do not have an algorithmic

solution or for which an algorithmic solution is too complex to be found. In general,

because of their abstraction from the biological brain, ANNs are well suited to problems

that people are good at solving, but for which computers are not. These problems include

pattern recognition and forecasting (which requires the recognition of trends in data).


Ampazis (1999) introduced ANNs in his paper 'Introduction to neural networks'.

The following discussion is quoted from his paper:









"ANNs are able to solve difficult problems in a way that resembles human

intelligence. What is unique about neural networks is their ability to learn by example.

Traditional artificial intelligence (AI) solutions rely on symbolic processing of the data,

an approach which requires a priori human knowledge about the problem. Also neural

networks techniques have an advantage over statistical methods of data classification

because they are distribution-free and require no a priori knowledge about the statistical

distributions of the classes in the data sources in order to classify them. Unlike these two

approaches, ANNs are able to solve problems without any a priori assumptions. As long

as enough data is available, a neural network will extract any regularity and form a

solution.


As ANNs are models that resemble biological neuronal structures, the starting

point for any ANN would be a basic neural element whose behavior follows closely that

of real neurons. Each computing unit in an ANN is based on the concept of an idealized

neuron. An ideal neuron is assumed to respond optimally to the applied inputs. A neural

network is a collective set of such neural units, in which the individual neurons are

connected through complex synaptic connections characterized by weight coefficients.

Every single neuron makes its contribution towards the computational properties of the

whole system. Figure 5-1 shows a basic neuron.










Figure 5-1. A basic neuron (inputs 1 to N, weights, threshold θ, sigmoid activation, single output)

The neuron has several input lines and a single output. Each input signal is

weighted; that is, it is multiplied with the weight value of the corresponding input line (by

analogy to the synaptic strength of the connections of real neurons). The neuron will

combine these weighted inputs by forming their sum and, with reference to a threshold

value and an activation function, it will determine its output. In mathematical terms, the

neuron is described by writing the following pair of equations:




$$u = \sum_{i=1}^{N} w_i x_i \qquad (5.1)$$

and

$$y = f(u - \theta) \qquad (5.2)$$

where $x_1, x_2, \ldots, x_N$ are the input signals, $w_1, w_2, \ldots, w_N$ are the synaptic weights, u is

the activation potential of the neuron, θ is the threshold, y is the output signal of the

neuron, and f(.) is the activation function.









For notational convenience, the above equations may be reformulated by letting

$w_0 = \theta$ and setting $x_0 = -1$. Then

$$u - \theta = \sum_{i=1}^{N} w_i x_i - \theta = \sum_{i=0}^{N} w_i x_i \qquad (5.3)$$

$$y = f\!\left(\sum_{i=0}^{N} w_i x_i\right) \qquad (5.4)$$

The combination of a fixed input $x_0 = -1$ and of an extra input weight $w_0 = \theta$

accounts for what is known as a bias input. Note that the new notation has augmented the

input vector $x \in \mathbb{R}^N$ to the vector $(-1, x) \in \mathbb{R}^{N+1}$, and also the weight vector

$w \in \mathbb{R}^N$ of the neuron to the vector $(\theta, w) \in \mathbb{R}^{N+1}$.



The activation function, denoted by f(.), defines the output of the neuron in

terms of the activity level at its input. The most common form of activation function used

in the construction of ANNs is the sigmoid function. An example of the sigmoid is the

logistic function, defined by

$$f(u) = \frac{1}{1 + \exp(-a u)} \qquad (5.5)$$

where a is the slope parameter of the sigmoid function. By varying the

parameter a, we can obtain sigmoid functions of different slopes. In the limit, as the

slope parameter approaches infinity, the sigmoid function becomes simply a threshold









function. The threshold function, however, can take only the values 0 or 1, whereas a

sigmoid function assumes a continuous range of values from 0 to 1. Also the sigmoid

function is differentiable, whereas the threshold function is not. Differentiability is an

important feature of neural network theory since it has a fundamental role in the learning

process in ANNs.
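
A short Python sketch of this idealized neuron, combining equations (5.1), (5.2) and (5.5), is given below (illustrative only):

    import numpy as np

    def neuron(x, w, theta, a=1.0):
        # Weighted sum (5.1), threshold (5.2), and logistic sigmoid (5.5).
        u = np.dot(w, x)                              # activation potential
        return 1.0 / (1.0 + np.exp(-a * (u - theta)))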

Ampazis (1999) gave a good historical perspective regarding the development of

neural networks. The first model of a neuron was proposed in 1943 by McCulloch and

Pitts when they described a logical calculus of neural networks. In this model, the

activation function used was the threshold function. The McCulloch-Pitts neuron models

connected in a simple fashion (forming a single layer), were given the name

"perceptrons" by Frank Rosenblatt in 1962. In his book "Principles of Neurodynamics"

he described the properties of these neurons, but more importantly, he presented a

method by which the perceptrons could be trained in order to perform simple patterns

recognition tasks. He also provided a theorem called the perceptionn convergence theory'

which guarantees that if the learning task is linearly separable (that is, if the data classes

can be separated by a straight line in input space) then the perception will yield a solution

in a finite number of steps. Perceptrons, however, are unable to solve problems that are

not linearly separable. It was the pointing of this limitation of the perceptrons in 1969 by

Minsky and Papert in their famous book "Perceptrons" (using elegant mathematical

analysis to demonstrate that there are fundamental limits on what one-layer perceptrons

can compute) and their pessimism about the prospects of discovering efficient algorithms

for the training of multilayer perceptrons (multilayer perceptrons can solve non-linearly

separable problems) which led to the decline of the subject of neural computing for more









than a decade. The development, however, of the back propagation algorithm in 1986 by

Rumelhart, Hinton and Williams and the subsequent publication of the book "Parallel

Distributed Processing: Explorations in the Microstructures of Cognition" by Rumelhart

and McClelland answered Minsky and Papert's challenge (in the sense that it was proved

that there can indeed exist algorithms for the training of multilayer perceptrons) and that

resulted in the current resurgence of interest in neural computing.

There is a fair understanding of how an individual neuron works. However there is

still a great deal of research needed to decipher the way real neurons organize themselves

and the mechanisms used by arrays of neurons to adapt their behavior to external stimuli.

There are a large number of experimental ANN structures currently in use reflecting this

state of continuing research. Among the many interesting properties of all these

structures, the property that is of primary significance is the ability of the networks to

learn from their environment, and to improve their performance through learning. ANNs

learn about their environment through an iterative process of adjustments applied to their

free parameters, which are the synaptic weights and thresholds. The type of learning is

determined by the manner in which the parameter changes take place. There are three

basic types of learning paradigms: supervised learning, reinforcement learning, and self-

organized (unsupervised) learning.

As its name implies supervised learning is performed under the supervision of an

external "supervisor". The supervisor provides the network with a desired or target

response for any input vector. The actual response of the network to each input vector is

then compared by the supervisor with the desired response for that vector, and the

network parameters are adjusted in accordance with an error signal which is defined as









the difference between the desired response and the actual response. The adjustment is

carried out iteratively in a step-by-step fashion with the aim of eventually making the

error signal for all input vectors as small as possible. When this has been achieved then

the network is believed to have built internal representations of the data set by detecting

its basic features, and hence, to be able to deal with data that it has not encountered during

the learning process. That is, it can generalize its "knowledge". Supervised learning is by

far the most widely used learning technique in ANNs because of the development of the

back propagation algorithm which allows for the training of multilayer ANNs. In the

next section, the mathematics of the derivation of the algorithm would be considered and

the factors behind its wide acceptance as the standard training algorithm for multilayer

ANNs would be examined."

In the material discussed above, a general introduction of Neural Networks and its

working was given. However, to make these networks functional on a variety of

applications several algorithms and techniques were developed. Back propagation

algorithm is one such method.

The Back propagation Algorithm. Ampazis (1999) also gave an articulate

description regarding back propagation algorithm and its advantages and disadvantages.

The following material is quoted from his paper.

"The Error Back propagation (or simply, back propagation) algorithm is the most

important algorithm for the supervised training of multilayer feed-forward ANNs. It

derives its name from the fact that error signals are propagated backward through the

network on a layer-by-layer basis. As a tool for scientific computing and engineering

applications, the morphology of static multilayered feed forward neural networks










(MFNNs) consists of many interconnected signal processing elements called neurons.

The MFNN is one of the main classes of static neural networks and it plays an important

role in many types of problems such as system identification, control, channel

equalization and pattern recognition. From the morphological point of view, a MFNN has

only feedforward information transmission from the lower neural layers to higher neural

layers. On the other hand, a MFNN is a static neural model, in the sense that its input-

output relationship may be described by an algebraic nonlinear mapping function. The

most widely used static neural networks are characterized by nonlinear equations that are

memoryless; that is, their outputs are a function of only the current inputs. An obvious

characteristic of a MFNN is its capability for implementing a nonlinear mapping from

many neural inputs to many neural outputs. The Back propagation (BP) algorithm is a

basic and most effective weight updating method of MFNNs for performing some

specific computing tasks. The BP algorithm was originally developed using the gradient

descent algorithm to train multi layered neural networks for performing desired tasks.

The backpropagation algorithm is based on the selection of a suitable error function

or cost function, whose values are determined by the actual and desired outputs of the

network. The algorithm is also dependent on the network parameters such as the weights

and the thresholds. The basic idea is that the cost function has a particular surface over

the weight space and therefore an iterative process such as the gradient descent method

can be used for its minimization. The method of gradient descent is based on the fact that

the gradient of a function always points in the direction of maximum increase of the

function; moving in the direction of the negative gradient therefore induces a maximal

"downhill" movement that will eventually reach the minimum of the function surface









over its parameter space. This is a rigorous and well-established technique for the

minimization of functions and has probably been the main factor behind the success of

backpropagation. However, as shall be seen in the next section the method does not

guarantee that it will always converge to the minimum of the error surface as the network

can be trapped in various types of minima.

















Figure 5-2. Multilayer feedforward ANN (input nodes k, hidden nodes j, output nodes i)

A typical multilayer feed-forward ANN is shown in Figure 5-2. This type of

network is also known as a Multilayer Perceptron (MLP). The units (or nodes) of the

network are nonlinear threshold units described by equations (5.3) and (5.4) with their

activation function given by equation (5.5). The units are arranged in layers and each unit

in a layer has all of its inputs connected to the units of a preceding layer (or to the inputs

from the external world in the case of the units in the first layer). However, it does not

have any connections to units of the same layer to which it belongs. The layers are

arrayed one succeeding the other so that there is an input layer, multiple intermediate

layers and an output layer. Intermediate layers, those that have no direct inputs from or

outputs to the external world, are called hidden layers. Figure 5-2 shows a MLP with only

one hidden layer. Backpropagation neural networks are usually fully connected. This

means that each unit is connected to every output from the preceding layer (or to every

input from the external world if the unit is in the first layer), as well as to a bias signal

which is common to all the units according to the convention described earlier.

Correspondingly, each unit has its output connected to every unit in the succeeding layer.

Generally, the input layer is considered as just a distributor of the signals from the

external world and is not therefore counted as a layer. This convention would be retained

in this analysis and hence, in the case of Figure 5-2, the hidden layer is the first layer of the

network.

The backpropagation training consists of two passes of computation: a forward pass

and a backward pass. In the forward pass, an input pattern vector is applied to the sensory

nodes of the network. That is, to the units in the input layer. The signals from the input

layer propagate to the units in the first layer and each unit produces an output according

to equation (5.4). The outputs of these units are propagated to units in subsequent layers

and this process continues until, finally, the signals reach the output layer where the

actual response of the network to the input vector is obtained. During the forward pass

the synaptic weights of the network are fixed. During the backward pass, on the other

hand, the synaptic weights are all adjusted in accordance with an error signal which is

propagated backward through the network against the direction of synaptic connections.


The mathematical analysis of the algorithm is as follows:


In the forward pass of computation, given an input pattern vector $\xi^{(p)}$, each hidden

node j receives a net input

$$x_j^{(p)} = \sum_k w_{jk}\,\xi_k^{(p)} \qquad (5.6)$$

where $w_{jk}$ represents the weight between hidden node j and input node k. Thus, node j

produces an output

$$y_j^{(p)} = f\!\left(\sum_k w_{jk}\,\xi_k^{(p)}\right) \qquad (5.7)$$

Each output node i thus receives

$$x_i^{(p)} = \sum_j W_{ij}\,y_j^{(p)} \qquad (5.8)$$

where $W_{ij}$ represents the weight between output node i and hidden node j. Hence, it

produces the final output shown in equation (5.9):

$$y_i^{(p)} = f\!\left(\sum_j W_{ij}\,f\!\left(\sum_k w_{jk}\,\xi_k^{(p)}\right)\right) \qquad (5.9)$$

The backpropagation algorithm can be implemented in two different modes: on-

line mode and batch mode. In the on-line mode the error function is calculated after the

presentation of each input pattern and the error signal is propagated back through the

network modifying the weights before the presentation of the next pattern. This error

function is usually the Mean Square Error (MSE) of the difference between the desired

and the actual responses of the network over all the output units. Then the new weights

remain fixed and a new pattern is presented to the network, and this process continues

until all the patterns have been presented to the network. The presentation of all the

patterns is usually called one epoch or one iteration. In practice, many epochs are needed

before the error becomes acceptably small. In the batch mode, the error signal is again









calculated for each input pattern but the weights are modified only when all input

patterns have been presented. Then the error function is calculated as the sum of the

individual MSE errors for each pattern and the weights are accordingly modified (all in a

single step for all the patterns) before the next iteration. Thus, in the batch mode, the

error or cost function, calculated as the MSE over all output units i and over all patterns

p, is given by

$$E = \frac{1}{2}\sum_p \sum_i \left(d_i^{(p)} - y_i^{(p)}\right)^2 \qquad (5.10)$$

Clearly, E is a differentiable function of all the weights (and thresholds according

to the bias convention given earlier) and therefore we can apply the method of gradient

descent.


For the hidden-to-output connections the gradient descent rule gives

$$\Delta W_{ij} = -\eta\,\frac{\partial E}{\partial W_{ij}} \qquad (5.11)$$

where $\eta$ is a constant that determines the rate of learning; it is called the learning rate of

the backpropagation algorithm.


Using the chain rule, we have

$$\frac{\partial E}{\partial W_{ij}} = \sum_p \frac{\partial E}{\partial y_i^{(p)}}\,\frac{\partial y_i^{(p)}}{\partial W_{ij}} \qquad (5.12)$$

giving

$$\frac{\partial E}{\partial W_{ij}} = -\sum_p \left(d_i^{(p)} - y_i^{(p)}\right) f'\!\left(x_i^{(p)}\right) y_j^{(p)} \qquad (5.13)$$

Thus

$$\Delta W_{ij} = \eta \sum_p \left(d_i^{(p)} - y_i^{(p)}\right) f'\!\left(x_i^{(p)}\right) y_j^{(p)} = \eta \sum_p \delta_i^{(p)}\,y_j^{(p)} \qquad (5.14)$$

where

$$\delta_i^{(p)} = \left(d_i^{(p)} - y_i^{(p)}\right) f'\!\left(x_i^{(p)}\right) \qquad (5.15)$$


For the input-to-hidden connections the gradient descent rule gives

$$\Delta w_{jk} = -\eta\,\frac{\partial E}{\partial w_{jk}} \qquad (5.16)$$

which shows that we must differentiate with respect to the hidden outputs $y_j^{(p)}$, which are more deeply
embedded in equation (5.10). Using the chain rule, we obtain









$$\frac{\partial E}{\partial w_{jk}} = \sum_p \frac{\partial E}{\partial y_j^{(p)}}\,\frac{\partial y_j^{(p)}}{\partial w_{jk}} \qquad (5.17)$$

$$\frac{\partial y_j^{(p)}}{\partial w_{jk}} = f'\!\left(x_j^{(p)}\right)\xi_k^{(p)} \qquad (5.18)$$

$$\frac{\partial E}{\partial y_j^{(p)}} = -\sum_i \left(d_i^{(p)} - y_i^{(p)}\right) f'\!\left(x_i^{(p)}\right) W_{ij} \qquad (5.19)$$

Thus

$$\Delta w_{jk} = \eta \sum_p \sum_i \left(d_i^{(p)} - y_i^{(p)}\right) f'\!\left(x_i^{(p)}\right) W_{ij}\,f'\!\left(x_j^{(p)}\right)\xi_k^{(p)} \qquad (5.20)$$

Equation (5.20) leads to equation (5.21):

$$\Delta w_{jk} = \eta \sum_p \delta_j^{(p)}\,\xi_k^{(p)} \qquad (5.21)$$

with

$$\delta_j^{(p)} = f'\!\left(x_j^{(p)}\right) \sum_i W_{ij}\,\delta_i^{(p)} \qquad (5.22)$$


From equations (5.13) and (5.20) it is observed that if the activation function was

not differentiable, then it is impossible to implement the gradient-descent rule, as it

would be impossible to calculate the partial derivatives of E with respect to the weights.

It is for this reason that the differentiability of the activation function is so

important in backpropagation learning. Note also that the derivative of the sigmoid

function is very easy to compute, since

$$f'(u) = f(u)\left[1 - f(u)\right] \qquad (5.23)$$










Therefore, it is not necessary to compute $f'(u)$ separately once we have

found $f(u)$. This is one of the advantages of the sigmoid function, as computation time

can be reduced significantly.


Although we have written the update rules for the batch mode of training it is

clear that the weight updates in the on-line case would be given again by equations (5.14)

and (5.21) without of course the summation over the training patterns. Note that equation

(5.21) has the same form as equation (5.14) but with a different definition of the δ's. In

general, with an arbitrary number of layers, the backpropagation update rule always has

the form:



$$\text{Weight correction}\;\Delta w_{ml} = \left[\text{learning rate}\;\eta\right]\times\left[\text{local gradient}\;\delta_l\right]\times\left[\text{input signal of node}\;m\right] \qquad (5.24)$$


The general rule of Equation 5.24 given above, for the adaptation of the weights, is

also known as the generalized delta rule.

An important aspect of backpropagation training is the proper initialization of the

network. Improper initialization may cause the network to require a very long training

time before it converges to a solution and even then there is a high probability of

converging to non-optimum solutions."
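
To tie equations (5.6)-(5.23) together, a compact batch-mode backpropagation sketch for a single hidden layer is given below in Python (a didactic sketch under the conventions above, including the bias input x0 = -1 and small random initial weights for symmetry breaking; it is not the network software used in this research):

    import numpy as np

    def sigmoid(u):
        return 1.0 / (1.0 + np.exp(-u))

    def train_mlp(X, D, hidden=8, eta=0.5, epochs=5000, seed=0):
        # Batch-mode backpropagation, equations (5.10)-(5.22).
        rng = np.random.default_rng(seed)
        p, n = X.shape
        m = D.shape[1]
        Xb = np.hstack([-np.ones((p, 1)), X])        # bias input x0 = -1
        w = rng.uniform(-0.5, 0.5, (hidden, n + 1))  # small random weights break
        W = rng.uniform(-0.5, 0.5, (m, hidden + 1))  # the symmetry of the network
        for _ in range(epochs):
            yh = sigmoid(Xb @ w.T)                      # hidden outputs, eqs. (5.6)-(5.7)
            yhb = np.hstack([-np.ones((p, 1)), yh])
            y = sigmoid(yhb @ W.T)                      # network outputs, eqs. (5.8)-(5.9)
            d_out = (D - y) * y * (1 - y)               # output deltas, eqs. (5.15), (5.23)
            d_hid = (d_out @ W[:, 1:]) * yh * (1 - yh)  # hidden deltas, eq. (5.22)
            W += eta * d_out.T @ yhb                    # weight update, eq. (5.14)
            w += eta * d_hid.T @ Xb                     # weight update, eq. (5.21)
        return w, W

On a small two-class problem such as XOR, a sketch of this kind typically converges within a few thousand epochs; the initialization and local-minima issues discussed below still apply.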

Considerations on the Implementation of Back Propagation

Ampazis (1999) discussed the key features in the implementation of the back

propagation algorithm. The following discussion is quoted from his paper.

The initial step in backpropagation learning is the initialization of the network. A
good choice for the initial values of the free parameters (i.e., synaptic weights and
thresholds) of the network can significantly accelerate learning. It is also important









to note that if all the weights start out with equal values and if the solution requires
that unequal weights be developed, the system can never learn. This is because the
error is propagated back through the weights in proportion to the values of the
weights. This means that all hidden units connected directly to the output units will
get identical error signals, and, since the weight changes depend on the error
signals, the weights from those units to the output units must always be the same.
This problem is known as the symmetry breaking problem. Internal symmetries of
this kind also give the cost-function landscape periodicities, multiple minima,
(almost) flat valleys, and (almost) flat plateaus or temporary minima (Murray,
1991a). The last are the most troublesome, because the system can get stuck on
such a plateau during training and take an immense time to find its way down the
cost function surface. Without modifications to the training set or learning
algorithm, the network may escape from this type of "minimum" but performance
improvement in these temporary minima drops to a very low, but non-zero level
because of the very low gradient of the cost function. In the MSE versus training
time curve, a temporary minimum can be recognized as a phase in which the MSE
is virtually constant for a long training time after initial learning. After a generally
long training time, the approximately flat part in the energy landscape is
abandoned, resulting in a significant and sudden drop in the MSE curve (Murray,
1991a; Woods, 1988). The problem of development of unequal weights can be
counteracted by starting the system with random weights. However as learning
continues internal symmetries develop and the network encounters temporary
minima again.

The customary practice is to set all the free parameters of the network to random
numbers that are uniformly distributed inside a small range of values. This is so
because if the weights are too large the sigmoids will saturate from the very
beginning of training and the system will become stuck in a kind of saddle point
near the starting point (Haykin, 1994). This phenomenon is known as premature
saturation (Lee et al., 1991). Premature saturation is avoided by choosing the
initial values of the weights and threshold levels of the network to be uniformly
distributed inside a small range of values. This is so because when the weights are
small the units operate in their linear regions and consequently it is impossible for
the activation function to saturate. Gradient descent can also become stuck in local
minima of the cost function. These are isolated valleys of the cost function surface
in which the system may get "stuck" before it reaches the global minimum. In these
valleys, every change in the weight values causes the cost function to increase and
hence the network is unable to escape. Local minima are fundamentally different
from temporary minima as they cause the performance improvement of the
classification to drop to zero and hence the learning process terminates even though
the minimum may be located far above the global minimum. Local minima may be
abandoned by including a momentum term in the weight updates or by adding
"noise" using the on-line mode training which is a stochastic learning algorithm in
nature. The momentum term can also significantly accelerate the training time that
is spent in a temporary minimum as it causes the weights to change at a faster rate.









Radial basis function networks. After feed-forward networks, the radial

basis function network (RBFN) is one of the most widely used network models. A

radial basis function network is a neural network approached by viewing the design as a

curve-fitting (approximation) problem in a high-dimensional space. Learning is

equivalent to finding a multidimensional function that provides a best fit to the training

data, with the criterion for "best fit" being measured in some statistical sense.

Figure 5-3 illustrates an RBF network with inputs $x_1, \ldots, x_n$ and output $\hat{y}$. The arrows

in the figure symbolize parameters in the network. The RBF network consists of one

hidden layer of basis functions, or neurons. At the input of each neuron, the distance

between the neuron center and the input vector is calculated. The output of the neuron is

then formed by applying the basis function to this distance. The RBF network output is

formed by a weighted sum of the neuron outputs and the unity bias shown.
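In symbols, using a generic formulation consistent with the equations quoted below, the network output is

    f(x) = w_0 + \sum_{j=1}^{m} w_j \, \phi(\lVert x - c_j \rVert)

where the c_j are the neuron centers, \phi is the radial basis function applied to the distance \lVert x - c_j \rVert, and w_0 is the weight on the unity bias.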

















Figure 5-3. An RBF network with one input

Radial Functions

Mark Orr (1996) described radial basis functions in his paper 'Introduction to radial basis function networks'. The material quoted in this chapter is taken from that paper.


















































"Radial functions are a special class of function. Their characteristic feature is that

their response decreases (or increases) monotonically with distance from a central point.

The centre, the distance scale, and the precise shape of the radial function are parameters

of the model, all fixed if it is linear. A typical radial function is the Gaussian which, in

the case of a scalar input, is


h(x) = \exp\left( -\frac{(x - c)^2}{r^2} \right)    (5.25)


Its parameters are its centre c and its radius r. Figure 5-4 illustrates a Gaussian RBF with centre c = 0 and radius r = 1.


Figure 5-4. Gaussian radial basis function



Radial Basis Function Networks

Radial functions are simply a class of functions. In principle, they could be

employed in any sort of model (linear or nonlinear) and any sort of network (single-layer







or multi-layer). Radial basis function networks (RBF networks) have traditionally been associated with radial functions in a single-layer network such as the one shown in Figure 5-5.


Figure 5-5. The traditional radial basis function network

In Figure 5-5, each of the n components of the input vector x feeds forward to m basis functions, whose outputs are linearly combined with weights {w_j} into the network output f(x).

When applied to supervised learning with linear models, the least squares principle

leads to a particularly easy optimisation problem. If the model is



f(x) = \sum_{j=1}^{m} w_j h_j(x)    (5.26)










and the training set is \{(x_i, \hat{y}_i)\}_{i=1}^{p}, then the least squares recipe is to minimise the


sum-squared-error, which is given in equation 5.27,


S = \sum_{i=1}^{p} \left( \hat{y}_i - f(x_i) \right)^2    (5.27)


with respect to the weights of the model. If a weight penalty term is added to the sum-

squared-error, as is the case with ridge regression, then the following cost function is

minimised

C = \sum_{i=1}^{p} \left( \hat{y}_i - f(x_i) \right)^2 + \sum_{j=1}^{m} \lambda_j w_j^2    (5.28)

where \{\lambda_j\}_{j=1}^{m} are the regularisation parameters.

The minimisation of the cost function (shown in Appendix B) leads to a set of m

simultaneous linear equations in the m unknown weights and the linear equations can be

written more conveniently as the matrix equation


A \hat{w} = H^{\top} \hat{y}    (5.29)


where H, the design matrix, is


H = \begin{bmatrix} h_1(x_1) & h_2(x_1) & \cdots & h_m(x_1) \\ h_1(x_2) & h_2(x_2) & \cdots & h_m(x_2) \\ \vdots & \vdots & & \vdots \\ h_1(x_p) & h_2(x_p) & \cdots & h_m(x_p) \end{bmatrix}    (5.30)








A^{-1}, the variance matrix, is

A^{-1} = (H^{\top} H + \Lambda)^{-1}    (5.31)


the elements of the matrix \Lambda are all zero except for the regularisation parameters along its diagonal, and \hat{y} = [\hat{y}_1, \hat{y}_2, \ldots, \hat{y}_p]^{\top} is the vector of training set outputs. The solution is the

so-called normal equation,


\hat{w} = A^{-1} H^{\top} \hat{y}    (5.32)

and \hat{w} = [w_1, w_2, \ldots, w_m]^{\top} is the vector of weights which minimises the cost function."
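As a minimal numerical sketch of equations 5.26 through 5.32 (the data, centres, and regularisation value below are illustrative, not taken from this study), the weights of a linear RBF model can be computed in MATLAB as follows:

    % toy one-dimensional data set: p = 20 points, m = 5 Gaussian basis functions
    x = linspace(-1, 1, 20)';                  % inputs x_1 ... x_p
    y = sin(pi * x);                           % target outputs y_1 ... y_p
    c = linspace(-1, 1, 5);                    % basis-function centres c_1 ... c_m
    r = 0.5;                                   % common radius
    H = exp(-((repmat(x, 1, 5) - repmat(c, 20, 1)).^2) / r^2);  % design matrix (eq. 5.30)
    lambda = 0.01;                             % one shared regularisation parameter
    A = H' * H + lambda * eye(5);              % A = H'H + Lambda (eq. 5.31)
    w = A \ (H' * y);                          % normal equation (eq. 5.32)
    f = H * w;                                 % fitted model outputs (eq. 5.26)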















CHAPTER 6
MATERIALS AND METHODS

In this chapter, the image acquisition system will be discussed along with the

strategies that have been followed to achieve the classification results. The figure below

shows the image acquisition system.


Figure 6-1. Image acquisition system


The system consisted of the following components.

1) Four 16W cool white fluorescent bulbs (4500K) with natural light filters and

reflectors.

2) JAI MV90 3 CCD color camera with 28-90 mm zoom lens.

3) Coreco PC-RGB 24 bit color frame grabber with 480 by 640 pixels

4) MV Tools Image capture software.









5) MATLAB Image Processing Toolbox and Neural Network Toolbox

6) SAS Statistical Analysis Package.

7) Windows based computer

For the initial experiments, it was decided that images would be taken indoors of samples collected in an outdoor environment, in order to minimize the detrimental effects of variation in daytime lighting conditions. Although a real autonomous harvester or sprayer would have to operate outdoors, that endeavor would require further research to make the imaging and analysis procedure invariant to lighting conditions. Since the effort in this research was to test the efficacy of the method, the imaging was performed indoors in a laboratory. However, the conditions under which the images were taken were simulated to be similar to an outdoor environment. To accomplish this, four 16W cool white fluorescent bulbs (4500 K) with natural light filters and reflectors were used. The lighting setup is shown in Figure 6-1 above.

The choice of cool white fluorescent light for the imaging system is best explained by comparing the spectrum of natural light with that of the cool white bulbs with natural light filters, shown in Figures 6-2 through 6-4. The comparison shows that the natural light filters closely mimic the spectrum of sunlight.

The camera used for image acquisition was a JAI MV90 3 CCD color camera with a 28-90 mm zoom lens. The camera was interfaced with the host computer through a frame grabber, which converted the analog signals from the camera into digital RGB images. The frame grabber was a Coreco PC-RGB 24 bit color frame grabber with a 480 by 640 pixel image size.


























Figure 6-2. Full spectrum sunlight


Figure 6-3. Cool white fluorescent spectrum


Figure 6-4. Spectrum comparison

A CCD image sensor, used in both analog and digital cameras, uses photodiodes to convert photons to electrons. CCD sensors create high quality, low-noise images.










CCD sensors have been mass produced for a longer period of time, so they are more mature; they tend to have higher quality pixels, and more of them. An image sensor consists of a rectangular array (area-scan imagers) or a single line (line-scan imagers) of equi-spaced, discrete light-sensing elements, called photo-sites. The charges that build up on all of the array photo-sites are linked or "coupled" together so that they can be transferred out of the array directly (digital output) or processed into a time-varying video signal (analog output). A visual representation of an image sensor is given below.























Figure 6-5. Visual representation of an image sensor


As shown in the figure above, the image sensor can be represented as an array of buckets (photodiodes). Whenever a bucket was filled with a sufficient amount of light, its accumulated charge was transferred out. The camera in this

experiment was an analog camera. The final output from an analog camera would be

an analog signal, which needs to be processed further so that a digital image can be










captured. This was accomplished using a frame grabber. The frame grabber used in

this research was a Coreco PC-RGB 24 bit color frame grabber with 480x640 pixels

in image size.
























Figure 6-6. Coreco PC-RGB 24 bit color frame grabber

"PC-RGB is a high performance, low-cost PCI-bus image capture board

supplying the industry's fastest color or monochrome video data transfer to host computer

memory. PC-RGB combined a high performance frame grabber, 4MB of onboard

memory to speed image transfers, unique onboard circuitry for precise interrupt control,

OPTO-22 compatible digital I/O for controlling external events, and support for a wide

range of cameras. PC-RGB could acquire monochrome (up to 5 simultaneous input

channels with 6 total inputs) or RGB color data (2 RGB channels) from RS-170, CCIR,

progressive scan, and variable scan cameras. PC-RGB used a special hardware scatter

gather technique to provide continuous high-speed image transfers across the PCI-bus at

rates up to 120MB/sec sustained with almost no host CPU involvement.









PC-RGB could transfer images and selected Areas of Interest (AOIs, up to 4K x

4K) out of memory and at full speed without the need to write any special code.

Additionally, onboard pixel packing circuitry enabled nondestructive graphic overlays,

such as annotation, windows, and cross-hairs on live images. PC-RGB provided

additional advanced features including padding (sending 8-bit data to a 16-bit VGA color

surface), clipping (allowing data to be bus mastered directly to a VGA target without

seeing "color dots"-Windows reserved values), and programmable X/Y zooming (x2, x4).


Key features of the PC-RGB are as follows:

1) PCI-bus moves images to the host PC in less than 4 ms, providing an additional 29 ms

of free processing time

2) Hardware scatter gather bus mastering feature for high-speed simultaneous grab and

transfer to the host with minimal CPU involvement

3) Image modification on transfer to host memory with no additional CPU involvement

(flip, rotate, zoom, etc.)

4) Image deinterlacing on image transfers with no CPU involvement

5) Supports programmable OPTO-22 compatible I/O, trigger, strobe, and frame reset for

developers with "real world" applications and demanding cycle times

6) DMA of Areas of Interest (AOIs) minimizes transfer times"

In addition, GUI-based software with easy-to-use features was used for capturing the leaf images. Algorithms for

feature extraction and classification were written in MATLAB. A statistical package

called SAS was also used for feature reduction. The technicalities of feature extraction

and classification were explained in chapters 4 and 5.









The leaf samples that were used in this research were collected in a citrus grove

located in Bowling Green, Florida. The main reason to collect samples in the field was to

prove the concept that color texture analysis with classification algorithms could be used

to attain the objective of leaf classification. Ambient light variability was nullified by

performing the analyses in controlled lab conditions. The image data for this analysis was

obtained from leaf samples, collected from a grapefruit grove. Grapefruit has diseases

that can easily be identified by visual inspection. The samples were collected over a

number of different grapefruit trees for each disease condition. The leaf samples within

arm's reach were pulled off with leaf stems intact and then sealed in Ziploc bags to

maintain the moisture level of the leaves. Forty samples were collected for each of the

four classes of leaves. The samples were brought to the laboratory and then lightly rinsed and soaped to remove any nonuniform distribution of dust. This was done so that the

image data would have similar surface conditions for all classes of leaves. The leaf

samples were then sealed in new bags with appropriate labels, and then put in

environmental control chambers maintained at 40 °F. The leaf samples were then taken

to the imaging station and images of all the leaf samples were acquired, both in the front

portion and the back portion. Forty images from each leaf class were stored in

uncompressed jpeg format. The 40 images per class were divided into 20 samples each

for the training data set and the test data set. The selection method was based on alternate assignment in the order of sequential image capture.

The camera used for image acquisition was calibrated under the artificial light

source using a calibration grey card. An RGB digital image was taken of the grey-card

and each color channel was evaluated using histograms, mean and standard deviation










statistics. Red and green channel gains were adjusted until the grey-card images had

similar means in R, G, and B equal to approximately 128, which is mid-range for a scale

from 0 to 255. Standard deviation of calibrated pixel values was approximately equal to

5.0.
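For illustration, the per-channel statistics of a captured grey-card image can be checked with a few lines of MATLAB (the file name is hypothetical; the gain adjustment itself was performed on the frame grabber):

    img = imread('greycard.bmp');              % grey-card calibration image (name illustrative)
    for k = 1:3                                % loop over the R, G, and B channels
        ch = double(img(:, :, k));
        fprintf('channel %d: mean = %.1f, std = %.2f\n', k, mean(ch(:)), std(ch(:)));
    end
    % adjust the red and green gains and re-capture until all three channel
    % means are close to 128 and the standard deviation is small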

A detailed step-by-step account of the image acquisition and classification

process is illustrated in the following flowchart.


Figure 6-7. Image acquisition and classification flow chart

In the initial step, the RGB images of all the leaf samples were obtained. For each

image in the data set, the subsequent steps were repeated. Edge detection was performed on each leaf image using Canny's edge detector. A sample edge-detected leaf image is shown in Figure 6-8.


[Flowchart annotations (Figure 6-7): 480x640 8-bit image; reduced to 240x320; hue, saturation, and intensity matrices; 6 bit]




























Figure 6-8. Edge detected image of a leaf sample

Once the edge was detected, the image was scanned from left to right for each

row in the pixel map and the area outside the leaf was zeroed to remove any background

noise. In the next step, the image size was reduced from 480x640 pixel resolution to

240x320 pixel resolution. The reduced images were then converted from RGB format to

HSI format. The spatial gray level dependency matrices (SGDMs) were then generated for each pixel map of the image. The SGDM gives the probability that a pixel at one particular gray level occurs at a given distance and orientation angle from another pixel having a second particular gray level.

From the SGDM matrices, the texture statistics for each image were generated.
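A condensed sketch of these preprocessing steps in MATLAB is shown below. The file name is illustrative, rgb2hsv is used as a stand-in for the HSI transform, and the Image Processing Toolbox functions graycomatrix and graycoprops stand in for the custom SGDM and texture-statistics code used in the study (graycoprops returns only four of the texture statistics):

    rgb = imread('leaf.jpg');                  % one 480x640 leaf image (name illustrative)
    bw = edge(rgb2gray(rgb), 'canny');         % Canny edge detection of the leaf boundary
    % (each row of the pixel map is then scanned from left to right and the
    %  pixels outside the detected leaf boundary are zeroed)
    small = imresize(rgb, 0.5);                % reduce 480x640 to 240x320
    hsi = rgb2hsv(small);                      % hue, saturation, and intensity pixel maps
    hue = im2uint8(hsi(:, :, 1));
    glcm = graycomatrix(hue, 'NumLevels', 64, 'Offset', [0 1]);   % co-occurrence (SGDM) matrix
    stats = graycoprops(glcm, {'Contrast', 'Correlation', 'Energy', 'Homogeneity'});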

SAS Analysis

Once the texture statistics were generated for each image, statistical analyses were

conducted using the SAS statistical analysis package to reduce redundancy in the texture feature set, which served as the input data to the classifiers. Based on these analyses, several data models were created, as shown below:












Model   Leaf    Color   STEPDISC variable set
1B      Back    HS      S5, S2, H7, S6, S9, H8, H11, S12, H1, H12
2B      Back    I       I2, I13, I8, I7, I6, I3
3B      Back    HSI     I2, S5, I10, H11, S1, I13, S13
4B      Back    HSI     All variables
1F      Front   HS      S2, H10, H6, H2, H8, S9, S4
2F      Front   I       I5, I11
3F      Front   HSI     I2, I5, I4, S2, H11, S4, H4
4F      Front   HSI     All variables



Table 6-1. Classification models


The variables listed correspond to the color feature set and the particular Haralick texture feature; for instance, S2 corresponds to a saturation texture feature and I2 to an intensity texture feature. A detailed description can be found in the section 'Co-occurrence methodology for texture extraction' in chapter 4. In the table above, 'H' represents hue, 'S' represents saturation, and 'I' represents intensity. In the classification process, only the data models from the image sets of the back portions of the leaves (1B, 2B, 3B, 4B) were used. Earlier research by Burks (2000) had revealed that the classification










accuracies for the front and back portions matched closely. Due to the unavailability of a sufficient number of data sets for the front portions of the leaves, it was decided that only the back portions of the leaf images would be used for this research.

Input Data Preparation

Once the feature extraction was complete, two text files were obtained. They

were:

1) Training texture feature data (with all 39 texture features) and

2) Test texture feature data (with all 39 texture features)

The files had 80 rows each, representing 20 samples from each of the four classes

of leaves as discussed earlier. Each row had 39 columns representing the 39 texture

features extracted for a particular sample image. Each row also carried a unique number (1, 2, 3, or 4) representing the class of that row of data: '1' represented a greasy spot infected leaf, '2' a melanose infected leaf, '3' a normal leaf, and '4' a scab infected leaf.

Once the basic data files were obtained as explained above, training and test files for each of the models listed in Table 6-1 were obtained by selecting only the

texture features needed in that particular model from the total 39 texture features in the

original data files.
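As a concrete illustration (the file name and the column ordering are assumptions made for this sketch, not specifications from the study), extracting the model 1B feature columns in MATLAB might look like:

    % assume the 39 columns are ordered H1-H13, S1-S13, I1-I13
    train = dlmread('train_all.txt');          % 80 x 39 matrix of all texture features
    H = 0; S = 13;                             % channel column offsets under that assumption
    model1B = train(:, [S+5 S+2 H+7 S+6 S+9 H+8 H+11 S+12 H+1 H+12]);  % 10 features of model 1B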

Classification Using Squared Mahalanobis Distance

A software routine was written in MATLAB that would take in text files

representing the training and test data, train the classifier using the 'train files', and then

use the 'test file' to perform the classification task on the test data. The train files were as

follows:










1) 'gl.txt': contains a 20 x n matrix where each row represents the 'n' selected texture

features of the 20 training images of greasy spot leaves

2) 'ml.txt': contains a 20 x n matrix where each row represents the 'n' selected texture

features of the 20 training images of melanose leaves

3) 'n1.txt': contains a 20 x n matrix where each row represents the 'n' selected texture

features of the 20 training images of normal leaves

4) 's1.txt': contains a 20 x n matrix where each row represents the 'n' selected texture

features of the 20 training images of scab leaves

Here 'n' depends on the model selected, as given in Table 6-1


above.


The 'test file' contained the 20 x n data matrix representing one class of test data

and therefore the classification task had to be run in four iterations, once for each class of data.
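A minimal sketch of such a routine, using the Statistics Toolbox function mahal, is given below (the test file name is illustrative; mahal requires more training rows than features, so this applies to the reduced models rather than the full 39-feature set). Each test sample is assigned to the class with the smallest squared Mahalanobis distance:

    % 20 x n training matrices, one per class, read from the files listed above
    G = dlmread('gl.txt');
    M = dlmread('ml.txt');
    N = dlmread('n1.txt');
    S = dlmread('s1.txt');
    T = dlmread('test.txt');                   % 20 x n test matrix for one class

    % squared Mahalanobis distance from every test row to each training class
    D = [mahal(T, G), mahal(T, M), mahal(T, N), mahal(T, S)];
    [dmin, label] = min(D, [], 2);             % 1 = greasy spot, 2 = melanose, 3 = normal, 4 = scab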

Classification Using Neural Network Based on Back Propagation Algorithm:

The network used in the analysis is as follows:









Figure 6-9. Network used in feed forward back propagation algorithm

As shown in Figure 6-9, the network had three hidden layers, with 10 processing elements or neurons in each hidden layer. The output layer had four neurons. The inputs were the texture features, as discussed earlier; based on the model selected, the appropriate texture features were chosen as input. The transfer function used at the hidden layers and the output layer was the TANSIG function.









A MATLAB routine would load all the data files (training and test data files) and modify the data according to the model chosen. The routine is included at the end of this thesis in Appendix A.

The ideal output (target vectors) of the various leaf samples was represented as

follows:

Greasy spot --- [1; 0; 0; 0]^T

Melanose ------ [0; 1; 0; 0]^T

Normal -------- [0; 0; 1; 0]^T

Scab ---------- [0; 0; 0; 1]^T

In the target matrix 'ttl', the first 20 columns represented greasy spot, columns 21 through 40 represented melanose, columns 41 through 60 represented normal, and the final 20 columns, 61 through 80, represented scab.

After importing the training matrix and the target matrix into the MATLAB workspace, a network was constructed using the command 'newff', which is part of the MATLAB Neural Network Toolbox. After creating the network, it was trained using the function 'train'. After training, the test data was simulated using the function 'sim'. The MATLAB technical literature gives the following explanation for the function 'newff':

"NEWFF creates a feed-forward back propagation network and returns an N layer

feed-forward back propagation network. The syntax for the function is as follows:

net = newff(PR, [S1 S2...SN], {TF1 TF2...TFN}, BTF, BLF, PF)

Description:

NEWFF(PR, [S1 S2...SN], {TF1 TF2...TFN}, BTF, BLF, PF) takes:

PR - R x 2 matrix of min and max values for R input elements.

Si - Size of the ith layer, for N layers.

TFi - Transfer function of the ith layer, default = 'tansig'.

BTF - Backprop network training function, default = 'trainlm'.

BLF - Backprop weight/bias learning function, default = 'learngdm'.

PF - Performance function, default = 'mse'.

The transfer functions TFi can be any differentiable transfer function such as

TANSIG, LOGSIG, or PURELIN. The training function BTF can be any of the back

propagation training functions such as TRAINLM, TRAINBFG, TRAINRP, TRAINGD,

etc. The learning function BLF can be either of the back propagation learning functions LEARNGD or LEARNGDM. The performance function can be any of the

differentiable performance functions such as MSE or MSEREG."

The architecture of the network used in this study was as follows:

1) Number of hidden layers: 3

2) Number of inputs to the input layer, 'n' (representing the number of texture features selected), depended on the model used.

3) Number of outputs in the output layer: 4

4) The parameters of the network were as follows:

Network: feed forward back propagation

Training function: TRAINLM

Adaptation learning function: LEARNGDM

Performance function: MSE

Epochs: 3000

Goal: 0.0000001










The number of epochs and the performance goal were specified in the MATLAB routine as shown below:

"net.trainParam.epochs = 3000

net.trainParam.goal = 0.0000001"

With these parameters, the network was trained. Once the training was complete,

the test data for each class of leaves was tested. The results of the classification task are

given in the next chapter.
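For reference, a minimal end-to-end training script consistent with the setup described above is sketched below (the matrix names are illustrative: P is the n x 80 training matrix with samples as columns, ttl is the 4 x 80 target matrix, and Ptest holds the test samples):

    % three hidden layers of 10 neurons each, four output neurons, TANSIG throughout
    net = newff(minmax(P), [10 10 10 4], {'tansig','tansig','tansig','tansig'}, 'trainlm');
    net.trainParam.epochs = 3000;
    net.trainParam.goal = 0.0000001;
    net = train(net, P, ttl);                  % train on the 80-sample training set
    Y = sim(net, Ptest);                       % simulate the trained network on the test data
    [ymax, class] = max(Y);                    % the winning output neuron gives the class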

Classification Using Neural Network Based on Radial Basis Functions:

For classification using neural networks, the Neural Network Toolbox of MATLAB version 6.12 was used. The toolbox includes a GUI-based tool that allows the user to load data files, specify parameters, and choose the kind of training to be used. Through the GUI, the user can input data, specify the type of network to be built, choose an appropriate training method, set the parameters of the constructed network, and simulate the trained network on the test data. The tool is invoked by typing the command 'nntool' at the MATLAB command prompt, which opens the graphical user interface (GUI). The GUI is a convenient way of constructing any type of neural network and specifying its parameters; its buttons provide the necessary functionality for inputting data, constructing the network, training it, and simulating it on test data. The GUI that appears after invoking the 'nntool' command is shown in Figure 6-10.









Figure 6-10. Snapshot of the GUI for the neural network toolbox data manager
Using the 'import' button, the training and testing data files could be loaded into
the MATLAB workspace. Once the data files were loaded, a neural network was
designed using the 'New Network' button as shown. The dialog box that pops up allows
the user to specify the number of layers and the type of training that has to be used. The
detailed literature regarding the creation and training of the neural networks can be found in the MATLAB technical literature.

The network used for this technique is shown in Figure 6-11.


















Figure 6-11. Neural network based on radial basis functions used in leaf classification

The network used 80 basis functions, as shown in Figure 6-11; this follows from the fact that the training input consisted of 80 vectors in total. The output is a 2 x 1 column vector. The outputs of the RBF network are fuzzy values giving a measure of class strength. In this analysis, the ideal target output vectors used for training the various classes of leaves were as follows:

Greasy spot: [0 0]    Normal: [1 0]

Melanose: [0 1]    Scab: [1 1]



The level of fuzziness was resolved as follows:

1) Any value < 0.5 was treated as equivalent to 0.

2) Any value > 0.5 was treated as equivalent to 1.

The parameters used in building this network are as follows:

1) Function used: Radial basis function (exact fit)

2) Spread constants used: 3.3822e+003, 956.2873, 629.2455, 1.6532e+003

3) Input consisted of various models of data as discussed earlier. The input was

normalized before being fed into the network.

After the network was built, test data for each class was fed to the network and the

classification task was completed based on the target vectors and fuzzy criterion as

described above.
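A minimal sketch of this procedure with the toolbox's exact-fit function newrbe is given below (the matrix names are illustrative: P is the normalized n x 80 training matrix, T is the 2 x 80 matrix of the two-bit class codes above, and the spread value is one of those listed):

    net = newrbe(P, T, 3.3822e+003);           % exact-fit RBF network, one basis function per training sample
    Y = sim(net, Ptest);                       % fuzzy 2 x 20 output for one class of test data
    bits = Y > 0.5;                            % apply the 0.5 threshold to recover the class codes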















CHAPTER 7
RESULTS

The results for the various classification strategies that were used are given below:

Generalized Square Distance Classifier from SAS

Model   Color Feature   Greasy spot   Melanose   Normal   Scab   Overall
1B      HS              100           100        90       95     96.3
2B      I               100           100        95       100    98.8
3B      HSI             100           100        100      100    100
4B      ALL             100           100        100      100    100

Table 7-1. Percentage classification results of the test data set from SAS


The results shown in Table 7-1 were obtained using a generalized square distance classifier in SAS (PROC DISCRIM). They were part of preliminary investigations by Burks (2002) and showed good classification accuracies for all the data models. In particular, models 3B and 4B achieved perfect overall classification rates. Model 1B achieved an overall accuracy of 96.3% and model 2B an accuracy of 98.8%. However, it should be noted that models 2B, 3B, and 4B involve the calculation of intensity texture features, which is disadvantageous in terms of computational complexity. Therefore, model 1B is judged the overall best model for this classifier.

A further advantage of model 1B is the decrease in computational time for training and classification, owing to the elimination of intensity features and the smaller number of features in the model.












Statistical Classifier Based on Mahalanobis Minimum Distance Principle

Model   Color Feature   Greasy spot   Melanose   Normal   Scab   Overall
1B      HS              100           100        100      95     98.75
2B      I               100           95         95       100    97.5
3B      HSI             0             100        100      100    75
4B      ALL             90            100        80       60     85

Table 7-2. Percentage classification results for the Mahalanobis distance classifier

The results shown in Table 7-2 were obtained using the Mahalanobis minimum distance classifier. Models 1B and 2B achieved the better overall classification rates: model 1B achieved an overall accuracy of 98.75% and model 2B an accuracy of 97.5%. However, as noted above, model 2B involves the calculation of intensity texture features. Therefore, model 1B is the overall best model for this classifier.

In general, Tables 7-1 and 7-2 show that the results for both classifiers based on statistical classification agree closely, especially for models 1B and 2B. Although model 2B performed well in both cases, it is not useful in real-world applications, since relying only on intensity may be detrimental to the classification task due to the inherent intensity variations in an outdoor lighting environment. Hence, model 1B emerges as the best model among the classifiers based on statistical classification.

Neural Network Classifier Based on Feed Forward Back Propagation Algorithm

Model   Color Feature   Greasy spot   Melanose   Normal   Scab   Overall
1B      HS              100           90         95       95     95
2B      I               95            95         15       100    76.25
3B      HSI             100           90         95       95     95
4B      ALL             100           95         100      100    98.75

Table 7-3. Percentage classification results for neural network using back propagation










The results shown in Table 7-3 were obtained using the neural network based on the back propagation principle. Models 1B, 3B, and 4B achieved the better overall classification rates: models 1B and 3B achieved an overall accuracy of 95% and model 4B an accuracy of 98.75%. Based on the reasoning already given, model 1B emerged as the best model for this classification strategy.

Neural Network Classifier Based on Radial Basis Functions

Model   Color Feature   Greasy spot   Melanose   Normal   Scab   Overall
1B      HS              100           100        85       60     86.25
2B      I               100           20         75       0      48.75
3B      HSI             95            80         85       85     86.25
4B      ALL             100           95         100      95     97.5

Table 7-4. Percentage classification results for neural network using RBF

The results shown in Table 7-4 were obtained using the neural network based on radial basis functions (RBF). Models 1B, 3B, and 4B achieved the better overall classification rates: models 1B and 3B achieved an overall accuracy of 86.25% and model 4B an accuracy of 97.5%.

Model 4B achieved the highest overall classification accuracy. This is plausible, since model 4B contains more texture features, which is beneficial for neural networks, whose training amounts to a curve-fitting problem. However, in an outdoor application, model 4B may be disadvantageous due to the large CPU time required to calculate the texture features.

Therefore, from the experiments, it is concluded that model 1B, consisting of features from hue and saturation, is the best model for the task of citrus leaf classification. The elimination of intensity from the texture feature calculation is the major advantage, since it nullifies the effect of light variability. In addition, the algorithms are faster due to the smaller number of texture features (10) required.

Discussion. Burks (2002) completed preliminary investigations for the task of citrus leaf classification. In that study, the data sets consisted of both the front portions and the back portions of leaves. From those data sets, the texture features for the various models described earlier were obtained. The classification results using the SAS generalized square distance classifier gave similar overall accuracies for the front and the back data models; however, the accuracies for the back portions were slightly higher. Hence, for this study it was decided to use the back portions of the leaves for feature extraction of the various data models.

It is evident from Table 7-2 that, for models 3B and 4B, the classification accuracies for some classes of leaves were inconsistent with the excellent results obtained for the other models. The Mahalanobis distance method determines the similarity of a set of values from an unknown sample (test data) to a set of values measured from a collection of known samples (training data). For models 3B and 4B, it may be that the training and test feature distributions were dissimilar for the particular choice of texture features in those models. Since the classification was based predominantly on the minimum distance, even a slight deviation may significantly affect the test accuracies. Therefore, the choice of the model, and hence of the texture features selected, significantly affects the classification results.

Similarly, Tables 7-3 and 7-4 show that the neural network classifiers also produced inconsistent accuracies for some models. For both the back propagation network and the radial basis function network, model 2B (with










only intensity features) performs poorly. This can be attributed to the network not having enough information (in other words, the data clusters overlapped) to form a classification hyperplane that cleanly separates the data clusters belonging to the various classes of leaves. The intensity of the leaves, considered on its own, may not carry enough information for the network to make correct classification decisions. This shows that, for neural network classifiers, using only intensity texture features will not yield good classification. One significant point to note about neural network classifiers is that the results may not be consistent across several trials with the same input and parameters, because the weight initialization of the network is random and the outputs therefore vary. The neural network results reported in this research are the averages of the outputs (classification accuracies) of three successive trials.

Model 1B emerged as the best of the various models. As noted earlier, this was in part because of the elimination of the intensity texture features.

Elimination of intensity is advantageous in this study because it nullifies the effect of

intensity variations. Moreover, it reduces the computational complexity by foregoing the

need to calculate the CCM matrices and texture statistics for the intensity pixel map.

However, in an outdoor application, elimination of intensity altogether may have an

effect on the classification, since the ambient variability in outdoor lighting is not taken

into consideration. Hence, future research for outdoor applications should consider

outdoor lighting variability.

Table 7-5 shows the number of leaf samples classified into each category for the case of the neural network classifier with the back propagation algorithm using model 1B. The










results show that a few samples from the melanose, normal, and scab leaves were misclassified. For the melanose infected leaves, two test images were misclassified: one leaf sample was misclassified as belonging to the normal leaf class and the other as a scab infected leaf. Similarly, in the case of the normal and scab images, one test image from each class was misclassified as belonging to the other class. In general, it was observed across the various trials that misclassifications mainly occurred among three classes, namely melanose, normal, and scab.

From species    Greasy spot   Melanose   Normal   Scab   Accuracy
Greasy spot     20            0          0        0      100
Melanose        0             18         1        1      90
Normal          0             0          19       1      95
Scab            0             0          1        19     95

Table 7-5. Classification results per class for the neural network with back propagation

Figures 1-1 through 1-4 show the leaf samples belonging to the various classes. Leaves of the melanose, normal, and scab classes show a significant difference from greasy spot leaves in terms of color and texture, while the leaves of these three classes show only minute differences discernible to the human eye, which may explain the misclassifications shown in Table 7-5. The false positives observed for these leaves may have arisen because, for some test images, the feature set of the chosen model overlapped with the feature sets of leaves belonging to other classes.

Table 7-6 lists the overall classification results of the various classifiers for the particular case of model 1B. The classification accuracies for each of the four classes of leaves, using the various classifiers, are shown in the table.










Classifier     Greasy spot   Melanose   Normal   Scab   Overall
SAS            100           100        90       95     96.3
Mahalanobis    100           100        100      95     98.75
NNBP           100           90         95       95     95
RBF            100           100        85       60     86.25

Table 7-6. Comparison of various classifiers for model 1B


The table serves as a benchmark for comparing the efficacy of the various classifiers. The overall accuracies are well above 95% for the statistical classifiers as well as for the neural network classifier using the back propagation algorithm. The radial basis function (RBF) network achieved an overall accuracy of 86.25%. Since the RBF network solves a curve-fitting problem in a higher dimensional space, its lower accuracy may be attributed to overlap among the various data clusters, which may have produced false positives as well as false negatives, thus reducing the overall accuracy.















CHAPTER 8
SUMMARY AND CONCLUSIONS

A detailed study was completed to investigate the use of computer vision and

image processing techniques in agricultural applications. The task of citrus leaf disease

classification using the above-mentioned techniques was successfully implemented. Three classes of citrus leaf diseases (greasy spot, melanose, and scab) were used for

this study. The image data of the leaves selected for this study was collected using a

JAI MV90, 3 CCD color camera with 28-90 mm zoom lens. Algorithms for feature

extraction and classification based on image processing techniques were designed. The

feature extraction process used the color co-occurrence methodology (CCM method), in which both the color and the texture of an image are taken into account to arrive at unique features that represent the image. The data sets, in the form of digitized RGB color photographs, were fed manually for feature extraction and for training the SAS statistical classifier. After training the SAS classifier, the test data sets were used to analyze the classification performance. The whole procedure of analysis was replicated for three alternative classification approaches: a statistical classifier using the Mahalanobis minimum distance method, a neural network based classifier using the back propagation algorithm, and a neural network based classifier using radial basis functions.

The analyses show that such methods can be used for agricultural applications in areas such as precision farming. Model 1B emerged as the best data model for the task of citrus leaf classification. The statistical classifiers gave good results, averaging above










95% overall classification accuracy. Similarly, neural network classifiers also achieved

comparable results.

This research was a feasibility analysis to determine whether the techniques investigated can be implemented in outdoor applications. The results obtained show that these methods can indeed be used for such applications. However, it should be kept in mind that all the analyses in this study were done under controlled laboratory conditions. Real-world conditions are considerably different due to the inherent variability of natural outdoor lighting and tree structure. That would be a major challenge to overcome in future implementations, so as to make the research portable to real-time leaf classification.

Future work. The future implementations of the present research would include

analyzing disease conditions of the citrus trees in an outdoor environment. Moreover, the

tree canopy as a whole could be taken into consideration instead of just leaves.

Specifically, the research would include the study of tree canopies to determine the

presence of any disease condition. The whole technology could be incorporated onboard

an autonomous vehicle with GPS. This would not only identify the diseased trees but also

map out their positions in the grove so that selective chemical application is possible.

However, the above task is complex to implement due to the high variability of outdoor conditions. The present research would need to be modified to make it feasible for outdoor applications before such a system could be implemented.