Record for a UF thesis. Title & abstract won't display until thesis is accessible after 2013-08-31.


Material Information

Title:
Record for a UF thesis. Title & abstract won't display until thesis is accessible after 2013-08-31.
Physical Description:
Book
Language:
english
Creator:
Kim, Dae Gwan
Publisher:
University of Florida
Place of Publication:
Gainesville, Fla.
Publication Date:
2011

Thesis/Dissertation Information

Degree:
Doctorate (Ph.D.)
Degree Grantor:
University of Florida
Degree Disciplines:
Agricultural and Biological Engineering
Committee Chair:
Burks, Thomas F
Committee Members:
Ritenour, Mark A
Turner, Allen E
Beck, Howard W
Dixon, Warren E

Subjects

Subjects / Keywords:
Agricultural and Biological Engineering -- Dissertations, Academic -- UF
Genre:
Agricultural and Biological Engineering thesis, Ph.D.
bibliography   ( marcgt )
theses   ( marcgt )
government publication (state, provincial, territorial, dependent)   ( marcgt )
born-digital   ( sobekcm )
Electronic Thesis or Dissertation

Notes

Statement of Responsibility:
by Dae Gwan Kim.
Thesis:
Thesis (Ph.D.)--University of Florida, 2011.
Local:
Adviser: Burks, Thomas F.
Electronic Access:
INACCESSIBLE UNTIL 2013-08-31

Record Information

Source Institution:
UFRGP
Rights Management:
Applicable rights reserved.
Classification:
lcc - LD1780 2011
System ID:
UFE0042755:00001


This item is only available as the following downloads:


Full Text

PAGE 1

DETECTION OF CITRUS DISEASES USING COMPUTER VISION TECHNIQUES

By

DAE GWAN KIM

A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY

UNIVERSITY OF FLORIDA

2011

PAGE 2

2011 Dae Gwan Kim

PAGE 3

To my parents and my advisor Dr. Burks

PAGE 4

ACKNOWLEDGMENTS

I would like to thank my advisor, Dr. Thomas F. Burks, who provided invaluable guidance and encouragement during the course of this research. In addition, I would like to thank the following colleagues who provided invaluable assistance to me during the course of my research: Tony Qin, Xuhui Zhao and Duke M. Bulanon. I also wish to express my gratitude to several dear friends whose encouragement kept me going through the process of this research. Finally, I would like to dedicate this research to my parents in Korea. They always encouraged me to pursue education, and I am sure they would be pleased with the results of this research and the completion of my PhD dissertation. I want to give my parents all the glory for this work.

PAGE 5

TABLE OF CONTENTS

page

ACKNOWLEDGMENTS  4
LIST OF TABLES  9
LIST OF FIGURES  11
ABSTRACT  15

CHAPTER

1 INTRODUCTION  18
    Background  18
    Computer Vision Techniques for Citrus Disease Control  19
    Objectives  20
        Microscopic Imaging  21
        Hyperspectral Imaging  21

2 LITERATURE REVIEW  24
    Citrus Industry Overview  24
    Citrus Canker  25
    Citrus Greening  26
    Black Spot  28
    Recommendations on Management of Citrus Diseases  29
    The Electromagnetic Spectrum  31
    Machine Vision Based Crop and Fruit Status Monitoring  32
    Image Processing Approaches  35
    Spectral Images based Methods and Neural Network Classifiers  37
    Near Infrared Monitoring Methods  40
    Wavelet Transform  42
    Control of Citrus Diseases using Image Processing  43

3 MATERIALS AND METHODS  46
    Citrus Canker Samples  46
    Citrus Leaf Samples  47
    Black Spot Samples  48
    Image Acquisition Systems  48
        Charge Coupled Device Camera System  48
        Hyperspectral Image System  51
        Hyperspectral Image System Calibration  52
    Pre-processing Techniques  53

PAGE 6

    Visible Spectrum Image Pre-processing  53
        ROI selection and color space conversion for canker detection  54
        Edge detection for greening detection  55
    Hyperspectral Image Pre-processing  55
    Color Co-occurrence Methodology  57
    Feature Extraction  60
    Principal Component Analysis  62
    Singular Vector Divergence  64
    Correlation Analysis  66
    Wavelet Transform  67
    Pattern Recognition  69
        Fisher's Linear Discriminant Method  71
        Neural Network Based on Back Propagation Network Method  71
        Support Vector Machine Method  73
    Statistical Classification  74
    Hyperspectral Image Classification Methods  76
        Spectral Angle Mapper  76
        Spectral Information Divergence  77

4 DETECTION OF CITRUS CANKER DISEASE  95
    Materials and Methods  98
        Image Acquisition System  98
        Image Analysis  99
        Texture Classification  102
    Results and Discussion  103
        Selection of Texture Features  103
        Classification of Citrus Peel Conditions  104
        Stability Test of the Classification Model  106
    Summary  107

5 DETECTION OF CITRUS GREENING DISEASE ON ORANGE LEAVES  112
    Materials and Methods  115
        Image Acquisition System  115
        Image Processing  115
            Features extraction  118
            Classification algorithms  120
    Results and Discussion  121
        Classifications of Citrus Disease Conditions  121
        The Confusion Matrix for Greening Positive vs. Negative  123
        Stability Test of the Greening Classification Model  124
    Summary  125

6 DETECTION OF THE DISEASE USING PATTERN RECOGNITION METHODS  134
    Materials and Methods  136

PAGE 7

        Image Acquisition System  136
        Image Analysis  137
        Classification Using Pattern Recognition Methods  140
    Classification Results  140
        Canker Disease Classification Based on Pattern Recognition Algorithms  140
        Citrus Greening Disease Classification Based on Pattern Recognition Algorithms  142
    Summary  143

7 CLASSIFICATION METHODS OF CITRUS GREENING BY HYPERSPECTRAL IMAGING  150
    Materials and Methods  153
        Citrus Leaf Samples  153
        Hyperspectral Image Acquisition  154
        Hyperspectral Image Analysis  155
            Hyperspectral image transformation using PCA  155
            Citrus greening classification algorithm using PCA  156
            Classification algorithm using wavelet transform  157
    Results and Discussion  158
        The Plot of Relative Reflectance Spectra  158
        Classification of Citrus Greening using PCA  158
        Classification of Citrus Greening using Wavelet Transform  160
    Summary  161

8 CITRUS BLACK SPOT DETECTION USING HYPERSPECTRAL IMAGING  170
    Methodology  172
        Samples  172
        Hyperspectral Image Acquisition  172
        Hyperspectral Image Processing  174
    Results and Discussion  175
        Spectral Characteristics of Black Spot and Other Conditions  175
        SID and SAM based on Classifications  176
        The SID and SAM Threshold Values for Classification Accuracy  176
    Summary  177

9 HYPERSPECTRAL BAND SELECTION FOR DETECTION OF BLACK SPOT USING MULTISPECTRAL ALGORITHMS  189
    Materials and Methods  191
        Hyperspectral Image Acquisition  191
        Correlation Analysis Algorithms for Band Selection  192
        Principal Component Analysis Algorithms for Band Selection  193
        Singular Vector Decomposition Algorithms for Band Selection  194
    Result and Discussion  194
        Mean Relative Reflectance Spectra of Black Spot  194

PAGE 8

        Hyperspectral Band Selection using CA  195
        Hyperspectral Band Selection using PCA  195
        Hyperspectral Band Selection using SVD  196
        The Hyperspectral Images Selected by Band Selection Methods  196
        Classification of Black Spot using Band Ratio Images  197
    Summary  200

10 CONCLUSION AND FUTURE WORK  221
    Conclusions  221
    Future Work  222

APPENDIX

A  MATLAB CODE FILES FOR EDGE DETECTION  224
B  MATLAB CODE FILES FOR PATTERN RECOGNITION METHODS  226
C  MATLAB CODE FILES FOR PATTERN HYPERSPECTRAL IMAGING  234
D  MATLAB CODE FILES FOR PRINCIPAL COMPONENT ANALYSIS  235
E  MATLAB CODE FILES FOR WAVELET TRANSFORM  236

LIST OF REFERENCES  241

BIOGRAPHICAL SKETCH  248

PAGE 9

LIST OF TABLES

Table  page

3-1  The number of black spot and other symptoms  94
4-1  Texture features selected by stepwise discriminant analysis  109
4-2  Classification results using model HSI_13 in Table 4-1  110
4-3  Classification results in percent correct for all models in Table 4-1  110
4-4  Classification results for shuffle data models in percent correct  111
5-1  Texture feature models selected by stepwise discriminant analysis for fall season  128
5-2  Texture feature models to all conditions except young flush for fall season  129
5-3  Classification summary in percent correct for all models in Table 5-1  129
5-4  Classification summary in percent correct for all models in Table 5-2  130
5-5  Classification result in percent correct for HSI_18 model in Table 5-1  130
5-6  Classification result in percent correct for HSI_15 model in Table 5-1  131
5-7  Classification result in percent correct for HSI_14 model in Table 5-1  131
5-8  Confusion matrix in percent correct for HSI_18 model in Table 5-1  132
5-9  Confusion matrix in percent correct for HSI_15 model in Table 5-1  132
5-10  Confusion matrix in percent correct for HSI_14 model in Table 5-1  132
5-11  Classification results for shuffle data about HSI_15 model in percent correct  133
6-1  Texture feature models selected by SAS stepwise analysis for citrus canker  147
6-2  Canker disease classification results in percent correct for all models using FDA  147
6-3  Canker disease classification results in percent correct for all models using BP neural network  147
6-4  Canker disease classification results in percent correct for all models using SVMs  148

PAGE 10

6-5  Texture feature models selected by SAS stepwise analysis for citrus greening  148
6-6  Citrus greening classification results in percent correct for all models using FDA  148
6-7  Citrus greening classification results in percent correct for all models using BP  149
6-8  Citrus greening classification results in percent correct for all models using SVM  149
7-1  The number of citrus greening and other conditions  168
7-2  Classification result in percent correct using PCA based method  169
7-3  Classification result in percent correct using WT based method  169
8-1  The number of black spot and other conditions  186
8-2  The classification summary using 6 threshold values for differentiating black spot from other conditions using SID mapping of hyperspectral images  186
8-3  The classification summary using 6 threshold values for differentiating black spot from other conditions using SAM mapping of hyperspectral images  187
8-4  The best overall classification accuracy using the SID threshold value of 0.04  187
8-5  The best overall classification accuracy using the SAM threshold value of 0.09  188
9-1  The classification summary for differentiating black spot from other diseases and normal conditions using PCA  218
9-2  The classification summary for differentiating black spot from other diseases and normal conditions using SVD  219
9-3  The classification summary for differentiating black spot from other diseases and normal conditions using CA  220

PAGE 11

LIST OF FIGURES

Figure  page

1-1  Overview of citrus disease detection system  23
2-1  The electromagnetic spectrum comprising the visible and non-visible range  45
3-1  Images of abnormal peel conditions  79
3-2  Images of nutritional deficiency  79
3-3  Images of citrus greening  80
3-4  Images of normal conditions  80
3-5  15 images of blotch mottle conditions  81
3-6  Representative images for each peel condition  82
3-7  Three faces of each fruit with 120° interval  82
3-8  Typical image system for acquiring RGB images from citrus samples  83
3-9  Digital microscope system for acquiring RGB images from citrus samples  83
3-10  Typical hyperspectral line scan imaging system  84
3-11  Principle of the prism-grating-prism imaging spectrograph for acquiring spatial and spectral information from an object  84
3-12  Conceptual representation of a volume of hyperspectral image data  85
3-13  White paper printed with thin parallel lines 2 mm apart for geometric calibration  85
3-14  Spectral profiles of calibration lamps: xenon lamp and mercury argon lamp  86
3-15  The relationship between the vertical band position and wavelength from two calibration lamps  86
3-16  ROI selection program developed in MATLAB R2007b  87
3-17  Typical ROI images for normal and diseased citrus peel conditions  87
3-19  Converted images of leaf samples  88
3-20  Edge detected image of leaf samples  88

PAGE 12

3-21  Hyperspectral image pre-processing  89
3-22  The plot of reflectance factor for white panel  90
3-23  Nearest neighbor diagram  90
3-25  Feature vector, feature space and scatter plot  91
3-26  Good feature and bad feature  92
3-27  Distribution plots with pattern types  92
3-29  Illustration of the calculation  93
3-30  Possible linear classifiers for separating the data  94
3-31  The angle between endmember spectra and target spectra as vectors in a space  94
4-1  Color image system for acquiring RGB images from citrus samples  109
4-2  Procedures for color image analysis  109
5-1  Digital microscope system for acquiring RGB images from citrus leaf samples  127
5-2  Gray level dependence example: (a) SGDM for different orientations, (b) gray level image  127
5-3  Procedures for color image analysis  127
5-4  Procedures for leaf edge detection  128
6-1  Color image system for acquiring RGB images from citrus samples  145
6-2  Digital microscope system for acquiring RGB images from citrus disease samples  145
6-3  Procedures for color image analysis for citrus canker  146
6-4  Procedures for color image analysis for citrus greening  146
7-1  Hyperspectral line scan imaging system  163
7-2  A detailed illustration of threshold-based classification algorithms  163
7-3  Comparison of the threshold value distribution between citrus greening and other conditions  164
7-4  The image opening morphological filtering  164

PAGE 13

7-5  A detailed illustration of the feature extraction using Wavelet Transform  165
7-6  Four level decomposed images in each condition  165
7-7  Reflectance mean spectra and corresponding standard deviation of each ROI from citrus greening and other symptoms  166
7-8  Reflectance spectra of citrus leaf samples with citrus greening and other conditions  167
7-9  Five score images obtained from principal component analysis using full spectral regions  168
8-1  Hyperspectral line scan imaging system  180
8-2  A detailed illustration of SAM and SID classification algorithms  180
8-3  The mean and standard deviation from endmember spectra  181
8-4  Representative rule images from SID and SAM mapping the hyperspectral images  181
8-5  Reflectance mean spectra and corresponding standard deviation of each ROI from black spot, greasy spot, melanose, wind scar, and market  182
8-6  The reflectance spectra of the samples with black spot, normal and different symptoms over the wavelength range between 483 and 959 nm  183
8-7  Detailed comparison of the value distribution between black spot and non-black spot regions  184
8-9  Comparison of SID and SAM classification accuracy  185
9-1  Hyperspectral line scan imaging system  203
9-2  Data analysis and classification algorithms for CA  203
9-3  First five score images obtained from principal component analysis for hyperspectral images of grapefruit samples  204
9-4  Data analysis and classification algorithms for PCA  205
9-5  Data analysis and classification algorithms for SVD  205
9-6  Representative ROI images for each black spot peel condition  206
9-7  The reflectance spectra of the samples with black spot, normal and different symptoms over the wavelength range between 483 and 959 nm  206

PAGE 14

9-8  The contour plot of the correlation coefficients (r) between two-band ratios and fruit peel conditions  207
9-9  Mean weighting coefficients for the third principal component from 20 black spot ROIs  207
9-10  Singular value estimates of black spot conditions  208
9-11  The representative single band reflectance images with five different conditions at the wavelengths with (a) highest correlation coefficients and (b) second highest correlation coefficients by CA  209
9-12  Representative spectral reflectance images of diseased peel and normal conditions at four wavelengths selected by PCA  210
9-13  Representative spectral reflectance images of diseased peel and normal conditions at four wavelengths selected by SVD  211
9-14  The ratio images of black spot using wavelengths selected by PCA  212
9-15  The ratio images of black spot using wavelengths selected by SVD  212
9-16  The ratio images of black spot using wavelengths selected by CA  213
9-17  The histogram for ratio images of 802 nm/724 nm from PCA  213
9-18  The histogram for ratio images of 919 nm/572 nm from SVD  214
9-19  The histogram for ratio images of 854 nm/598 nm from CA  214
9-20  Identification of black spot lesions on the fruit peel based on two-band ratio images using wavelengths selected by PCA (R802/R724)  215
9-21  Classification accuracies obtained from the ratio image (802 nm/724 nm) selected by PCA  216
9-22  Classification accuracies obtained from the ratio image (919 nm/572 nm) selected by SVD  216
9-23  Classification accuracies obtained from the ratio image (854 nm/598 nm) selected by CA  217

PAGE 15

Abstract of Dissertation Presented to the Graduate School of the University of Florida in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy

DETECTION OF CITRUS DISEASES USING COMPUTER VISION TECHNIQUES

By
Dae Gwan Kim
August 2011
Chair: Thomas F. Burks
Major: Agricultural and Biological Engineering

The citrus industry has led the agricultural economy of the state of Florida to prosperity. Florida has historically been the largest citrus producing state in the USA. Citrus fruits such as oranges, grapefruit, tangelos, navels, limes, tangerines and other specialty fruits are the chief crops of the state. The remarkable growth of the state economy has been partially based on the various citrus related industries. This situation has brought job opportunities for many people and important potential for the economic growth of the state. To maintain the prosperity of the citrus industry, Florida has been concerned about disease control, labor cost, and the global market.

During the recent past, citrus canker and citrus greening have become serious threats to citrus in Florida. These diseases can result in tree decline, death, yield loss and lost marketability. Likewise, farmers are concerned about costs from tree loss, scouting, and chemicals used in an attempt to control the disease. An automated detection system may help in citrus greening and citrus canker prevention and thus reduce the serious loss to the Florida citrus industry. This thesis considers the development of disease detection approaches for these diseases (citrus greening and citrus black spot), when found in the presence of various

PAGE 16

other citrus diseases. The detection approach consists of three major subsystems: image acquisition, image processing and pattern recognition. The image acquisition system consists of a digital camera (with a standard lens or on a microscope), a high resolution frame grabber installed in the computer, and supplemental lighting. The image processing subsystem includes image preprocessing for background noise removal, leaf boundary detection and image feature extraction. Pattern recognition approaches were used to classify samples among several different conditions on fruit for canker and black spot, and on leaves for greening.

The canker based detection studies were conducted on grapefruit samples collected during the 2006-2007 harvest season. Fruit samples exhibiting canker and other topical peel conditions were collected and RGB images were taken of the various conditions. Color texture features were extracted and discriminant analysis was used to classify grapefruit according to peel condition, achieving overall accuracies of 96.5%.

The citrus greening based detection studies were conducted on orange leaf samples collected during the spring of 2008. Leaf samples taken from trees in an orange orchard exhibiting greening and other common leaf conditions were collected and microscopic RGB images were taken. Color texture features were extracted at various levels of magnification and preliminary results demonstrated that 5x magnification was optimal.

The black spot based detection studies were conducted on fruit samples collected during the spring of 2010. Hyperspectral images of fruit samples infected with black spot were acquired. In addition to black spot infections, fruit samples with other peel conditions such as greasy spot, wind scar, melanose, and market quality conditions

PAGE 17

were also collected. For the hyperspectral image processing and analysis, unsupervised and supervised classification approaches were used to classify the fruit samples into black spot or non-black spot classes. In order to evaluate the classification approaches, results were compared between classification methods for citrus greening. Results demonstrated classification accuracy for citrus greening as high as 97.00%.

Thus, this research mainly focused on demonstrating the feasibility of citrus disease detection from visible symptoms on the fruit or leaf. These were offline approaches, not directly applicable to real time technologies such as robotics and automated vehicles for agricultural applications. However, they are a first step in demonstrating the potential for automated detection.

PAGE 18

CHAPTER 1
INTRODUCTION

Background
Crops become stressed when any infections, such as viral infections, or physiological factors, such as air pollution, adversely affect growth, development, and yield. These stresses are expressed in various ways. For example, plant water stress can slow down photosynthesis, reduce evapotranspiration, and raise leaf surface temperature [4]. Other symptoms include morphological changes such as leaf curling, changes in leaf angle, wilting, chlorosis, discoloration of leaves and fruits, and premature drop of fruits. Because of these observable symptoms, humans can readily assess the condition of their crops.

Traditionally, disease inspection is performed by trained human inspectors. However, human inspectors are highly variable and decisions are not always consistent between inspectors or from day to day. Moreover, commercial citrus is currently produced on large acreages with large distances between blocks. This creates significant demands for labor to manage diseases and pests. Consequently, labor saving tools are necessary to effectively scout for and manage diseases and pests. In recent years, the number of large scale farms has increased, while the labor force has shrunk due to laborers moving to other occupations. These conditions have stimulated development of automated scouts which can operate effectively for long periods of time. These systems can be more cost effective and accurate than human scouts. In addition, automated disease detection can help farmers reduce cost. The continual inflow of non-native diseases also makes it impractical to rely on human inspectors alone. Every year, various crops, including citrus, come into U.S. commerce from foreign countries. Many of these countries are home to insects and diseases that do not

PAGE 19

occur in the United States. Non-native species can wreak havoc on local environmental and agricultural resources. Citrus diseases have entered Florida from international sources such as imports from Southeast Asia, Africa, South America, and Australia. Scouting for infected plants and insects by human inspectors in the farm has limited usefulness. Moreover, citrus disease control involves repetitive manual tasks that are also very subjective, depending on personal perception. In this type of environment, machine vision systems are ideally suited for routine inspection and quality assurance tasks.

Computer Vision Techniques for Citrus Disease Control
Various technologies can be employed to provide intelligent systems, such as robotics, autonomous reasoning, and machine learning, that have the potential to improve profitability in the agricultural and food industries. Computers have been applied in the areas of sorting and grading of fresh products, and detection of defects such as cracks, dark spots and bruises on fresh fruits and seeds. Most symptoms of disease and defects on citrus appear on the leaves of trees and/or the peel of citrus fruit. The eye can detect some of these symptoms, but a computer vision system can offer greater accuracy and speed than the human eye. In outdoor scenes, illumination and color can change; this can confuse detection by both the human eye and computer vision. Appropriate image processing methods can mitigate these problems and yield better detection results. Computer vision systems can also be employed on autonomous robots to provide unmanned disease scouting systems.

The computer vision and image processing techniques for disease detection used in this study follow similar approaches. Figure 1-1 shows an overview of the citrus disease detection system. Image data is acquired by various image acquisition equipment,

PAGE 20

such as a microscopic system or a hyperspectral system. These systems can be used in a laboratory and, in some cases, extended to the field. After collecting image data, proper image processing methods are applied. Original image data contains unwanted factors such as noise, overexposure or underexposure, and color variation. Image processing techniques can correct for these and help increase classification accuracy. For classifying microscopic image data, image features are extracted by feature extraction methods such as the color co-occurrence method (CCM), wavelet transform, run-length matrix, and fractal dimension. For hyperspectral and multispectral algorithms, principal component analysis (PCA), singular vector decomposition (SVD), and correlation analysis (CA) are used to select the most informative bands. Classification likewise draws on various techniques such as neural networks, Mahalanobis minimum distance, self organizing maps (SOM), and support vector machines (SVM). Each feature extraction and classification method has its strengths, and it is important that appropriate extraction and classification methods be chosen because they directly affect classification accuracy. This dissertation uses these procedures for general disease detection using computer vision technologies; a brief code sketch of the texture feature step is given below. In the following chapters, these methods and steps are discussed in more detail.

Objectives
The overall objective of the research described in this dissertation was to develop a computer vision based method for detecting various diseases using color texture features and spectral image analysis algorithms under controlled lighting.
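As a concrete point of reference for the processing pipeline outlined above, the fragment below is a minimal sketch of co-occurrence based texture extraction on a single color channel, written in MATLAB since the dissertation's appendices are MATLAB code files. It is an illustration rather than the dissertation's implementation: it assumes the Image Processing Toolbox functions graycomatrix and graycoprops, uses only the hue plane (the full CCM operates on several HSI channels with a larger set of texture statistics), and the file name roi_sample.jpg is hypothetical.

% Minimal sketch (not the dissertation's code): co-occurrence texture
% features from the hue channel of one region of interest (ROI).
% Assumes the Image Processing Toolbox; 'roi_sample.jpg' is hypothetical.

rgb = imread('roi_sample.jpg');        % ROI cropped from a leaf or peel image
hsv = rgb2hsv(rgb);                    % CCM-style features use HSI/HSV planes
hue = im2uint8(hsv(:, :, 1));          % hue plane only, as an example

% Spatial gray-level dependence matrices at 0, 45, 90 and 135 degrees,
% distance of 1 pixel, quantized to 8 gray levels.
offsets = [0 1; -1 1; -1 0; -1 -1];
glcm = graycomatrix(hue, 'NumLevels', 8, 'Offset', offsets, 'Symmetric', true);

% Texture statistics per orientation, averaged into one feature vector.
stats = graycoprops(glcm, {'Contrast', 'Correlation', 'Energy', 'Homogeneity'});
feat = [mean(stats.Contrast), mean(stats.Correlation), ...
        mean(stats.Energy), mean(stats.Homogeneity)];

The resulting feature vector, together with features extracted from labeled training ROIs, would then be reduced by stepwise selection and passed to a discriminant classifier, neural network, or SVM, as described in the later chapters.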

PAGE 21

Microscopic Imaging
The microscopic imaging objective was to investigate the potential of using color texture features in the visible region for detecting citrus canker and greening symptoms. Specific goals were to:
Use a digital color microscope system to collect RGB images from orange leaves with seven conditions (i.e., young flush, normal mature, blotchy mottle, greening islands, zinc deficiency, iron deficiency, and manganese deficiency).
Use an image acquisition system to collect RGB images of oranges with normal and five peel conditions (i.e., canker, copper burn, greasy spot, melanose, and wind scar).
Determine image texture features based on the color co-occurrence method (CCM).
Create a set of reduced feature data models through a stepwise elimination process and classify different citrus disease conditions.
Compare the classification accuracies using various pattern recognition techniques and select the optimal classifier.

Hyperspectral Imaging
The hyperspectral imaging objective was to investigate the potential of using hyperspectral imaging in the visible and near infrared region for detecting citrus greening and black spot. Specific goals were to:
Set up, calibrate and characterize a hyperspectral imaging system for acquiring spatially resolved steady state reflectance from the materials over the visible and short wave near infrared regions (500-1000 nm);
Use a hyperspectral imaging system to collect the reflectance in the spectral region (500-1000 nm) from grapefruit leaves with two citrus greening conditions, three nutrient deficiencies and two normal conditions (i.e., greening islands, blotchy mottle, iron deficiency, manganese deficiency, zinc deficiency, normal mature and young flush), and from grapefruits with normal and four diseased peel conditions (i.e., black spot, melanose, greasy spot, and wind scar);

PAGE 22

Use principal component analysis (PCA) and wavelet transform (WT) based algorithms for hyperspectral image processing and classification to identify citrus greening disease, normal and nutrient deficiency symptoms;
Use spectral information divergence (SID) and spectral angle mapper (SAM) based algorithms to differentiate citrus black spot from normal and other common diseased peel conditions (a brief sketch of both similarity measures is given below).
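The SID and SAM measures referenced in the last goal are standard spectral similarity metrics. The MATLAB fragment below sketches both under their usual definitions; it is not the dissertation's implementation, the example spectra are hypothetical stand-ins for calibrated reflectance values, and the thresholds in the final lines simply reuse the values 0.04 (SID) and 0.09 (SAM) reported later in the document for illustration.

% Minimal sketch of the two spectral similarity measures (standard
% definitions, not the dissertation's code).  x is a pixel spectrum and
% r a reference (endmember) spectrum over the same wavelengths; the
% numbers below are hypothetical stand-ins for calibrated reflectance.
x = [0.12 0.15 0.22 0.35 0.48 0.52 0.55];   % pixel spectrum (hypothetical)
r = [0.10 0.14 0.20 0.34 0.50 0.55 0.58];   % endmember spectrum (hypothetical)

% Spectral Angle Mapper: angle (radians) between the spectra treated as
% vectors; small angles indicate similar spectral shapes.
sam = acos(dot(x, r) / (norm(x) * norm(r)));

% Spectral Information Divergence: treat each spectrum as a probability
% distribution and sum the two relative entropies (symmetric divergence).
p = x / sum(x);
q = r / sum(r);
sid = sum(p .* log(p ./ q)) + sum(q .* log(q ./ p));

% A pixel is assigned to the target class when the measure falls below a
% threshold chosen from training data (values here are for illustration).
isBlackSpotSID = (sid < 0.04);   % SID-based rule
isBlackSpotSAM = (sam < 0.09);   % SAM-based rule

In a full classifier these measures are computed for every pixel against an endmember spectrum, producing the rule images and threshold-based maps described in the black spot chapters.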

PAGE 23

Figure 1-1. Overview of citrus disease detection system

PAGE 24

CHAPTER 2
LITERATURE REVIEW

The citrus market is one of the most important industries in the Florida economy. Citrus growers desire to exploit automation technology to help manage citrus groves and make the crop more economical. Automation systems offer farmers a means to minimize agricultural chemical inputs and increase overall crop yield and profits. Automated agricultural systems play an important part in managing the cost of production, since citrus production is an "input intensive" crop and the Florida agricultural industry has a volatile trend of cost per unit area (Sevier and Lee, 2003). These agricultural systems require more knowledge of in-field variability and its relationship to crop yield, as well as the influence of disease and pests on crop damage and ultimately profitability. In the past decade, various researchers have used image processing and pattern recognition techniques in agricultural applications, such as detection of weeds in a field, sorting of fruits and vegetables, and pest and disease detection. This chapter introduces previous research in agricultural machine vision, image processing, and classification methods for agricultural applications, and also provides background information on citrus diseases.

Citrus Industry Overview
Florida leads U.S. citrus production and accounts for a major part of the U.S. citrus industry. According to the citrus production report (USDA, 2005-06), Florida accounts for more than 70% of U.S. citrus production. Florida produces a number of different citrus products. Oranges, grapefruit, and lemons are major citrus crops, with lesser production in tangerines, limes, and an

PAGE 25

increasing variety of specialties. Oranges account for more than 80% of yield (Hodges et al., 2001). Oranges and grapefruit account for about 90% of U.S. production (Jacobs et al., 1994). Florida, California/Arizona, and Texas are the major states for U.S. citrus production. Florida accounted for more than three quarters of all U.S. citrus, while California/Arizona and Texas produced the remainder. Citrus is consumed not only as fresh fruit but also in products that use citrus fruits. Florida has 52 citrus processing plants (Hodges et al., 2001). They produce various orange juice items such as chilled, canned, and concentrated juice. Processing also generates a number of byproducts like food additives, cattle feed, and cosmetics. These plants also produce many byproducts like citrus pulp, molasses, and the essential oil limonene (Hodges et al., 2001).

Citrus Canker
Polek (2006) described the origin of citrus canker and characteristics important for canker identification. Citrus canker occurs in various countries around the world: Asia, the United States, Indian Ocean and Pacific islands, the Middle East, and Australia. Citrus canker occurs not only in countries with warm and tropical climates but also in dry climates. The major cause of citrus canker is pathotypes or variants of the bacterium Xanthomonas axonopodis (formerly campestris) pv. citri (Xac). Many citrus growing countries have placed regulations on this bacterium and on citrus products infected with citrus canker.

Gottwald et al. (2002) presented a history and overview of citrus canker. Florida has a long history of fighting citrus canker. Citrus canker first appeared around 1912 in Florida with outbreaks in the Florida peninsula. After about 80 years, citrus canker had spread over the state, and the Florida Department of Agriculture and Consumer Services

PAGE 26

(FDACS), the Division of Plant Industry (DPI), and the USDA Animal and Plant Health Inspection Service (APHIS) established the citrus canker detection program, a cooperative state/federal citrus canker eradication program (CCEP).

After becoming infected with citrus canker, leaves and fruit will show several symptoms. The surface of the leaf may have spots and the peel becomes blemished with lesions. When the conditions are fully developed, the fruit drop and the tree dies. Leaf lesions appear about 7 to 10 days after infection. Lesions on fruit and twigs generally appear corky, and sometimes lesions appear like a blister or eruption on leaves and fruit. Once fruit becomes symptomatic, marketability to many parts of the world becomes limited. This is most problematic for fresh market citrus, while citrus for juice is not affected. The disease primarily spreads during spring and summer rains combined with wind speeds in excess of 18 mph.

Citrus Greening
In 2005, citrus greening, or Huanglongbing (HLB), was first discovered in Florida groves. This disease is a greater threat to Florida groves than citrus canker. Florida citrus growers are desperately fighting this plant disease, which has the potential to destroy the state's $9 billion commercial citrus industry (Defending Florida's Citrus Industry from an Emerging Disease, The American Phytopathological Society, 2008). Citrus greening spread from eight to 23 counties since it was first found in Florida just a little more than three years ago. Once citrus trees are infected, the fruit yield and quality are greatly reduced. The trees also become susceptible to other diseases and health problems.

PAGE 27

HLB threatens the Florida citrus industry. The disease's effects are that it:
Destroys the production, appearance and value of citrus trees.
Produces bitter, inedible, misshapen fruit.
Is fatal to citrus trees.

While canker blemishes the fruit peel, affecting marketability, citrus greening can kill a whole grove in a matter of a few years. Crops in Asia and Africa have been severely impacted by HLB, and the disease has caused a loss in citrus production in Brazil. In some areas of Brazil, citrus greening has affected as much as 70% of the fruit yield (ScienceDaily, 2007). Citrus greening reduces the productive period of citrus trees from fifty years to fifteen years, and to date no cure for citrus greening has been found. Citrus greening is one of the most serious diseases that affect citrus (Chung and Brlansky, 2006). A number of countries in Asia have been affected by this disease in the past, while more recently Brazil (2004) and Florida (2005) were infected.

Early symptoms of HLB are yellowing leaves that may appear on only a single branch. A condition termed blotchy mottle appears on the leaf surface. Infected trees become non-productive in 2 to 3 years. After the early stages of HLB, the leaves become small and frequently exhibit nutritional deficiency symptoms. Also, the value of citrus fruit is reduced as the fruit become small, sparse, and abnormal in appearance and color. Moreover, the affected citrus often contains aborted seeds and the juice quality is low.

Nutritional deficiencies of citrus each have distinctive features on the leaf surface. Leaves with iron deficiency have dark green veins within a yellow leaf blade. Leaves with manganese deficiency have symmetrical yellowing across the mid-vein and a dark triangle at the leaf base. Leaves with zinc deficiency have fairly symmetrical yellowing across the mid-vein (Polek, 2007). Leaves with citrus greening symptoms are

PAGE 28

similar to other conditions and diseases, but a blotchy mottle pattern, with asymmetrical light green and dark green patches, is the most distinctive symptom. Green islands on the leaf have asymmetrical patterns on the opposite sides of the mid-vein.

Black Spot
Citrus black spot (CBS) is the latest exotic disease introduced in Florida. This fungal disease has the potential to become as great a threat to Florida groves as citrus canker and greening, because it is one of the best known fungal diseases worldwide. Once citrus trees are infected with citrus black spot, the fruit yield and quality are greatly reduced. In addition, fruit infected with citrus black spot are not acceptable for the fresh market (Chung, 2009). Therefore, it is important to control this disease to achieve profitable production.

Citrus black spot is a fungal disease caused by Guignardia citricarpa (sexual stage) and Phyllosticta citricarpa (asexual stage). Citrus black spot can be identified by fruit symptoms; the cosmetic lesions it causes on the fruit peel are the most conspicuous sign of infection (Dewdney, 2010). Fruit symptoms can be quite variable. Black spot lesions begin as small orange or red spots with black margins and enlarge to become necrotic lesions. Other symptoms of black spot on fruit include hard spot lesions, virulent spot, cracked spot, and false melanose (The Institute of Food and Agricultural Sciences Extension at the University of Florida, 2010). Detecting fruit infected with black spot can help in controlling the spread of this disease, specifically in areas that are black spot free. The design and implementation of technologies that can efficiently detect black spot symptoms will greatly aid in the control effort.

PAGE 29

Recommendations on Management of Citrus Diseases
Citrus disease management has been addressed by several researchers. Morris and Muraro (2008) reported on controlling the spread of greening in Brazil. Recent experience in Brazil has shown that when greening is detected early enough and control practices are followed diligently, groves can remain productive with disease incidence at low levels. Early detection is the key for controlling the spread of greening, followed by effective control practices. In Brazil, after three years of intensive spraying for psyllids, scouting for the disease, and aggressive removal of infected trees, the best managed groves have below 1% infection. Other diseases are causing more tree losses than greening in these groves. Moreover, control of inoculum by removing infected trees is the most critical component of a successful greening control program. Thus, the sooner infected trees are removed, the better the chances of reducing future infections. An experienced scouting crew can identify trees accurately, enabling the removal of infected trees without the turnaround time it takes to send infected leaves to a lab for confirmation. Thus, the number of trees with suspect samples sent to a lab for verification from these programs is a small percentage of the trees removed. In Brazil, the objective is to remove infected trees the day they are identified.

Yates et al. (2008) suggested early scouting for citrus greening management. Scouting for citrus greening has become a necessary practice for citrus production since the first positive confirmation of greening in Florida in August 2005. They emphasized the need for frequent scouting in citrus greening management. Citrus groves should be inspected for greening every two to three months. If greening has previously been found in a grove or has been confirmed nearby, scouting more than four times a year is strongly recommended. Greening symptoms are most visible during

PAGE 30

the fall and winter months, but can be observed year round. During the spring flush, scouting becomes more difficult because new leaves typically do not express greening symptoms, and the older, symptomatic leaves become hidden by the new flush. They introduced various scouting methods such as all-terrain vehicles (ATVs), elevated platforms and walking through groves. Multiple types of elevated platforms are available, including tractor or truck mounted platforms. They also suggested other management practices such as removal of infected trees, integrated pest management, and use of disease free nursery trees.

Brlansky et al. (2010) gave recommendations on scouting for greening infected trees in Florida. Scouting for greening should be done routinely so that infected trees can be removed. It is recommended that scouting be conducted four or more times per year. The frequency of scouting may be higher in areas previously determined to have HLB positive trees. Symptoms are the easiest to find from October to March; however, symptoms may be present at other times of the year. The current methods used to scout are walking, all-terrain vehicles and vehicle mounted platforms. Other management practices include removal of infected trees. For removal of infected trees, symptomatic tree numbers and the rows in which they are found should be marked with colored flagging tape, and GPS coordinates taken or the sites marked on a map to facilitate relocation. In some cases, an HLB PCR diagnostic test may be necessary to confirm the disease. Removal of infected trees is the only way to ensure that they will not serve as a source of the bacteria for psyllid acquisition and subsequent transmission. Prior to removal, the infected tree should be treated with a foliar insecticide (such as Danitol, fenpropathrin) to kill all adult psyllids feeding on that tree.

PAGE 31

Failure to control these psyllids will result in the infected psyllids dispersing to new plants once the diseased tree is removed.

Fungicides are required to control citrus black spot in countries where it is endemic (Chung et al., 2009). Protective treatments using copper or strobilurin fungicides or mancozeb must be properly timed, and up to 5 sprays may be required during the period of susceptibility. Removal of dead leaves in groves is an effective practice and reduces inoculum potential. Long distance spread of citrus black spot occurs via infected nursery stock, and steps to avoid movement of infected trees help limit spread of the disease to new areas. Little effort has been made toward developing varieties with tolerance or resistance to citrus black spot.

The Electromagnetic Spectrum
Sun (2008) summarized the electromagnetic spectrum, which is useful in image formation. Images are derived from electromagnetic radiation in both visible and non-visible ranges. Radiation energy travels in space at the speed of light in the form of sinusoidal waves with known wavelengths. Figure 2-1 shows the electromagnetic spectrum of all electromagnetic waves. The electromagnetic spectrum, arranged from shorter to longer wavelengths, provides information on the frequency as well as the energy distribution of the electromagnetic radiation. Gamma rays, with wavelengths of less than 0.1 nm, constitute the shortest wavelengths of the electromagnetic spectrum. Traditionally, gamma radiation is important for medical and astronomical imaging, leading to the development of various types of anatomical imaging modalities such as CT, MRI, SPECT and PET.


Ultraviolet (UV) light is of shorter wavelength than visible light. Like IR, the UV part of the spectrum can be divided into three regions: near ultraviolet (NUV) (300 nm), far ultraviolet (FUV) (30 nm), and extreme ultraviolet (EUV) (3 nm). NUV is closest to the visible band, while EUV is closest to the X-ray region and is therefore the most energetic of the three types. FUV lies between the near and extreme ultraviolet regions and is the least explored of the three. Many types of CCD camera provide sensitivity in the near-UV wavelength range; the sensitivity of such a camera usually peaks at around 369 nm while offering coverage down to 300 nm. Located in the middle of the electromagnetic spectrum is the visible range, a narrow portion of the spectrum with wavelengths ranging from 400 nm (blue) to 700 nm (red). The popular charge coupled device (CCD) camera system works in the visible range. Infrared (IR) light lies between the visible and microwave portions of the electromagnetic band. As with visible light, infrared has wavelengths that range from near (shorter) infrared to far (longer) infrared. The latter belongs to the thermally sensitive region, which makes it useful for thermal imaging. A widely used device in the near-infrared (NIR) region is the indium gallium arsenide (InGaAs) based NIR camera, which gives the optimum response in the 900-1700 nm band. At the other end of the spectrum, the longest waves are radio waves, which have wavelengths of many kilometers. The well-known ground probing radar (GPR) and other wave-based imaging modalities operate in this frequency range.

Machine Vision Based Crop and Fruit Status Monitoring

Schertz and Brown (1968) proposed fruit identification using machine vision systems nearly forty years ago. Using light reflectivity differences between leaves and


fruits in the visible or infrared portion of the electromagnetic spectrum, information about the location of fruits might be determined. Powell et al. (2005) developed machine vision guidance for an autonomous vehicle, which was tested in a controlled environment on various path types such as straight, curved, and sine-wave lines. The results for straight and sweeping curved paths were satisfactory, and nonlinear tests such as sine-wave paths also produced good results, although the sine-wave path was tracked less accurately. Overall, the system performed satisfactorily and could identify the test situations. Ng et al. (1989) measured corn (Zea mays L.) kernel damage using machine vision algorithms. Damage was divided into two classes, mechanical damage and mold damage, and each was analyzed with a different method. For mechanical damage, both single-kernel and batch analysis were used to evaluate the damaged area highlighted with green dye; for mold damage, only single-kernel analysis was used. The machine vision system had high classification accuracy for both damage types (mechanical damage: 99.5%, mold damage: 98.7%). This research also demonstrated that the batch analysis method was faster than single-kernel analysis. Takashi et al. (2003) developed a machine vision system for monitoring crop status, which is a vital part of farm management and crop cultivation. The system used several specific functions, such as a growth curve (the Gompertz curve) and an exponential function, to estimate vegetation cover area and actual plant status. The results were not acceptable for every plant status, but the research demonstrated that image processing could be very useful for building machine vision systems for crop status detection.
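As context for the growth-curve approach mentioned above, the following is a minimal MATLAB sketch of fitting a Gompertz curve of the form y = a*exp(-b*exp(-c*t)) to vegetation cover measurements. The data values, variable names, and parameterization are illustrative assumptions, not those of Takashi et al. (2003).

% Gompertz growth curve: y = a*exp(-b*exp(-c*t))
gompertz = @(p, t) p(1) .* exp(-p(2) .* exp(-p(3) .* t));

t = (0:10:100)';                       % days after planting (hypothetical)
y = [2 5 11 22 38 55 68 78 84 88 90]'; % vegetation cover rate, % (hypothetical)

p0 = [90 4 0.05];                      % initial guesses for [a b c]
p  = lsqcurvefit(gompertz, p0, t, y);  % requires the Optimization Toolbox

plot(t, y, 'o', t, gompertz(p, t), '-');
xlabel('Days'); ylabel('Vegetation cover (%)');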


Jimenez et al. (2000) surveyed several computer vision approaches for locating fruit in trees. Their work reported that most detection vision systems used three sensor types (spectral, intensity, and distance) and two analysis methods (local and shape). Vision systems can use different combinations of sensing and analysis approaches, and each combination has its own advantages. The method using local analysis based on spectral images obtained the best detection rates (100%), but required known background color conditions. Another method, using shape analysis based on intensity or spectral images, was found to be good for detecting fruits regardless of color, but produced frequent false detections. A third method, using range images and shape-based approaches, gave good results (above 80%) and did not generate false detections. Regunathan et al. (2005) identified fruit count and size using machine vision and an ultrasonic sensor. The ultrasonic sensor gave distance information, while color images were obtained for counting fruit. An image processing method converted red, green, blue (RGB) data to hue, saturation, intensity (HSI) format, and the detection system combined the two sensors to estimate the actual size of the fruit. Three different classification methods were used: Bayesian, neural network, and Fisher linear discriminant. All gave good results for fruit size and count; in terms of root mean square error, the neural network method was better than the others for fruit size, with an error of only 2.6% for fruit count and size estimates accurate to about 0.4 cm (97.4%). Ashish et al. (2007) detected citrus greening using a spectral method. To distinguish greening-infected from healthy trees, discriminability, spectral derivative analysis, and spectral ratio analysis were used. The discriminability of


wavelengths had a wide range. In the visible wavelengths of 695 to 705 nm, the discriminability had its best values of 0.83 to 0.86. In the spectral derivative analysis, the wavelengths of 480 nm, 590 nm, 754 nm, 1041 nm, and 2071 nm were suitable for differentiating greening. The spectral ratio analysis was better for understanding the spectral properties of greening infection in the citrus canopy; the reflectance at 530-564 nm was higher in the spectral ratio.

Image Processing Approaches

Dave et al. (1995) compared three image processing methods for identifying plant species: Fourier transform, fractal dimension, and co-occurrence matrix methods. The fast Fourier transform method converted spatial-domain image data to the frequency domain, and the power spectrum inside an annular band of radius R1 to R2 was calculated from the frequency-domain data. The fractal dimension method used fractal geometry information from the input image; after converting the RGB image to a gray image, geometrical properties such as shape, line and surface area were calculated. The co-occurrence matrix method created spatial dependence matrices calculated from the gray-scale image, and all texture features were obtained from these matrices. The co-occurrence matrix method gave the best accuracy, and its execution time was also lower than the others. Du et al. (2006) described five different texture feature methods to investigate the tenderness of cooked pork ham: first-order gray level statistics (FGLS), run length matrix (RLM), gray level co-occurrence matrix (GLCM), fractal dimension (FD), and wavelet transform (WT). After the texture features were extracted, statistical analysis was performed using the CORR procedure in the SAS package, and a partial least squares regression (PLSR) technique was applied for further analysis.


36 The WT method had multi scale representation. Th ese characteristics presented the best results about the tenderness of cooked pork ham than the traditional methods (FGLS, GLCM, and RLM). Burks et al. (2000) used the color co occurrence method and neural network for classifying weed species. Thirty three texture features were extracted using the co occurrence matrix method ( CCM ) in input image data. For analyzing texture features, SAS disciminant procedure STEPDISC was used. The back propagation neural network met hod gave good accuracy rates (96.7%) as well as high individual species classification accuracies (90.0%). Pydipati et al (2006) identified citrus disease using the co occurrence matrix method, ( CCM ) texture feature method and a discriminant analysis. Be fore extracting image features, the input image was converted from RGB to HSI. HSI data consisted of three texture sets (hue, saturation, and intensity). This image processing method based on the co occurrence matrix method ( CCM ) method gave high accuracy rates (above 95.0 % ) in an identification of citrus disease. Zhang et al. (2005) proposed a new method based on wavelet package transform for extraction of image texture features. The wavelet package transform was compared with color co occurrence mat rix. Original images, stone, door, brick, bark, noise, ripple and leaf images, were preprocessed. Once converting the same size gray scale images, image data sets were compared. From the classification results, an approach based on wavelet package transfor m had better accuracy of classification than co occurrence matrix. In this paper, the wavelet transform methods showed better execution efficiency than traditional method based on CCM method.
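To illustrate the kind of wavelet-based texture features compared with the co-occurrence matrix above, the following minimal MATLAB sketch computes sub-band energy features from an ordinary 2-D discrete wavelet transform rather than the full wavelet packet transform used by Zhang et al. (2005). The input file name and decomposition depth are illustrative assumptions, and the wavelet functions require the Wavelet Toolbox.

% Minimal sketch of wavelet sub-band energy texture features (ordinary 2-D DWT).
img = im2double(rgb2gray(imread('texture_sample.jpg')));  % hypothetical image

[C, S] = wavedec2(img, 3, 'haar');          % 3-level 2-D Haar decomposition

energy = [];
for lev = 1:3
    [H, V, D] = detcoef2('all', C, S, lev); % horizontal, vertical, diagonal sub-bands
    energy = [energy, mean(abs(H(:))), mean(abs(V(:))), mean(abs(D(:)))];
end
A = appcoef2(C, S, 'haar', 3);              % coarsest approximation sub-band
energy = [energy, mean(abs(A(:)))];         % feature vector of sub-band energies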


Pydipati (2004) demonstrated that color texture features can detect citrus disease on leaves. Algorithms based on image processing techniques were developed for feature extraction and classification. Manually selected datasets, in the form of digitized RGB color images, were used for feature extraction and for training the SAS statistical classifier. The whole procedure was replicated for three alternative classification approaches, including a statistical classifier using the Mahalanobis minimum distance method and a neural network based classifier using radial basis functions. The results obtained from the three approaches were compared, and the best approach for the problem at hand was found to be the Mahalanobis-based classifier.

Spectral Images based Methods and Neural Network Classifiers

In recent years, hyperspectral imaging has been used in food processing and inspection applications. Hyperspectral imaging gives useful information about the full spectral response of the target image across a broad spectral band. Support vector machines (SVMs) are a set of supervised generalized linear classifiers that have often been found to provide higher classification accuracies than other widely used pattern classification techniques, such as multilayer perceptron neural networks (Burges, 1998). The growing interest in SVMs has been driven by their successful implementation in food and grain analysis, such as classification of starch and wheat. Jiang et al. (2007) used hyperspectral fluorescence imaging to analyze the differences between walnut shells and meat. The hyperspectral fluorescence imaging system scanned samples at 79 different wavelengths ranging from 425 nm to 775 nm with 4.5 nm increments. Data redundancy was reduced by principal component analysis (PCA). The investigation used two statistical pattern recognition methods: a Gaussian Mixture Model (GMM) based Bayesian classifier and a Gaussian kernel based


Support Vector Machine (SVM). It was found that the SVM method with the Gaussian kernel performed more effectively than PCA and FDA in the classification of walnut meat and shells from hyperspectral imagery. The overall recognition rate of the Gaussian SVM was above 90%, about 3% higher than that of PCA and 5% higher than that of FDA. Zhang et al. (2007) suggested a novel classification approach for distinguishing healthy and fungal-infected wheat kernels during storage, demonstrating the potential of NIR hyperspectral imaging for grain quality assessment. The research used NIR hyperspectral imaging and a support vector machine (SVM) to identify the fungi that caused the infection. In this study, 2160 kernels were randomly selected, with 540 kernels from each group, including A. niger, A. glaucus, and Penicillium spp. With the NIR hyperspectral imaging and SVM, the overall classification accuracy was 94.8%, with 531 kernels correctly classified and 29 misclassified. The method could separate healthy wheat kernels from infected ones with 100% accuracy, and classified Penicillium spp. infected kernels with an accuracy of 99.3%. Mehl et al. (2002) developed a simple multispectral detection system utilizing only three channels in the visible spectral range. They used hyperspectral imaging to design the multispectral imaging system for rapid detection of apple contamination. The apple cultivars selected were Red Delicious, Golden Delicious, and Gala; despite their color differences, it was possible to use the same configuration. In this study, through hyperspectral imaging, a rapid multispectral imaging analysis for food safety and food quality was designed.
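As an illustration of the Gaussian-kernel SVM classifiers discussed in the preceding paragraphs, the following minimal MATLAB sketch trains and cross-validates such a classifier on placeholder spectral features. The data and labels are hypothetical, not those of the cited studies; fitcsvm is part of the Statistics and Machine Learning Toolbox.

% Minimal sketch of a Gaussian (RBF) kernel SVM on spectral feature vectors.
X = rand(200, 10);                 % 200 samples x 10 spectral features (placeholder)
y = [ones(100,1); zeros(100,1)];   % class labels, e.g., healthy vs. infected

svmModel = fitcsvm(X, y, 'KernelFunction', 'rbf', 'KernelScale', 'auto', ...
                   'Standardize', true);

cvModel  = crossval(svmModel, 'KFold', 5);   % 5-fold cross-validation
accuracy = 1 - kfoldLoss(cvModel);           % estimated classification accuracy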


Lee et al. (2005) used the hyperspectral imaging technique to detect defects on apple peel after harvest and developed a wavelength selection method for detecting the defects. They used principal component analysis (PCA) to detect fecal contamination on poultry or apples from hyperspectral images, and applied a correlation analysis method based on wavelength differences and band ratios. For the ratio, the correlation coefficient was 0.91 for the ratio of 683.8 nm to 670.3 nm. For the wavelength difference, a high correlation was found at 827.9 nm and 737.8 nm, with a correlation coefficient of 0.79; the band ratio formed by combining the two spectral images gave 0.8. The same held for the image analysis using the wavelengths selected from the correlation analysis method. They demonstrated that the correlation analysis method was feasible for selecting the wavelengths to detect defects on apples. Cheng et al. (2003) presented a new approach combining principal component analysis (PCA) and the Fisher linear discriminant (FLD) method, which maximized the representation and classification effects of the new feature bands extracted from hyperspectral imaging. In this research, the new projected features generated by the PCA method gave good results for pattern representation and performed well for clearly separated patterns. For similar patterns, however, the FLD method obtained better classification results, although FLD was more sensitive to noise and less stable than PCA. Therefore, they proposed an integrated PCA-FLD method to overcome the drawbacks of the two individual methods, providing more flexibility in dealing with different sample patterns by adjusting the K value properly.
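The following is a minimal MATLAB sketch of the PCA projection used in the studies above to reduce hyperspectral band data. The data matrix, its dimensions, and the number of retained components are illustrative assumptions.

% Minimal sketch of PCA for reducing hyperspectral band data (n samples x d bands).
X  = rand(500, 92);                 % placeholder: 500 spectra x 92 bands
mu = mean(X, 1);                    % sample mean of each band
Xc = X - repmat(mu, size(X,1), 1);  % mean-centered data

C        = cov(Xc);                 % d x d covariance matrix
[V, D]   = eig(C);                  % eigenvectors and eigenvalues
[~, idx] = sort(diag(D), 'descend');% order by decreasing eigenvalue
k        = 3;                       % number of principal components kept
A        = V(:, idx(1:k));          % projection matrix (d x k)

scores = Xc * A;                    % n x k principal component scores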


Kim (2002) researched a method for using hyperspectral data to identify wavebands to be used in multispectral detection systems, and evaluated spatial and spectral responses of hyperspectral reflectance images of fecal-contaminated apples. To detect fecal-contaminated apples, he presented a systematic method using the hyperspectral reflectance imaging technique in conjunction with PCA to define several optimal wavelength bands. Fukagawa et al. (2003) developed a crop status monitoring system for precision farming. A multispectral imaging sensor, which can acquire three wavelength images (R, G and NIR) simultaneously, was used as the imaging sensor for this system. Leaf height and the number of stems were estimated from the Vegetation Cover Rate (VCR). The correlation coefficients between leaf height and VCR for the transplanting and seeding varieties were 0.64 and 0.69, respectively; consequently, the number of stems could be estimated by VCR. The SPAD value was estimated from LCI, defined as a corrected R gray level, and the correlation coefficients for this estimate in the transplanting and seeding varieties were 0.79 and 0.60, respectively. Additionally, an LCI map was generated using position and posture data of the helicopter.

Near Infrared Monitoring Methods

Shimada et al. (2008) developed a personal remote sensing system for an extremely small area such as a Japanese paddy field (100 m x 100 m), which consisted of the following components: a radio-controlled helicopter, two digital still cameras, a network camera board, wireless LAN, a notebook computer for the ground station, and image processing software. The goal of this project was to develop a low-cost remote sensing system. Red, green, blue (RGB) and near-infrared (NIR) data were acquired with a 4-band camera system that can calculate the normalized difference vegetation index (NDVI) and


so on. The R-band sensitivity characteristics of the NIR camera were accounted for, which made more accurate exposure value compensation possible; the precision of the NDVI improved over former systems as a result. Yang et al. (2001) developed an infrared imaging and wavelet-based segmentation method for apple defect detection. They proposed that the reflectance spectrum of the apple surface in the near-infrared (NIR) region provided effective information for a machine vision inspection system. Differences in light reflectance of the apple surface caused the corresponding pixels of bruised areas and good areas to appear different in an NIR apple image, and segmenting the defective areas from the non-defective areas was a critical step for apple defect detection. In their work they used a 2-D multi-resolution wavelet decomposition to generate 'wavelet transform vectors' for each pixel in the NIR apple images. These vectors were combined and weighted by dynamic modification factors to produce the pixel vectors, and pixels with similar properties were labeled as one class to separate the defective areas from the good areas of apples in the NIR image. They reported 100% accuracy in detecting good apples and 94.6% accuracy in detecting defective apples. Kawamura et al. (2003) constructed an on-line near-infrared (NIR) spectroscopic sensing system on an experimental basis. The system enables NIR spectra of unhomogenized milk to be obtained during milking over a wavelength range of 600 nm to 1050 nm. In this study, the NIR sensing system could be used to assess milk quality in real time during milking. The system can provide dairy farmers with information on milk quality and the physiological condition of individual cows and therefore give them feedback control for optimizing dairy farm management.
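The NDVI mentioned in connection with the Shimada et al. (2008) system is computed from co-registered red and near-infrared bands. A minimal MATLAB sketch, using placeholder band images, is given below.

% Minimal sketch of NDVI computation from red and NIR band images (placeholders).
red = rand(100, 100) * 0.3;                 % hypothetical red-band reflectance
nir = rand(100, 100) * 0.6;                 % hypothetical NIR-band reflectance

ndvi = (nir - red) ./ (nir + red + eps);    % eps avoids division by zero

imagesc(ndvi, [-1 1]); colorbar;            % display the NDVI map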


42 Wavelet Transform Ionescu and Llobet (2002) used the classification method based on the discrete wavelet transform (DWT) to discriminate between gases and vapours. They compared the discrete wavelet transform (DWT) with the fast Fourier transform (FFT) to extract the texture features from the sensor working temperature. In research, they showed that the discrete wavelet transform (DWT) outperformed the fast Fourier transform (FFT). DWT gave both frequency and information about temperature in the sensor, but FFT provided only frequency information for the complete duration of the signal. Therefore, the DWT analysis helped to extract more important features from the sensor response than the FFT analysis. Through this compar ative study, the DWT demonstrated a good linear separation in feature space between mixed vapours. Moreover, algorithms based on DWT can be developed easily and promise for low cost to operate gas monitors. Arivazhagan and Ganesan (2003) introduced the t exture analysis using the wavelet transform. The texture features were extracted using the co occurrence method from decomposed images and originals. This research shows texture segmentation algorithms; the images were read and decomposed sub image using t he discrete wavelet transform (DWT). The co occurrence matrices were implemented for original images and sub image blocks. The wavelet co occurrence features (WCF) such as, contrast, cluster shade, and cluster prominence were calculated from the matrices f or texture segmentation and the difference between the sums of WCFs of sub images. After this analysis, post processing (disk filtering and thresholding techniques) was performed in the segmentation band to remove the noise, and the skeletonizing algorithm was applied to get segmented line of one pixel thickness. The thinned lines are exactly lined up with texture boundaries.
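A minimal MATLAB sketch of the DWT-versus-FFT feature extraction contrast described above, applied to a placeholder 1-D sensor signal, is given below. The signal, decomposition depth, and feature choices are illustrative assumptions; wavedec and related functions require the Wavelet Toolbox.

% Minimal sketch of DWT and FFT feature extraction from a 1-D signal.
x = randn(1, 256);                          % placeholder sensor response signal

[c, l] = wavedec(x, 4, 'db1');              % 4-level discrete wavelet transform
dwtFeat = zeros(1, 4);
for k = 1:4
    d          = detcoef(c, l, k);          % detail coefficients at level k
    dwtFeat(k) = sum(d.^2) / numel(d);      % energy of each detail sub-band
end
a4      = appcoef(c, l, 'db1', 4);          % level-4 approximation coefficients
dwtFeat = [dwtFeat, sum(a4.^2) / numel(a4)];

X       = abs(fft(x));                      % FFT gives only global frequency content
fftFeat = X(1:16);                          % e.g., first 16 magnitude bins as features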


43 Ruiz, Fdez Aarria and Recio (2004) studied several texture methods for the classification of remote sensing images with different typ es of four areas (three forests and one urban). In this research, four approaches were applied for texture analysis and feature extraction: (1) Grey level co ocurrence matrix, (2) Energy filters and edgeness, (3) Gabor filters, (4) Wavelet transform. For c omparison of these methods, several combinations of texture variables and different methods were tested, and the method used in the classification step was the maximum likelihood classifier. The table of overall accuracy in this paper showed that the combi nation of four texture analysis method produced a significant increase in the overall accuracy levels, and the wavelet transform and the grey level co occurrence matrix also increased the overall accuracy. Control of Citrus Diseases using Image Processing Camargo and Smith (2009) developed a visible spectrum imaging system and color image processing to identify banana leaves that were infected with Black Sigatoka. Color images of infected banana leaves were used to develop the image processing which include s color transformation, histogram multi thresholding, and segmentation. They demonstrated that the algorithm was able to identify the diseased regions in most of the images tested. A machine vision system in the visible region was used by Blasco et al. (20 03) to detect external blemishes in apples and peaches. They achieved 93% accuracy in blemish detection. Visible imaging systems are sufficient if the symptoms occur in the visible region, but most of the diseases and conditions could be identified more ef fectively by exploiting regions beyond the visible. Zarco Tejada et al. (2005) used a compact airborne spectrographic imager and hyperspectral sensors to monitor the temporal and spatial chlorosis condition of Vitis vinifera L Their results showed that th ey could estimate chlorophyll content in the narrow band hyperspectral


indices calculated in the 700-750 nm region. A ground-based hyperspectral imaging system for characterizing vegetation spectral features was developed by Ye et al. (2008). They used a hyperspectral line sensor with a wavelength range of 360-1010 nm to demonstrate the potential of the system to monitor vegetation variability in crop systems. Several researchers (Sepulcre-Canto, 2007; Muhammed, 2005; Gowen et al., 2007) have reported changes in spectral signatures due to nutrient deficiencies and damage by pests and environmental factors. In addition, spectral signatures are influenced by the amount of pigments, leaf angle, leaf surface texture, diseases and stress, plant growth stage, and measurement conditions. While researchers have demonstrated the potential of imaging technologies for disease detection, little work can be found on the automatic detection of citrus greening. Mishra et al. (2007) investigated the spectral characteristics of HLB-infected and normal leaves using a spectroradiometer (350-2500 nm). The spectral bands from green to red wavelengths and the near-infrared band were found to have the potential to discriminate HLB-infected leaves from normal leaves using discriminability, spectral derivative analysis, and spectral ratio analysis; these wavelengths include 530-564 nm, 710-715 nm, 1041 nm, and 2014 nm. In addition, they developed a four-band active optical sensor to identify young leaf flushes to control spot spraying of citrus trees, since the Asian citrus psyllid, the vector of HLB disease, is known to feed on young leaves. Although they characterized the spectral reflectance properties of HLB-infected leaves and discriminated them from normal leaves, other conditions such as nutrient deficiency could look similar to HLB infection.
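A minimal MATLAB sketch of band-ratio and spectral-derivative features of the kind used in these studies is given below. The wavelength grid, reflectance values, and band limits are illustrative assumptions, not the measured data of Mishra et al. (2007).

% Minimal sketch of band-ratio and first-derivative features from a spectrum.
wl = 350:5:2500;                       % wavelength grid, nm (hypothetical)
R  = rand(size(wl));                   % placeholder reflectance spectrum

band = @(lo, hi) mean(R(wl >= lo & wl <= hi));        % mean reflectance in a band

ratio_green_nir = band(530, 564) / band(800, 900);    % example band ratio
dR   = diff(R) ./ diff(wl);                           % first spectral derivative
d700 = dR(find(wl >= 700, 1));                        % derivative near 700 nm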


Figure 2-1. The electromagnetic spectrum comprising the visible and non-visible ranges.


CHAPTER 3
MATERIALS AND METHODS

This chapter describes the citrus leaf and fruit samples that were collected for this study. It also discusses the features of the two optical hardware systems used to collect data. Finally, it describes the data analysis methods used to detect citrus disease symptoms.

Citrus Canker Samples

Ruby Red grapefruit was selected for this study because it is one of the more popular grapefruit varieties in Florida and it is susceptible to canker and other common peel diseases. Fruit samples were handpicked from a grapefruit grove near Punta Gorda, Florida, during the spring of 2007, since the harvest season is October to June in Florida and Texas. Dr. Tim Schubert and Dr. Gordon Bunn advised on the selection of grapefruit samples. Grapefruit of normal market quality and fruit with five abnormal peel conditions were collected, because these are very typical peel conditions during the spring. The peel conditions considered were: normal, canker, copper burn, greasy spot, melanose, and wind scar. Representative images for each peel condition are shown in Figure 3-1. Thirty samples of each condition were selected, giving a total of 180 grapefruits tested in this study. All grapefruits were washed and treated with chlorine and sodium o-phenylphenate (SOPP) at the Indian River Research and Education Center of the University of Florida in Fort Pierce, Florida. The samples were then stored in an environmental control chamber at 4°C and were removed from cold storage about 2 hours before imaging to allow them to equilibrate to room temperature.


47 Citrus Lea f Samples Citrus leaves from Valencia orange trees can serve as indicators of common diseases and nutrient deficiencies. Various colors of leaves present the symptoms and reflect the tree health. The leaf samples used in this study were collected from tw o Valencia orange groves near Immokolee in Southwestern Florida Dr. Mongi Zekri is the southwest Florida Multi County Citrus Agent. He provided technical advices for distinction of citrus greening symptoms and season of sample collection. The samples we re collected in late spring summer and fall of 2008 The degree of symptoms varied between leaf samples. The time of season also affects the appearance of the disease symptoms Hence, in this study, samples were collected at optimal times for symptom dete ction. In this study, seven different classes of citrus leaves that included symptoms of greening blotchy mottle, greening island, iron deficiency, magnesium deficiency, zinc deficiency, healthy young leaves, and healthy old leaves were selected T he early symptoms of blotchy mottling may be confused by other chlorosis symptoms common to citrus, including nutritional deficiencies, and those which result from some insects and diseases These leaf samples ha d 3 diff erent symptom categories: N utritional defici ency: Iron Deficiency, Manganese Deficiency, and Zinc Deficiency Citrus Greening symptom s : Blotchy Mottle and Green Islands Normal conditions: Normal Young and Normal Mature Images of Leaf samples are shown in Fig ure s 3 2, 3 3, and 3 4 Figure 3 5 shows various greening symptoms on Leaves used in this study. Leaf samples were clipped with petioles intact and then sealed in p l astic bags to maintain the moisture level of the leaves. Sixty samples of each of the seven classes of le aves were collected. The samples were brought to the laboratory then sealed in new


48 bags with appropriate labels and put in environmental control chambers maintained at 4 C The leaf samples were then taken to an imaging station and images of the upper l eaf surface were acquired. Black Spot Samples Valencia orange s were handpicked from citrus groves near Immokalee in Southwestern Florida in the April of 2010 because s ymptoms of black spot were first detected in Valencia sweet oranges in the Immokalee area on March 8, 2010 The fruit samples included marketable fruit and those with symptoms of black spot or greasy spot, melanose, or wind scar since these symptoms were progressive during the spring. Dr. Mark Ritenour has contributed to suggestions about sample collection and technical advices Representative images for each peel condition are shown in Figure 3 6 All the samples were washed with soap to remove dirt from the grove before imaging the fruit samples. To maximize the collection of hyperspect ral images of the limited number of black spot samples, three faces of each fruit (with 120 0 interval) were collected as shown in Figure 3 7 Table 3 1 demonstrated the sample number for this study. Thus, a total of 525 samples were selected in this study. Image Acquisition Systems Charge Coupled Device Camera System In general, a charge coupled device (CCD) camera is equipment which is designed to convert optical brightness into electrical amplitude signals using a plurality of CCDs, and then reproduce the image of a subject using the electric signals without time restriction. CCD vision system is suitable for surface imaging.


Figure 3-8 shows a typical set-up. The CCD camera system used to collect images consists of a computer installed with image capture software and a 24-bit color frame grabber board with 480x640 pixel resolution. The system consisted of the following major components:

Lighting system: Two 13 W cool white fluorescent bulbs with reflectors were used.
Color CCD Camera: The camera consisted of a 3-CCD RGB color camera (CV-M90, JAI, San Jose, CA, USA) and a zoom lens (Zoom 7000, Navitar, Rochester, NY, USA).
Computer System: Coreco PC-RGB 24-bit color frame grabber with 480x640 pixels installed in the computer.

The lighting system was designed to minimize specular reflectance and shadow and to maximize the contrast of the images. The height of the camera and its focus were adjusted to contain the image of the whole fruit, with an approximately 100 mm x 100 mm field of view. Automatic white balance calibration was conducted using a calibrated white balance card before acquiring images from the fruit samples. The digital color images were saved in uncompressed BMP format. Compared to a typical computer vision system, a digital microscope system has several advantages for collecting sample images. The magnification can be adjusted easily using a zoom lens and can be changed without losing sight of the target being observed, and the optimal magnification providing the clearest image can be set easily. The integrated illumination system requires no setup time, and the field of view and focus of the camera can be established simply. Captured images are saved on a 160 GB hard disk drive, which can store up to 575,000 compressed images.


50 In this research, a d igital m icroscope system (VHX 600K, Keyence, JAPAN) was used for acquiring RGB images from citrus leaf samples, and it is shown in Figure 3 9 These descriptions were extracted from a user`s manual book, re leased by the Keyence Corporation, Osaka, Japan. This system is made up of a CCD camera and a controller. The camera unit consist of a high pixel color CCD and light. The controller has various functions such as display, record, measurement, and etc. The s tand device offers the user quick observation, analysis and data processing. The imaging system consisted of several parts: H igh intensity halogen lamp (12V, 100W) Zoom lens (C mount lens, OP 51479) 2.11 million pixel CCD image sensor (1/1.8 inch) 15 in ch Color LCD monitor (TFT, 1600x1200, UXGA) Console installed with a hard disk drive (Image format: JPEG and TIFF, Storage capacity: 700MB) and CD RW drive units and Control Panel Color CCD Camera. The camera uses a pixel shift technology. This technol ogy allows the maximum resolution to reach 18 million pixels. There are four options for the number of pixels (18 million/8 million/4 million/2.11 million) based on the type of observation to be selected. Lowest resolution (2.11 million, 1/1.8 inch) was u sed for capturing a target image in this research. Automatic white balance calibration was conducted using a calibrated white balance function before acquiring images from leaf samples. Hard Disk. The image data can be stored on the built in hard disk in the controller. It can store up to 575,000 pictures. The leaf sample images were saved in uncompressed JPEG format (1200x1600, 8bit). Built in Light. The build in light system irradiated rays of light directly from the lens. The lighting system was design ed to maintain optimum illumination intensity and minimize specular reflectance and shadow. The light system was consisted of a 12V, 100W, Halogen lamp. Zoom Lens. Digital microscopes came with various types of zoom lenses allowing continuous adjustment of magnification. The lens power and focus were adjusted easily to maintain the image of the whole leaf, with center on the vein. LCD Monitor. A 15 inch, built in 1600x1200 pixel high resolution liquid crystal monitor displayed the magnified image.


Operation Console. This console can be used to quickly and easily perform the major observation tasks, such as adjusting the brightness, colors, and shutter.

Hyperspectral Image System

This section introduces a technique using hyperspectral imaging in line-scan mode for the rapid acquisition of spatially resolved scattering profiles at wavelengths of 450-1000 nm. The instrument set-up and calibration, data analysis, and system validation are presented. The hyperspectral imaging technique is faster and simpler than a solid-state CCD array camera system, and is capable of measuring the optical properties of turbid or opaque fruit samples over a broad spectral range simultaneously. A hyperspectral line-scan imaging system, as shown in Figure 3-10, was used for acquiring spectral images from citrus leaf and fruit samples. The imaging system consisted of an electron multiplying charge coupled device (EMCCD) camera (Luca, Andor Technology Inc., CT, USA) with an imaging spectrograph (ImSpector V10E, Spectral Imaging Ltd., Oulu, Finland) and a C-mount lens (Rainbow CCTV S6X11, International Space Optics, S.A., Irvine, CA, USA), a pair of halogen line lamps (21 V, 150 W) powered with a DC voltage-regulated power supply (Dolan Jenner Industries, Inc., Lawrence, MA, USA), and a programmable motorized positioning table (XN10 Xslide, Velmex Inc., Bloomfield, NY, USA). The equipment was placed inside a dark box to eliminate stray external light. The EMCCD camera has 1004x1002 pixels and is equipped with a Peltier cooling device to cool the CCD detector to -20°C to improve the dynamic range and the signal-to-noise ratio of the detector. The use of a high performance CCD camera with a large dynamic range is necessary because light attenuation in opaque fruit is so


significant that the diffuse reflectance profile changes dramatically over a short distance. The imaging spectrograph shown in Figure 3-11 is based on the prism-grating-prism principle and has no moving mechanical components. It has a 2.8 mm long entrance slit and line-scans the sample. The incoming radiation passing through the prism-grating-prism unit is dispersed, and the dispersed light is projected onto the pixels of the CCD detector, creating a 2-D image in which one dimension represents spatial information and the other dimension spectral information (Figure 3-11). As the sample was moved perpendicularly to the scanning direction by the motorized positioning table, one thousand seven hundred and forty line scans were performed for each sample, and four hundred pixels covering the scene of the sample were saved at each scan. A three-dimensional image cube, with a spectral curve associated with each spatial position, is thus created (Figure 3-12). The hyperspectral imaging software for data transfer and parameterization was developed using the Andor Software Development Kit (SDK, Luca, Andor Technology Inc., CT, USA) for the hyperspectral line-scan imaging system (Kim et al., 2001). An Hg-Ne spectral calibration lamp (Oriel Instruments, Stratford, CT, USA) was used to investigate the spectral calibration of the system. Because of low light output in the visible region below 450 nm and the low quantum efficiency of the EMCCD in the NIR region beyond 930 nm, the wavelength range between 400 nm and 900 nm was used (totaling 92 bands with a spectral resolution of 5.2 nm).

Hyperspectral Image System Calibration

Before the hyperspectral imaging system is used for imaging, proper calibration is required. Calibration requirements may vary depending on the application; spectral and geometric calibrations are generally required for all applications. Spectral calibration ensures that each pixel on the CCD area array is assigned to an appropriate


wavelength, whereas geometric calibration corrects any distance distortions for individual pixels of the image. Although a complete commercial hyperspectral imaging system is usually pre-calibrated spectrally and geometrically, most hyperspectral imaging systems built with components from different vendors are assembled by the researchers, so both spectral and geometric calibration must be performed. As shown in Figure 3-13, geometric calibration was done with white grid paper printed with lines 2 mm apart. In the system calibration, full-scale (i.e., pixel-by-pixel) calibrations were not necessary because spatial and spectral distortions such as smile and keystone, for any given spatial pixel along the scanning line or any wavelength over the spectral region of 500-1000 nm, occurred within one pixel. Spectral calibrations were performed using spectral calibration lamps (a xenon lamp (model 6033) and a mercury-argon lamp (model 6035), Newport, Irvine, CA, USA). Figure 3-14 shows the spectral output of the 6033 xenon lamp and the 6035 mercury-argon calibration lamp. The spectral peaks from each lamp and their pixel positions in the original images were identified, and the relationship between the vertical band position and wavelength from the two calibration lamps was established using a linear regression function (Figure 3-15).

Pre-processing Techniques

Visible Spectrum Image Pre-processing

The images were taken for each sample class and were stored in JPEG format. The images from each class were divided into two datasets consisting of samples for training and testing. The samples were first arranged in ascending order for the time the


images were acquired. This approach minimizes negative time-dependent variability and reduces the potential for data selection bias between the training and test datasets. All algorithms for image segmentation and texture feature generation were developed in MATLAB. In the initial step, the RGB images of all leaf samples were obtained. To reduce the computational burden with minimal loss of texture feature quality, the image resolution was reduced from 1600x1200 pixels to 800x600 pixels, and the reduced images were then converted from eight-bit to six-bit per channel RGB format. The subsequent steps were repeated for each image in the dataset.

ROI selection and color space conversion for canker detection

ROI images were first extracted from the original RGB color images, which had a dimension of 480x640 pixels, generating small images covering the areas of interest (i.e., normal peel or the various diseases) on the fruit surface. Referring to Figure 3-16, the ROI selection was started manually by determining a point on the original image and was then completed by a MATLAB program that extracted a square portion of 64x64 pixels centered on the selected point. This approach retains the useful image data and significantly reduces the computational burden for the following data analysis procedures. Representative ROI images for each fruit peel condition used in this study are shown in Figure 3-17. The ROI images were then converted from the original eight-bit per channel red, green, blue (RGB) color representation to a six-bit per channel hue, saturation, and intensity (HSI) color representation to facilitate the SGDM calculation. Intensity is calculated as the mean of the three RGB values. The hue and saturation values are determined using a geometrical transformation based on the International Commission on Illumination (CIE) chromaticity diagram.


The CIE chromaticity diagram represents a two-dimensional hue and saturation space (Wyszecki and Stiles, 1992). The RGB values determine the chromaticity coordinates in the hue and saturation space, which are then used to geometrically calculate the hue and saturation values.

Edge detection for greening detection

After the images were reduced, edge detection of the leaf was completed on each image of the leaf samples using a MATLAB program. Figure 3-18 shows the detailed edge detection process. First, as shown in Figure 3-19, each RGB image was converted to a gray-scale image and then to a binary image. Next, the edge of the binary image was detected using the 'imerode' and 'imdilate' commands in MATLAB. Once the edge detection was finished, the image was scanned from left to right for each row of the pixel map, and the area outside the leaf was zeroed to remove any background noise. In the next step, the images were converted from RGB format to HSI format. A sample edge-detected image of a leaf is shown in Figure 3-20.
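A minimal MATLAB sketch of the leaf segmentation steps described above is given below. The file name, structuring element size, threshold polarity, and the use of MATLAB's HSV conversion in place of HSI are illustrative assumptions.

% Minimal sketch of leaf edge detection and background removal.
rgb  = imread('leaf_sample.jpg');           % hypothetical leaf image file
gray = rgb2gray(rgb);
bw   = im2bw(gray, graythresh(gray));       % binary leaf mask (invert if background is bright)

se    = strel('disk', 3);
edges = imdilate(bw, se) & ~imerode(bw, se);  % leaf boundary from dilation/erosion

mask3 = repmat(bw, [1 1 3]);
clean = rgb;
clean(~mask3) = 0;                          % zero the area outside the leaf
hsi   = rgb2hsv(clean);                     % HSV used here as a stand-in for HSI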


Hyperspectral Image Pre-processing

Figure 3-21 shows the details of the pre-processing steps. Image pre-processing was first performed on the original hyperspectral reflectance images to carry out flat field correction, spatial data reduction and image masking, resulting in normalized and masked hyperspectral data with dimensions of 870x200x92 (92 bands). Flat field corrections were performed on the hyperspectral images to obtain the relative reflectance prior to image analysis and image processing for classification. Equation (3-1) was used for the flat field correction to obtain the relative reflectance R for the 92 spectral bands. The flat field correction technique can reduce uneven illumination and distortion.

R(w) = r (S(w) - D(w)) / (W(w) - D(w))    (3-1)

where R(w) is the relative reflectance, S(w) is the original sample image, W(w) is the reference image acquired from a white Spectralon panel, D(w) is the dark image obtained with the cap covering the camera lens, w is the wavelength, and r is the reflectance factor. Referring to Figure 3-22, the actual reflectance factor of the Spectralon panel is about 99% over the wavelength range measured by the hyperspectral imaging system, but a reflectance factor of 100% was assumed in this study for simplicity. The relative reflectance was scaled to values in the range of 0 to 10,000 so the resulting image would lie within the range of the original data, 0-16383 (the 14-bit EMCCD); the scaling reduces rounding errors in further data analysis. Next, a mask template of the relative reflectance image was created by finding a threshold value. To reduce processing time, the leaf area was separated from the background by creating a mask, using a threshold value determined from the hyperspectral band which gave the largest contrast between the leaf surface and the background. After masking, the image resolution was reduced to half of the original size. The reduction of the images serves several important purposes: 1) it reduces the computational burden of redundant features, 2) it tends to improve the performance of classification algorithms, and 3) it reduces memory and storage demands. The reduction also brought comparable spatial resolutions for the vertical and horizontal dimensions.
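A minimal MATLAB sketch of the flat-field correction of Equation 3-1 and the masking step described above is given below, using random placeholder cubes for the sample, white reference, and dark images; the array sizes and the band chosen for thresholding are illustrative.

% Minimal sketch of flat-field correction and leaf masking of a hypercube.
S = rand(870, 200, 92) * 16383;   % sample image cube (rows x cols x bands), placeholder
W = ones(870, 200, 92) * 15000;   % white Spectralon reference cube, placeholder
D = ones(870, 200, 92) * 500;     % dark current cube, placeholder

r = 1.0;                          % reflectance factor assumed to be 100%
R = r * (S - D) ./ (W - D) * 10000;           % relative reflectance scaled to 0-10000

bandImg = R(:, :, 60);                        % band with high leaf/background contrast (illustrative)
mask    = im2bw(mat2gray(bandImg), graythresh(mat2gray(bandImg)));  % Otsu threshold
Rmasked = R .* repmat(mask, [1 1 size(R, 3)]);% zero the background in every band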


57 C olor C o occurrence M ethodology Image data includes large amounts of information, such as color, light, texture and shape. These properties are used in image processing and computer vision algorithm. In these properties, color, shape and light can be changed by surroundings. For example, chameleons have a special ability to match their skin color to surroundings. This ability confuses a prey to escape dangerous situation. If background of a n input image becomes too dark or too light, the object in the image cannot be perceptible in its background. Images of real objects do not have uniform properties, but texture can give information about the image through repeating pattern. This characteri stic of texture segmentation is very important in machine vision and image processing, and a variety of texture analysis methods have been applied in various fields of study. Tuceryan (1998) introduced various aspects of texture analysis. According to the paper, there are three main applications, inspection, medical image analysis, and document processing. "Inspection means the detection of defects in texture images or textile inspection. Medical image analysis has involved the automatic extraction of feat ures from images which is used for a variety of classification, such as distinguishing normal tissues from abnormal tissue. Document processing has applications ranging from postal address recognition to analysis and interpretation of maps." This paper als o presented three texture methods; statistical, geometrical, and model based methods. Statistical methods were proposed early and used widely. This study has used spatial distribution of gray values. The key word of geometrical methods was "texture element s" or primitives. After extracting texture elements in image, the texture features were utilized. Model based methods is "the construction of an image model that can be used not only to describe texture, but also to synthesize it"


In this thesis, the image analysis method was the color co-occurrence matrix (CCM) statistical method. Several researchers have applied this method to agricultural applications: Burks (2000) used the color co-occurrence method (CCM) for weed detection, Pydipati et al. (2005) applied it to disease detection in citrus trees, and Ondimu et al. (2008) compared plant water stress in sunagoke moss using the color co-occurrence matrix. These papers showed good results and high classification accuracy. Before the CCM method is applied to input images, the original images in the red, green, and blue (RGB) color space are converted to the HSI color space, which separates the image into hue, saturation, and intensity components. Many image processing methods are based on the HSI color space because it has a strong tolerance to changes in illumination or reflection; this characteristic of HSI helps make image processing less sensitive to the illumination of the surroundings. After the HSI image was computed, each pixel map was used to generate a color co-occurrence matrix, resulting in three CCM matrices, one for each of the HSI pixel maps. The color co-occurrence texture analysis method was developed through the use of spatial gray level dependence matrices. The spatial gray level dependence matrices (SGDMs) are related to the gray level co-occurrence matrix (GLCM) because both are based on second-order statistics. Haralick et al. (1979) proposed the use of the gray level co-occurrence matrix (GLCM) method, in which the matrix is defined by a displacement between pixel pairs and their gray levels; the displacement specifies the spatial offset at which a pixel with gray level i co-occurs with a pixel with gray level j. As an example, consider the following image containing 4 different gray values:


(3-2)

This matrix was converted to an SGDM matrix as follows:

(3-3)

The entry of this matrix gives the number of times a pixel pair related by the vector (1, 0) goes from gray level "0" to gray level "0". Shearer (1986) illustrated the SGDM with a function of the form p(i, j, d, θ); it is similar to the co-occurrence function defined by a displacement vector (i, j), but uses a distance d and an orientation angle θ instead. As shown in Figure 3-23, all the neighbors from 1 to 8 are numbered in a clockwise direction. Equation 3-4 presents an example image matrix.

(3-4)

Haralick and Shanmugam (1974) developed a set of 16 texture features associated with the SGDMs. After these features were defined, Shearer (1986) extended them to the hue, saturation and intensity color features and reduced the 16 features to 11 using the method developed by Haralick and Shanmugam. For classifying cancer tissue, Donohue et al. (2001) suggested adding image contrast and modus texture features to the original 11 texture features. Therefore, the color co-occurrence matrices (CCM) consisted of these matrices, once


each for the hue, saturation and intensity features. This resulted in 39 color texture features.

Feature Extraction

Shearer and Holmes (1990) defined the related equations with brief descriptions as they pertain to intensity, and Shearer (1986) applied them to saturation with similar descriptions. Hue values differ from intensity and saturation values, so they are treated differently in the analysis by Shearer (1986). The descriptions of the texture feature equations below follow Burks (1997). The CCM matrices are normalized using the following equations:

Matrix normalization: (3-5)

Marginal probability matrix: (3-6)

Sum and difference matrices: (3-7), (3-8), (3-9), (3-10)

where the terms denote the image attribute matrix and the total number of attribute levels, respectively.
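A minimal MATLAB sketch of constructing co-occurrence matrices from the hue, saturation, and intensity planes and extracting a few texture features is given below. MATLAB's rgb2hsv and graycomatrix/graycoprops (Image Processing Toolbox) are used here as stand-ins for the HSI transformation and the SGDM-based features defined in the text, and the input file name is hypothetical.

% Minimal sketch of per-channel co-occurrence matrices and texture features.
rgb = imread('roi_64x64.bmp');              % hypothetical 64x64 ROI image
hsv = rgb2hsv(rgb);                         % planes: hue, saturation, value

features = [];
for ch = 1:3
    plane = uint8(hsv(:, :, ch) * 63);      % quantize each plane to 64 levels (6-bit)
    glcm  = graycomatrix(plane, 'NumLevels', 64, ...
                         'Offset', [0 1], 'Symmetric', false);
    stats = graycoprops(glcm, {'Contrast', 'Correlation', 'Energy', 'Homogeneity'});
    features = [features, stats.Contrast, stats.Correlation, ...
                stats.Energy, stats.Homogeneity];
end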


61 After normalized CCM matrices, texture features are extracted using equations as follows; Texture features: The angular second moment (I1) is a measu re of the image homogeneity (3 11) The mean intensity level (I2) is a measure of image brightness derived from the co occurrence matrix. (3 12) Variation of image intensity is identified by the variation textural feature (I3). (3 13) Correlation (I4) is a measure of the intensity linear dependence in the image. (3 14) The product moment (I5) is analogue to the covariance of the intensity co occurrence matrix. (3 15) Contrast of an image can be measured by the inverse difference moment (I6). (3 16) The entropy feature (I7) is a measure of the amount of order in an image. (3 17) The sum and difference entropies (I8 and I9) are not easily interpreted, yet low entropies indicate high levels of order. (3 18) (3 19) The information measures of correlation (I10 and I11) do not exhibit any apparent physical in terpretation.


62 (3 20) (3 21) Where, (3 22) (3 23) (3 24) The contrast feature (I12) is a measure of salience in o bject recognition (3 25) The modus feature (I13) is a measure of maximum value in images. (3 26) Intensity texture feature equations are presented in T able 3 1. Principal Component Analysis Principal c omponent analysis (PCA) is an unsupervised method to find the optimal features f ro m the data. PCA projects d dimensional data o nto lower dimensional subspace in a way that is optimal in a sum squared error sense. Duda et al.(2000) presented the basic approach in karhunen love transform in their book Given n samples of m dimensional data, represented as the m by n matrix. the sample mean is (3 27) where is the j th column of X. Let e be a unit vector in the dire ction of the line. Then the equation of the line can be written as


63 (3 28) where the sca la r a (which takes on any real value) corresponds to the d istance of any point x from the mean m. If we represent by m + we can find an "optimal" set of coefficients by minimizing the square error criterion function. The squared error criterion function can be written as (3 28) Recognizing that partially differentia ting with respect to and setting the derivative to zero, we obtain (3 29) This brings us to the more interesting problem of find ing the best direction e for the line. The sample covariance matrix arises here when we substitute found in Equation 3 29 into Equation 3 28 to obtain (3 30) Clear ly, the vector e that minimizes also maximizes We use the method of Lagrange multiplier to maximize subject to the constraint that Letting be undetermined multiplier, we differentiate (3 31) with respect to e to obtain (3 32)


64 Setting this gradient v ector equal to zero, we see that e must be an eigenvector of the covariance matrix: (3 33) In particular, because it follo ws that to maximize we want to select the eigenvector corresponding to the largest eigenvalue of the covariance matrix. In other words, to find the best one dimensional projection of the data (best in the least sum of squared error sense), we proj ect the data onto a line through the sample mean in the direction of the eigenvector of the covariance matrix having the largest eigenvalue. The d dimensional mean vector and d d covariance matrix are computed for the full data set. Next, the eigenvectors and eigenvalues are computed, and sorted according to dec r easing eigenvalue. Call these eigenvectors with eigenvalue with eigenvalue and so o n, and choose the k eigenvectors having the largest eigenvalues. Often there will be just a few large eigenvalues, and this implies that k is the inherent dimensionality of the subspace governing the "signal" whiles the remaining d k dimensions generally contain noise. We form a d k matrix A whose columns consist of the k eigenvector. The representation of data by principal components consists of projecting the data onto the k dimensional subspace according to (3 34) Singular Vector Divergence The procedures for deriving singular vector divergence (SVD) are summarized in this section. Wall et al. (2003) described the mathematical definition of singular vector divergence (SVD). The equation for singular value decomposition of X is given by (3 35)


65 X means an m x n matrix of real valued data and rank r where without loss of gene rality m n and therefore r n. In the case of microarray data, x ij is the expression level of the i th gene in the j th assay. The elements of the i th row of X form the n dimensional vector g i which we refer to as the transcriptional response of the i th gene. Alt ernatively, the elements of the j th column of X form the m dimensional vector a j which we refer to as the expression profile of the j th assay. U is an m n matrix. The columns of U are called the left singular vectors { u k }, and form an orthonormal basis for the assay expression profiles, so that u i u j = 1 for i = j and u i u j = 0 otherwise. S is an n n diagonal matrix. The elements of S are only nonzero on the diagonal, and are called the singular values Thus, S is diagonal matrix, S = and s k > 0 for 1 k r and s i = 0 for ( r +1) k n (3 36) V T is also an n n matrix.The rows of V T contain the elements of the right singular vectors { v k }, and form an orthonormal basis for the gene transcriptional re sponses. One important result of the SVD of X is that (3 37) is the closest rank l matrix to X that X ( l ) minimizes the sum of the squares of the difference of the elements of X and X ( l ) ij | x ij x ( l ) ij | 2 One way to calculate the SVD is to first calculate V T and S by diagonalizing X T X : (3 38) and then to calculate U as follows: (3 39)


66 where the ( r +1),..., n columns of V for which s k = 0 are ignored in the matrix multiplication of Equation 3 39. Choices for the remaining n r singular vectors in V or U may be calculated using the Gram Schmi dt orthogonalization process or some other extension method. In practice there are several methods for calculating the SVD that are of higher accuracy and speed. Section 4 lists some references on the mathematics and computation of SVD. Correlation Analysi s Gonzalez et al. (200 8 ) described that the c orrelation analysis measures the relationship between two variables. The correlation coefficient, defined as (3 40) Where and are (3 41) (3 42) (3 43) is a normalized covariance, and must always be between 1 and +1. If then x and y are maximally positively correlated, while if they are maximally negatively corre la ted If the variables are uncorrelated. It is common practice to consider variables to be uncorrelated for practical purpose if the magnitude of their correction coefficient is below some threshold, such as 0 .05, although the threshold that makes sense do es not depend s on the actual situation. If x and y are statistically independent, then for any two functions f and g we obtain (3 44)


67 a result which follows from t he definition of statistical independence and expectation. Note that if and this theorem again shows that is zero if x and y are statistically independent. Wavelet Transform The w avelet transform (WT) in digital image processing is one of the practical tools to transform images and signals. The Haar transform, which is the simplest orthogonal wavelet transform, was selected in this study. The basic idea of the Haar transform for 2 D multiresolution data is that it is computed by iterating difference and averaging between odd and even samples of input data. Since the data is in 2D, the average and difference in both the horizontal and vertical directions are computed. The main formul a for the wavelet transform of image of size M N can be described by the following equation (Gonzalez, 2008): (3 45) (3 46) Where (3 47) (3 48) are horizontal along the x axis, vertical along the y axis, and diagonal along y=x To obtain the image features using the Haar wavelet transform, the images were taken of the top surface for each leaf class and centered on the mid. Algorithms for texture feature generation were developed in MATLAB. Due to the minimal loss of


To obtain the image features using the Haar wavelet transform, images were taken of the top surface of each leaf class and centered on the main leaf vein. Algorithms for texture feature generation were developed in MATLAB. With minimal loss of texture feature quality under the orthogonal wavelet transform, the image resolution was rescaled to 256 × 256 pixels. The subsequent steps were repeated for each image in the dataset. After the images were rescaled, each leaf sample image was decomposed in MATLAB. As shown in Figure 3-24, each image was decomposed with a four-level wavelet transform, which produces a total of 13 sub-bands, including vertical, horizontal, and diagonal bands. In the next step, texture features were extracted from each sub-image. Once the four-level decomposition was generated for an image, one texture feature was extracted from the energy of each sub-band. The average energy of the approximation and detail sub-images of the four-level decomposed image is calculated as

E = \frac{1}{MN} \sum_{i=1}^{M} \sum_{j=1}^{N} |w(i, j)|^2,   (3-49)

where M and N denote the size of the sub-image and w(i, j) denotes the value of the pixel at (i, j). Therefore, a total of 13 sub-bands were obtained and each image has 13 texture features.

In the classification phase, Fisher's linear discriminant was used for the wavelet-transform-based classification. The central idea of Fisher's linear discriminant method is to project high-dimensional data onto a line and perform classification in the resulting one-dimensional space. The Fisher criterion to be maximized is

J(w) = \frac{w^T S_B w}{w^T S_W w},   (3-50)

where S_B is the between-class scatter matrix and S_W is the within-class scatter matrix.


The scatter matrices are defined as

S_B = \sum_{c} N_c (\mu_c - \mu)(\mu_c - \mu)^T,   (3-51)
S_W = \sum_{c} \sum_{i \in c} (x_i - \mu_c)(x_i - \mu_c)^T,   (3-52)
\mu_c = \frac{1}{N_c} \sum_{i \in c} x_i,   (3-53)

where the overall mean is

\mu = \frac{1}{N} \sum_{i} x_i,   (3-54)

and N_c is the number of cases in class c.

Pattern Recognition

Pattern recognition is a part of artificial intelligence, an approach that mimics human intelligence to build learning ability, reasoning, and perception. Research on artificial intelligence has developed into a discipline known as intelligent systems. Bishop (2006) described pattern recognition techniques in terms of features and patterns. A feature is a specific aspect, quality, or characteristic of an object; it can be a color, a symbol or sign, or a numerical value such as distance, height, or weight. If two or more numerical features are available, they can be arranged in a d-dimensional row called a feature vector, and the d-dimensional space in which the feature vectors live is the feature space (Bishop, 2006). An object can then be expressed as a point described by its feature vector, for example as a point in a scatter plot. Such a plot can be visualized for spaces of up to three dimensions; if the feature space has four or more dimensions, the feature vector cannot be plotted, but it still exists in an n-dimensional space. Figure 3-25 shows examples of various features.


A pattern refers to the traits or features of an individual object and is defined as a set of features taken together. Features and patterns are similar concepts, but features form a pattern. In pattern recognition, a pattern is expressed as {x, ω}, where x is the observed feature vector and ω is the unique class of that feature vector. The class is also called a category, group, or label. The feature vector selected to represent a class is very important and affects the selection of the pattern recognition algorithm and the recognition approach. The features should therefore represent the characteristics of the classes that make them distinguishable: samples from one class should have similar feature values within the class, while samples from another class should have different feature values. Figure 3-26 presents good feature separation on the left and poor separation on the right. Feature vectors with patterns can be classified by their distribution type as follows:

Linear distribution.
Nonlinear distribution.
High-correlation distribution.
Multi-class distribution.

Figure 3-27 shows distribution plots of these pattern types. When feature patterns follow a linear distribution, the classes can be classified more easily. There are various approaches to pattern recognition, and they have demonstrated success in a variety of research areas such as aerospace, defense, medicine, neurobiology, and linguistics. In this study, three pattern recognition approaches were compared to find an optimal approach:

Linear models for classification: Fisher's linear discriminant analysis method
Neural networks for classification: back-propagation neural network method
Nonlinear kernels for classification: support vector machine (SVM)


Fisher's Linear Discriminant Method

The central idea of Fisher's linear discriminant method is to project high-dimensional data onto a line and perform classification in the resulting one-dimensional space. Welling (2006) introduced Fisher's linear discriminant analysis, in which the criterion to be maximized is

J(w) = \frac{w^T S_B w}{w^T S_W w},   (3-55)

where S_B is the between-class scatter matrix and S_W is the within-class scatter matrix, defined as

S_B = \sum_{c} N_c (\mu_c - \mu)(\mu_c - \mu)^T,   (3-56)
S_W = \sum_{c} \sum_{i \in c} (x_i - \mu_c)(x_i - \mu_c)^T,   (3-57)
\mu_c = \frac{1}{N_c} \sum_{i \in c} x_i,   (3-58)

where the overall mean is

\mu = \frac{1}{N} \sum_{i} x_i,   (3-59)

and N_c is the number of cases in class c.
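For the two-class case, the direction maximizing the Fisher criterion can be obtained in closed form as S_W^{-1}(\mu_1 - \mu_2). The following MATLAB sketch illustrates this; the two feature matrices, the test point, and the midpoint threshold are illustrative choices and not part of the study's actual routines.

% Minimal sketch: two-class Fisher linear discriminant.
X1 = randn(30, 2) + 2;           % class 1 samples (rows = samples)
X2 = randn(30, 2) - 2;           % class 2 samples

m1 = mean(X1)';  m2 = mean(X2)';

% within-class scatter matrix S_W (Eq. 3-57)
Sw = (X1 - repmat(m1', size(X1, 1), 1))' * (X1 - repmat(m1', size(X1, 1), 1)) + ...
     (X2 - repmat(m2', size(X2, 1), 1))' * (X2 - repmat(m2', size(X2, 1), 1));

w = Sw \ (m1 - m2);              % direction maximizing the Fisher criterion
threshold = w' * (m1 + m2) / 2;  % midpoint between the projected class means

% classify a new sample by the side of the threshold its projection falls on
x_new = [1.5, 1.0]';
if w' * x_new > threshold
    disp('assigned to class 1');
else
    disp('assigned to class 2');
end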


Neural Network Based on the Back-Propagation Method

The main idea of a back-propagation network is that the connecting link weights between layers are found by propagating the errors backwards from the output layer. Back-propagation networks typically use multilayer networks. Figure 3-28 illustrates a network diagram for a multilayer network. Nodes represent the input, hidden, and output variables, and links between the nodes describe the weight parameters; the link contributions from the additional input and hidden variables, x_0 and z_0, represent the bias parameters. The process flow of the network follows the arrows shown in Figure 3-28. Bishop (2006) describes the derivation of the back-propagation algorithm in his book "Pattern Recognition and Machine Learning". The weights are updated by gradient descent,

w_{ji} \leftarrow w_{ji} - \eta\, \delta_j z_i,   (3-60)

where \eta is the learning rate, w_{ji} is the weight on the link from unit i to unit j, \delta_j is the error of unit j, and z_i is the input carried by the link. Using Equation 3-60, the network output is first computed by forward propagation as shown in Figure 3-29. The output error is then evaluated after each forward pass and used to recalculate the hidden-layer errors with the back-propagation formula,

\delta_j = h'(a_j) \sum_{k} w_{kj} \delta_k,   (3-61)

where the activation of unit j is

a_j = \sum_{i} w_{ji} z_i.   (3-62)

Figure 3-29 illustrates that the value of \delta for a particular hidden unit can be obtained by propagating the \delta's backwards from units higher up in the network. The neural network application in this research was designed with the MATLAB PRTools toolbox (Faculty of Applied Physics, Delft University of Technology, The Netherlands), which uses the back-propagation formulas shown above.
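To make the training loop concrete, the following MATLAB sketch trains a single-hidden-layer network with the update rules of Equations 3-60 to 3-62, implemented directly without any toolbox. The XOR data, the number of hidden units, the learning rate, and the epoch count are arbitrary illustrative choices, not the configuration used in this study.

% Minimal sketch: one hidden layer trained by back-propagation.
X = [0 0; 0 1; 1 0; 1 1]';          % inputs, one column per pattern
T = [0 1 1 0];                      % targets
nh = 4;  eta = 0.5;                 % hidden units and learning rate
sig = @(a) 1 ./ (1 + exp(-a));      % logistic activation

W1 = randn(nh, 2);  b1 = randn(nh, 1);   % input -> hidden weights
W2 = randn(1, nh);  b2 = randn(1, 1);    % hidden -> output weights

for epoch = 1:5000
    for n = 1:4
        x = X(:, n);  t = T(n);
        z = sig(W1 * x + b1);             % forward propagation
        y = sig(W2 * z + b2);
        d2 = (y - t) * y * (1 - y);       % output-layer delta
        d1 = (W2' * d2) .* z .* (1 - z);  % hidden deltas, propagated backwards (Eq. 3-61)
        W2 = W2 - eta * d2 * z';  b2 = b2 - eta * d2;   % gradient-descent updates (Eq. 3-60)
        W1 = W1 - eta * d1 * x';  b1 = b1 - eta * d1;
    end
end
disp(sig(W2 * sig(W1 * X + repmat(b1, 1, 4)) + b2));    % network outputs after training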


Appendix B shows the detailed MATLAB routine. First, the data files are loaded; each data file holds the reduced texture features for one condition. Second, the data are divided into training sets and test sets using the command 'dataset'. A 'dataset' consists of a set of m objects, each given by k features, so an m-by-k array represents such a dataset in this MATLAB routine. Third, after importing the training matrix and the testing matrix, the network is trained using the function 'bpxnc'. The syntax for this function is as follows:

[W, HIST] = BPXNC(A, UNITS, ITER, W_INI, T, FID)

where
A       Dataset
UNITS   Array indicating the number of units in each hidden layer (default: [5])
ITER    Number of iterations to train (default: inf)
W_INI   Weight initialisation network mapping (default: [], meaning initialisation by MATLAB's Neural Network Toolbox)
T       Tuning set (default: [], meaning use A)
FID     File descriptor to report progress to (default: 0, no report)

After training, the test data for each class were evaluated using the function 'testc'.

Support Vector Machine Method

The support vector machine is a very useful approach for classification. In recent years, the SVM method has been developed by many researchers and applied in various fields. In general, the SVM handles two-class problems; however, multiclass SVMs have been developed by various researchers, and the multiclass solution is based on the two-class support vector machine.


Gunn (1998) presented support vector classification. Figure 3-30 shows numerous possible linear classifiers for separating the data, but only one line maximizes the distance between itself and the nearest data point of each class. This linear classifier is called the optimal separating hyperplane. Gunn formalized the main SVM problem as finding the optimal separating hyperplane in n-dimensional space; the key result of the mathematical analysis is Equation 3-68. A separating hyperplane in canonical form must satisfy the constraints

y_i (\langle w, x_i \rangle + b) \ge 1, \quad i = 1, \ldots, l.   (3-63)

The distance d(w, b; x) of a point x from the hyperplane (w, b) is

d(w, b; x) = \frac{|\langle w, x \rangle + b|}{\lVert w \rVert}.   (3-64)

The optimal hyperplane is obtained by maximizing the margin \rho(w, b) subject to the constraints of Equation 3-63, where the margin is given by

\rho(w, b) = \min_{x_i : y_i = -1} d(w, b; x_i) + \min_{x_j : y_j = +1} d(w, b; x_j)   (3-65)
           = \min_{x_i : y_i = -1} \frac{|\langle w, x_i \rangle + b|}{\lVert w \rVert} + \min_{x_j : y_j = +1} \frac{|\langle w, x_j \rangle + b|}{\lVert w \rVert}   (3-66)
           = \frac{2}{\lVert w \rVert}.   (3-67)

Hence, the hyperplane that optimally separates the data is the one that minimizes

\Phi(w) = \frac{1}{2} \lVert w \rVert^2.   (3-68)

Through this analysis, we can find the maximum-margin hyperplane.
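A minimal sketch of this margin maximization is shown below: the quadratic program of Equation 3-68 with the canonical constraints of Equation 3-63 is solved for a small, linearly separable two-class set. It assumes the Optimization Toolbox function quadprog is available; the data points are arbitrary toy values, not measurements from this study.

% Minimal sketch: hard-margin linear SVM as a quadratic program.
X = [2 2; 3 1; 2 3; -2 -1; -3 -2; -1 -3];   % rows = samples
y = [1; 1; 1; -1; -1; -1];

% variables v = [w1; w2; b]; minimize 0.5*||w||^2 s.t. y_i*(w'x_i + b) >= 1
H = diag([1 1 0]);
f = zeros(3, 1);
A = -[X .* repmat(y, 1, 2), y];             % -y_i*(w'x_i + b) <= -1
b = -ones(size(X, 1), 1);
v = quadprog(H, f, A, b);

w = v(1:2);  bias = v(3);
fprintf('Margin width: %.3f\n', 2 / norm(w));   % Eq. 3-67

x_new = [1, 1]';
label = sign(w' * x_new + bias);            % decision rule for a new sample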


Statistical Classification

After the color co-occurrence matrices (CCM) were generated for the input images, each input image had 39 texture features. Because a suitably small number of texture features reduces the computational requirements (time, computer hardware, and classification complexity), eliminating redundant texture features is an important step. SAS offers a procedure for this task, as well as a discriminant classifier between image classes once the statistical classification model has been trained. Burks (1997) introduced the procedures for accomplishing these tasks. PROC STEPDISC is a useful procedure for reducing the number of texture features by a stepwise selection process. In the stepwise procedure, the main assumption is that all of the classes included in the data set are multivariate normal with a common covariance matrix. The process applies two conditions related to variance, which Burks (1997) explained as follows: "First, the variable within the model which contributes least to the model, as determined by the Wilks' lambda method, that does not pass the test to stay is removed from the model. Secondly, the variable outside the model which contributes most to the model and passes the test to be admitted is added. When no more steps can be taken the model is reduced to its final form." In the PROC DISCRIM procedure, the classification accuracy may be determined by a discriminant function established using a measure of the generalized squared distance between the image texture variables and the class texture variable means, together with the posterior probability. The pooled covariance matrix of the training-set texture variables and the prior probabilities of the classification groups may affect the classification criterion.


Hyperspectral Image Classification Methods

The Spectral Angle Mapper (SAM) and Spectral Information Divergence (SID) are the two supervised classification methods used to analyze the spectral characteristics of the peel conditions.

Spectral Angle Mapper

The Spectral Angle Mapper (SAM) is a physically based spectral classification that uses an n-dimensional angle to match pixels to reference spectra. The algorithm determines the spectral similarity between two spectra by calculating the angle between them, treating the spectra as vectors in a space with dimensionality equal to the number of bands. This technique, when used on calibrated reflectance data, is relatively insensitive to illumination and albedo effects. Endmember spectra used by SAM can come from ASCII files or spectral libraries, or can be extracted directly from the image (as ROI average spectra). SAM compares the angle between the endmember spectrum vector and each pixel vector in n-dimensional space. Smaller angles represent closer matches to the reference spectrum; pixels further away than the specified maximum angle threshold (in radians) are not classified. Figure 3-31 illustrates the main concept of SAM: the angle between endmember spectra and target spectra is calculated by treating them as vectors in a space with dimensionality equal to the number of bands (Shippert, 2003). The spectral angle \theta is calculated as

\theta = \cos^{-1}\!\left( \frac{\sum_{i=1}^{M} t_i r_i}{\lVert t \rVert\, \lVert r \rVert} \right),   (3-69)


where M is the number of spectral bands, t_i is the reflectance of the target spectrum in band i, r_i is the reflectance of the endmember (reference) spectrum in band i, \lVert r \rVert is the length of the endmember vector, and \lVert t \rVert is the length of the target spectrum vector. The lengths of the endmember vector and the target spectrum vector are calculated as

\lVert r \rVert = \sqrt{\sum_{i=1}^{M} r_i^2}, \qquad \lVert t \rVert = \sqrt{\sum_{i=1}^{M} t_i^2}.   (3-70)

After the spectral angle \theta is found, it is compared with a threshold value. If the angle is smaller than the threshold, the target spectrum is assigned to the endmember class (Dennison, 2004).
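The following MATLAB sketch computes the spectral angle of Equations 3-69 and 3-70 and applies the threshold decision; the two spectra and the angular threshold are arbitrary illustrative values.

% Minimal sketch: spectral angle between a target and an endmember spectrum.
t = [0.12 0.15 0.22 0.35 0.41 0.40];   % target spectrum (M bands)
r = [0.10 0.14 0.20 0.33 0.43 0.42];   % endmember (reference) spectrum

theta = acos(sum(t .* r) / (norm(t) * norm(r)));   % spectral angle in radians

threshold = 0.10;                      % maximum angle threshold (radians)
if theta < threshold
    disp('pixel assigned to the endmember class');
else
    disp('pixel left unclassified');
end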


Spectral Information Divergence

While SAM is a deterministic method, SID is a probabilistic method that allows for variations in pixel measurements, where probability is measured from zero to a user-defined threshold (Du et al., 2004). Chang (1999) described the derivation of spectral information divergence (SID). The hyperspectral pixel vector is given by

x = (x_1, x_2, \ldots, x_M)^T.   (3-71)

Each component x_i can be modeled as a random variable by defining an appropriate probability distribution. Due to the nature of reflectance, all components of x are assumed to be nonnegative. Thus, the probability measure can be defined as

p_i = \frac{x_i}{\sum_{k=1}^{M} x_k},   (3-72)

and the desired probability vector is p = (p_1, p_2, \ldots, p_M)^T. For information theory to capture the relationship and correlation between two hyperspectral pixel vectors, consider another pixel vector y = (y_1, y_2, \ldots, y_M)^T with the probability distribution given by q_i = y_i / \sum_{k=1}^{M} y_k and q = (q_1, q_2, \ldots, q_M)^T. The spectral information divergence (SID) is then given by

SID(x, y) = D(x \,\Vert\, y) + D(y \,\Vert\, x),   (3-73)

where D(x \,\Vert\, y), the relative entropy of y with respect to x, is defined by

D(x \,\Vert\, y) = \sum_{i=1}^{M} p_i \log \frac{p_i}{q_i}.   (3-74)

Therefore, the value of SID uses the relative entropy to measure the similarity between two spectral pixels.
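The following MATLAB sketch evaluates Equations 3-71 to 3-74 for two spectra: each spectrum is normalized to a probability vector and SID is the sum of the two relative entropies. The spectra are arbitrary illustrative values.

% Minimal sketch: spectral information divergence between two pixel spectra.
x = [0.12 0.15 0.22 0.35 0.41 0.40];   % hyperspectral pixel vectors
y = [0.20 0.18 0.25 0.30 0.35 0.30];

p = x / sum(x);                        % probability vectors (Eq. 3-72)
q = y / sum(y);

Dxy = sum(p .* log(p ./ q));           % D(x||y), Eq. 3-74
Dyx = sum(q .* log(q ./ p));           % D(y||x)
SID = Dxy + Dyx;                       % Eq. 3-73

fprintf('SID = %.4f\n', SID);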


Figure 3-1. Images of abnormal peel conditions
Figure 3-2. Images of nutritional deficiency
Figure 3-3. Images of citrus greening
Figure 3-4. Images of normal conditions
Figure 3-5. Fifteen images of blotchy mottle conditions
Figure 3-6. Representative images for each peel condition
Figure 3-7. Three faces of each fruit at 120-degree intervals
Figure 3-8. Typical image system for acquiring RGB images from citrus samples
Figure 3-9. Digital microscope system for acquiring RGB images from citrus samples
Figure 3-10. Typical hyperspectral line-scan imaging system
Figure 3-11. Principle of the prism-grating-prism imaging spectrograph for acquiring spatial and spectral information from an object
Figure 3-12. Conceptual representation of a volume of hyperspectral image data
Figure 3-13. White paper printed with thin parallel lines 2 mm apart for geometric calibration
Figure 3-14. Spectral profiles of calibration lamps: xenon lamp and mercury-argon lamp
Figure 3-15. The relationship between the vertical band position and wavelength from two calibration lamps
Figure 3-16. ROI selection program developed in MATLAB R2007b
Figure 3-17. Typical ROI images for normal and diseased citrus peel conditions (panels: Normal, Canker, Copper Burn, Greasy Spot, Melanose, Wind Scar)
Figure 3-18. Procedures for edge detection
Figure 3-19. Converted images of a leaf sample
Figure 3-20. Edge-detected image of a leaf sample
Figure 3-21. Hyperspectral image pre-processing
Figure 3-22. Plot of reflectance factor for the white panel
Figure 3-23. Nearest-neighbor diagram
Figure 3-24. Decomposition of an image
Figure 3-25. Feature vector, feature space, and scatter plot
Figure 3-26. Good feature and bad feature
Figure 3-27. Distribution plots with pattern types
Figure 3-28. The flow of a network
Figure 3-29. Illustration of the calculation
Figure 3-30. Possible linear classifiers for separating the data
Figure 3-31. The angle between endmember spectra and target spectra as vectors in a space

Table 3-1. The number of black spot and other symptom samples
Class           Symptoms      Number    Total
Black spot      Black spot    135       100
No black spot   Greasy spot    90       390
                Market         90
                Melanose      105
                Wind scar     105


95 CHAPTER 4 DETECTION OF CITRUS CANKER DISEASE Citrus trees can exhib it a host of symptoms reflecting various disorders that can adversely impact their health, vigor, and productivity to varying degrees. In some cases, disease control actions or remedial measures can be undertaken if the symptoms are identified early. Addit ional opportunities for disease control exist when precision agriculture techniques are involved, which could use early detection along with a global positioning system to map diseases in the grove for future control actions. Environmental pollution is ano ther concern throughout the world. Indiscriminate use of fungicides, pesticides, and herbicides to control pest and diseases has led to problems, such as deteriorating ground water quality, and health hazards for operators and the general public. Increased pressures to reduce chemical applications have led researchers to study new ways for early detection of various diseases on citrus trees with an aim to reduce chemical usage and maintain cost effective crop production. This study explored machine vision b ased techniques that can visually differentiate common citrus peel disorders using individual fruit color texture features. Citrus samples were collected in the field and evaluated under laboratory conditions. Future studies will expand the technologies to in field inspections. In the past decade, r esearchers have used image processing and pattern recognition techniques in agricultural applications, such as detection of weeds in the field, and sorting of fruits and vegetables. The underlying approach for al l of these techniques is the same. First, images are acquired from the environment using analog, digital, or video cameras. Then, image processing techniques are applied to extract useful features that are necessary for further analysis of the images. Afte rwards,


96 discriminant techniques, such as parametric or non parametric statistical classifiers and neural networks, are employed to classify the images. The selection of the image processing techniques and the classification strategies are important for the successful implementation of any machine vision system. Object shape matching functions, color based classifiers, reflectance based classifiers, and texture based classifiers are some of the common methods that have been tried in the past. A number of tec hniques have been studied to detect defects and diseases related to citrus. Gaffney (1973) obtained reflectance spectra of citrus fruit and some surface defects. Edwards and Sweet (1986) developed a method to assess damages due to citrus blight disease on citrus plants using reflectance spectra of the entire tree. Miller and Drouillard (2001) collected data from Florida grapefruit, orange, and tangerine varieties using a color vision system. They used various neural network classification strategies to dete ct blemish related features for the citrus fruit. Aleixos et al. (2002) developed a multispectral camera system that could acquire visible and near infrared images from the same scene, and used it on a real time system for detecting defects on citrus surfa ce. Blasco et al. (2007) reported the application of near infrared, ultraviolet and fluorescence computer vision systems to identify the common defects of citrus fruit. They proposed a fruit sorting algorithm that combines the different spectral informatio n to classify fruit according to the type of defect. Their results showed that non visible information can improve the identification of some defects. Most recently, Qin et al. (2008) developed an approach for citrus canker detection using hyperspectral re flectance imaging and PCA based image classification method. Their results


97 demonstrated that hyperspectral imaging technique could be used for discriminating citrus canker from other confounding diseases. This research was aimed to develop a method to dete ct citrus peel diseases using color texture features. The use of color texture features in classical gray image texture analysis was first reported by Shearer (1986). Shearer and Holmes (1990) reported a study for classifying different types of nursery sto ck by the color co occurrence method (CCM). This method had the ability to discriminate between multiple canopy species and was insensitive to leaf scale and orientation. The use of color features in the visible light spectrum provided additional image cha racteristic features over traditional gray scale texture representation. The textural methods employed were statistical based algorithms that measured image features, such as smoothness, coarseness, graininess, and so on. The CCM method involves three majo r mathematical processes briefly described in the following. A complete discussion of the color co occurrence method could be found in Shearer and Holmes (1990). Transformation of a red, green, blue (RGB) color representation of an image to an equivalent h ue, saturation, and intensity (HSI) color representation; Generation of color co occurrence matrices from the HSI pixel maps. Each HSI matrix is used to generate a spatial gray level dependence matrix (SGDM) Calculation of texture f Burks et al. (2000) developed a method for weed species classification using color texture features and discriminant analysis. In their study, CCM texture feature data models for six classes of ground cover (giant foxtails, c rabgrass, velvet leaf, lambs quarter, ivy leaf morning glory, and soil) were developed and stepwise discriminant analysis techniques were utilized to identify combinations of CCM texture feature


98 variables, which have the highest classification accuracy wit h the least number of texture variables. A discriminant classifier was trained to identify weeds using the models generated. Classification tests were conducted with each model to determine their potential for classifying weed species. Pydipati et al. (200 6) utilized the color co occurrence method to extract various textural features from the color RGB images of citrus leaves. The CCM texture statistics were used to identify three diseased conditions and normal citrus leaves using discriminant analysis. The overall objective of this research was to develop a machine vision based method for detecting various diseases on citrus peel using color texture features under a controlled lighting condition. Specific objectives implemented to accomplish the overall obj ective were to: U se a color imaging system to collect RGB images from grapefruits with normal and five peel conditions (i.e., canker, copper burn, greasy spot, melanose, and wind scar); D etermine image texture features based on the color co occurrence meth od (CCM); and D evelop algorithms for selecting useful texture features and classifying the citrus peel conditions based on the reduced texture feature sets. Materials and Methods Image Acquisition System A color image acquisition system was assembled for acquiring RGB images from citrus samples, and it is shown in Figure 4 1 The imaging system consisted of two 13 W high frequency sealed fluorescent lights (SL Series, StockerYale, Salem, NH, USA), a zoom lens (Zoom 7000, Navitar, Rochester, NY, USA), a 3 C CD RGB color camera (CV M90, JAI, San Jose, CA, UDA), a 24 bit color frame grabber board with 480640 pixel resolution (PC RGB, Coreco Imaging, St. Laurent, Quebec, CA), and a computer


99 installed with an image capture software. The setup of the lighting sys tem was designed to minimize specular reflectance and shadow, and to maximize the contrast of the images. The height of the camera and its focus were adjusted to contain the image of the whole fruit, with an approximate 100 mm100 mm field of view. Automat ic white balance calibration was conducted using a calibrated white balance card before acquiring images from fruit samples. The digital color images were saved in uncompressed BMP format. Image Analysis The data analysis methods for analyzing the color i mages of the fruit samples based on the color co occurrence method (CCM) are illustrated in the flow chart shown in Figure 4 2 which involve the procedures for selection of region of interest (ROI), transformation from RGB format to HSI format, generation of spatial gray level texture features, and discriminant analysis for disease classification. All image processing and data analysis procedures were executed using programs developed in M ATLAB 7.0 (MathWorks, Natick, MA, USA) and SAS 9.1 (SAS Institute Inc., Cary, NC, USA). The color co occurrence texture analysis method was developed through the use of the spatial gray nerated for each color pixel map of the ROI HSI images, one each for hue, saturation and intensity. These matrices measure the probability that a pixel at one particular gray level will occur at a distinct distance and orientation from any pixel given that pixel has a second particular gray level (Shearer and Holmes, 1990). The SGDM is represented by level of the pixel at (x 1 ,y 1 ) in the


100 image, and j represents the gray level of the pixel at (x 2 y 2 ) loc ated at a distance d and 1 ,y 1 ) (Shearer, 1986). The matrix is constructed by counting the number of pixel pairs of (x 1 y 1 ) and (x 2 y 2 ) with the grey value i and j at saturation and intensity were then used to calculate the texture features. Shearer and Holmes (1990) reported a reduction for the 16 gray scale texture features through elimination of redundant variables, resulting in 11 texture features. Donohue et al. ( 2001) added two more texture features (i.e., image contrast and modus) to those used by Shearer and Holmes (1990). In this study, the combined 13 texture features proposed by Shearer and Holmes (1990) and Donohue et al. (2001) were used for citrus peel dis ease classification, and they included (1) uniformity, (2) mean intensity, (3) variance, (4) correlation, (5) product moment, (6) inverse difference, (7) entropy, (8) sum entropy, (9) difference entropy, (10) information correlation #1, (11) information co rrelation #2, (12) contrast, and (13) modus. The equations for calculating the 13 texture features can be found in Pydipati et al. (2006). texture features for each HSI component an d thereby a total of 39 texture statistics. The texture features were identified by a coded variable name where the first letter represents whether it is a hue (H), saturation (S) or intensity (I) feature and the number following represents one of the thir teen texture features described above. As an example, the feature (I 7 ) is a measure of the entropy in the intensity CCM matrix, which represents the amount of order in an image and is calculated by equation (4 1 )


I_7 = -\sum_{i=1}^{N_g} \sum_{j=1}^{N_g} p(i, j) \log p(i, j)   (4-1)

The p(i, j) matrix represents the normalized intensity co-occurrence matrix and N_g represents the total number of intensity levels. The equation for normalizing the co-occurrence matrix is given in Equation 4-2, where P(i, j, 1, 0) is the intensity co-occurrence matrix:

p(i, j) = \frac{P(i, j, 1, 0)}{\sum_{i=1}^{N_g} \sum_{j=1}^{N_g} P(i, j, 1, 0)}   (4-2)

A physical representation of entropy (uncertainty) may be visualized by comparing a checkerboard-like image to an image where one half is black and the other half is white. The latter image is highly ordered, having all pixels of the same intensity segregated into two distinct pixel groups, which gives greater certainty of the pixel values of adjacent pixels. The checkerboard image has a lower amount of order due to the intermixing of black and white squares, which results in a greater level of uncertainty about neighboring pixel values. The lower-order image would therefore have more uncertainty and thus a higher entropy measure.
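The following MATLAB sketch builds a co-occurrence matrix for a 0-degree orientation and a one-pixel offset, normalizes it as in Equation 4-2, and evaluates the entropy feature of Equation 4-1. The small quantized image and the number of gray levels are illustrative values only.

% Minimal sketch: co-occurrence matrix and entropy texture feature.
I  = [0 0 1 1; 0 0 1 1; 0 2 2 2; 2 2 3 3];   % image quantized to Ng gray levels
Ng = 4;

P = zeros(Ng);                                % co-occurrence counts P(i,j,1,0)
for row = 1:size(I, 1)
    for col = 1:size(I, 2) - 1                % neighbor one pixel to the right
        a = I(row, col) + 1;                  % +1 because gray levels start at zero
        b = I(row, col + 1) + 1;
        P(a, b) = P(a, b) + 1;
    end
end

p  = P / sum(P(:));                           % normalized matrix (Eq. 4-2)
nz = p(p > 0);                                % skip zero entries to avoid log(0)
entropy_feature = -sum(nz .* log(nz));        % entropy (Eq. 4-1)
fprintf('Entropy = %.4f\n', entropy_feature);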


After the texture statistics were obtained for each image, feature selection was conducted to reduce the redundancy in the texture feature set. The SAS procedure STEPDISC can reduce the size of the variable set and find the variables that are important for discriminating samples in different classes, and it was used for the texture feature selection. The stepwise discriminant analysis begins with no variables in the classification model. At each step of the process, the variables within and outside the model are evaluated. The variable within the model which, at that particular step, contributes least to the model, as determined by the Wilks' lambda method, and does not pass the test to stay is removed from the model. Likewise, the variable outside the model that contributes most to the model and passes the test to be admitted is added. A test significance level of 0.0001 for the SLS (test for a variable to stay) and SLE (test for a variable to enter) options of the STEPDISC procedure was chosen for the stepwise discrimination of the variable list (SAS, 2004). When no more steps can be taken, the number of variables in the model is reduced to its final form.

Burks et al. (2000) had shown that classification performances were poor if only hue or saturation information was used in the classification models. Thus, three color feature combinations, hue, saturation, and intensity (H, S, I), hue and saturation (H, S), and intensity (I) only, were used to perform the texture feature selections. These three color combinations have demonstrated high classification accuracies in applications to other plant discrimination problems (Burks et al., 2000; Pydipati et al., 2006).

Texture Classification

The classification models were developed using the SAS procedure DISCRIM, which creates a discriminant function based on a measure of the generalized squared distance between a specific test image texture variable input set and the class texture variable means, with an additional criterion being the posterior probability of the classification groups (Rao, 1973). Each sample in the testing set was placed in the class for which it had the smallest generalized squared distance between the test observation and the selected class, or the largest posterior probability of being in the selected class. The DISCRIM procedure utilized a likelihood ratio test for homogeneity of the within-group covariance matrices at a 0.1 test significance level.
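As a rough illustration of this kind of distance-based discriminant rule, the following MATLAB sketch assigns a test sample to the class with the smallest generalized squared distance, computed here with a pooled within-class covariance matrix and equal priors. It is only an analogy to the SAS DISCRIM criterion described above; the training data, labels, and test vector are arbitrary illustrative values.

% Minimal sketch: classification by generalized squared distance.
Xtrain = [randn(20, 3) + 1; randn(20, 3) - 1];   % rows = training samples
labels = [ones(20, 1); 2 * ones(20, 1)];
x_test = randn(1, 3);

classes = unique(labels);
n = size(Xtrain, 1);

Sp = zeros(3);                                   % pooled within-class covariance
for k = 1:numel(classes)
    Xk = Xtrain(labels == classes(k), :);
    Sp = Sp + (size(Xk, 1) - 1) * cov(Xk);
end
Sp = Sp / (n - numel(classes));

d2 = zeros(numel(classes), 1);                   % squared distance to each class mean
for k = 1:numel(classes)
    mu = mean(Xtrain(labels == classes(k), :));
    d2(k) = (x_test - mu) * (Sp \ (x_test - mu)');
end
[dmin, idx] = min(d2);
fprintf('Assigned to class %d\n', classes(idx));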


103 The 30 samples from each peel condition were divided into two datasets consisting of 20 samples for training and 10 samples for testing The samples were first arranged in ascending order for the time the images were a cquired. The first two samples were selected for training and the third sample for testing. This approach minimizes negative time dependent variability, and reduces potential for data selection bias between the training and test datasets. A training data s et and a test data set were created for each of the subsets of the texture features selected by the stepwise discriminant analysis described above. The training sets were used to train the classification models and the testing sets were used to evaluate th e accuracies of different classification models. Results and Discussion Selection of Texture Features The texture feature selection results are summarized in Table 4 1 Four classification models were developed using the selected texture feature sets from the three color combinations [(H, S, I), (H, S), and (I)]. The variables listed in the column of 4 1 were generated by the SAS STEPDISC procedure, and they were arranged in the descending order of the importance for the class ification models. The subscript numbers indicate the texture statistics as the following: (1) uniformity, (2) mean intensity, (3) variance, (4) correlation, (5) product moment, (6) inverse difference, (7) entropy, (8) sum entropy, (9) difference entropy, ( 10) information correlation #1, (11) information correlation #2, (12) contrast, and (13) modus. As an example, H 9 represents the difference entropy of hue, and it is selected as the most important texture feature for the first two classification models dev eloped using two different color combinations [(H, S, I), and (H, S)].


104 The classification models were named using the color features involved in the texture feature selections followed by the total numbers of the selected texture features. For example, mo del HSI_13 consists of a reduced set of hue, saturation and intensity texture features, and there are 13 texture features in total that were used to construct the model. As shown in Table 4 1 significant eliminations of redundant texture features were acc omplished through the stepwise discriminant analysis. Nine and eleven texture features were selected for model HS_9 and model I_11, respectively. The simplification of the texture features largely reduces the computation burden due to the redundant data, a nd it also helps improve the performance of classification models. In addition to the three models described above, a classification model that used all 39 HSI texture features was developed for the purpose of comparisons with other models. Thus there are four classification models that were used to differentiate the citrus peel diseases, and they were independently evaluated for classification performance. Classification of Citrus Peel Conditions The SAS procedure DISCRIM was used to test the accuracies o f the classification models. Table 4 2 summarizes the classification results for differentiating different citrus peel conditions using model HSI_13 listed in Table 4 1 As shown in Table 4 2 four peel conditions (normal, canker, copper burn, and wind sca r) among the total six conditions tested in this study were perfectly classified into the appropriate categories. For the other two conditions (greasy spot and melanose), there was one misclassified sample for each case. One greasy spot sample was misclass ified as copper burn, and one melanose sample was misclassified as wind scar. The classification accuracies for greasy spot and melanose were 90.0%. In general, there were only two samples that


105 were misclassified in the 60 samples in the testing set, and t he overall classification accuracy for the model HSI_13 was 96.7%. Same procedures were applied for the other three classification models listed in Table 4 1 and the classification results, along with those from the model HSI_13, are summarized in Table 4 3 Using nine selected hue and saturation texture features, model HS_9 provided the classification accuracies of 90.0% for normal, canker, copper burn, greasy spot, and wind scar, and 70.0% accuracy for melanose. The average accuracy of the model HS_9 was 86.7%. Model I_11 used 11 selected intensity texture features alone. Although it achieved two perfect classification results (100.0%) for copper burn and greasy spot, the performances for the other four conditions were poor, especially for melanose (70.0% ) and wind scar (50.0%). The overall accuracy of the model I_11 was 81.70%, which is the worst among the four models tested in this study. When all 39 texture features were used by model HSI_39, the classification accuracy was achieved as 88.3%, which was higher than those of the models HS_9 and I_11, but lower than that of the model HSI_1 3 (96 .7 %). Based on the results shown in Table 4 3 we could find that classification model using intensity texture features only (model I_11) gave the worst performance w hen compared to the other models. It is likely that the poor performance of the intensity texture features only is due to the variations of the light intensity during the image acquisition. On the other hand, classification model that used hue and saturati on texture features (model HS_9) outperforms the model that used intensity texture features only (model I_11). When intensity texture features were added to hue and saturation features, the performance of the classification models was further improved. Tex ture


106 feature selection is necessary for obtaining better classification accuracy, and this is confirmed by the fact that the model using 13 selected hue, saturation and intensity texture features (model HSI_13) achieved better accuracy than the one that us ing all 39 HSI texture features (model HSI_39). The model HSI_13 emerged as the best one among the four classification models tested in this study, suggesting that it would be best to use a reduced hue, saturation and intensity texture feature set to diffe rentiate different citrus peel conditions. Stability Test of the Classification Model It is important to test high classification accuracy using various texture features because this classification results presented in above section is established statist ically. Moreover, if such stability can be demonstrated by this test, this image analysis method and procedures will be more useful for detecting purpose in predicting citrus disease. Image samples had a fixed order (i.e., one from every thirty samples arr anged in ascending order for the time the images were acquired). Using the order, training samples and testing samples were separated. To test the stability of the classification model, 20 training samples and 10 testing samples were randomly chosen from t he 30 samples for each peel condition, and they were used to train and test the model HSI_14, which gave the best classification performance, following the same procedures described earlier. Ten runs were repeated for the training and testing. The average value and standard deviation in T able 4 4 were 96.0% and 2.3%, respectively. These results indicate that the classification model developed using the 14 selected hue, saturation and intensity texture features is robust in performance, and thus is able to c lassify the new fruit samples according to their peel conditions.


107 Summary Color imaging coupled with texture feature analysis based on color co occurrence method provides a useful means for identifying common diseases on citrus fruit. A color imaging syst em was assembled to acquire RGB images from grapefruits with normal and five peel disorders including canker, copper burn, greasy spot, melanose, and wind scar. Small images covering the interested areas on the fruit surface were extracted from the origina l RGB images, and they were then transformed to hue, saturation, and intensity color representation. Spatial gray level dependence matrices were generated from the hue, saturation, and intensity images. A total of 39 image texture features were determined texture features were developed based on a stepwise discriminant analysis for three color combinations including hue, saturation, and intensity (HSI), hue and saturation (HS), and inten sity (I). Classification models were constructed using the reduced texture feature sets through a discriminant function based on a measure of the generalized squared distance. Significant eliminations of redundant texture features were accomplished through the stepwise discriminant analysis. 13, 9, and 11 texture features were selected for the color combinations of HSI, HS, and I, respectively. The simplification of the texture features largely reduces the computation burden, and it also helps improve the p erformance of classification models. The classification model using intensity texture features only gave the worst accuracy (81.7%), and the model using 13 selected HSI texture features achieved the best classification accuracy (96.7%) among four classific ation models including the one using all 39 HSI texture features. The results suggested that it would be best to use a reduced hue, saturation and intensity texture


108 feature set to differentiate different citrus peel conditions. A stability test for the cla ssification model with the best performance was accomplished by 10 runs using randomly selected training and testing samples. Average classification accuracy and standard deviation in T able 4 4 were 96.0% and 2.3%, respectively, indicating that the classif ication model is robust for classifying new fruit samples according to their peel conditions. This research demonstrated that color imaging and texture feature analysis could be used for differentiating citrus peel diseases under the controlled laboratory lighting conditions. Future studies will explore the utility of these algorithms in outdoor conditions, and develop pattern recognition methods such as self organizing map (SOM) or support vector machines (SVM) for real time application The most significa nt challenge will be created by the inherent variability of color under natural lighting conditions. By eliminating intensity based texture features, this variability can be significantly reduced. However, hue and saturation can be somewhat influenced by l ow lighting conditions. This may point to the need to use cameras with light availability color compensation, supplemental lighting, or night time applications where lighting levels can be controlled.


Figure 4-1. Color image system for acquiring RGB images from citrus samples
Figure 4-2. Procedures for color image analysis

Table 4-1. Texture features selected by stepwise discriminant analysis
Classification Model   Color Feature   Texture Feature Set
HSI_13                 H, S, I         H9, H10, I12, S7, I3, I2, S12, I11, I1, I8, S1, H2, H5
HS_9                   H, S            H9, H10, S7, H5, H11, S12, S11, H7, H13
I_11                   I               I2, I3, I5, I10, I6, I13, I8, I9, I1, I11, I7
HSI_39                 H, S, I         All 39 texture features (H1-H13, S1-S13, I1-I13)

Table 4-2. Classification results using model HSI_13 in Table 4-1
Actual Peel Condition   Normal   Canker   Copper Burn   Greasy Spot   Melanose   Wind Scar   Accuracy (%)
Normal                  10       0        0             0             0          0           100.00
Canker                  0        10       0             0             0          0           100.00
Copper Burn             0        0        10            0             0          0           100.00
Greasy Spot             0        0        1             9             0          0           90.00
Melanose                0        0        0             0             9          1           90.00
Wind Scar               0        0        0             0             0          10          100.00
Total                   10       10       11            9             9          11          96.70

Table 4-3. Classification results in percent correct for all models in Table 4-1
Peel Condition          HSI_13    HS_9     I_11     HSI_39
Normal                  100.00    90.00    80.00    80.00
Canker                  100.00    90.00    90.00    100.00
Copper Burn             100.00    90.00    100.00   90.00
Greasy Spot             90.00     90.00    100.00   90.00
Melanose                90.00     70.00    70.00    70.00
Wind Scar               100.00    90.00    50.00    100.00
Overall Accuracy (%)    96.67     86.70    81.70    88.30

Table 4-4. Classification results for shuffled-data models in percent correct
Run    Canker (%)   Copper (%)   Greasy Spot (%)   Normal (%)   Melanose (%)   Wind Scar (%)   Total (%)
1      100.00       90.00        100.00            100.00       100.00         100.00          98.33
2      100.00       100.00       100.00            80.00        90.00          100.00          95.00
3      100.00       100.00       100.00            100.00       90.00          100.00          98.33
4      80.00        100.00       100.00            90.00        90.00          100.00          93.33
5      100.00       100.00       90.00             100.00       90.00          90.00           95.00
6      100.00       100.00       100.00            100.00       100.00         100.00          100.00
7      100.00       90.00        100.00            100.00       100.00         90.00           96.67
8      100.00       100.00       100.00            90.00        100.00         80.00           95.00
9      100.00       90.00        100.00            90.00        90.00          100.00          95.00
10     80.00        100.00       100.00            100.00       90.00          90.00           93.33
Average Accuracy (%)                                                                           96.00


112 CHAPTER 5 DETECTION OF CITRUS GREENING DISEASE ON ORANGE LEAVES Huanglongbing (HLB), commonly known as citrus greening, is one of the most dangerous diseases that affect citrus production, and citrus greening has threatened to destroy an estimated 60 million trees in Africa and Asia (Ruangwong et al., 2006) Citrus greening was found in Miami Dade County, Florida in August 2005. Florida citrus growers are fighting this disease which has the potential to destroy the state's $9 billion commercial citrus industry (The American Phytopathological Societ y, 2008). Citrus greening is a bacterial disease that affects the phloem system of citrus trees and causes the leaves of infected trees to become yellow, the trees to become unproductive, decline and possibly die within a few years. The bacterium is sprea d by an insect, the citrus psyllid. Citrus greening infects all types of citrus species, cultivars, and hybrids and some citrus relatives. The symptoms of citrus greening usually include blotchy, chlorotic mottling of leaves, yellow shoots, misshapen or lo psided small fruit name huanglongbing means "yellow shoot", which is descriptive of the yellow sectors of infected trees (Gottwald et al., 2007) Currently, there is no cure for citrus greening, but early detection of the disease and appropriate management of the insect vector should alleviate the severity of the greening disease and minimize its spread. To reach this goal, many image processing and computer vision tec hnologies have been developed to achieve the automatic identification of disease symptoms. The design and implementation of these technologies will greatly aid in scouting for the disease, selective chemical application, reducing costs and thus leading to improved productivity and fruit quality.


113 The identification of various plants and crops using image processing techniques has been attempted by several researchers. Haralick et al (1973) used gray level co occurrence features to analyze remotely sensed im ages. They computed gray level co occurrence matrices for a pixel offset equal to one and with four directions(0, 45, 90, 135). For a seven class classification problem, they obtained approximately 80% classification accuracy using texture features. Ta ng et al (1999) developed a texture based weed classification method using Gabor wavelets and neural networks for real time selective herbicide application. The method comprised a low level Gabor wavelet based feature extraction algorithm and a high level neural network based pattern recognition algorithm. The model was specifically developed to classify images into broadleaf and grass categories for real time herbicide application. Their analyses showed that the method is capable of performing textur e based broadleaf and grass classification accurately with 100% classification accuracy. Burks et al. (2000) developed a method for classification of weed species using color texture features and discriminate analysis. The image analysis technique used for this method was the color co occurrence (CCM) method. The method had the ability to disc r iminate between multiple canopy species and was insensitive to leaf scale and orientation. The use of color features in the visible light spectrum provided additional image characteristic features over traditional gray scale representation. The CCM method involved three major mathematical processes: Transformations of an RGB color representation of an image to an equivalent HSI color representation. Generation of col or co occurrence matrices from the HSI pixels


114 Generation of texture features from the CCM matrices. CCM texture feature data models for six classes of ground cover (giant foxtails, crabgrass, velvet leaf, lambs quarter, ivy leaf morning glory, and soil) we re developed and stepwise discriminant analysis techniques were utilized to identify combinations of CCM texture feature variables, which have the highest classification accuracy with the least number of texture variables. A discriminant classifier was tra ined to identify weeds using the models generated. Classification tests were conducted with each model to determine their potential for classifying weed species. Overall classification accuracies above 93% were achieved when using hue and saturation featur es alone. A complete discussion of the CCM approach is found in Shearer and Holmes (1990) Pydipati et al. (2006) analyzed detection in citrus leaves using machine vision. The image data of the leaves selected for disease monitoring was collected. Then, al gorithms based on image processing techniques for feature extraction and classification were designed. Manual feeding of datasets, in the form of digitized RGB color photographs was conducted for feature extraction and training the SAS statistical classifi er. After training the SAS classifier, the test data sets were used to analyze the performance of accurate classification. The overall objective of this research was to develop a machine vision based method for detecting citrus greening on leaves. This ap proach would use color texture features under controlled lighting in order to discriminate between greening and leaf conditions that are commonly confused with greening. This preliminary approach used low level magnification to enhance features. As a resul t, this would be conducted in a laboratory setting. Future studies would use field based detection. Specific objectives implemented to accomplish the overall objective were to:


115 Use a digital color microscope system to collect RGB images from orange leaves with eight conditions (i.e., young flush, normal mature, blotchy mottle, green islands, zinc deficiency, iron deficiency, manganese deficiency and dead). Determine image texture features based on the color co occurrence method (CCM). Create a set of reduce d feature data models through a stepwise elimination process and classify different citrus leaf conditions. Compare the classification accuracies. Materials and Methods Image Acquisition System A Digital Microscope system (VHX 600K, Keyence, JAPAN) was us ed for acquiring RGB images from citrus leaf samples, as shown in Figure 5 1 The imaging system consisted of a halogen lamp (12V, 100W), a zoom lens (C mount lens, OP 51479), a 2.11 million pixel CCD image sensor (1/1.8 inch), a 15 inch Color LCD monitor (TFT, 1600x1200, UXGA), and a computer installed with an image capture function and a hard disk drive unit (image format: JPEG and TIFF, Storage capacity: 700MB). The setup of the light source was designed to minimize specular reflectance and shadow, and t o maximize the contrast of the images. The height of the camera and its focus were adjusted to contain the whole leaf, centered on the main leaf vein. Automatic white balance calibration was conducted using a calibrated white balance function in this syste m before acquiring images from leaf samples. The digital color images were saved in uncompressed JPEG format (1200x1600, 8bit). Image Processing The image analysis technique selected for this study was the CCM method. The use of color image features in th e visible light spectrum provides additional image characteristic features over the traditional gray scale representation. The CCM


116 procedure consists of three primary mathematical processes. First the RGB images of leaves are converted to a hue, saturation and intensity (HSI) color space representation. Intensity is calculated using the mean value of the three RGB values. The hue and saturation values are determined using a geometrical transformation of the chromaticity diagram (Ohta, 1985) In this process, the CIE chromaticity diagram represents a two dimensional hue and saturation space (Wyszecki et al., 1992) The pixel RGB values determine the chromaticity coordinates on the hue and saturation space, wh ich are then used to geometrically calculate the value of hue and saturation. This process has been documented by Shearer (1986) Each pixel map was used to generate a color co occurrence matrix after the H SI image was completed, resulting in three CCM mat rices. That is, one CCM matrix for each of the HSI pixel maps. Through the use of spatial gray occurrence texture analysis method was developed. The gray level co occurrence methodology is a statistical meth od to describe shape by statistically sampling the way certain gray levels occur in relation to other gray levels. Shear and Homes (1990) explained that these matrices measure the probability that a pixel at one particular gray level will occur at a distin ct distance and orientation from any pixel given that pixel has a second particular gray level. For a position operator p we can define a matrix Pij' that counts the number of times a pixel with grey level i occurs at position p from a pixel with grey le vel j For example, if we have four distinct grey levels 0,1,2 and 3, then Figure 5 2 a, where i is the row indicator and j is the possible column indicator in the SGDM matrix If we


117 normalize the matrix P by the total number of pixels so that ea ch is between 0 and 1, we get a gray level co occurrence matrix. The SGDMs are represented by the function where i represents the gray level of location (x,y) in the image I(x,y) and j represents the gray level of the pixel at a distance d and an orientation angle of from location (x,y) An example image matrix I(x,y) with a gray scale ranges from zero to three is shown in Figure 5 2b The hue, saturation and intensity CCM matrices are then used to generate the texture features described by H aralick and Shanmugam (1974) Shearer and Holmes (1990) reported a reduction in the 16 gray scale texture features through elimination of redundant variables. The resulting 11 texture feature equations are defined by Shearer and Holmes (1990) Donohue et a l (1985) added image contrast and modus texture features to those used by Ohta (1985) for a total of thirteen features when classifying cancer tissue. The same equations are used for each of the three CCM matrices, producing 13 texture features for each H SI component and thereby a total of 39 CCM texture statistics. The texture features are identified by a coded variable name where the first letter represents whether it is a hue (H), saturation (S) or intensity (I) feature and the number following represen ts one of the thirteen texture features described in Shearer (1990) As an example, the feature (I 7 ) is a measure of the entropy in the intensity CCM matrix, which represents the amount of order in an image and is calculated by equation 5 1. ( 5 1) The p(i,j) matrix represents the normalized intensity co occurrence matrix and N g represents the total number of intensity levels. The equation for normalizing the co


118 occurrence matrix is given in equation 5 2, wh ere P(i,j,1,0) is the intensity co occurrence matrix. ( 5 2) A physical representation of entropy (uncertainty) may be visualized by comparing a checkerboard like image to an image where one half is black and the other half is white. The latter image is highly ordered having all pixels of the same intensity segregated into two distinct pixels groups, which gives greater certainty of the pixel value of the adjacent pixels. The checkerboard image has a lower amount of order due t o intermixing of black and white squares, which results in a greater level of uncertainty of neighboring pixel values. The lower order image would therefore have more uncertainty and thus a higher entropy measure. Features e xtraction Sixty images were take n of the top surface for each leaf class and centered on the mid. Digital images were stored in uncompressed JPEG format. The three classification models discussed previously were treated as separate classification problems. The 60 images from each class w ere divided into two datasets consisting of 30 samples for training and 30 samples for testing. The samples were first arranged in ascending order for the time the images were acquired. This approach minimizes negative time dependant variability, and reduc es potential for data selection bias between the training and test datasets. A detailed illustration of the image acquisition and classification process is given in Figure 5 3 Algorithms for image segmentation and texture feature generation were


developed in MATLAB. In the initial step, the RGB images of all leaf samples were obtained. To reduce the computational burden with minimal loss of texture feature quality, the image resolution was reduced from 1600 x 1200 pixels to 800 x 600 pixels, and the reduced images were then converted from eight-bit to six-bit-per-channel RGB format. The subsequent steps were repeated for each image in the dataset. After the images were reduced, edge detection of the leaf was completed on each leaf sample image using the MATLAB program. Figure 5-4 exhibits the detailed edge detection process. First, each RGB image was converted to a gray image and then to a binary image. Next, the edge of the binary image was detected using the MATLAB commands 'imerode' and 'imdilate'. Once the edge detection was finished, the image was scanned from left to right for each row in the pixel map, and the area outside the leaf was zeroed to remove any background noise. The images were then converted from RGB format to HSI format. The spatial gray-level dependency matrices (SGDMs) were then generated for each color pixel map of the image, one each for hue, saturation, and intensity. It was decided during preliminary testing that the experiment would use the 0-degree CCM orientation angle and a one-pixel offset; the smaller the offset, the finer the texture measured, so a one-pixel offset is the finest texture measure. From the SGDM matrices, the 39 CCM texture statistics described earlier were generated for each image using the three color feature co-occurrence matrices, as each SGDM matrix provided 13 texture features. A more complete description of this technique can be found in Haralick and Shanmugam (1974) and Shearer and Holmes (1990).
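The following MATLAB sketch outlines the preprocessing steps described above: resizing, binarizing, locating the leaf boundary with morphological erosion and dilation, zeroing the background, and converting to an HSV representation as a stand-in for the hue/saturation/intensity planes. It assumes the Image Processing Toolbox is available; 'leaf.jpg' is a hypothetical file name, and this is not the exact routine used in the study.

% Minimal sketch: leaf image preprocessing before texture feature extraction.
rgb  = imread('leaf.jpg');                     % hypothetical input image
rgb  = imresize(rgb, [600 800]);               % 1600x1200 -> 800x600

gray = rgb2gray(rgb);
bw   = im2bw(gray, graythresh(gray));          % binary leaf mask
% depending on the background brightness, the mask may need inverting (~bw)

se    = strel('disk', 3);
edges = imdilate(bw, se) & ~imerode(bw, se);   % leaf boundary (dilation minus erosion)

mask = repmat(uint8(bw), [1 1 3]);             % zero the area outside the leaf
leaf = rgb .* mask;

hsv = rgb2hsv(im2double(leaf));                % hue, saturation, value planes
H = hsv(:, :, 1);  S = hsv(:, :, 2);  V = hsv(:, :, 3);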

Classification algorithms

Once the texture statistics were generated for each image, SAS statistical analyses were conducted using procedure STEPDISC to reduce redundancy in the texture feature set. The training image dataset was used for the variable reduction analysis. SAS offers procedures for reducing the size of the variable set and for discriminating between classes (SAS, 1985). PROC STEPDISC reduces the number of texture features through a stepwise selection process. The stepwise selection procedure begins with no variables in the classification model (SAS, 1985). At each step of the process, the variables within and outside the model are evaluated. The variable within the model, at that particular step, that contributes least to the model, as determined by the Wilks' Lambda method, is removed from the model; likewise, the variable outside the model that contributes most is added. When no more steps can be taken, the number of variables in the model has been reduced to its final form. Based on these analyses, several data models were created, which are shown in Table 5-1. Model HSI_18 consisted of all conditions, model HSI_15 of all conditions except young flush, and model HSI_14 of all conditions except blotchy mottle and young flush leaves. In the PROC DISCRIM procedure, a discriminant function is established using a measure of the generalized squared distance between the image texture variables and the class texture variable means, and the posterior probability determines the class assignment. The classification criterion may be affected by the pooled covariance matrix of the training-set texture variables and by the prior probabilities of the classification groups.
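
The SAS procedures themselves are not reproduced here, but the same idea can be sketched in MATLAB for readers who want to reproduce the classification step. This is an illustrative stand-in, not the study's code: it assumes the Statistics Toolbox and that the reduced training and test feature matrices and numeric class labels are already loaded under the hypothetical names shown.

    % Linear discriminant classification of the reduced CCM texture features,
    % analogous to SAS PROC DISCRIM (assumed variable names).
    % trainX, testX : observations-by-features matrices of reduced texture features
    % trainY, testY : numeric class labels (e.g., 1-7 for the leaf conditions)
    [predicted, err, posterior] = classify(testX, trainX, trainY, 'linear');

    overallAccuracy = 100 * mean(predicted == testY);
    fprintf('Overall accuracy: %.2f%%\n', overallAccuracy);

    % Per-condition accuracy, as reported in the classification summaries below.
    classes = unique(testY);
    for k = 1:numel(classes)
        idx = (testY == classes(k));
        fprintf('Class %d: %.2f%%\n', classes(k), 100 * mean(predicted(idx) == classes(k)));
    end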

Results and Discussion

Classification of Citrus Disease Conditions

The texture feature dataset contained 39 texture features for each image. The dataset had 420 rows, representing 60 samples from each of the seven classes of leaves. Each row had 39 columns representing the 39 texture features extracted for a particular sample image, plus a unique class label (1 through 7) identifying which of the seven leaf conditions the sample belonged to. To compare classification accuracies under various disease conditions, three models were created, which are shown in Table 5-1. These models represent three different sets of leaf conditions, chosen to isolate leaf conditions that are difficult to discriminate. Table 5-2 shows four additional models that include all leaf conditions except young flush but use various combinations of color texture features. This set of models was selected to isolate the crucial color texture features, which can lead to more efficient feature generation. The training and testing sets for each model in Tables 5-1 and 5-2 were obtained by selecting either intensity, hue and saturation, or all three HSI feature groups from the 39 texture features in the original data files. Once the data models were formed, the SAS procedure STEPDISC was used to reduce the number of texture features included in the models. As can be seen in Table 5-2, significant elimination of redundant variables was accomplished.

For instance, the HSI_18 model had 39 texture features in its unreduced form and was reduced to 18 features through the stepwise reduction process. The simplification of the data model serves several important purposes: 1) it reduces the computational burden of redundant features, 2) it tends to improve the performance of classification algorithms, and 3) it reduces memory and storage demands. The most significant variable reduction was found in the I_8 model, which was reduced to 8 texture features by STEPDISC. The SAS procedure DISCRIM was used to test the classification accuracies of the various data models. Each model was trained and tested using the appropriate image data set, and the classification results were recorded on an individual disease-category basis from the SAS procedure output listing. The results shown in Table 5-3 are the classification summaries for the HSI_18, HSI_15 and HSI_14 models given in Table 5-1. As previously indicated, the test data consisted of 30 images from each category. The overall performance of the HSI_18 model was 86.67%, the lowest accuracy among the three models in Table 5-1; the HSI_15 and HSI_14 models had high classification accuracies (95.56% and 97.33%). Based on the results shown in Table 5-4, the classification model using only intensity texture features gave the worst performance, 81.11% for the I_11 model. In comparison, the HS_10 model achieved 87.78%, the HSI_15 model 95.56% and the HSI_39 model 95.60%; the other models therefore outperformed the model that used only intensity texture features. In Table 5-4 the highest overall performance was 95.60% for HSI_39. This demonstrates that a significant classification improvement occurs when intensity features are used together with hue and saturation, and that there is no loss in accuracy when using the reduced HSI data set rather than the unreduced one.

As shown in Table 5-5, most images were correctly classified into the appropriate category; however, young flush leaves had a very low classification rate of 23%. The negative influence of young flush leaves is further demonstrated by the results in Table 5-3, where the classification accuracy is 86.67%, while the leaf-condition models that exclude young flush leaves have accuracies above 95%. Table 5-6 shows the improved classification accuracy of 95.56% and thus confirms that young flush leaves degraded the overall performance. Table 5-7 shows that the HSI_14 model had the best accuracy (97.33%) of the three leaf-condition models in Table 5-1. However, the HSI_14 model excluded citrus greening blotchy mottle and thus ignored the most important greening identifier. Moreover, there was no significant difference in the classification results between the HSI_15 and HSI_14 models. These effects can be seen in the confusion matrices, where each model exhibits the classification between positive and negative greening symptoms.

The Confusion Matrix for Greening Positive vs. Negative

A consistent analysis of classifier behavior can be provided by the semi-global performance matrix known as the confusion matrix. This matrix provides a quantitative performance representation for each classifier in terms of class recognition. One benefit of a confusion matrix is that it makes it easy to see whether the system is confusing two classes, here the citrus greening symptoms and the non-greening symptoms. The classification results for the confusion matrices obtained under the positive vs. negative greening model are shown in Tables 5-8, 5-9 and 5-10. In general, the HSI_18 model had the lowest classification accuracy among all symptom models. However, in the confusion matrix shown in Table 5-8, discrimination of the citrus greening symptoms had a high success rate (96.67%), whereas the accuracy for the negative (non-greening) class was only 82.67%, giving an overall accuracy of 86.67%.

In the HSI_15 model, the young flush leaves were removed and a 95.56% overall classification accuracy was achieved. This model also has good classification accuracies for both the positive (91.67%) and negative (97.50%) classes, as shown in Table 5-9. Model HSI_14 excluded young flush leaves and blotchy mottle and achieved an increase in classification performance compared with the HSI_18 model (97.33% versus 86.67%). In the confusion matrix shown in Table 5-10, HSI_14, which used all disease conditions except young flush and blotchy mottle, had the same positive greening accuracy as the HSI_18 model, which used all disease conditions; however, its overall accuracy was much higher, at 97.33%. Comparing the models, it is likely that the similarity between young flush leaves and the other conditions reduced the detection accuracy for citrus greening disease.

Stability Test of the Greening Classification Model

From the results stated above, the leaf-condition models were evaluated to determine which scenario performs best in distinguishing greening symptoms. It was also important to evaluate various texture feature combinations to determine which would provide high classification accuracy and demonstrate model stability under varying training and testing conditions. The classification results presented above were obtained using test samples selected in a fixed order. To test the stability of the classification model, 30 training samples and 30 testing samples were randomly chosen from the 60 samples of each condition and used to train and test a selected model with the same procedures described earlier. Ten such runs were repeated. In this research, stability tests were performed for the HSI_15 model, since this model had demonstrated good performance on the leaf conditions of greatest interest. The average accuracy shown in Table 5-11 was 94.06%.

These results demonstrated that the classification model excluding young flush leaves was robust under varying leaf sample conditions and therefore should provide a viable classification of greening conditions.

Summary

Data analysis based on the color co-occurrence method is useful for detection of citrus greening disease. A color imaging system was used to obtain RGB images from citrus leaves representing two normal leaf conditions, young flush and mature. In addition, five disease and deficiency conditions, greening blotchy mottle, green islands, manganese deficiency, iron deficiency, and zinc deficiency, were collected. Images of the leaf surface were extracted from the original RGB images and then converted into the hue, saturation, and intensity (HSI) color space representation. Each HSI image was used to generate spatial gray-level dependence matrices, and once the SGDMs were generated, a total of 39 image texture features were obtained from each citrus leaf sample. Algorithms for selecting useful texture features were developed based on a stepwise discriminant analysis for three condition combinations: all conditions, all conditions excluding young flush, and all conditions excluding blotchy mottle and young flush. Classification models were then constructed from the reduced texture feature sets through a discriminant function based on a measure of the generalized squared distance. Beneficial elimination of redundant texture features was accomplished through the stepwise discriminant analysis, and various texture feature models were selected from the HSI color combinations. The elimination of redundant texture features significantly reduces the computational burden and also helps improve the performance of the classification models. The classification model excluding blotchy mottle and young flush (HSI_14) gave the best accuracy (97.33%), while the HSI_18 model achieved the worst classification accuracy (86.67%).

When only the young flush condition was excluded, the classification accuracy was as high as 95.60%. These results suggest that the young flush samples collected in the fall were confused with normal mature leaves. This can also be seen in the confusion matrix accuracies in Tables 5-8, 5-9 and 5-10: the HSI_18 model had the lowest classification accuracy, but its success rate for positive greening disease was 96.67%, the same as or higher than the other models. A stability test for the best-performing classification model was carried out with 10 runs using randomly selected training and testing samples. The average classification accuracy was 94.06%, indicating that the classification model is robust for classifying new citrus leaf samples according to their conditions. For further study of the influence of young flush, it is suggested that a new model built from a mixed data set of young flush and normal mature leaves be evaluated and compared with the model that excluded young flush leaves. This research demonstrated that color imaging and texture feature analysis can be used at low magnification to differentiate citrus greening symptoms from other leaf conditions. Future studies will explore the utility of these algorithms under outdoor conditions.

Figure 5-1. Digital microscope system for acquiring RGB images from citrus leaf samples

Figure 5-2. Gray level dependence example: (a) SGDM for different orientations, (b) gray level image

Figure 5-3. Procedures for color image analysis

Figure 5-4. Procedures for leaf edge detection

Table 5-1. Texture feature models selected by stepwise discriminant analysis for fall season

Classification Condition                              | Classification Model(1) | Color Feature(2) | Texture Feature Set(3)
All disease conditions                                | HSI_18 | H, S, I | S4, I2, H7, S13, H2, H9, S5, I7, S7, I9, S8, I1, I10, H4, I6, S6, H8, I13
All conditions except young flush                     | HSI_15 | H, S, I | S5, I2, H7, H2, S6, S4, H9, S8, I6, S13, H4, I4, I13, S7, I7
All conditions except blotchy mottle and young flush  | HSI_14 | H, S, I | S5, I2, H7, H2, S4, H9, S13, S7, I7, I1, I9, S8, I10, I6

1. Classification model designation based on the color features in the model and the total number of variables selected by STEPDISC.
2. Color texture features included in the initial data set prior to reduction; 13 variables per color texture feature set.
3. Final texture features selected, given in order of discriminant power.

Table 5-2. Texture feature models for all conditions except young flush, fall season

Classification Model(1) | Color Feature(2) | Texture Feature Set(3)
HSI_15 | H, S, I | S5, I2, H7, H2, S6, S4, H9, S8, I6, S13, H4, I4, I13, S7, I7
HS_10  | H, S    | S5, H7, H5, H12, S4, S7, H8, S8, H3, S11
I_8    | I       | I2, I8, I9, I6, I5, I7, I10, I1
HSI_39 | H, S, I | All 39 texture features (H1-H13, S1-S13, I1-I13)

1. Classification model designation based on the color features in the model and the total number of variables selected by STEPDISC.
2. Color texture features included in the initial data set prior to reduction; 13 variables per color texture feature set.
3. Final texture features selected, given in order of discriminant power.

Table 5-3. Classification summary in percent correct for all models in Table 5-1

Disease Condition     | HSI_18 | HSI_15 | HSI_14
Blotchy mottle        |  96.67 |  90.00 |    -
Islands               |  96.67 |  93.33 |  96.67
Iron deficiency       |  90.00 | 100.00 |  90.00
MN deficiency         | 100.00 |  96.67 | 100.00
Zinc deficiency       | 100.00 | 100.00 | 100.00
Normal                | 100.00 |  93.33 | 100.00
Young flush           |  23.33 |    -   |    -
Overall Accuracy (%)  |  86.67 |  95.56 |  97.33

Table 5-4. Classification summary in percent correct for all models in Table 5-2

Disease Condition     | HSI_15 | HS_10  | I_11  | HSI_39
Blotchy mottle        |  90.00 |  70.00 | 70.00 |  96.67
Islands               |  93.33 |  76.67 | 73.33 |  93.33
Iron deficiency       | 100.00 | 100.00 | 86.67 |  93.33
MN deficiency         |  96.67 |  93.33 | 93.33 |  96.67
Zinc deficiency       | 100.00 | 100.00 | 83.33 |  96.67
Normal                |  93.33 |  86.67 | 80.00 |  96.67
Overall Accuracy (%)  |  95.56 |  87.78 | 81.11 |  95.60

Table 5-5. Classification results for the HSI_18 model in Table 5-1 (rows: actual leaf condition; columns: number of samples classified into each condition)

Actual Leaf Condition | Blotchy mottle | Islands | Zinc def. | Iron def. | MN def. | Normal | Young flush | Accuracy (%)
Blotchy mottle        | 29 |  0 |  0 |  0 |  0 |  1 | 0 |  96.67
Islands               |  0 | 29 |  1 |  0 |  0 |  0 | 0 |  96.67
Zinc deficiency       |  0 |  0 | 30 |  0 |  0 |  0 | 0 | 100.00
Iron deficiency       |  0 |  0 |  1 | 27 |  0 |  0 | 2 |  90.00
MN deficiency         |  0 |  0 |  0 |  0 | 30 |  0 | 0 | 100.00
Normal                |  0 |  0 |  0 |  0 |  0 | 30 | 0 | 100.00
Young flush           |  4 |  0 |  0 |  3 |  1 | 15 | 7 |  23.33
Total                 | 33 | 29 | 32 | 27 | 31 | 46 | 9 |  86.67

Table 5-6. Classification results for the HSI_15 model in Table 5-1 (rows: actual leaf condition; columns: number of samples classified into each condition)

Actual Leaf Condition | Blotchy mottle | Islands | Zinc def. | Iron def. | MN def. | Normal | Accuracy (%)
Blotchy mottle        | 27 |  0 |  0 |  0 |  0 |  3 |  90.00
Islands               |  0 | 28 |  1 |  1 |  0 |  0 |  93.33
Zinc deficiency       |  0 |  0 | 30 |  0 |  0 |  0 | 100.00
Iron deficiency       |  0 |  0 |  0 | 30 |  0 |  0 | 100.00
MN deficiency         |  1 |  0 |  0 |  0 | 29 |  0 |  96.67
Normal                |  2 |  0 |  0 |  0 |  0 | 28 |  93.33
Total                 | 30 | 28 | 31 | 31 | 29 | 31 |  95.56

Table 5-7. Classification results for the HSI_14 model in Table 5-1 (rows: actual leaf condition; columns: number of samples classified into each condition)

Actual Leaf Condition | Islands | Zinc def. | Iron def. | MN def. | Normal mature | Accuracy (%)
Islands               | 29 |  1 |  0 |  0 |  0 |  96.67
Zinc deficiency       |  0 | 30 |  0 |  0 |  0 | 100.00
Iron deficiency       |  0 |  3 | 27 |  0 |  0 |  90.00
MN deficiency         |  0 |  0 |  0 | 30 |  0 | 100.00
Normal mature         |  0 |  0 |  0 |  0 | 30 | 100.00
Total                 | 29 | 34 | 27 | 30 | 30 |  97.33

Table 5-8. Confusion matrix in percent correct for the HSI_18 model in Table 5-1

Actual value          | Predicted positive for greening (blotchy mottle, islands) | Predicted negative for greening (young flush, normal, MN, IR, ZN) | Total
True (success rate)   | 58/60 (96.67%)  | 124/150 (82.67%) | 182/210 (86.67%)
False (fail rate)     | 26/150 (17.33%) | 2/60 (3.33%)     | 28/210 (13.33%)

Table 5-9. Confusion matrix in percent correct for the HSI_15 model in Table 5-1

Actual value          | Predicted positive for greening (blotchy mottle, islands) | Predicted negative for greening (normal, MN, IR, ZN) | Total
True (success rate)   | 55/60 (91.67%)  | 117/120 (97.50%) | 172/180 (95.56%)
False (fail rate)     | 3/120 (2.50%)   | 5/60 (8.33%)     | 8/180 (4.44%)

Table 5-10. Confusion matrix in percent correct for the HSI_14 model in Table 5-1

Actual value          | Predicted positive for greening (islands) | Predicted negative for greening (normal, MN, IR, ZN) | Total
True (success rate)   | 29/30 (96.67%)  | 117/120 (97.50%) | 146/150 (97.33%)
False (fail rate)     | 3/120 (2.50%)   | 1/30 (3.33%)     | 4/150 (2.67%)

Table 5-11. Classification results in percent correct for the HSI_15 model over ten randomly shuffled data sets

Run | Blotchy mottle (%) | Islands (%) | Normal (%) | MN (%) | Zinc (%) | Iron (%) | Total (%)
 1  | 83.33 |  83.33 |  93.33 | 100.00 |  86.67 | 100.00 | 91.11
 2  | 90.00 |  96.67 |  96.67 |  93.33 |  96.67 | 100.00 | 95.56
 3  | 73.33 | 100.00 |  96.67 |  96.67 |  90.00 | 100.00 | 92.78
 4  | 83.33 |  90.00 |  96.67 |  86.67 |  96.67 |  90.00 | 90.56
 5  | 83.33 | 100.00 |  96.67 | 100.00 | 100.00 | 100.00 | 96.67
 6  | 83.33 |  96.67 |  96.67 | 100.00 | 100.00 |  90.00 | 94.44
 7  | 93.33 |  96.67 |  96.67 |  96.67 |  90.00 |  96.67 | 95.00
 8  | 80.00 |  83.33 | 100.00 |  96.67 |  90.00 |  93.33 | 90.56
 9  | 90.00 | 100.00 |  83.33 | 100.00 |  93.33 | 100.00 | 94.44
10  | 90.67 | 100.00 |  96.67 |  96.67 |  96.67 | 100.00 | 97.78
Average Accuracy (%) | 86.67 | 94.67 | 95.34 | 96.67 | 94.00 | 97.00 | 94.06

CHAPTER 6
DETECTION OF THE DISEASE USING PATTERN RECOGNITION METHODS

Early detection of citrus diseases is important for citrus fruit production and quality. In Florida, citrus groves are under attack from two major diseases, citrus canker and citrus greening, which degrade fruit quality. These negative impacts reduce the profitability of the citrus industry and threaten the agricultural economy of the state of Florida. Early detection of the diseases can therefore reduce their spread and minimize losses for citrus growers. To reduce labor cost and improve detection accuracy, pattern recognition methods have demonstrated success in a variety of fields such as aerospace, defense, medicine, neurobiology, and linguistics. Artificial intelligence is an approach that mimics human intelligence to build learning ability, reasoning, and perception, and research on artificial intelligence has developed into a discipline known as intelligent systems. In agricultural engineering, many researchers have applied pattern recognition methods to agricultural management. Park et al. (2007) detected fecal contamination in the visceral cavity of broiler carcasses using a pattern recognition method based on Fisher linear discriminant analysis; images of the poultry carcasses were collected with hyperspectral imaging. Their analysis showed that the method is capable of detecting fecal contamination on the surface of broiler carcasses with 98.9% classification accuracy. Cheng et al. (2003) developed an approach for fruit and vegetable defect inspection. The hyperspectral imaging classification techniques used in that study were principal component analysis (PCA) and the Fisher linear discriminant (FLD) method.

These methods have the ability to maximize the representation and classification power of the extracted feature bands. The hyperspectral image features provided a high-dimensional feature space and reflectance properties. PCA was employed to reduce the hyperspectral dimension, and FLD techniques were used to classify wholesome and unwholesome objects. The overall classification accuracy using the FLD solution alone was 90%, while the combined PCA-FLD solution reached 93.3%; integrating PCA with FLD therefore improved classification accuracy. H. Zhang et al. (2007) analyzed fungal-infected wheat kernels using a support vector machine (SVM). The image data of wheat kernels were collected by a near-infrared reflectance hyperspectral imaging system. After four features were extracted from the input images, algorithms based on principal component analysis (PCA) were designed to reduce the dimensionality of the pattern vectors, and the NIR hyperspectral image dataset was used with the SVM classifier. The overall classification accuracy was 94.8%, with 531 kernels correctly classified and 29 misclassified. Pydipati et al. (2005) developed disease detection in citrus leaves using statistical and neural network classifiers. They used co-occurrence matrices to extract texture features from HSI images, and a SAS classifier was trained on the HSI feature dataset. After the image datasets were reduced, the classification results of the neural network method and the SAS classifier were compared: the SAS classifier achieved accuracies above 95% for all classes, while the neural network algorithms achieved accuracies above 90% for all classes. There are various approaches to pattern recognition; in this study, three approaches were compared to find an optimal one:

Linear models for classification: Fisher's linear discriminant analysis method
Neural networks for classification: back-propagation neural network method
Nonlinear kernel for classification: support vector machine (SVM)

The objective of this study is to find a pattern recognition method for detection of citrus diseases. As a preliminary study, image analysis techniques based on the color co-occurrence method were developed and three pattern recognition methods were compared for best disease image classification. Images were acquired under controlled lighting conditions and low-level magnification to enhance features. Specific objectives were to:

Collect two image data sets, of citrus canker and of citrus greening diseases.
Evaluate the color co-occurrence method for disease detection.
Develop various pattern recognition algorithms for classification of the citrus disease conditions based on the features obtained from the color co-occurrence method.
Compare the classification accuracies of the various pattern recognition classifiers.

Materials and Methods

Image Acquisition System

Different hardware systems were used to acquire RGB images for canker and greening disease detection. First, canker images were acquired with the color image acquisition system shown in Figure 6-1.

The imaging system consisted of two 13 W high-frequency sealed fluorescent lights (SL Series, StockerYale, Salem, NH, USA), a zoom lens (Zoom 7000, Navitar, Rochester, NY, USA), a 3-CCD RGB color camera (CV-M90, JAI, San Jose, CA, USA), a 24-bit color frame grabber board with 480x640 pixel resolution (PC-RGB, Coreco Imaging, St. Laurent, Quebec, Canada), and a computer with image capture software installed. The lighting setup was designed to minimize specular reflectance and shadow and to maximize image contrast. The height of the camera and its focus were adjusted to contain the image of the whole fruit, with an approximately 100 mm x 100 mm field of view. Automatic white balance calibration was conducted using a calibrated white balance card before acquiring images from the fruit samples. The digital color images were saved in uncompressed BMP format. Second, a digital microscope system (VHX-600K, Keyence, Japan) was used to acquire RGB images from the greening samples, as shown in Figure 6-2. This system consisted of a halogen lamp (12 V, 100 W), a zoom lens (C-mount lens, OP-51479), a 2.11-million-pixel CCD image sensor (1/1.8 inch), a 15-inch color LCD monitor (TFT, 1600x1200, UXGA), and a computer with an image capture function and a hard disk drive unit (image formats: JPEG and TIFF; storage capacity: 700 MB). The light source was set up to minimize specular reflectance and shadow and to maximize image contrast, and the camera height and focus were adjusted to contain the whole sample. Automatic white balance calibration was conducted using the system's calibrated white balance function before acquiring images. The digital color images were saved in JPEG format (1200x1600, 8 bit, and 480x640, 8 bit).

Image Analysis

The methodology employed for citrus canker and citrus greening classification is very similar; Figures 6-3 and 6-4 illustrate the canker and greening classification procedures.

However, the pre-processing differs: the citrus canker approach focuses on a specific sub-region, while the citrus greening approach examines the whole leaf surface. From the original RGB color images (480x640 pixels), citrus canker region-of-interest (ROI) images were extracted, focused on the peel conditions of interest. The ROI selection was started manually by establishing the center point of a 64x64 pixel window on the original image. Once extracted, each image was converted from RGB (red, green, blue) to HSI (hue, saturation, intensity) color format. This approach retains the useful image data and significantly reduces the computational burden for the subsequent data analysis procedures. The citrus greening analysis has additional pre-processing steps. To reduce the computational burden with minimal loss of texture feature quality, the image resolution was reduced from 1600x1200 pixels to 800x600 pixels, and the reduced images were then converted from eight-bit to six-bit-per-channel RGB format. The subsequent steps were repeated for each image in the dataset. After the images were reduced, edge detection of the leaf boundary was completed on each leaf sample image. Once edge detection was finished, the image was scanned from left to right for each row in the pixel map, and the area outside the leaf was zeroed to remove any background noise. The images were then converted from RGB format to HSI format. The color co-occurrence method (CCM) was used to extract features from the digitized HSI images. Shearer and Holmes (1990) described 39 CCM texture statistics computed from spatial gray-level dependence matrices (SGDMs); CCM texture statistics were generated from the SGDM of each HSI color feature, and each of the three matrices was evaluated with thirteen texture statistic measures, resulting in 39 texture features per image.

The SGDM is a measure of the probability that a pixel at one particular gray level will occur at a given distance and orientation angle from another pixel having a second particular gray level. The SGDM is represented by the function P(i, j | d, θ), where i and j represent gray values, d is the offset distance and θ is the orientation angle. Haralick and Shanmugam (1974) and Shearer and Holmes (1990) give detailed descriptions of this technique. After the 39 texture features were obtained for each image, feature selection was used to eliminate redundancy in the texture feature set. The SAS procedure STEPDISC reduces the size of the variable set and finds the most significant variables for discriminating samples into different classes. The stepwise discriminant analysis begins with no variables in the classification model. At each step of the process, the variables within and outside the model are evaluated: the variable within the model that contributes least, as determined by the Wilks' Lambda method, is deleted from the model, while the variable outside the model that contributes most and passes the test for admission is included. When no more steps can be taken, the number of variables in the model has been reduced to its final form. Burks et al. (2000) showed that classification accuracies increased when only hue or saturation information was used in the classification models. Thus, three color feature combinations, 1) hue, saturation, and intensity (H, S, I), 2) hue and saturation (H, S), and 3) intensity (I) only, were used to perform the texture feature selections. These three color combinations have demonstrated high classification accuracies in other plant discrimination applications (Burks et al., 2000; Pydipati et al., 2006).
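
MATLAB provides an HSV conversion (rgb2hsv) but no built-in HSI transform, so a helper function is typically written for this step. The sketch below uses one common RGB-to-HSI formulation; it is an assumption, since the exact conversion used in this work is not reproduced here.

    function hsi = rgb2hsi(rgb)
    % Convert an RGB image (uint8 or double) to HSI with all channels in [0, 1].
    % One common formulation; the study's exact conversion is not specified here.
    rgb = im2double(rgb);
    r = rgb(:,:,1); g = rgb(:,:,2); b = rgb(:,:,3);

    num   = 0.5 * ((r - g) + (r - b));
    den   = sqrt((r - g).^2 + (r - b).*(g - b));
    theta = acos(num ./ (den + eps));          % angle in radians

    H = theta;
    H(b > g) = 2*pi - H(b > g);                % hue wraps when blue exceeds green
    H = H / (2*pi);

    S = 1 - 3 .* min(cat(3, r, g, b), [], 3) ./ (r + g + b + eps);
    I = (r + g + b) / 3;

    hsi = cat(3, H, S, I);
    end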

Classification Using Pattern Recognition Methods

Once the reduced texture features were obtained from the STEPDISC procedure, two datasets were created, one for training and one for testing. Tables 6-1 and 6-5 describe the datasets for citrus canker disease and citrus greening disease, respectively. For canker disease classification, the rows of the training dataset consisted of 20 samples from each of the six classes of peel condition discussed earlier, and the columns represented the reduced texture features; the test dataset had 10 samples per class with the same reduced texture features. The datasets for citrus greening had six leaf-condition classes with 30 samples per class for training and 30 for testing. These datasets were analyzed using Fisher linear discriminant analysis (FDA), a neural network based on back-propagation, and support vector classification (SVC). All analysis was done using MATLAB R2007b (The MathWorks, Inc., Natick, Mass.) and the MATLAB PRTools toolbox (Faculty of Applied Physics, Delft University of Technology, The Netherlands). PRTools is a toolkit for MATLAB released for the development and evaluation of pattern recognition algorithms.
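
As an illustration of how these three classifiers can be trained and tested with PRTools, a minimal sketch is given below. The variable names are assumptions (the reduced feature matrices and numeric labels are presumed to be loaded), and the number of hidden units for the back-propagation network is arbitrary; it is not the exact configuration used in this study.

    % Minimal PRTools sketch (assumed variable names): trainX/testX are
    % observations-by-features matrices of reduced CCM features, trainY/testY
    % are numeric class labels.
    trainSet = dataset(trainX, trainY);        % prdataset() in newer PRTools releases
    testSet  = dataset(testX,  testY);

    wFDA = fisherc(trainSet);                  % Fisher linear discriminant
    wBP  = bpxnc(trainSet, 10);                % back-propagation net, assumed 10 hidden units
    wSVM = svc(trainSet);                      % support vector classifier, default kernel

    for w = {wFDA, wBP, wSVM}
        acc = 100 * (1 - testSet * w{1} * testc);   % testc returns the error fraction
        disp(acc);
    end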

Classification Results

Canker Disease Classification Based on Pattern Recognition Algorithms

The texture feature selection results are summarized in Table 6-1. Four classification models were developed using the texture feature sets selected from the three color combinations [(H, S, I), (H, S), and (I)]. The variables listed in Table 6-1 were selected by the SAS STEPDISC procedure for the first three models, with the variables arranged in descending order of importance. The classification models were named using the color features involved in the texture feature selection followed by the total number of selected texture features. For example, model HSI_13 consists of a reduced set of 13 hue, saturation and intensity texture features. As shown in Table 6-1, significant elimination of redundant texture features was accomplished through the stepwise discriminant analysis: nine and eleven texture features were selected for models HS_9 and I_11, respectively. This simplification reduces the computational burden due to redundant data and also helps improve the performance of the classification models. In addition to the three models described above, a classification model using all 39 HSI texture features was included for comparison. Thus, four classification models were used to differentiate citrus peel diseases, and each model was evaluated independently for classification performance. The results shown in Table 6-2 were obtained using the Fisher discriminant analysis (FDA) classifier. Better overall classification rates were achieved by models HSI_13 and HSI_39, which had overall accuracies of 91.67% and 93.33%. The results for the back-propagation neural network in Table 6-3 also show good overall accuracy: model HSI_13 achieved 95.00% and model HSI_39 achieved 93.33%. Table 6-4 shows the classification results for the support vector machine (SVM) algorithm. This approach showed lower overall accuracy than the other methods; only model HSI_13 achieved good accuracy, at 95.00%. These results show that model HSI_13 was the best model across all classifiers for canker disease. Model HSI_39 also obtained good classification accuracy with the FDA and neural network methods, but it is less useful in real-world applications, since it would require more computation time because of the larger number of texture features.

The best overall performance was the SVM using model HSI_13, which had an overall classification accuracy of 95.00% with no individual class below 80%.

Citrus Greening Disease Classification Based on Pattern Recognition Algorithms

The dataset for citrus greening consisted of 39 texture features for each image. The dataset had 360 rows, representing 60 samples from each of the six classes of leaves. Each row had 39 columns representing the 39 texture features extracted for a particular sample image, plus a unique class label (1 through 6) identifying the leaf condition; for example, 5 represented normal mature leaves and 6 represented leaves deficient in zinc. Table 6-5 shows the classification models, which are named using the total number of selected texture features, as for the citrus canker models. A classification model using all 39 HSI texture features was again included for comparison with the reduced data models. The results shown in Tables 6-6, 6-7 and 6-8 were obtained using the Fisher discriminant analysis (FDA) classifier, the back-propagation (BP) neural network, and the support vector machine (SVM), respectively. In Tables 6-6 and 6-7, models HSI_15 and HSI_39 achieved excellent overall classification rates: in Table 6-6 the HSI_15 and HSI_39 models had overall accuracies of 95.55% and 93.89%, while in Table 6-7 they had overall accuracies of 93.33% and 93.89%. The results for the SVM method in Table 6-8 showed lower overall accuracy than the other classifiers; only models HSI_15 and HS_9 reached accuracies above 80%.

Model HS_9 had the best overall SVM accuracy at 84.44%, which was still lower than the other classification approaches. The SVM method was therefore less suitable for detecting citrus greening disease, since blotchy mottle confused the SVM classifier and significantly reduced the accuracy; when blotchy mottle and other disease classes have very similar patterns, the result is low accuracy. Hence, the FDA classifier using model HSI_15 was the best model for detection of citrus greening disease. Although model HSI_39 also performed well in both cases, it may increase the computation time for training and classification. As with the canker disease classifier above, the reduced HSI texture model was best for greening detection; in this case, however, the FDA classifier outperformed the SVM.

Summary

Pattern recognition combined with the color co-occurrence texture method is a useful approach for detection of citrus diseases. Digitized RGB images of various citrus disease conditions were obtained using an image acquisition system. Texture features containing useful information for disease classification were extracted from the pre-processed RGB images after conversion into the hue, saturation, and intensity (HSI) color space representation. For each HSI image, three spatial gray-level dependence matrices (SGDMs) were generated, and a total of 39 image texture features were obtained from each image sample. A stepwise discriminant analysis was used to find useful texture features from three color combinations: 1) hue, saturation, and intensity (HSI), 2) hue and saturation (HS), and 3) intensity (I). Classification models were constructed from the reduced texture feature sets through a discriminant function based on a measure of the generalized squared distance. Various texture feature models were selected from the HSI color combinations.

The elimination of redundant texture features significantly reduces the computational burden and also improves the performance of the classification models. For canker detection, the reduced classification model (HSI_13) gave the best accuracy (95.00%) with both the back-propagation neural network method and the support vector machine. For citrus greening, the reduced model (HSI_15) showed an overall accuracy above 93% with the back-propagation neural network method and the Fisher discriminant analysis method, while the HSI_15 model with the support vector machine had a lower accuracy of 82.78%. These results suggest that the reduced HSI models are useful for building a citrus disease detection system. In general, the back-propagation neural network method performed well, above 93%, for both disease conditions. The support vector machine gave good classification results of 95% for citrus canker but a poor accuracy of 82.78% for citrus greening. The back-propagation method can therefore be a good choice for disease detection: it showed good performance, and the algorithm was also simple to implement in a detection system. In future studies, these methods will be evaluated on other citrus disease conditions as well as under outdoor lighting conditions, and the stability of the algorithms will be tested. In conclusion, this research demonstrated that color imaging, texture feature analysis and pattern recognition can be used in the laboratory to classify citrus disease conditions.

Figure 6-1. Color image system for acquiring RGB images from citrus samples

Figure 6-2. Digital microscope system for acquiring RGB images from citrus disease samples

Figure 6-3. Procedures for color image analysis for citrus canker

Figure 6-4. Procedures for color image analysis for citrus greening

Table 6-1. Texture feature models selected by SAS stepwise analysis for citrus canker

Classification Model | Color Feature | Texture Feature Set
HSI_13 | H, S, I | H9, H10, I12, S7, I3, I2, S12, I11, I1, I8, S1, H2, H5
HS_9   | H, S    | H9, H10, S7, H5, H11, S12, S11, H7, H13
I_11   | I       | I2, I3, I5, I10, I6, I13, I8, I9, I1, I11, I7
HSI_39 | H, S, I | All 39 texture features (H1-H13, S1-S13, I1-I13)

Table 6-2. Canker disease classification results in percent correct for all models using FDA

Peel Condition        | HSI_13 | HS_9  | I_11   | HSI_39
Normal                |  90.00 | 90.00 |  80.00 |  80.00
Canker                |  90.00 | 90.00 |  90.00 | 100.00
Copper burn           | 100.00 | 60.00 | 100.00 | 100.00
Greasy spot           |  90.00 | 90.00 | 100.00 |  90.00
Melanose              |  80.00 | 70.00 |  70.00 |  90.00
Wind scar             | 100.00 | 90.00 |  10.00 | 100.00
Overall Accuracy (%)  |  91.67 | 81.67 |  75.00 |  93.33

Table 6-3. Canker disease classification results in percent correct for all models using the BP neural network

Peel Condition        | HSI_13 | HS_9   | I_11   | HSI_39
Normal                | 100.00 |  90.00 |  80.00 | 100.00
Canker                | 100.00 |  70.00 |  90.00 |  90.00
Copper burn           | 100.00 |  70.00 | 100.00 | 100.00
Greasy spot           | 100.00 |  90.00 | 100.00 |  90.00
Melanose              |  70.00 |  60.00 |  70.00 |  90.00
Wind scar             | 100.00 | 100.00 |  20.00 |  90.00
Overall Accuracy (%)  |  95.00 |  80.00 |  76.67 |  93.33

Table 6-4. Canker disease classification results in percent correct for all models using SVMs

Peel Condition        | HSI_13 | HS_9  | I_11   | HSI_39
Normal                | 100.00 | 90.00 |  40.00 |  90.00
Canker                | 100.00 | 90.00 |  90.00 |  50.00
Copper burn           | 100.00 | 80.00 | 100.00 |  30.00
Greasy spot           |  90.00 | 70.00 | 100.00 |  80.00
Melanose              |  80.00 | 50.00 |  50.00 |  90.00
Wind scar             | 100.00 | 10.00 |  30.00 | 100.00
Overall Accuracy (%)  |  95.00 | 65.00 |  68.33 |  73.33

Table 6-5. Texture feature models selected by SAS stepwise analysis for citrus greening

Classification Model | Color Feature | Texture Feature Set
HSI_15 | H, S, I | S5, I2, H7, H2, S6, S4, H9, S8, I6, S13, H4, I4, I13, S7, I7
HS_10  | H, S    | S5, H7, H5, H12, S4, S7, H8, S8, H3, S11
I_8    | I       | I2, I8, I9, I6, I5, I7, I10, I1
HSI_39 | H, S, I | All 39 texture features (H1-H13, S1-S13, I1-I13)

Table 6-6. Citrus greening classification results in percent correct for all models using FDA

Leaf Condition        | HSI_15 | HS_9   | I_11  | HSI_39
Blotchy mottle        |  83.33 |  63.33 | 53.33 |  93.33
Islands               |  93.33 |  80.00 | 76.67 |  90.00
Iron deficiency       | 100.00 | 100.00 | 86.67 |  93.33
MN deficiency         | 100.00 |  83.33 | 86.67 |  96.67
Zinc deficiency       | 100.00 | 100.00 | 86.67 |  96.67
Normal                |  96.67 |  93.33 | 73.33 |  93.33
Overall Accuracy (%)  |  95.55 |  86.67 | 77.22 |  93.89

Table 6-7. Citrus greening classification results in percent correct for all models using BP

Leaf Condition        | HSI_15 | HS_9  | I_11  | HSI_39
Blotchy mottle        |  83.33 | 73.33 | 66.67 |  80.00
Islands               |  86.67 | 70.00 | 70.00 | 100.00
Iron deficiency       | 100.00 | 96.67 | 90.00 | 100.00
MN deficiency         |  96.67 | 86.67 | 86.67 |  96.67
Zinc deficiency       | 100.00 | 96.67 | 83.33 |  93.33
Normal                |  93.33 | 83.33 | 73.33 |  93.33
Overall Accuracy (%)  |  93.33 | 84.45 | 78.33 |  93.89

Table 6-8. Citrus greening classification results in percent correct for all models using SVM

Leaf Condition        | HSI_15 | HS_9   | I_11  | HSI_39
Blotchy mottle        |  60.00 |  63.33 | 46.67 |  60.00
Islands               |  90.00 |  80.00 | 66.67 |  36.67
Iron deficiency       | 100.00 | 100.00 | 86.67 |  66.67
MN deficiency         |  80.00 |  83.33 | 66.67 |  43.33
Zinc deficiency       |  70.00 |  93.33 | 50.00 |  76.67
Normal                |  96.67 |  86.67 | 80.00 |  36.67
Overall Accuracy (%)  |  82.78 |  84.44 | 66.11 |  53.34

CHAPTER 7
CLASSIFICATION METHODS OF CITRUS GREENING BY HYPERSPECTRAL IMAGING

Citrus greening, also known as Huanglongbing (HLB), is one of the most destructive diseases of citrus. It can cause tree decline, death, yield loss and lost marketability. Once citrus trees are infected, fruit yield and quality are greatly reduced, and the trees become susceptible to other diseases and health problems. Furthermore, fruit from infected trees is unmarketable and not suitable for juice processing because of its acidity and bitter taste (Polek et al., 2007). In recent years, citrus greening has become a serious threat in the citrus-growing regions of Florida, and growers are concerned about the large costs of tree loss, scouting, and the chemicals used in attempts to control the disease. Florida citrus growers are desperately fighting this plant disease, which has the potential to destroy the state's $9 billion commercial citrus industry (The American Phytopathological Society, 2008). Citrus greening is a bacterial disease that affects the phloem system of citrus trees; it causes the leaves of infected trees to become yellow and the trees to become unproductive, decline, and possibly die within a few years. The bacterium is spread by an insect, the citrus psyllid. Citrus greening infects all citrus species, cultivars, and hybrids, as well as some citrus relatives. The symptoms usually include blotchy, chlorotic mottling of leaves, yellow shoots, and misshapen or lopsided small fruit. The name Huanglongbing means "yellow shoot", which is descriptive of the yellow sectors of infected trees (Gottwald et al., 2007). Unfortunately, there is no treatment or prevention for citrus greening in infected trees. Nevertheless, early detection of the disease and appropriate management of the insect vector should alleviate the severity of the greening disease and minimize its spread.

Crops become stressed when infections, such as viral infections, or physiological factors, such as air pollution, adversely affect growth, development, and yield. These stresses are expressed in various ways. For example, problems in water balance control can slow photosynthesis, reduce evapotranspiration, and raise leaf surface temperature (Nilsson, 1995). Other symptoms include morphological changes such as leaf curling, changes in leaf angle, wilting, chlorosis, discoloration of leaves and fruit, and premature fruit drop. Because of these observable symptoms, growers could in principle assess the condition of their crops visually. However, the large size of modern farms and the decline in farm labor make it impractical to assess whole fields this way. Furthermore, the symptoms of some diseases do not show up in the visible range (400-700 nm), to which the human eye is sensitive. With the advancement of optical sensing technologies such as machine vision and spectroscopy, several researchers have shown the potential of these technologies to identify some of these diseases and conditions. These technologies include visible imaging (400-700 nm), near-infrared imaging (greater than 700 nm), multispectral imaging (a combination of wavelengths in the visible and near-infrared), and hyperspectral imaging (400-1000 nm). Camargo et al. (2009) developed a visible imaging system and color image processing to identify banana leaves infected with Black Sigatoka. Color images of infected banana leaves were used to develop the image processing, which included color transformation, histogram multi-thresholding, and segmentation. They demonstrated that the algorithm was able to identify the diseased regions in most of the images tested.

A machine vision system operating in the visible region was used by Blasco et al. (2003) to detect external blemishes on apples and peaches; they achieved 93% accuracy in blemish detection. Visible imaging systems are sufficient if the symptoms occur in the visible region, but most diseases and conditions can be identified more effectively by exploiting regions beyond the visible. Zarco-Tejada et al. (2005) used a compact airborne spectrographic imager and hyperspectral sensors to monitor the temporal and spatial chlorosis condition of Vitis vinifera L. Their results showed that chlorophyll content could be estimated from narrow-band hyperspectral indices calculated in the 700-750 nm region. A ground-based hyperspectral imaging system for characterizing vegetation spectral features was developed by Ye et al. (2008), who used a hyperspectral line sensor with a wavelength range of 360-1010 nm to demonstrate the potential of the system to monitor vegetation variability in crop systems. Changes in spectral signatures due to nutrient deficiencies and damage by pests and environmental factors have been reported (Sepulcre-Canto et al., 2007; Muhammed, 2005; Gowen et al., 2007). In addition, spectral signatures are influenced by pigment content, leaf angle, leaf surface texture, diseases and stress, plant growth stage, and measurement conditions. While researchers have demonstrated the potential of imaging technologies for disease detection, little work can be found on automatic detection of citrus greening. Mishra et al. (2007) investigated the spectral characteristics of HLB-infected and normal leaves using a spectroradiometer (350-2500 nm).

Spectral bands from the green to red wavelengths and in the near-infrared were found to have the potential to discriminate HLB-infected leaves from normal leaves using discriminability, spectral derivative analysis, and spectral ratio analysis; these wavelengths include 530-564 nm, 710-715 nm, 1041 nm, and 2014 nm. They also developed a four-band active optical sensor to identify young leaf flushes for controlling the spot spraying of citrus trees, since the Asian citrus psyllid, the vector of HLB disease, is known to feed on young leaves. Although that work characterized the spectral reflectance properties of HLB-infected leaves and discriminated them from normal leaves, other conditions, such as nutrient deficiencies, can look similar to HLB infection. In this study, citrus leaves infected with citrus greening (i.e., blotch mottle and greening islands), leaves exhibiting nutrient-deficiency symptoms (i.e., iron, manganese, and zinc deficiency), and healthy leaves (i.e., normal young and normal mature) were collected in the field. Hyperspectral images of the samples were acquired in the spectral region between 400 and 900 nm. The main objective of this research was to develop image processing classification algorithms based on principal component analysis (PCA) and the wavelet transform for the detection of citrus greening in the presence of confounding leaf symptoms.

Materials and Methods

Citrus Leaf Samples

Table 7-1 lists the numbers of samples of the citrus greening and non-greening conditions used in this study. Fifty samples of each of the citrus greening conditions and forty samples of each of the other conditions were collected, for a total of 300 samples. The leaf samples were clipped with petioles intact and then sealed in Ziploc bags to maintain the moisture level of the leaves.

Hyperspectral Image Acquisition

A hyperspectral line-scan imaging system, shown in Figure 7-1, was used to acquire spectral images from the citrus leaf samples. The imaging system consisted of an electron-multiplying charge-coupled device (EMCCD) camera (Luca, Andor Technology Inc., CT, USA) with an imaging spectrograph (ImSpector V10E, Spectral Imaging Ltd., Oulu, Finland) and a C-mount lens (Rainbow CCTV S6X11, International Space Optics, S.A., Irvine, CA, USA), a pair of halogen line lamps (21 V, 150 W) powered by a DC voltage-regulated power supply (Dolan-Jenner Industries, Inc., Lawrence, MA, USA), and a programmable motorized positioning table (XN10 Xslide, Velmex Inc., Bloomfield, NY, USA). This equipment was placed inside a dark box to eliminate stray external light. The EMCCD has 1004x1002 pixels and a double-stage Peltier device for cooling to -20 °C. The imaging spectrograph is based on the prism-grating-prism principle: light entering through a slit is dispersed by the prism-grating-prism device and projected onto the pixels of the EMCCD detector. A two-dimensional image is thus generated with the spatial dimension along the horizontal axis and the spectral dimension along the vertical axis of the EMCCD. As the citrus sample was moved perpendicular to the scanning direction by the motorized positioning table, one thousand seven hundred and forty line scans were performed for each leaf sample, and four hundred pixels covering the scene of the sample were saved at each scan. A three-dimensional image cube (1740x400 pixels for each band) was therefore created.
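
The scan-and-stack arrangement can be sketched as follows. This is purely illustrative: the frame-grabbing call is hypothetical, and the frame orientation (spatial pixels by spectral bands) is an assumption based on the description above.

    % Illustrative assembly of line scans into a hypercube (hypothetical acquisition call).
    nScans = 1740; nPixels = 400; nBands = 92;
    cube = zeros(nScans, nPixels, nBands);

    for s = 1:nScans
        frame = acquireLineScan();          % hypothetical: returns one 400x92 line-scan frame
        cube(s, :, :) = frame;              % scan index x spatial pixel x spectral band
    end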

The hyperspectral imaging software for data transfer and parameterization was developed using the Andor Software Development Kit (SDK, Luca, Andor Technology Inc., CT, USA) for the hyperspectral line-scan imaging system. An Hg-Ne spectral calibration lamp (Oriel Instruments, Stratford, CT, USA) was used for spectral calibration of the system. Because of the low light output in the visible region below 450 nm and the low quantum efficiency of the EMCCD in the NIR region beyond 930 nm, the wavelength range between 451.67 nm and 927.71 nm was used (92 bands in total, with a spectral resolution of 5.2 nm).

Hyperspectral Image Analysis

Hyperspectral image transformation using PCA

Since the input images consist of 3-D image data (106 x 176 pixels x 92 bands), it is necessary to reduce the spectral dimension of the hyperspectral images. The data compression technique selected for this study was principal component analysis (PCA). PCA is a useful tool for reducing the dimensionality of a data set and extracting information while maintaining the useful information of the original data. The central idea of PCA is to rotate the data onto principal axes, using an orthogonal transformation that best represents the given data in the least-squares sense, and thereby to reduce its dimensionality. The axes are obtained by calculating the eigenvectors of the covariance matrix of the original data (Duda et al., 2001). For the hyperspectral images, the covariance matrix of the image is assembled, and the eigenvalues and eigenvectors are determined from it. Using the eigenvectors of the covariance matrix, the principal components are formed. The principal components can be decomposed into two matrices, a loading matrix and a scores matrix. The loading matrix holds the weight coefficients applied to each original variable when calculating the principal components, and the scores matrix contains the principal component data rotated from the original data.

Therefore, in this study the 3-D hyperspectral data with 92 single-band images were transformed into 2-D principal component score images. The PCA procedures described above were carried out in MATLAB 7.0 (The MathWorks, Inc., Natick, Mass.).

Citrus greening classification algorithm using PCA

Fifty samples of each greening condition and forty samples of each non-greening condition were selected, so a total of 300 citrus leaves were tested in this study. Greening islands and blotch mottle constitute the greening condition, while manganese deficiency, iron deficiency, zinc deficiency, normal young and normal mature constitute the non-greening condition. In the PCA transformation, only the first five principal components (PCs) were used, since the higher PCs did not show meaningful features and consisted mostly of noise. Visual observation of the PC score images showed that PC 3 enhanced the citrus greening leaves. A detailed illustration of the image processing and classification is given in Figure 7-2. After generating a mask from a single-band image at around 800 nm, the images were transformed into PC 3 score images by performing PCA. To separate the citrus greening spots from the leaf surfaces, a threshold-based segmentation was applied to PC 3. In this bi-level thresholding, pixels below the threshold value are set to "1" (greening condition) while the others are set to "0" (non-greening condition and background). The threshold value was chosen from the image histograms in Figure 7-3; a value of 0.79 provided the best overall classification accuracy.
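
A minimal MATLAB sketch of this transformation and thresholding step is given below. It is illustrative only: the variable names are assumptions, the rescaling of the PC 3 scores to [0, 1] before applying the 0.79 threshold is a guess at how the reported threshold was expressed, and princomp may be replaced by pca in newer MATLAB releases.

    % Illustrative PCA-based segmentation of a masked hyperspectral leaf image.
    % cube : rows x cols x 92 hyperspectral image with background pixels zeroed.
    [rows, cols, bands] = size(cube);
    X = reshape(cube, rows*cols, bands);        % one reflectance spectrum per row

    [coeff, score] = princomp(X);               % principal components (pca() in newer releases)
    pc3 = reshape(score(:, 3), rows, cols);     % PC 3 score image

    pc3 = mat2gray(pc3);                        % assumed rescaling of scores to [0, 1]
    greeningMask = pc3 < 0.79;                  % pixels below the threshold flagged as greening

    se = strel('disk', 2);                      % assumed radius for the opening filter below
    greeningMask = imopen(greeningMask, se);    % morphological opening removes small defects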

After image thresholding, some defects may remain in the segmented images. As shown in Figure 7-4, morphological opening can remove these defects; this post-processing may reduce misclassification of citrus greening.

Classification algorithm using wavelet transform

The algorithm shown in Figure 7-5 illustrates the procedure for hyperspectral image processing and classification. After the PC score image was generated using PCA, image texture features were extracted from each wavelet-decomposed image; Figure 7-6 shows the four-level decomposed images for each condition. The texture feature dataset contained 13 texture features for each image. The dataset had 300 rows, representing 50 samples from each of the two citrus greening conditions and 40 samples from each of the five non-greening conditions. Each row had 13 columns representing the 13 texture features extracted for a particular sample image, plus a unique class label (1 or 2) indicating whether the sample represented a citrus greening symptom or a non-greening symptom. To compare classification accuracies for citrus greening versus non-greening symptoms, two classes were created, as shown in Table 7-1; these classes are compiled from the seven different leaf conditions, which include leaf conditions that are difficult to discriminate. This dataset was classified with Fisher linear discriminant analysis. All analysis was done using MATLAB R2007b (The MathWorks, Inc., Natick, Mass.) and the MATLAB PRTools toolbox (Faculty of Applied Physics, Delft University of Technology, The Netherlands), a package released for the development and evaluation of pattern recognition algorithms.
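
The decomposition step can be sketched in MATLAB as follows. This is a minimal illustration, not the study's feature set: the wavelet family ('db4') and the per-sub-band statistic are assumptions, and the 13 values it produces (three detail sub-bands at each of four levels plus the final approximation) merely parallel the 13 texture features mentioned above, which are not enumerated here.

    % Illustrative four-level 2-D wavelet decomposition of the PC score image
    % (Wavelet Toolbox; 'db4' and the mean-absolute-coefficient statistic are assumptions).
    [C, S] = wavedec2(pc3, 4, 'db4');                    % four-level decomposition

    features = zeros(1, 13);
    for level = 1:4
        [cH, cV, cD] = detcoef2('all', C, S, level);     % detail sub-bands at this level
        features(3*level-2 : 3*level) = ...
            [mean(abs(cH(:))), mean(abs(cV(:))), mean(abs(cD(:)))];
    end
    cA = appcoef2(C, S, 'db4', 4);                       % level-4 approximation sub-band
    features(13) = mean(abs(cA(:)));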

Results and Discussion

Plots of Relative Reflectance Spectra

The mean reflectance spectra and corresponding standard deviations of the samples with the seven different symptoms over the wavelength range between 451.67 nm and 927.71 nm are presented in Figures 7-7 and 7-8. The reflectance spectra were calculated as the mean of 10 spectra from 10 randomly selected samples. Each plot in Figure 7-7 was generated from an averaged ROI window covering 3 x 3 pixels extracted from each hyperspectral image. In the mean spectra obtained from the hyperspectral images of citrus greening and the other conditions, the chlorophyll absorption features between 400 nm (blue) and 600 nm (green) appear higher and more prominent. The relative reflectance increases steadily over the 720-900 nm range, but the reflectance level is low (10-30%). The reflectance spectra were extracted from the hyperspectral images using ENVI 4.3 (ITT Visual Information Solutions, Boulder, CO, USA).

Classification of Citrus Greening using PCA

Because the separation between citrus greening and the other surface conditions was narrow in all spectral regions and the differences between conditions were not evident, no single spectral region in the plots would, by itself, identify citrus greening. Moreover, when PCA was performed over the entire spectral region, a more uniform leaf-surface background was produced in the score image, reducing the possibility of normal areas on the leaf surface being converted into greening spots in the binary images. Therefore, using the entire reflectance spectrum gives better classification and improves the performance of PCA.

PAGE 159

159 As observed in Figure 7 9, the PC 3 score images of the disease conditions have brighter spots and higher contrast on the leaf surface. Leaf samples with normal mature and young conditions do not have these contrasting regions because of the absence of disease. When the two citrus greening conditions (i.e., greening islands and blotch mottle) were compared, the leaf surface including the blotch mottle spots appears dark in the PC 2 score image, whereas this is not the case for the greening islands spots. The other score images, PC 1, PC 4 and PC 5, have low contrast between the diseased and the non diseased surface. Thus, due to distinctive patterns and high contrast, the PC 3 score images were used for the classification of greening islands and blotch mottle, and they were independently evaluated for classification between citrus greening and non citrus greening. The results shown in Table 7 2 are the classification summary for differentiating citrus greening from nutrient deficiency and normal conditions using the PC 3 score image. As previously indicated, the test data consisted of 50 images from each citrus greening condition and 40 images from each of the others. The overall performance was 94.33% for all the samples. The classification accuracies for the 'Greening Disease' class and for the 'No Greening Disease' class were 96% and 93.5%, respectively. Based on the results shown in Table 7 2, four samples were misclassified in the 'Greening Disease' class; the 'Blotch Mottle' and 'Greening Islands' symptoms each had two misclassified samples. There were 13 samples that were misclassified in the 'No Greening Disease' class, which included four 'Manganese Deficiency' symptoms, two 'Zinc Deficiency' symptoms, three 'Normal Young' symptoms, and four 'Normal Mature' symptoms. This misclassification demonstrated the confusion in classifying citrus greening among manganese deficiency, zinc deficiency, normal young, and normal mature samples. In particular, manganese

PAGE 160

160 deficiency and normal mature samples had the lowest classification accuracy (90%). On the other hand, all 'Iron Deficiency' samples were correctly classified. The plot of reflectance spectra in Figure 7 8 showed that iron deficiency was distinct from citrus greening, while the other conditions were very close to and overlapped each other. This misclassification may have been caused by the similarity of the reflectance spectra between citrus greening and the other conditions. However, the high classification rate of the PCA classification has shown that transforming the reflectance spectra increased the disparity between citrus greening and other confounding conditions. Classification of Citrus Greening using Wavelet Transform The results using the Wavelet Transform algorithms had good overall accuracy. The overall performance shown in Table 7 3 has an accuracy of 93%. For the two classes (citrus greening and no citrus greening), there were two misclassified samples for the 'Citrus Greening' class and nine misclassified samples for the 'No Citrus Greening' class. The classification accuracies for the 'Citrus Greening' class and the 'No Citrus Greening' class were 95% and 91%, respectively. In the citrus greening class, the classification accuracies for the 'greening islands' and 'blotch mottle' symptoms were both 95%; in general, only one sample was misclassified in each test set. For the other class (i.e., non citrus greening), there were nine misclassified samples: two manganese deficiency samples, six zinc deficiency samples, and one normal young sample were misclassified as the greening disease class. Iron deficiency and normal mature samples were correctly classified as the non citrus greening disease class. Although the method achieved classification accuracies above 90% and two perfect classification results (100%), the performance for the zinc deficiency condition was poor (70%). This means that the texture features from zinc deficiency were similar to those of the citrus greening class in the wavelet transform.

PAGE 161

1 61 Summary The early detection of citrus greening disease could be very useful in managing and controlling the infection. The challenge of identifying citrus greening is the classification of nutrient deficient symptoms as citrus greening. In this study, a hyperspectral imaging system was developed to recognize leaves exhibiting symptoms of citrus greening along with leaves with nutrient d eficiency and normal leaves. Seven leaf classes were evaluated; 1) normal young, 2) normal mature, 3) blotchy mottle ( i.e., citrus greening), 4) green island s ( i.e., citrus greening), 5) manganese deficiency, 6) iron deficiency, and 7) zinc deficiency. The leaf samples were collected in the field and hyperspectral images of the leaf samples were taken in the laboratory with the spectral range of 451.67 nm to 927.71 nm In the image processing algorithm, two classification approaches were developed. First, P rincipal Component Analysis (PCA) was used to transform the hyperspectral images to principal component score image. It was found that the third principal component images enhanced the two citrus greening symptoms from the other classes. By using a simple thresholding method to the principal score image, the citrus greening samples were classified. The second approach used the w avelet t ransform to extract 13 texture features from the principal component score image. A linear discriminant classifier based on the texture features was computed to classify the citrus greening samples. Results showed that the PCA thresholding approach had a total classification rate of 94.33% while 93% total classification rate was obtained for the Wavelet Transform classificat ion. This indicates that although the spectral reflectance profile of citrus greening and nutrient deficiency are near to each other, utilizing PCA to increase the

PAGE 162

162 variance proved to be useful. Furthermore, the texture features obtained from the wavelet tr ansform showed that the morphological properties of citrus greening can be used to differentiate from nutrient deficient leaf samples. This research demonstrated that hyperspectral imaging combined with an appropriate image processing algorithm such as PCA and wavelet transform could be used for differentiating citrus greening symptoms from other leaf conditions, specifically nutrient deficiency. Future studies will explore the utility of these algorithms in outdoor conditions

PAGE 163

163 Figure 7 1 H yperspectral line scan imaging system Figure 7 2. A detailed illustration of Threshold based on classification algorithms

PAGE 164

164 Figure 7 3 Comparison of the threshold value distribution between citrus greening and other condition s Figu re 7 4. The image opening morphological filtering

PAGE 165

165 Figure 7 5. A detailed illustration of the feature extraction using Wavelet Transform Figure 7 6. F our level decomposed images in each condition.

PAGE 166

166 Figure 7 7 Reflectance mean spectra and corr esponding standard deviation of each ROI from citrus greening and other symptoms

PAGE 167

167 Figure 7 8 Reflectance of spectra of citrus leaf samples with citrus greening and other conditions

PAGE 168

168 Figure 7 9 Five score images obtained from principal component analy sis using full spectral regions Table 7 1 The number of citrus greening and other conditions Class Symptoms Number Total Greening Disease Blotch Mottle 50 100 Greening Islands 50 No Greening Disease Iron Deficiency 40 200 Manganese Deficiency 40 Zinc Deficiency 40 Young Flush 40 Normal Mature 40

PAGE 169

169 Table 7 2. Classification results in percent correct using the PCA-based method
Class                  Symptoms               Misclassified   Accuracy (%)   Class accuracy
Greening Disease       Blotch Mottle          2               96.0           96.0%
                       Greening Islands       2               96.0
No Greening Disease    Iron Deficiency        0               100.0          93.5%
                       Manganese Deficiency   4               90.0
                       Zinc Deficiency        2               95.0
                       Young Flush            3               92.5
                       Normal Mature          4               90.0
Total                                         17                             94.3%

Table 7 3. Classification results in percent correct using the WT-based method
Class                  Symptoms               Misclassified   Accuracy (%)   Class accuracy
Greening Disease       Blotch Mottle          1               95.0           95.0%
                       Greening Islands       1               95.0
No Greening Disease    Iron Deficiency        0               100.0          91.0%
                       Manganese Deficiency   2               90.0
                       Zinc Deficiency        6               70.0
                       Young Flush            1               95.0
                       Normal Mature          0               100.0
Total                                         11                             93.0%

PAGE 170

170 CHAPTER 8 CITRUS BLACK SPOT DE TECTION USING HYPERS PECTRAL IMAGING Citrus black spot is the latest exotic disease to be discovered in Florida. This fungal disease poses a threat to Florida groves similar to citrus canker and gre ening because it is one of the most well known fungal diseases worldwide. Once citrus trees are infected with citrus black spots, the fruit yield, and quality are greatly reduced. Furthermore fruits infected with citrus black spots are not acceptable for fresh market (Chung et al., 2009). Therefore it is important to control this disease to achieve profitable production. Citrus black spot can be identified by fruit symptoms which cause cosmetic lesions on the peel of fruit that are the most conspicuous sy mptom of infection (Dewdney, 2010). Fruit symptoms can be quite variable. Black spot lesions begin as small orange or red spots with black margins and enlarge to become necrotic lesions. Other symptoms of black spots include hard spot lesions, virulent spo t, cracked spot, and false melanose, appear on the fruit surface ( The Institute of Food and Agricultural Sciences Extension at University of Florida 2010). Detecting fruits infected with black spots can help in controlling the spread of this disease speci fically in areas that are black spot free. The design and implementation of technologies that can efficiently detect black spot disease will greatly aid in the control effort. The identification of various crops and plant parts using machine vision and ima ge processing techniques has been studi ed by several researchers. Jimenez et al. (2000) surveyed several computer vision approaches for locating fruit in trees. Regunathan and Lee (2005) indentified fruit count and size using machine vision and an ultrason ic sensor. Burks et al. (2000) developed a method for classification of weed species using

PAGE 171

171 color texture features and discriminant analysis. Tang et al. (1999) developed a texture based weed classification method using Gabor wavelets and neural networks for real time selective herbicide application. Pydipati et al. (2006) identified citrus disease using the co-occurrence matrix (CCM) texture feature method and discriminant analysis. Du et al. (2006) described five different texture feature methods, including the common first order gray level statistics (FGLS), run length matrix (RLM), gray level co-occurrence matrix (GLCM), fractal dimension (FD), and wavelet transform (WT) based methods, and then applied both simple correlation analysis and partial least squares regression (PLSR) to investigate the tenderness of cooled pork ham. In recent years, optical techniques have been used widely in food processing and inspection applications. In particular, hyperspectral imaging technologies have attracted growing interest for quality evaluation and inspection of food and agricultural products (Sun, 2008). Previous research has demonstrated hyperspectral imaging technologies and applications for agricultural products. Jiang et al. (2007) used hyperspectral fluorescence imaging to analyze the differences between walnut shells and meat. Their hyperspectral fluorescence imaging system scanned samples at 79 different wavelengths ranging from 425 nm to 775 nm with 4.5 nm increments, and data redundancy was reduced by principal component analysis (PCA). Zhang et al. (2005) suggested a creative classification approach for distinguishing healthy and fungal infected wheat kernels during storage. The research showed the potential use of NIR hyperspectral imaging in grain quality assessment and used NIR hyperspectral imaging and a support vector machine (SVM) for identifying the fungi that caused the infection. Kim et al. (2002) researched a method for using hyperspectral data to

PAGE 172

172 identify wavebands to be used in multispectral detection syst ems, and evaluated spatial and spectral responses of hyper spectral reflectance images of fecal contaminated apples. Lee et al (2005) used the hyper spectral imaging technique to detect defects on apple peel after harvest. They developed a proper wavelengt h selection method for detecting the defects. In hyperspectral image classification methods, a spectral angle mapper (SAM) and spectral information divergence (SID) classification that measures the spectral similarity between two spectra was applied to va rious agricultural products and systems. Park et al (2007) used SAM algorithms to detect fecal and ingesta contaminants on the surface of poultry carcasses. Qin et al (2009) introduced the detection of citrus canker using SID classification methods. Yang e t al (2006) studied the SAM method to airborne hyperspectral imagery for mapping yield variability. The overall objective of this research was to develop two spectral classification methods, spectral angle mapper (SAM) and spectral information divergence (SID) for detection of black spot by hyperspectral image. Methodology Samples Table 8 1 demonstrated the sample number of the black spot and no black spot conditions for this study. Thus, a total of 525 samples were selected in this study. 135 samples of the black spot conditions 90 samples of the greasy spot and market, and 105 samples of the melanose and windscar were collected Hyperspectral Image Acquisition A hyperspectral line scan imaging system, as shown in Figure 8 1 was used for acquiring hyper spectral images of the fruit samples. The imaging system consisted of

PAGE 173

173 an electron multiplying charge coupled device (EMCCD) camera (Luca, Andor Technology Inc., CT, USA) with an imaging spectrograph (ImSpector V10E, Spectral Imaging Ltd., Oulu, Finland) and a C mount lens (Rainbow CCTV S6X11, International Space Optics, S.A., Irvine, CA, USA), a pair of halogen line lamps (21V, 150W) powered with a DC voltage regulated power supply (Dolan Jenner Industries, Inc., Lawrence, MA, USA), and a programmable motorized positioning table (XN10 Xslide, Velmex Inc., Bloomfield, NY, USA). This equipment was placed inside a dark box to eliminate stray external light. The EMCCD has 1004 x 1002 pixels and a double stage Peltier device to cool the detector to -20 C. The imaging spectrograph is based on the prism-grating-prism principle: light entering through a slit is dispersed by the prism-grating-prism device and projected onto the pixels of the EMCCD detector. A two dimensional image is generated with the spatial dimension along the horizontal axis and the spectral dimension along the vertical axis of the EMCCD. As the citrus sample was moved perpendicularly to the scanning direction by the motorized positioning table, one thousand seven hundred and forty line scans were performed for each fruit sample, and four hundred pixels covering the scene of the sample at each scan were saved. Therefore, a three dimensional image cube (1740 x 400 pixels for each band) was created. The hyperspectral imaging software for data transfer and parameterization was developed using the Andor Software Development Kit (SDK, Luca, Andor Technology Inc., CT, USA) for the hyperspectral line scan imaging system. An Hg-Ne spectral calibration lamp (Oriel Instruments, Stratford, CT, USA) was used to investigate spectral

PAGE 174

174 calibration of the system. Because of low light output in the visible region below 450 nm, and low quantum efficiency of the EMCCD in the NIR region beyond 930 nm, the wavelength range between 451.67 nm and 927.71 nm was used (totaling 92 bands with a spectral resolution of 5.2 nm). Hyperspectral Image Processing A detailed illustration of the SAM and SID classification algorithms is given in Figure 8 2. After the relative hyperspectral reflectance images for all the wavelengths (483 to 959 nm) were acquired, the pixel values of the relative reflectance image were adjusted to the range of 0 to 10,000 because the original data ranged from 0 to 16,383 (the 14 bit EMCCD). The adjusted values could reduce rounding errors in further data analysis. Next, the mask template of the relative reflectance image was created using a threshold value. The threshold value was decided from the contrast between the fruit surface and the background. Once the mask was applied to the images, the image resolution was reduced to half of the original resolution. The reduction of the image resolution lowered the computational burden of the redundant features and improved the performance of the classification algorithms; it also brought comparable spatial resolutions for the vertical and horizontal dimensions. Next, the spectral similarity for the SAM and SID classification algorithms was determined between the endmember and target spectra by calculating the angle and divergence, respectively. The endmember spectra were extracted directly from sample images. In this study, the mean reflectance spectra from ROIs of 10 black spot samples were collected to describe the endmember spectra. The mean and standard deviation of the endmember spectra are illustrated in Figure 8 3. After applying SID and SAM mappings to the hyperspectral images of the test samples, rule images to separate black spot lesions from the fruit peel were obtained.
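The angle and divergence measures used for the SAM and SID mappings can be computed per pixel as sketched below. This is a minimal illustration of the standard SAM and SID formulas applied to one pixel spectrum against the black spot endmember; the variable names are assumptions, not the code used in this study.

% Minimal sketch: SAM angle and SID divergence between an endmember and a pixel spectrum
e = endmember(:);                 % mean black spot reference spectrum (92 x 1), assumed variable
t = pixelSpectrum(:);             % spectrum of one image pixel (92 x 1), assumed variable

% Spectral Angle Mapper: angle between the two spectra (radians)
sam = acos( dot(e, t) / (norm(e) * norm(t)) );

% Spectral Information Divergence: symmetric relative entropy of the normalized spectra
p = e / sum(e);                   % treat each spectrum as a probability distribution
q = t / sum(t);
sid = sum( p .* log(p ./ q) ) + sum( q .* log(q ./ p) );

% Small SAM or SID values indicate high similarity to the black spot endmember;
% thresholding the resulting rule images segments candidate black spot pixels.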

PAGE 175

175 Representative rule images from SID and SAM mapping of the hyperspectral images are shown in Figure 8 4. Results and Discussion Spectral Characteristics of Black Spot and Other Conditions The mean reflectance spectra and standard deviations of the samples with black spot, normal, and other symptoms over the wavelength range between 483 and 959 nm are represented in Figures 8 5 and 8 6. These figures were developed using the mean of 10 spectra from 10 different samples. Each plot in Figure 8 5 was generated from an average region of interest (ROI) window covering 3 x 3 pixels extracted from each hyperspectral image. Figure 8 6 shows similar spectral patterns regardless of the surface conditions. All the spectra presented two local minima around 500 nm and 675 nm because of differences in light absorption by chlorophyll and carotenoid in the surface. As shown in Figure 8 6, the spectra from normal fruit have the highest peak in the wavelength range from 500 nm to 600 nm, and those of the other diseases except black spot increase gradually. Due to the low absorption of chlorophyll and carotenoid, the spectra from black spot were not as prominent as those of the other peel conditions. Since no defects or contaminants appeared on the normal fruit surface, the chlorophyll and carotenoid absorption peaks in the spectral region from 500 nm to 675 nm appear higher and more prominent. The spectral values of the other diseases generally lie between the spectra of normal and black spot, in the ranges of 12% to 20% and 52% to 75%. Reflectance spectra from the hyperspectral images were extracted using ENVI 4.3 (ITT Visual Information Solutions, Boulder, CO, USA).

PAGE 176

176 SID and SAM Based Classifications Using the SID and SAM mapping, rule images of all the fruit samples were obtained. Since the SID and SAM mappings were based on the endmember spectra of black spot, the resulting rule images enhanced the black spot regions. Figure 8 7 shows a detailed comparison of the value distribution between black spot and non black spot regions. The SAM and SID pixel values of the black spot regions ranged from 0.01 to about 0.1. Based on these intensity values, a thresholding algorithm was applied to separate the black spot class. Figure 8 8 shows a sample binary image resulting from the thresholding operation, where the black spots are represented by the white pixels. The SID and SAM Threshold Values for Classification Accuracy The results shown in Table 8 2 are the classification summary using six threshold values for differentiating black spot from other conditions using SID mapping of the hyperspectral images. The threshold value was increased from the default value of 0.01 to 0.06 with a step size of 0.01. As indicated in Figure 8 9 (a), the value of 0.04 provided the best performance of 97.14%. When the threshold was changed from 0.01 to 0.04, the classification accuracy increased from 74.28% to 97.14%, respectively. However, the accuracy decreased when the threshold value was increased to 0.05. Based on the results shown in Table 8 4, the classification accuracy for the 'Black Spot' class was 98%, and the accuracy for the 'No Black Spot' class was 96.92%. Only 3 black spot samples were misclassified. For the non black spot class, 12 samples were misclassified, which included 8 greasy spot samples and 4 wind scar samples. The melanose samples and the market samples were correctly classified.
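The threshold sweep summarized above can be outlined as follows. The rule-image variables, the sample label vector, and the fruit-level decision rule (a fruit is called black spot if any pixel survives thresholding and cleaning) are assumptions made for illustration; only the SID threshold range comes from Table 8 2.

% Minimal sketch: sweep SID thresholds and record overall classification accuracy (assumed decision rule)
thresholds = 0.01:0.01:0.07;               % SID threshold values reported in Table 8-2
accuracy = zeros(size(thresholds));
for k = 1:numel(thresholds)
    correct = 0;
    for n = 1:numel(ruleImages)            % ruleImages: cell array of SID rule images (assumed)
        bw = ruleImages{n} < thresholds(k);            % small SID value = similar to black spot
        bw = imopen(bw, strel('disk', 1));             % remove isolated false pixels
        predictedBlackSpot = any(bw(:));               % fruit-level decision (assumed rule)
        correct = correct + (predictedBlackSpot == isBlackSpot(n));   % isBlackSpot: logical labels (assumed)
    end
    accuracy(k) = 100 * correct / numel(ruleImages);
end
plot(thresholds, accuracy);                % compare with Figure 8-9 (a)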

PAGE 177

177 In the classification using SAM mapping, the threshold values were changed from 0.06 to 0.11 with a 0.01 increment. Figure 8 9 (b) shows the effect of changing the thresh old to classification accuracy, which increased from 81% (0.06) and peaked to 97.9% (0.09) and decreased to about 90%. It showed a similar characteristic as the SID classification accuracy, however SAM mapping had a higher accuracy in t he threshold range that it was tested. As shown in Table 8 3 and Table 8 5 for the best classification result, the classification accuracies for 'Black Spot' class, and 'No Black Spot' class were 98% and 97.95%, respectively. There were three misclassifie d samples for 'Black spot' class, and eight misclassified samples for 'Non Black Spot' class which consisted of the greasy spot samples only. The other three non black spot classes ('Normal', 'Wind Scar', and 'Melanose') have perfect classification results (100%). The plot of reflectance spectra in F igure 8 3 illustrated that greasy spot were very close to those of the black spot condition which contributed to the misclassification. Imaging parameters that will enhance the difference between the black spot samples and other confounding conditions such as greasy spot will be investigated in future studies. These will include looking into the effect of varying the illuminating source and changing the optical device (e.g. lens) to improve both reflectance and r esolution of the images. Summary Citrus black spot is the latest disease that has invaded the Florida citrus industry. Symptoms of black spot appear on the fruit surface as black lesions. The detection of fruits infected with black spot will greatly help in the controlling the spread of the disease. In this study, a hyperspectral imaging system was developed to recognize fruits exhibiting symptoms of citrus black spots along with fruits with other peel

PAGE 178

178 conditions and market quality fruits. Five fruit classes were evaluated: 1) citrus black spot, 2) greasy spot, 3) melanose, 4) wind scar, and 5) market quality. The fruit samples were collected in the field, and hyperspectral images of the fruit samples were collected in the laboratory with the spectral range of 400 nm to 900 nm. The reference spectrum of black spot was obtained from the ROIs that were selected from the reference hyperspectral images of the black spot samples. In the image processing algorithm, two classification approaches were developed. First, the Spectral Angle Mapper (SAM) was used to calculate the angle between the reflectance spectrum of black spot and the reflectance spectrum of an unknown fruit sample. The second method used the Spectral Information Divergence (SID), which quantifies the similarity between the reference spectrum of black spot and the spectrum of the unknown fruit sample. The hyperspectral images of the tested fruit samples were mapped using both SAM and SID methods. A simple thresholding approach was applied to the rule images, which resulted from the mapping, to segment the black spot regions and eventually classify the fruits with black spots. Based on the results, a black spot classification accuracy of 97.9% was obtained using the SAM approach with an optimal threshold value of 0.09. The SID mapping had a black spot classification accuracy of 97.14% with a 0.04 optimal threshold. All the melanose and market quality fruit samples were correctly classified using the two mapping approaches, while the accuracy for greasy spot was about 91% and wind scar was over 96%. Overall, the performances of both classification approaches in detecting black spots along with other peel conditions were very good. However, it was found that the SID classification performance deteriorated faster as the threshold moved away from its optimum, while the SAM

PAGE 179

179 classification performance was not greatly affected as the threshold was changed. This research demonstrated that hyperspectral imaging combined with an appropriate image processing algorithm such as SAM and SID mapping could be used for detecting citrus black spots. Future studies will explore the identification of significant wavelengths from the reference spectrum to develop a multispectral imaging approach that could be applied in real time.

PAGE 180

180 Figure 8 1 Hyperspectral line scan imaging system Figure 8 2 A detailed illustration of SAM and SID classification algorithms

PAGE 181

181 Figure 8 3. The mean and standard deviation from endmember spectra Figure 8 4. Representative rule images from SID and SAM mapping the hyperspectral images

PAGE 182

182 Figure 8 5. Reflectance mean spectra and corresponding standard deviation of each ROI from black spot, greas y spot, melanose, wind scar, and market

PAGE 183

183 Figure 8 6 The reflectance spectra of the samples with black spot, normal and different symptoms over the wavelength range between 483 and 959 nm

PAGE 184

184 Figure 8 7 Detailed comparison of the value distribution betwee n black spot and non black spot region

PAGE 185

185 Figure 8 8 A sample binary image resulting from the thresholding operation where the black spots are represented by the white pixels Figure 8 9 Comparison with SID and SAM classificat ion accuracy

PAGE 186

186 Table 8 1. The number of black spot and other conditions
Class                   Symptoms      Number   Total
Black Spot Disease      Black spot    135      135
No Black Spot Disease   Greasy spot   90       390
                        Market        90
                        Melanose      105
                        Wind scar     105

Table 8 2. The classification summary using 6 threshold values for differentiating black spot from other conditions using SID mapping of hyperspectral images
Spectral Information Divergence (SID)
Threshold value         0.01    0.02    0.03    0.04    0.05    0.06    0.07
Black spot              135     66      12      3       2       2       0
Greasy spot             0       1       3       8       22      38      53
Market                  0       0       0       0       1       1       1
Melanose                0       0       0       0       0       0       1
Wind scar               0       0       1       4       11      19      33
Accuracy (%)            74.28   87.24   96.95   97.14   93.14   88.57   83.24
Overall accuracy (%)    88.65

PAGE 187

187 Table 8 3. The classification summary using 6 threshold values for differentiating black spot from other conditions using SAM mapping of hyperspectral images
Spectral Angle Mapper (SAM)
Threshold value         0.06    0.07    0.08    0.09    0.10    0.11    0.12
Black spot              92      42      14      3       1       1       1
Greasy spot             0       0       2       8       16      29      44
Market                  0       0       0       0       0       0       1
Melanose                0       0       0       0       0       0       0
Wind scar               0       0       0       0       2       4       6
Accuracy (%)            82.48   92.00   96.95   97.90   96.38   93.52   90.10
Overall accuracy (%)    92.76

Table 8 4. The best overall classification accuracy using the SID threshold value of 0.04
Class                   Symptoms      Misclassified   Accuracy (%)   Class accuracy (%)
Black spot Disease      Black spot    3               98.00          98.00
No Black spot Disease   Greasy Spot   8               91.11          96.92
                        Melanose      0               100.0
                        Wind scar     4               96.19
                        Market        0               100.0
Total                                 15                             97.14

PAGE 188

188 Table 8 5. The best overall classification accuracy using the SAM threshold value of 0.09
Class                   Symptoms      Misclassified   Accuracy (%)   Class accuracy (%)
Black spot Disease      Black spot    3               98.00          98.00
No Black spot Disease   Greasy Spot   8               91.11          97.95
                        Melanose      0               100.0
                        Wind scar     0               100.0
                        Market        0               100.0
Total                                 11                             97.90

PAGE 189

189 CHAPTER 9 HYPERSPECTRAL BAND SELECTION FOR DETECTION OF BLACK SPOT USING MULTISPECTRAL ALGORITHMS Citrus black spot is a fungal disease that threatens the marketability of fresh market citrus production in Florida. Caused by Guignardia citricarpa (sexual stage) and Phyllosticta citricarpa (asexual stage), this fungal disease is a significant threat to Florida citrus groves. The symptoms of black spot include hard spot lesions, virulent spot, cracked spot, and false melanose, all of which appear on the fruit surface (The Institute of Food and Agricultural Sciences Extension at University of Florida, 2010). After citrus trees are infected with citrus black spot, the fruit yield and quality are greatly reduced. Moreover, fruits infected with citrus black spot are not acceptable for fresh market export and domestic trade (Chung, 2009). Therefore, it is important to establish an early control system to achieve profitable production in Florida. Detecting fruits infected with citrus black spot can help in controlling the spread of this disease, specifically in areas that are black spot free. The design and implementation of computer vision technologies that efficiently detect citrus black spot would be an important aid in the control effort. The identification of various crops and plant parts using computer vision and image processing techniques has been studied by several researchers. Jimenez et al. (2000) surveyed several computer vision approaches for locating fruit in trees. Regunathan and Lee (2005) identified fruit count and size using machine vision and an ultrasonic sensor. Burks et al. (2000) developed a method for classification of weed species using color texture features and discriminant analysis. Tang et al. (1999) developed a texture based weed classification method using Gabor wavelets and neural networks for real time selective herbicide application. Pydipati et al. (2006) identified citrus disease using the co-occurrence matrix (CCM) texture feature method

PAGE 190

190 and a discrimin ant analysis. Du et al [9] described five different texture feature methods including the common first order gray level statistics (FGLS), run length matrix (RLM), gray level co occurrence matrix (GLCM), fractal dimension (FD), and wavelet transform (WT) based method. After that, both simple correlation analysis and partial least squares regression (PLSR), to investigate tenderness of cooled pork ham. Optical techniques ha ve been used widely in the food processing and inspection application in recent years In particular, hyperspectral imaging technologies have a growing interest in quality evolution and inspection of food and agricultural product (Sun, 2008). Previous research has demonstrated hyperspectral imaging technologies and applications for agricul tural product. Jiang et al. (2007) used hyper spectral fluorescence imaging to analyze the differences between walnut shells and meat. Hyper spectral fluorescence imaging system scanned samples at 79 different wavelengths ranging from 425nm to 775nm with 4 .5 nm increments and d ata redundancy of data was reduced by principal component analysis (PCA). Zhang et al (2005) suggested a creative classification approach for distinguishing healthy and fungal infected wheat kernels during storage. The research show ed the potential use of NIR hyper spectral imaging in grain quality assessment. The research used NIR hyper spectral imaging and support vector machine (SVM) for identifying the fungi that caused the infection. Kim et al. (2002) researched a method for usi ng hyper spectral data to identify wavebands to be used in multispectral detection systems, and evaluated spatial and spectral responses of hyper spectral reflectance images of fecal contaminated apples. Lee et al (2005) used the hyper spectral imaging tec hnique to detect defects on apple peel after harvest. They developed a proper wavelength selection method for

PAGE 191

191 detecting the defects. In hyperspectral image classification methods, a spectral angle mapper (SAM) and spectral information divergence (SID) clas sification methods were applied to identify defected agricultural productions. Park et al. (2007) used SAM algorithms to detect fecal and ingesta contaminants on the surface of poultry carcasses. Qin et al. (2009) introduced the detection of citrus canker using SID classification methods. Hyperspectral imaging algorithms described above are very useful to inspect citrus diseases, but they would limit their applications such as rapid on line disease detection because to large amount of 3 D hyperspectral imag ing data. Therefore, algorithms for optimal band selection and multispectral image classification are needed to fulfill the goal of real time black spot inspection. The overall objective of this study was to develop a multispectral algorithm based method t o detect citrus black spot on citrus peel. Specific objectives implemented to accomplish the overall objective were to: U se a hyperspectral imaging system to collect 3 D hyperspectral images from fruits with black spot, greasy spot, melanose, wind scar, ma rket ; D etermine optimal wavelength bands based on principal component analysis (PCA), singular vector decomposition (SVD), and correlation analysis (CA) ; and D evelop algorithms for multispectral imaging analysis and classifying the citrus peel conditions b ased selected wavelengths Materials and Methods Hyperspectral Image Acquisition Figure 9 1 shows a hyperspectral line scan imaging system for acquiring hyperspectral images of the fruit samples. The imaging system consisted of an electron multiplying cha rge coupled device (EMCCD) camera (Luca, Andor Technology Inc., CT, USA) with imaging spectrograph (ImSpector V10E, Spectral Imaging Ltd., Oulu,

PAGE 192

192 Finland) and a C mount lens (Rainbow CCTV S6X11, International Space Optics, S.A., Irvine, CA, USA), a pair of halogen line lamp (21V, 150W) powered with a DC voltage regulated power supply (Dolan Jenner Industries, Inc., Lawrence, MA, USA), and a programmable motorized positioning table (XN10 Xslide, Velmex Inc., Bloomfield, NY, USA). This equipment was placed in side in a dark box to eliminate stray external light. The hyperspectral imaging software to transfer data and parameterization was developed using the Andor Software Development Kit (SDK, Luca, Andor Technology Inc., CT, USA) for the hyperspectral line sca n imaging system. An Hg Ne spectral calibration lamp (Oriel Instruments, Stratford, CT, USA) was used to investigate spectral calibration of the system. Because of low light output in the visible region less than 450 nm, and low quantum efficiency of the E MCCD in the NIR region beyond 930 nm, the wavelength range between 451.67 nm and 927.71 nm was used (totaling 92 bands with a spectral resolution of 5.2 nm). Correlation Analysis Algorithms for Band Selection Correlation analysis algorithms using two wavel ength ratios was used to find the best pair of wavelengths for differentiating between black spot and no black spot conditions. Referring to Figure 9 2, 35 and 80 representative spectra of 'B lack S pot classes and 'N o B lack S pot classes were extracte d from the hyperspectral images, respectively. All the spectral data were extracted from region of interest (ROI) hyperspectral image data, and at least one region of interest ( ROI ) was selected from the representative spot for each sample. Average spectru m calculated from the merged ROIs within each sample was used as individual reflectance spectrum for the wavelength selections.

PAGE 193

193 For each of the black spot and no black spot spectra, two band ratio values were calculated for all possible two wavelength combinations. In the algorithm, the quality index for the black spot spectra was labeled "1", and the quality index for no black spot was labeled "0". Correlation analyses were performed to determine the correlation coefficient between the two band ratios and the fruit peel quality index values of the samples. The correlation analysis procedures described above were executed using a program developed in MATLAB R2007b (MathWorks, Natick, MA, USA), and extraction of reflectance spectra from the hyperspectral images was performed using ENVI 4.3 (ITT Visual Information Solutions, Boulder, CO, USA). Principal Component Analysis Algorithms for Band Selection Each score image from principal component analysis represents a linear summation of the original hyperspectral single band images at different wavelengths, weighted by the corresponding spectral loadings. The weighting coefficients from the spectral loading values are useful for identifying important wavelengths that are responsible for the unique features appearing in the score images. In Figure 9 3, the PC 3 score images showed great potential for identifying black spot from other fruit peel conditions. Figure 9 4 shows the detailed procedures for band selection using the PCA algorithm. First, a random sample of 20 grapefruit with black spot lesions was selected with a random number generator, and the average weighting coefficients for the optimal principal component were computed from these 20 samples. Next, optimal wavelengths were determined from the large absolute weighting coefficients. After the important wavelengths were identified, image classification was performed.
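Returning to the correlation analysis algorithm described at the beginning of this section, the exhaustive two-band ratio search can be sketched as follows. The spectra matrix, the label vector, and the use of corrcoef are illustrative assumptions; only the all-pairs search and the 0/1 quality index follow from the description above.

% Minimal sketch: correlation between two-band ratios and the black spot quality index (assumed variables)
% spectra: (35+80) x 92 matrix of ROI mean spectra; quality: vector of 1 (black spot) or 0 (no black spot)
nBands = size(spectra, 2);
r = zeros(nBands, nBands);
for i = 1:nBands
    for j = 1:nBands
        if i == j, continue; end
        ratio = spectra(:, i) ./ spectra(:, j);     % two-band ratio for every sample
        c = corrcoef(ratio, quality);               % correlation with the 0/1 quality index
        r(i, j) = c(1, 2);
    end
end
[rmax, idx] = max(abs(r(:)));                       % best band pair by absolute correlation
[bestI, bestJ] = ind2sub(size(r), idx);
contourf(r);                                        % contour map comparable to Figure 9-8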

PAGE 194

194 Singular Vector Decomposition Algorithms for Band Selection A detailed illustration of the singular vector decomposition classification algorithm is given in Figure 9 5. The hyperspectral reflectance images for singular vector decomposition (SVD) analysis were extracted from ROI hyperspectral image data using MATLAB R2007b (MathWorks, Natick, MA, USA). The ROI selection was started manually by determining a point on the hyperspectral reflectance image, and then completed by extracting a square portion with the dimension of 10 x 10 pixels centered on the determined point. Representative ROI images for each black spot peel condition used in this study are shown in Figure 9 6. After the ROI hyperspectral reflectance images were extracted, the ROIs were analyzed with SVD. All analysis was done using MATLAB R2007b (The Mathworks, Inc., Natick, Mass.) and the Hyperspectral Image Analysis Toolbox (HIAT) (Research Thrust R3, Northeastern University, USA). HIAT is a toolbox for MATLAB R2007b released for the development and evaluation of hyperspectral image analysis algorithms. Results and Discussion Mean Relative Reflectance Spectra of Black Spot The reflectance mean spectra and corresponding standard deviations of the samples with 5 different conditions over the wavelength range between 451.67 nm and 927.71 nm are represented in Figure 9 7. The reflectance spectra were calculated using the mean of 10 spectra from 10 randomly selected samples. Each plot in Figure 9 7 was generated from an average ROI window covering 3 x 3 pixels extracted from each hyperspectral image.
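Before presenting the results, the SVD-based band selection step described above can be sketched as follows, using MATLAB's built-in svd function rather than the HIAT toolbox. The rule of picking the bands weighted most heavily in the leading singular vector, and all variable names, are assumptions for illustration; the text above only specifies that four bands were chosen where the singular values stabilize.

% Minimal sketch: SVD of ROI spectra and selection of candidate wavelengths (assumed details)
% roiSpectra: (number of ROI pixels) x 92 matrix of black spot ROI reflectance spectra
[U, S, V] = svd(roiSpectra, 'econ');          % economy-size singular value decomposition
singularValues = diag(S);
plot(singularValues, 'o-');                   % values flatten after about 4 features (compare Figure 9-10)

nSelect = 4;                                  % number of bands suggested by the singular value plot
[sortedW, order] = sort(abs(V(:, 1)), 'descend');   % bands weighted most heavily in the first right singular vector
selectedBands = sort(order(1:nSelect));       % candidate band indices
selectedWavelengths = wavelengths(selectedBands);   % wavelengths: 1 x 92 vector of band centers (assumed)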

PAGE 195

195 In the mean spectra obtained from the hyperspectral images of black spot and the other conditions, the chlorophyll absorption peaks in the wavelength range from 400 nm (blue) to 600 nm (green) appear higher and more prominent. The relative reflectance increases steadily over the wavelengths 720 to 900 nm, but the reflectance rate is low (10 to 30%). Reflectance spectra from the hyperspectral images were extracted using ENVI 4.3 (ITT Visual Information Solutions, Boulder, CO, USA). Hyperspectral Band Selection using CA Figure 9 8 shows the contour plot of the correlation coefficients (r) between the two band ratios and the fruit peel conditions. The highest correlation coefficient (r) of 0.92 occurred in the visible region (724 nm and 698 nm, specifically). The band ratio with the second highest correlation value of 0.91 was in the near infrared (NIR) and visible regions (854 nm and 598 nm, specifically). Referring to Figure 9 6, the reflectance values of black spot at 724 nm and 698 nm were similar to those of the other disease conditions. However, at the wavelength of 598 nm black spot has the lowest reflectance values. Hyperspectral Band Selection using PCA Figure 9 9 shows the mean weighting coefficients for the third principal component from 20 black spot ROIs. The general pattern of the PC 3 loading revealed that wavelengths in the blue, green, red, and near infrared regions were significant factors for generating the PC 3 score images. Local peaks appeared on the loading curve across the entire spectral region from 400 to 900 nm, and they had larger absolute values than those at other wavelengths. As shown in Figure 9 9, four optimal wavelengths were selected, centered at 870 nm, 802 nm, 724 nm, and 671 nm. The wavelength of 671 nm likely represents the low chlorophyll content in the black spot lesions, and the wavelength in

PAGE 196

196 the NIR region (870 nm, 802 nm) would provide useful information that is insensitive to colors. These four wavelengths can be adopted by a multispectral imaging solution for real time black spot detection on a commercial sorting line. Hyperspectral Band Selection using SVD In singular vector decomposition algorithms, the number of features is not always easy to determine. The energy fraction could be used to argue for the usage of a given number of features. The number of features could also be determined from the characteristics of the singular values. When the singular values stabilize, the remaining features are usually contaminated with much noise and therefore not useful. Singular values of black spot are shown in Figure 9 10. From the number of features of four and up, the singular values are almost constant. Based on Figure 9 10, four optimal wavelengths (e.g., 907 nm, 724 nm, 677 nm and 572 nm) were determined by SVD. The optimal wavelengths selected by SVD showed that wavelengths in the red, green, and short wavelength NIR regions could be important for black spot identification. Specifically, the reflectance values of black spot at the wavelength of 677 nm reflect lower chlorophyll content than the other disease conditions. Their effectiveness for black spot detection also needs to be evaluated by image classification results. The Hyperspectral Images Selected by Band Selection Methods Figures 9 11 (a) and (b) show the representative single band reflectance images of oranges with five different conditions at the wavelengths with the highest and second highest correlation coefficients. As observed in Figures 9 11 (a) and (b), the images at wavelengths in the visible region (e.g., 598 nm, 698 nm, and 724 nm) had more distinctive spots and higher contrast on the sample surface. Fruit

PAGE 197

197 samples with normal conditions did not have the contrasting regions because of the absence of disease. Moreover, the other images at wavelengths in the NIR region (e.g., 854 nm) had low contrast between the diseased and the non diseased surface. Representative spectral reflectance images of diseased peel and normal conditions at the four wavelengths selected by PCA and SVD are shown in Figures 9 12 and 9 13. They also had more distinctive spots and higher contrast on the sample surface in the visible spectral region below 730 nm. Due to the relatively high reflectance characteristics of the diseased surface at long wavelengths, the contrast between diseased and non diseased spot surfaces was reduced in the NIR region. Classification of Black Spot using Band Ratio Images The ratio images of black spot using wavelengths selected by the CA, PCA and SVD algorithms are illustrated in Figures 9 14, 9 15 and 9 16. As shown in Figures 9 14 and 9 15, each of the PCA and SVD band selections has six band ratio images calculated using all six possible combinations of the four bands. For the ratio images from the CA band selection in Figure 9 16, the black spot revealed distinctive bright spots on the fruit surface due to their relatively high ratio values. The normal areas of the fruit peel appeared darker than the disease owing to their small ratio values at the selected wavelengths. However, the two band ratio images from the SVD and PCA band selections exhibited inconsistent patterns for black spot identification. The ratio images (870 nm/802 nm, 870 nm/724 nm, and 802 nm/724 nm) from PCA and the ratio images (917 nm/729 nm, 917 nm/572 nm, and 729 nm/572 nm) from SVD showed similar features to those obtained using the bands from CA. In PCA, the stem end and some edge areas appeared bright in the images (802 nm/671 nm, 870 nm/671 nm, and 724 nm/671 nm) as well as in the images in Figure 9 14. Moreover,

PAGE 198

198 they showed inconsistent brightness and darkness patterns of the black spot lesions. Referring to Figure 9 15, inconsistent brightness patterns were also observed in the ratio images (907 nm/677 nm, and 724 nm/677 nm) from SVD. The image (677 nm/572 nm) from SVD is not as useful as the other images because bright areas other than black spot could be sources of false positive errors. To separate the citrus black spots from the peel surfaces, a thresholding based segmentation was applied to the ratio images. In bi-level thresholding, pixels with values greater than the threshold value are set to "1" (black spot condition) while all others are set to "0" (non black spot condition and background). To determine threshold values for automated classification of black spot, Figures 9 17, 9 18 and 9 19 illustrate histograms for the ratio images of 802 nm/724 nm from PCA, 919 nm/572 nm from SVD, and 854 nm/598 nm from CA. The histograms show that the ratio values of black spot are larger than those of the other peel diseases, because black spot appeared bright in these images while non black spot lesions on the fruit surfaces appeared darker than the black spot areas. Histograms for the other ratio images failed to show high ratio values for black spot on the fruit surface and are omitted for brevity. After image thresholding, some defects might be present in the images. As shown in Figure 9 20, image opening morphological filtering can remove noise and false black spot regions from the segmented images. This post processing might reduce the misclassification of black spot diseases. The classification summary for differentiating black spot from other diseases and normal conditions using PCA is shown in Table 9 1. As previously indicated, the test data consisted of 135 images of black spot and 390 images of other conditions.
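The two-band ratio thresholding and morphological cleaning described above can be outlined as follows. The band indices, the fruit mask, and the fruit-level decision rule are assumed; the 802 nm/724 nm pair and the 1.45 threshold are the values reported for the PCA selection in this chapter.

% Minimal sketch: two-band ratio image, thresholding, and opening (assumed variable names)
b802 = double(cube(:, :, band802));        % single-band image near 802 nm (band802: band index, assumed)
b724 = double(cube(:, :, band724));        % single-band image near 724 nm
ratio = b802 ./ max(b724, eps);            % guard against division by zero outside the fruit

bw = ratio > 1.45;                         % threshold reported as best for the PCA-selected pair
bw = bw & mask;                            % keep only pixels on the fruit surface (assumed mask)
bw = imopen(bw, strel('disk', 2));         % remove noise and small false black spot regions

fruitHasBlackSpot = any(bw(:));            % fruit-level decision (assumed rule)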

PAGE 199

199 The best performance, 96.19%, was obtained with the (802 nm/724 nm) ratio image (Table 9 1). The classification accuracies for the 'Black Spot' class and the 'No Black Spot' class were 94.81% and 96.67%, respectively. There were 7 misclassified samples for the 'Black Spot' class and 13 misclassified samples for the 'No Black Spot' class, which consisted of greasy spot samples only. The other three non black spot classes ('Market', 'Wind Scar', and 'Melanose') had perfect classification results (100%). Based on Table 9 1, the (724 nm/671 nm) ratio image had the lowest classification accuracy (58.1%). Figure 9 21 illustrates the plot of the classification summary using 10 threshold values for differentiating black spot from other conditions using the (802 nm/724 nm) ratio image from PCA. The threshold value was increased from 1.2 to 1.6 with a step size of 0.05 (Figure 9 21). As indicated in Figure 9 21, the value of 1.45 provided the best performance of 96.19%. When the threshold was changed from 1.2 to 1.45, the classification accuracy increased from 44.19% to 96.19%, respectively. However, the accuracy decreased when the threshold value was increased to 1.5. In the classification using the SVD band selection, as shown in Table 9 2, the ratio images consisted of three pairs of two visible ratio bands and three pairs of NIR and visible ratio bands. As shown in Table 9 2, for the best classification result, the classification accuracies for the 'Black Spot' class and the 'No Black Spot' class were 97.04% and 93.59%, respectively. Four samples were misclassified in the 'Black Spot' class, and there were 25 samples that were misclassified in the 'No Black Spot' class, which included 23 'Greasy Spot' samples and 2 'Wind Scar' samples. This misclassification demonstrated the confusion in classifying black spot between greasy spot and wind scar. On the other hand, all 'Market' and 'Melanose' conditions were correctly classified.

PAGE 200

200 As shown in Table 9 2, three pairs of NIR and visible ratio images generated slightly worst result than that of two visible ratio images. Two pair of NIR and visibl e ( 9 0 7 nm/ 72 4 nm, and 917 nm/ 572 nm) ratio images achieved the high classification accuracies (above 90 % ) being slightly higher than those of two v isible ratio images. The above SVD results suggested that NIR and visible ratio bands necessarily improve t he classification performance. Figure 9 2 2 shows the effect of changing the threshold to classification accuracy, which increased from 8 2 48 % ( 2.70 ) and peaked to 94. 48 % ( 3. 2 0 ) and decreased to about 9 1 43 %. The overall classification accuracy peaked at th e threshold of 3.40, illustrated in Figure 9 2 3 by which the accuracy achieved 94. 8 6 %. Table 9 3 summarizes detailed classification results for identifying black spot from normal and other diseased peel conditions using all ratio images from CA wavelength selections. As shown in Table 9 3, 98.51% of the black spot were correctly classified as 'Black Spot' class, and accuracy for the 365 samples in 'No Black Spot' class was computed as 93. 5 9 %. A total of 27 samples were misclassified among all 525. 'Greasy Spot' samples were classified with the low accuracy of 72.22% due to their relatively high ratio values using the ratio bands selected by CA, and all other conditions were correctly classified. Summary Our previous studies showed that disease detection ba sed on hyperspectral reflectance imaging is useful for detection of black spot disease. Although good results for classification of citrus diseases were obtained with hyperspectral image techniques disease detection using multispectral algorithms should b e evaluated to further improve accuracies and reduce computational burden for real time detection system. Therefore, band selection of hyperspectral imaging is important to fulfill the goal of rapid on line

PAGE 201

201 disease inspection using multispectral techniques In this study, Principal component analysis, singular value decomposition, and correlation analysis algorithms were developed to find a few optimal wavelengths for multispectral detection of black spot. Based on the results, a black spot classification accuracy of 96.19 % was obtained using the PCA approach with the NIR and visible ratio image (802 nm/724 nm) and an optimal threshold value of 1.5 The SVD approach had a black spot classification accuracy of 94. 48 % with the NIR and visible ratio image (9 1 7 nm/572 nm) and a n optimal threshold value (3.20) In CA algorithms, the (854 nm/598 nm) ratio images that had second highest correlation coefficient (9.1) provided the best classification accuracy (94.86%). As shown in all results, t he classification accu racies for 'Greasy Spot' were relatively lower than other disease conditions because, reflectance spectra of greasy spot illustrated in Figure 9 7 were very close to those of the black spot condition which contributed to the misclassification. Thus, th e hy perspectral imaging parameters that will enhance the difference between the black spot samples and other confounding conditions such as greasy spot will be considered in next studies. Referring to Table s 9 1 9 2, and 9 3 the short wavelength NIR region and visible region (NIR/ v is ible ) ratio bands gave the good performances of three classification approaches among all combinations of the ratio images. Thus, this band pair (NIR/visible) has great potential to be adopted by a two band multispectral imaging system for real time black spot detection. In the control of threshold values, the best overall classification accuracies were determined based on the assumption that the incorrect classification of a fruit without black spot and with black spot (i.e., fal se negative and false positive) for was equally weighted. In practice, the control of the two

PAGE 202

202 types errors will be needed for adjusting the threshold values to meet the requirements of real time system. This research demonstrated that multispectral algorit hms using hyperspectral imaging could be used for differentiating black spot from other disease conditions. Future studies will explore the utility of these algorithms in a real time black spot detection system.

PAGE 203

203 Figure 9 1. Hyperspectral line s can imaging system Figure 9 2. Data analysis and classification algorithms for CA

PAGE 204

204 Figure 9 3 First five score images obtained from principal component analysis for hyperspectral images of grapefruit samples

PAGE 205

205 Figure 9 4 Data analysis and classifi cation algorithms for PCA Figure 9 5 Data analysis and classification algorithms for SVD

PAGE 206

206 Figure 9 6 Representative ROI images for each black spot peel conditions Figure 9 7 The reflectance spectra of the samples with black spot, normal and diff erent symptoms over the wavelength range between 483 and 959 nm

PAGE 207

207 Figure 9 8 The countour plot the correlation coefficients (r) between two band ratio and fruit peel conditions Figure 9 9 Mean weighting coefficients for the third principal component f rom 20 black spot ROIs

PAGE 208

208 Figure 9 10 Singular value estimates of black spot conditions

PAGE 209

209 Figure 9 1 1 The representative single band reflectance images with five different conditions at the wavelengths with (a) highest correlati on coefficients and (b) second highest correlation coefficients by CA

PAGE 210

210 Figure 9 1 2 Representative spectral reflectance images of diseased peel and normal conditions at four wavelength selected by PCA

PAGE 211

211 Figure 9 1 3 Represen tative spectral reflectance images of diseased peel and normal conditions at four wavelength selected by SVD

PAGE 212

212 Figure 9 1 4 The ratio images of black spot using wavelengths selected by PCA Figure 9 1 5 The ratio images of b lack spot using wavelengths selected by SVD

PAGE 213

213 Figure 9 1 6 The ratio images of black spot using wavelengths selected by CA Figure 9 1 7 The histogram for ratio images of 802 nm/724 nm from PCA

PAGE 214

214 Figure 9 1 8 The histogram f or ratio images of 919 nm/572 nm from SVD Figure 9 1 9 The histogram for ratio images of 854 nm/598 nm from CA

PAGE 215

215 Figure 9 20 Identification of black spot lesions on the fruit peel based on two band ratio images using wavelengths selected by PCA (R802/ R724)

PAGE 216

216 Figure 9 2 1 C lassification accuracies obtained from the ratio image (802 nm/724 nm) selected by PCA Figure 9 2 2 C lassification accuracies obtained from the ratio image ( 919 nm/572 nm ) selected by SVD

PAGE 217

217 Figure 9 2 3 C lassification accurac ies obtained from the ratio image ( 854 nm/598 nm ) selected by CA

PAGE 218

218 Table 9 1. The classification summary for differentiating black spot from other diseases and normal conditions using PCA
Range                          NIR/NIR     NIR/Visible  NIR/Visible  NIR/Visible  NIR/Visible  Vis./Vis.
Ratio                          R870/R802   R870/R724    R870/R671    R802/R724    R802/R671    R724/R671
Misclassified
Black Spot (n=135)             102         30           45           7            50           92
Greasy Spot (n=90)             71          3            68           13           75           68
Melanose (n=105)               0           0            16           0            29           29
Wind Scar (n=105)              52          0            22           0            20           19
Market (n=90)                  5           0            7            0            14           12
Classification Accuracy, %
'Black Spot' Class (n=135)     8.88        77.78        66.67        94.81        62.96        31.85
'No Black Spot' Class (n=390)  67.18       99.23        71.03        96.67        64.62        67.18
Overall Accuracy               77.14       93.52        69.90        96.19        64.19        58.1

PAGE 219

219 Table 9 2. The classification summary for differentiating black spot from other diseases and normal conditions using SVD
Range                          NIR/Visible  NIR/Visible  NIR/Visible  Visible/Visible  Visible/Visible  Visible/Visible
Ratio                          R907/R724    R907/R677    R917/R572    R724/R677        R724/R572        R677/R572
Misclassified
Black Spot (n=135)             44           74           4            35               20               50
Greasy Spot (n=90)             7            52           23           87               31               4
Melanose (n=105)               0            3            0            70               0                9
Wind Scar (n=105)              1            18           2            45               1                24
Market (n=90)                  0            3            0            40               1                0
Classification Accuracy, %
'Black Spot' Class (n=135)     67.40        45.18        97.04        74.07            85.19            62.96
'No Black Spot' Class (n=390)  97.95        80.51        93.59        37.95            91.54            90.51
Overall Accuracy               90.10        74.43        94.48        47.23            89.90            83.43

PAGE 220

220 Table 9 3. The classification summary for differentiating black spot from other diseases and normal conditions using CA
Range                          NIR/Visible  Visible/Visible
Ratio                          R854/R598    R724/R698
Misclassified
Black Spot (n=135)             2            59
Greasy Spot (n=90)             25           20
Melanose (n=105)               0            0
Wind Scar (n=105)              0            6
Market (n=90)                  0            2
Classification Accuracy, %
'Black Spot' Class (n=135)     98.51        22.96
'No Black Spot' Class (n=390)  93.59        92.82
Overall Accuracy               94.86        73.14

PAGE 221

221 CHAPTER 10 CONCLUSION AND FUTURE WORK Conclusions Florida has two important citrus markets: the fresh citrus fruit market and the processed citrus products market. However, the citrus market in Florida is currently threatened by three major citrus diseases: citrus greening, canker, and black spot. For these reasons, a lot of effort has been put into automating the detection of citrus diseases using machine vision inspection, with great success. This research started with a color imaging system. The color imaging system provided an efficient laboratory based detection method for citrus canker and greening diseases, and showed that such techniques as image processing and pattern recognition have the potential for automated disease detection applications. Moreover, this research demonstrated that color imaging and texture feature analysis could be used for differentiating citrus diseases under controlled laboratory lighting conditions. Model HSI emerged as the best data model for classification of the citrus canker and greening disease conditions. Model HSI provided good results, averaging above 95% overall classification accuracy for both canker disease and citrus greening disease. Similarly, the HSI models gave the best classification accuracy (above 90%) in the pattern recognition method comparison. Through a series of random tests, the stability of the classification accuracy was demonstrated. This research also presented new applications of hyperspectral imaging for measuring the optical properties of fruits and disease blemishes. In this study, a hyperspectral imaging system was developed to recognize fruits exhibiting symptoms of citrus diseases along with fruits with other peel conditions and market quality fruits. For


Future Work

This dissertation presented the development of an efficient laboratory-based detection method for citrus diseases. Although the overall performance of this research was good, significant future work is needed to develop outdoor-based algorithms that could be applied to autonomous scouting. The research on computer vision algorithms for citrus disease detection encompassed various disease samples and included numerous computer vision and image processing techniques for the overall detection system. The selection of disease samples was guided by season and disease symptoms. The initial leaf samples for citrus greening were collected during the fall season, while later sample collection was completed in the spring season. The spring samples were collected in March 2008, but they were not suitable for research because the number of samples was small and some of the samples were dead.


For further work, extensive research on the detection of citrus canker and greening is planned with new spring samples. This work will build on the research data presented for the spring and fall seasons. It is anticipated that the focus of this study will be to establish an improved image processing approach and classification method.

The effect of the size of the disease lesions was not studied in this research. Morphological filtering in the image processing step carries the risk of missing fruit with small disease lesions. Therefore, future work will be performed to find the detection limit for the size of canker lesions, as well as the effects of fruit harvest time on citrus disease identification.

The development of a real-time citrus disease detection system is one of the most important items of further work. This system will be implemented on a small-scale fruit transport machine. The optical inspection module will be designed based on a two-band spectral imaging system.


APPENDIX A
MATLAB CODE FILES FOR EDGE DETECTION

%% Matlab code for Edge Detection
%% Program developed by Dae Gwan Kim, Research Assistant
clear;
for i=1:62   % The number of image samples
    cd('C:\Documents and Settings\student\Desktop\Research\greening\greening_0903\mn');
    % read the image name (load the images numbers)
    % ex) image_1.bmp, image_2.bmp, image_3.bmp, image_4.bmp, .....
    original_name='mn_st_f_';
    % [X,map] = imread([original_name '.jpg']);

    % Preprocessing images for edge detection
    X = double(imread([original_name int2str(i) '.jpg']))/255;
    X = imresize(X,0.5);

    % converting RGB images to Binary images
    hsvIm = rgb2hsv(X);
    im = hsvIm(:,:,2);
    oim = imadjust(im);
    oim = oim/max(max(oim));
    mim2 = immultiply(im, oim);
    mim3 = imadjust(mim2);
    oim3 = mim3/max(max(mim3));
    mim4 = immultiply(mim3, oim3);
    mim5 = imadjust(mim4);
    BW = double(im2bw(im,0.5));

    % Create morphological structuring element
    se1 = strel('diamond',2);
    se2 = strel('diamond',2);

    % control image edge using 'imdilate' and 'imerode' commands
    BW = imdilate(BW,se1);
    BW = imerode(BW,se2);
    BW = imerode(BW,se2);
    BW = imerode(BW,se2);
    BW = imerode(BW,se2);
    BW = imerode(BW,se2);
    BW = imerode(BW,se2);
    BW = imerode(BW,se2);
    BW = imerode(BW,se2);
    BW = imdilate(BW,se1);
    BW = imdilate(BW,se1);


    BW = imdilate(BW,se1);
    BW = imdilate(BW,se1);
    BW = imdilate(BW,se1);
    BW = imdilate(BW,se1);
    BW = imdilate(BW,se1);
    BW = imerode(BW,se2);
    BW = imerode(BW,se2);
    BW = imerode(BW,se2);
    BW = imerode(BW,se2);
    BW = imerode(BW,se2);
    BW = imerode(BW,se2);

    % Fill image regions and holes
    BW = imfill(BW,'holes');
    fim(:,:,1) = immultiply(X(:,:,1),BW);
    fim(:,:,2) = immultiply(X(:,:,2),BW);
    fim(:,:,3) = immultiply(X(:,:,3),BW);
    figure(3)
    imshow(fim)

    cd('C:\Documents and Settings\student\Desktop\Research\greening\greening_0903\mn\edge');
    % write resized image
    % ex) 64x64_image_1.png, 64x64_image_2.png, ...
    imwrite(fim, ['edge_' original_name int2str(i) '.jpg'], 'jpg');
end
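The long chains of imerode and imdilate calls above smooth the leaf mask through repeated erosion and dilation. A more compact variant, sketched below under the assumption that a single larger structuring element gives an acceptable mask, uses the Image Processing Toolbox functions imopen and imclose; the structuring-element size is an assumption and would need tuning to reproduce the original mask.

% Hedged alternative sketch (not the implementation used above): replace the
% erode/dilate chains with one opening and one closing using a larger element.
se = strel('diamond', 8);          % assumed size; tune to match the original mask
BW = imopen(BW, se);               % erosion followed by dilation: removes small specks
BW = imclose(BW, se);              % dilation followed by erosion: closes small gaps
BW = imfill(BW, 'holes');          % same hole-filling step as in the original loop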


APPENDIX B
MATLAB CODE FILES FOR PATTERN RECOGNITION METHODS

%% Code Files for Pattern Recognition Methods
%% Citrus Greening Identification Project for Models
%% Program developed by Dae Gwan Kim, Research Assistant
% '1' represents Blotch Mottle
% '2' represents Iron Deficiency
% '3' represents Islands
% '4' represents MN Deficiency
% '5' represents Normal leaf
% '6' represents Zinc Deficiency

%% Blotchy mottle
clear;
% load each data sets
data=load('blotch_0907.mat');
h=data.blotch_0907_h;
s=data.blotch_0907_s;
i=data.blotch_0907_i;

% load reduced texture features
% HSI_15
tr_bm_hsi=[s(5,1:30);i(2,1:30);h(7,1:30);h(2,1:30);s(6,1:30);s(4,1:30);h(9,1:30);s(8,1:30);i(6,1:30);s(13,1:30);h(4,1:30);i(4,1:30);i(13,1:30);s(7,1:30);i(7,1:30)]';
ts_bm_hsi=[s(5,31:60);i(2,31:60);h(7,31:60);h(2,31:60);s(6,31:60);s(4,31:60);h(9,31:60);s(8,31:60);i(6,31:60);s(13,31:60);h(4,31:60);i(4,31:60);i(13,31:60);s(7,31:60);i(7,31:60)]';
% HS_9
tr_bm_hs=[s(5,1:30);h(7,1:30);h(5,1:30);h(12,1:30);s(4,1:30);s(7,1:30);h(8,1:30);s(8,1:30);h(3,1:30);s(11,1:30)]';
ts_bm_hs=[s(5,31:60);h(7,31:60);h(5,31:60);h(12,31:60);s(4,31:60);s(7,31:60);h(8,31:60);s(8,31:60);h(3,31:60);s(11,31:60)]';
% I_11
tr_bm_i=[i(2,1:30);i(8,1:30);i(9,1:30);i(6,1:30);i(5,1:30);i(7,1:30);i(10,1:30);i(1,1:30)]';
ts_bm_i=[i(2,31:60);i(8,31:60);i(9,31:60);i(6,31:60);i(5,31:60);i(7,31:60);i(10,31:60);i(1,31:60)]';
% HSI_39
tr_bm_hsi39=[h(:,1:30);s(:,1:30);i(:,1:30)]';
ts_bm_hsi39=[h(:,31:60);s(:,31:60);i(:,31:60)]';

%% Iron Deficiency
% load each data sets
data=load('iron_0907.mat');
h=data.iron_0907_h;


s=data.iron_0907_s;
i=data.iron_0907_i;

% load reduced texture features
% HSI_15
tr_iron_hsi=[s(5,1:30);i(2,1:30);h(7,1:30);h(2,1:30);s(6,1:30);s(4,1:30);h(9,1:30);s(8,1:30);i(6,1:30);s(13,1:30);h(4,1:30);i(4,1:30);i(13,1:30);s(7,1:30);i(7,1:30)]';
ts_iron_hsi=[s(5,31:60);i(2,31:60);h(7,31:60);h(2,31:60);s(6,31:60);s(4,31:60);h(9,31:60);s(8,31:60);i(6,31:60);s(13,31:60);h(4,31:60);i(4,31:60);i(13,31:60);s(7,31:60);i(7,31:60)]';
% HS_9
tr_iron_hs=[s(5,1:30);h(7,1:30);h(5,1:30);h(12,1:30);s(4,1:30);s(7,1:30);h(8,1:30);s(8,1:30);h(3,1:30);s(11,1:30)]';
ts_iron_hs=[s(5,31:60);h(7,31:60);h(5,31:60);h(12,31:60);s(4,31:60);s(7,31:60);h(8,31:60);s(8,31:60);h(3,31:60);s(11,31:60)]';
% I_11
tr_iron_i=[i(2,1:30);i(8,1:30);i(9,1:30);i(6,1:30);i(5,1:30);i(7,1:30);i(10,1:30);i(1,1:30)]';
ts_iron_i=[i(2,31:60);i(8,31:60);i(9,31:60);i(6,31:60);i(5,31:60);i(7,31:60);i(10,31:60);i(1,31:60)]';
% HSI_39
tr_iron_hsi39=[h(:,1:30);s(:,1:30);i(:,1:30)]';
ts_iron_hsi39=[h(:,31:60);s(:,31:60);i(:,31:60)]';

%% Islands
% load each data sets
data=load('island_0907.mat');
h=data.island_0907_h;
s=data.island_0907_s;
i=data.island_0907_i;

% load reduced texture features
% HSI_15
tr_is_hsi=[s(5,1:30);i(2,1:30);h(7,1:30);h(2,1:30);s(6,1:30);s(4,1:30);h(9,1:30);s(8,1:30);i(6,1:30);s(13,1:30);h(4,1:30);i(4,1:30);i(13,1:30);s(7,1:30);i(7,1:30)]';
ts_is_hsi=[s(5,31:60);i(2,31:60);h(7,31:60);h(2,31:60);s(6,31:60);s(4,31:60);h(9,31:60);s(8,31:60);i(6,31:60);s(13,31:60);h(4,31:60);i(4,31:60);i(13,31:60);s(7,31:60);i(7,31:60)]';
% HS_9
tr_is_hs=[s(5,1:30);h(7,1:30);h(5,1:30);h(12,1:30);s(4,1:30);s(7,1:30);h(8,1:30);s(8,1:30);h(3,1:30);s(11,1:30)]';
ts_is_hs=[s(5,31:60);h(7,31:60);h(5,31:60);h(12,31:60);s(4,31:60);s(7,31:60);h(8,31:60);s(8,31:60);h(3,31:60);s(11,31:60)]';
% I_11
tr_is_i=[i(2,1:30);i(8,1:30);i(9,1:30);i(6,1:30);i(5,1:30);i(7,1:30);i(10,1:30);i(1,1:30)]';
ts_is_i=[i(2,31:60);i(8,31:60);i(9,31:60);i(6,31:60);i(5,31:60);i(7,31:60);i(10,31:60);i(1,31:60)]';
% HSI_39
tr_is_hsi39=[h(:,1:30);s(:,1:30);i(:,1:30)]';
ts_is_hsi39=[h(:,31:60);s(:,31:60);i(:,31:60)]';


%% MN Deficiency
% load each data sets
data=load('mn_0907.mat');
h=data.mn_0907_h;
s=data.mn_0907_s;
i=data.mn_0907_i;

% load reduced texture features
% HSI_15
tr_mn_hsi=[s(5,1:30);i(2,1:30);h(7,1:30);h(2,1:30);s(6,1:30);s(4,1:30);h(9,1:30);s(8,1:30);i(6,1:30);s(13,1:30);h(4,1:30);i(4,1:30);i(13,1:30);s(7,1:30);i(7,1:30)]';
ts_mn_hsi=[s(5,31:60);i(2,31:60);h(7,31:60);h(2,31:60);s(6,31:60);s(4,31:60);h(9,31:60);s(8,31:60);i(6,31:60);s(13,31:60);h(4,31:60);i(4,31:60);i(13,31:60);s(7,31:60);i(7,31:60)]';
% HS_9
tr_mn_hs=[s(5,1:30);h(7,1:30);h(5,1:30);h(12,1:30);s(4,1:30);s(7,1:30);h(8,1:30);s(8,1:30);h(3,1:30);s(11,1:30)]';
ts_mn_hs=[s(5,31:60);h(7,31:60);h(5,31:60);h(12,31:60);s(4,31:60);s(7,31:60);h(8,31:60);s(8,31:60);h(3,31:60);s(11,31:60)]';
% I_11
tr_mn_i=[i(2,1:30);i(8,1:30);i(9,1:30);i(6,1:30);i(5,1:30);i(7,1:30);i(10,1:30);i(1,1:30)]';
ts_mn_i=[i(2,31:60);i(8,31:60);i(9,31:60);i(6,31:60);i(5,31:60);i(7,31:60);i(10,31:60);i(1,31:60)]';
% HSI_39
tr_mn_hsi39=[h(:,1:30);s(:,1:30);i(:,1:30)]';
ts_mn_hsi39=[h(:,31:60);s(:,31:60);i(:,31:60)]';

%% Normal Leaves
% load each data sets
data=load('no_0907.mat');
h=data.no_0907_h;
s=data.no_0907_s;
i=data.no_0907_i;

% load reduced texture features
% HSI_15
tr_nl_hsi=[s(5,1:30);i(2,1:30);h(7,1:30);h(2,1:30);s(6,1:30);s(4,1:30);h(9,1:30);s(8,1:30);i(6,1:30);s(13,1:30);h(4,1:30);i(4,1:30);i(13,1:30);s(7,1:30);i(7,1:30)]';
ts_nl_hsi=[s(5,31:60);i(2,31:60);h(7,31:60);h(2,31:60);s(6,31:60);s(4,31:60);h(9,31:60);s(8,31:60);i(6,31:60);s(13,31:60);h(4,31:60);i(4,31:60);i(13,31:60);s(7,31:60);i(7,31:60)]';
% HS_9
tr_nl_hs=[s(5,1:30);h(7,1:30);h(5,1:30);h(12,1:30);s(4,1:30);s(7,1:30);h(8,1:30);s(8,1:30);h(3,1:30);s(11,1:30)]';
ts_nl_hs=[s(5,31:60);h(7,31:60);h(5,31:60);h(12,31:60);s(4,31:60);s(7,31:60);h(8,31:60);s(8,31:60);h(3,31:60);s(11,31:60)]';


% I_11
tr_nl_i=[i(2,1:30);i(8,1:30);i(9,1:30);i(6,1:30);i(5,1:30);i(7,1:30);i(10,1:30);i(1,1:30)]';
ts_nl_i=[i(2,31:60);i(8,31:60);i(9,31:60);i(6,31:60);i(5,31:60);i(7,31:60);i(10,31:60);i(1,31:60)]';
% HSI_39
tr_nl_hsi39=[h(:,1:30);s(:,1:30);i(:,1:30)]';
ts_nl_hsi39=[h(:,31:60);s(:,31:60);i(:,31:60)]';

%% Zinc Deficiency
% load each data sets
data=load('zinc_0907.mat');
h=data.zinc_0907_h;
s=data.zinc_0907_s;
i=data.zinc_0907_i;

% load reduced texture features
% HSI_15
tr_zinc_hsi=[s(5,1:30);i(2,1:30);h(7,1:30);h(2,1:30);s(6,1:30);s(4,1:30);h(9,1:30);s(8,1:30);i(6,1:30);s(13,1:30);h(4,1:30);i(4,1:30);i(13,1:30);s(7,1:30);i(7,1:30)]';
ts_zinc_hsi=[s(5,31:60);i(2,31:60);h(7,31:60);h(2,31:60);s(6,31:60);s(4,31:60);h(9,31:60);s(8,31:60);i(6,31:60);s(13,31:60);h(4,31:60);i(4,31:60);i(13,31:60);s(7,31:60);i(7,31:60)]';
% HS_9
tr_zinc_hs=[s(5,1:30);h(7,1:30);h(5,1:30);h(12,1:30);s(4,1:30);s(7,1:30);h(8,1:30);s(8,1:30);h(3,1:30);s(11,1:30)]';
ts_zinc_hs=[s(5,31:60);h(7,31:60);h(5,31:60);h(12,31:60);s(4,31:60);s(7,31:60);h(8,31:60);s(8,31:60);h(3,31:60);s(11,31:60)]';
% I_11
tr_zinc_i=[i(2,1:30);i(8,1:30);i(9,1:30);i(6,1:30);i(5,1:30);i(7,1:30);i(10,1:30);i(1,1:30)]';
ts_zinc_i=[i(2,31:60);i(8,31:60);i(9,31:60);i(6,31:60);i(5,31:60);i(7,31:60);i(10,31:60);i(1,31:60)]';
% HSI_39
tr_zinc_hsi39=[h(:,1:30);s(:,1:30);i(:,1:30)]';
ts_zinc_hsi39=[h(:,31:60);s(:,31:60);i(:,31:60)]';

%% PRTools for Pattern Recognition
% make each training set
hsi_training=dataset([tr_bm_hsi;tr_iron_hsi;tr_is_hsi;tr_mn_hsi;tr_nl_hsi;tr_zinc_hsi],genlab([30 30 30 30 30 30],[1 2 3 4 5 6]'));
hs_training=dataset([tr_bm_hs;tr_iron_hs;tr_is_hs;tr_mn_hs;tr_nl_hs;tr_zinc_hs],genlab([30 30 30 30 30 30],[1 2 3 4 5 6]'));
i_training=dataset([tr_bm_i;tr_iron_i;tr_is_i;tr_mn_i;tr_nl_i;tr_zinc_i],genlab([30 30 30 30 30 30],[1 2 3 4 5 6]'));
hsi39_training=dataset([tr_bm_hsi39;tr_iron_hsi39;tr_is_hsi39;tr_mn_hsi39;tr_nl_hsi39;tr_zinc_hsi39],genlab([30 30 30 30 30 30],[1 2 3 4 5 6]'));

% make each test set for HSI_15 model


hsi_test_bm=dataset([ts_bm_hsi],genlab([30],[1]'));
hsi_test_iron=dataset([ts_iron_hsi],genlab([30],[2]'));
hsi_test_is=dataset([ts_is_hsi],genlab([30],[3]'));
hsi_test_mn=dataset([ts_mn_hsi],genlab([30],[4]'));
hsi_test_nl=dataset([ts_nl_hsi],genlab([30],[5]'));
hsi_test_zinc=dataset([ts_zinc_hsi],genlab([30],[6]'));

% make each test set for HS_9 model
hs_test_bm=dataset([ts_bm_hs],genlab([30],[1]'));
hs_test_iron=dataset([ts_iron_hs],genlab([30],[2]'));
hs_test_is=dataset([ts_is_hs],genlab([30],[3]'));
hs_test_mn=dataset([ts_mn_hs],genlab([30],[4]'));
hs_test_nl=dataset([ts_nl_hs],genlab([30],[5]'));
hs_test_zinc=dataset([ts_zinc_hs],genlab([30],[6]'));

% make each test set for I_11 model
i_test_bm=dataset([ts_bm_i],genlab([30],[1]'));
i_test_iron=dataset([ts_iron_i],genlab([30],[2]'));
i_test_is=dataset([ts_is_i],genlab([30],[3]'));
i_test_mn=dataset([ts_mn_i],genlab([30],[4]'));
i_test_nl=dataset([ts_nl_i],genlab([30],[5]'));
i_test_zinc=dataset([ts_zinc_i],genlab([30],[6]'));

% make each test set for HSI_39 model
hsi39_test_bm=dataset([ts_bm_hsi39],genlab([30],[1]'));
hsi39_test_iron=dataset([ts_iron_hsi39],genlab([30],[2]'));
hsi39_test_is=dataset([ts_is_hsi39],genlab([30],[3]'));
hsi39_test_mn=dataset([ts_mn_hsi39],genlab([30],[4]'));
hsi39_test_nl=dataset([ts_nl_hsi39],genlab([30],[5]'));
hsi39_test_zinc=dataset([ts_zinc_hsi39],genlab([30],[6]'));

% Three Pattern Recognition methods for HSI_15 model
hsi_fisher=fisherc(hsi_training);
hsi_bp=bpxnc(hsi_training);
hsi_svc=svc(hsi_training,'p',2);

% Three Pattern Recognition methods for HS_9 model
hs_fisher=fisherc(hs_training);
hs_bp=bpxnc(hs_training);
hs_svc=svc(hs_training,'p',2);

% Three Pattern Recognition methods for I_11 model
i_fisher=fisherc(i_training);
i_bp=bpxnc(i_training);
i_svc=svc(i_training,'p',2);


% Three Pattern Recognition methods for HSI_39 model
hsi39_fisher=fisherc(hsi39_training);
hsi39_bp=bpxnc(hsi39_training);
hsi39_svc=svc(hsi39_training,'p',2);

%% classification results using FDA
% Classification results for HSI_15 Model
blotchy_mottle_hsi=testc(hsi_test_bm*hsi_fisher)
iron_hsi=testc(hsi_test_iron*hsi_fisher)
island_hsi=testc(hsi_test_is*hsi_fisher)
mn_hsi=testc(hsi_test_mn*hsi_fisher)
normal_leave_hsi=testc(hsi_test_nl*hsi_fisher)
zinc_hsi=testc(hsi_test_zinc*hsi_fisher)

% Classification results for HS_9 Model
blotchy_mottle_hs=testc(hs_test_bm*hs_fisher)
iron_hs=testc(hs_test_iron*hs_fisher)
island_hs=testc(hs_test_is*hs_fisher)
mn_hs=testc(hs_test_mn*hs_fisher)
normal_leave_hs=testc(hs_test_nl*hs_fisher)
zinc_hs=testc(hs_test_zinc*hs_fisher)

% Classification results for I_11 Model
blotchy_mottle_i=testc(i_test_bm*i_fisher)
iron_i=testc(i_test_iron*i_fisher)
island_i=testc(i_test_is*i_fisher)
mn_i=testc(i_test_mn*i_fisher)
normal_leave_i=testc(i_test_nl*i_fisher)
zinc_i=testc(i_test_zinc*i_fisher)

% Classification results for HSI_39 Model
blotchy_mottle_hsi39=testc(hsi39_test_bm*hsi39_fisher)
iron_hsi39=testc(hsi39_test_iron*hsi39_fisher)
island_hsi39=testc(hsi39_test_is*hsi39_fisher)
mn_hsi39=testc(hsi39_test_mn*hsi39_fisher)
normal_leave_hsi39=testc(hsi39_test_nl*hsi39_fisher)
zinc_hsi39=testc(hsi39_test_zinc*hsi39_fisher)

%% classification results using BP Neural Network
% Classification results for HSI_15 Model
blotchy_mottle_hsi=testc(hsi_test_bm*hsi_bp)
iron_hsi=testc(hsi_test_iron*hsi_bp)
island_hsi=testc(hsi_test_is*hsi_bp)
mn_hsi=testc(hsi_test_mn*hsi_bp)


normal_leave_hsi=testc(hsi_test_nl*hsi_bp)
zinc_hsi=testc(hsi_test_zinc*hsi_bp)

% Classification results for HS_9 Model
blotchy_mottle_hs=testc(hs_test_bm*hs_bp)
iron_hs=testc(hs_test_iron*hs_bp)
island_hs=testc(hs_test_is*hs_bp)
mn_hs=testc(hs_test_mn*hs_bp)
normal_leave_hs=testc(hs_test_nl*hs_bp)
zinc_hs=testc(hs_test_zinc*hs_bp)

% Classification results for I_11 Model
blotchy_mottle_i=testc(i_test_bm*i_bp)
iron_i=testc(i_test_iron*i_bp)
island_i=testc(i_test_is*i_bp)
mn_i=testc(i_test_mn*i_bp)
normal_leave_i=testc(i_test_nl*i_bp)
zinc_i=testc(i_test_zinc*i_bp)

% Classification results for HSI_39 Model
blotchy_mottle_hsi39=testc(hsi39_test_bm*hsi39_bp)
iron_hsi39=testc(hsi39_test_iron*hsi39_bp)
island_hsi39=testc(hsi39_test_is*hsi39_bp)
mn_hsi39=testc(hsi39_test_mn*hsi39_bp)
normal_leave_hsi39=testc(hsi39_test_nl*hsi39_bp)
zinc_hsi39=testc(hsi39_test_zinc*hsi39_bp)

%% classification results using SVC
% Classification results for HSI_15 Model
blotchy_mottle_hsi=testc(hsi_test_bm*hsi_svc)
iron_hsi=testc(hsi_test_iron*hsi_svc)
island_hsi=testc(hsi_test_is*hsi_svc)
mn_hsi=testc(hsi_test_mn*hsi_svc)
normal_leave_hsi=testc(hsi_test_nl*hsi_svc)
zinc_hsi=testc(hsi_test_zinc*hsi_svc)

% Classification results for HS_9 Model
blotchy_mottle_hs=testc(hs_test_bm*hs_svc)
iron_hs=testc(hs_test_iron*hs_svc)
island_hs=testc(hs_test_is*hs_svc)
mn_hs=testc(hs_test_mn*hs_svc)
normal_leave_hs=testc(hs_test_nl*hs_svc)
zinc_hs=testc(hs_test_zinc*hs_svc)


% Classification results for I_11 Model
blotchy_mottle_i=testc(i_test_bm*i_svc)
iron_i=testc(i_test_iron*i_svc)
island_i=testc(i_test_is*i_svc)
mn_i=testc(i_test_mn*i_svc)
normal_leave_i=testc(i_test_nl*i_svc)
zinc_i=testc(i_test_zinc*i_svc)

% Classification results for HSI_39 Model
blotchy_mottle_hsi39=testc(hsi39_test_bm*hsi39_svc)
iron_hsi39=testc(hsi39_test_iron*hsi39_svc)
island_hsi39=testc(hsi39_test_is*hsi39_svc)
mn_hsi39=testc(hsi39_test_mn*hsi39_svc)
normal_leave_hsi39=testc(hsi39_test_nl*hsi39_svc)
zinc_hsi39=testc(hsi39_test_zinc*hsi39_svc)
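The testc calls above return the estimated classification error of each test set, so the corresponding accuracy is one minus the returned value. A short hedged addition for summarizing one of the cases is sketched below; it assumes the standard PRTools functions getlabels, labeld, and confmat are available, and the variable names are illustrative.

% Hedged sketch: accuracy and confusion matrix for the HSI_15 / FDA case.
err = testc(hsi_test_bm*hsi_fisher);        % testc returns the error estimate
acc = 1 - err;                              % classification accuracy for this test set
lab_true = getlabels(hsi_test_bm);          % true labels stored in the dataset
lab_pred = labeld(hsi_test_bm*hsi_fisher);  % labels assigned by the trained classifier
C = confmat(lab_true, lab_pred);            % rows: true classes, columns: assigned classes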


APPENDIX C
MATLAB CODE FILES FOR HYPERSPECTRAL IMAGING

% Demonstrate procedures for hyperspectral imaging processing

%% Read original hyperspectral images of dark current and white panel
R_white = hsi_read('white_panel_051710');
R_dark = hsi_read('dark_current_051710');
save bs_wd;
clear;

%% Read original hyperspectral images of fruit samples
for y=1:9
    R_sample = hsi_read(['bs_' int2str(y)]);
    save(['bs_sample_' int2str(y)]);
end
clear;

%% Generate mask for fruit
for z=1:9
    bs_hsi_mask(['bs_sample_' int2str(z)], 'bs_wd');
end

%% Normalize hyperspectral images of fruit samples
for z=1:9
    hsi_norm(['bs_sample_' int2str(z)], 'bs_wd');
end

for z=1:80
    [R_score, R_loading] = hsi_pca(['Norm_gi_sample_' int2str(z)]);
    gi_R_score{z,1}=R_score;
    gi_R_loading{z,1}=R_loading;
end
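The hsi_norm step above relies on the dark current and white panel references read at the top of the script. A standard flat-field reflectance calibration, which a function of this kind typically implements, can be sketched as follows; because hsi_norm is a custom function not listed here, the element-wise formulation and variable names are assumptions.

% Minimal sketch of flat-field reflectance calibration (assumed form of hsi_norm):
% R_sample, R_white, and R_dark are hyperspectral cubes of identical size (rows x cols x bands).
R_cal = (double(R_sample) - double(R_dark)) ./ (double(R_white) - double(R_dark));
R_cal = min(max(R_cal, 0), 1);   % clip to the valid reflectance range [0, 1]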


APPENDIX D
MATLAB CODE FILES FOR PRINCIPAL COMPONENT ANALYSIS

function [R_score, R_loading] = hsi_pca(filename_sample)
% Perform PCA for hyperspectral images
% Example: [R_score, R_loading] = hsi_pca('Norm_R_RG_CK_W_05');

% Read image data
load(filename_sample);
index_band = [1:1:92];
sample_m = size(R_sample, 1);
sample_n = size(R_sample, 2);
R_sample = double(R_sample(1:1:sample_m, 1:1:sample_n, index_band)); % Kim

% Reshape image data
image_height = size(R_sample, 1);
image_width = size(R_sample, 2);
num_band = size(R_sample, 3);
R_sample = reshape(R_sample, image_height*image_width, num_band);

% Perform PCA
num_pcs = 6;
options_pca = pca('options');
options_pca.algorithm = 'svd';
options_pca.display = 'off';
options_pca.plots = 'none';
model_pca = pca(R_sample, num_pcs, options_pca);
clear R_sample;
R_loading = model_pca.loads{2};
R_score_vector = model_pca.loads{1};
R_score = reshape(R_score_vector, image_height, image_width, num_pcs);
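The pca call above uses the PLS_Toolbox interface (an options structure and a model whose scores and loadings are stored in loads{1} and loads{2}). For readers without that toolbox, an approximately equivalent decomposition can be sketched with base MATLAB; the mean-centering choice and variable names are assumptions, and score and loading sign conventions may differ from the PLS_Toolbox output.

% Hedged sketch: PCA of the reshaped pixel-by-band matrix using base MATLAB svd.
X = R_sample;                              % pixels x bands matrix, as reshaped above
Xc = bsxfun(@minus, X, mean(X, 1));        % mean-center each band
[U, S, V] = svd(Xc, 'econ');               % economy-size singular value decomposition
num_pcs = 6;
R_loading = V(:, 1:num_pcs);               % band loadings (bands x num_pcs)
R_score_vector = Xc * R_loading;           % pixel scores (pixels x num_pcs)
R_score = reshape(R_score_vector, image_height, image_width, num_pcs);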


APPENDIX E
MATLAB CODE FILES FOR WAVELET TRANSFORM

%% Weight coefficients for wavelet transform
for i=1:50
    cd('C:\Documents and Settings\student\Desktop\Research\wavelet analysis\bm_image');
    band_number=84;
    level=4;
    n=256;
    load(['Norm_bm_sample_' int2str(i)]);
    img=R_sample(:,:,band_number);
    mn = min(min(img));
    mx = max(max(img));
    range = mx - mn;
    % "g22" is Normalized "Principle Component 2"
    norm_wc_img = (img - mn)/range;
    cd('C:\Documents and Settings\student\Desktop\Research\wavelet analysis\wc_image');
    imwrite(norm_wc_img, ['wc_img_' int2str(i) '.png'],'png');
    [LL,HL,LH,HH]=wavelet(['wc_img_' int2str(i)],level, n);
    ELLP_bm(:,i)=LL;
    EHLP_bm(:,i)=HL;
    ELHP_bm(:,i)=LH;
    EHHP_bm(:,i)=HH;
end

for i=1:50
    cd('C:\Documents and Settings\student\Desktop\Research\wavelet analysis\gi_image');
    band_number=84;
    level=4;
    n=256;
    load(['Norm_gi_sample_' int2str(i)]);
    img=R_sample(:,:,band_number);
    mn = min(min(img));
    mx = max(max(img));
    range = mx - mn;
    % "g22" is Normalized "Principle Component 2"
    norm_wc_img = (img - mn)/range;
    cd('C:\Documents and Settings\student\Desktop\Research\wavelet analysis\wc_image');
    imwrite(norm_wc_img, ['wc_img_' int2str(i) '.png'],'png');
    [LL,HL,LH,HH]=wavelet(['wc_img_' int2str(i)],level, n);


    ELLP_gi(:,i)=LL;
    EHLP_gi(:,i)=HL;
    ELHP_gi(:,i)=LH;
    EHHP_gi(:,i)=HH;
end

for i=1:40
    cd('C:\Documents and Settings\student\Desktop\Research\wavelet analysis\ir_image');
    band_number=84;
    level=4;
    n=256;
    load(['Norm_ir_sample_' int2str(i)]);
    img=R_sample(:,:,band_number);
    mn = min(min(img));
    mx = max(max(img));
    range = mx - mn;
    % "g22" is Normalized "Principle Component 2"
    norm_wc_img = (img - mn)/range;
    cd('C:\Documents and Settings\student\Desktop\Research\wavelet analysis\wc_image');
    imwrite(norm_wc_img, ['wc_img_' int2str(i) '.png'],'png');
    [LL,HL,LH,HH]=wavelet(['wc_img_' int2str(i)],level, n);
    ELLP_ir(:,i)=LL;
    EHLP_ir(:,i)=HL;
    ELHP_ir(:,i)=LH;
    EHHP_ir(:,i)=HH;
end

for i=1:40
    cd('C:\Documents and Settings\student\Desktop\Research\wavelet analysis\mn_image');
    band_number=84;
    level=4;
    n=256;
    load(['Norm_mn_sample_' int2str(i)]);
    img=R_sample(:,:,band_number);
    mn = min(min(img));
    mx = max(max(img));
    range = mx - mn;
    % "g22" is Normalized "Principle Component 2"
    norm_wc_img = (img - mn)/range;
    cd('C:\Documents and Settings\student\Desktop\Research\wavelet analysis\wc_image');
    imwrite(norm_wc_img, ['wc_img_' int2str(i) '.png'],'png');


    [LL,HL,LH,HH]=wavelet(['wc_img_' int2str(i)],level, n);
    ELLP_mn(:,i)=LL;
    EHLP_mn(:,i)=HL;
    ELHP_mn(:,i)=LH;
    EHHP_mn(:,i)=HH;
end

for i=1:40
    cd('C:\Documents and Settings\student\Desktop\Research\wavelet analysis\nm_image');
    band_number=84;
    level=4;
    n=256;
    load(['Norm_nm_sample_' int2str(i)]);
    img=R_sample(:,:,band_number);
    mn = min(min(img));
    mx = max(max(img));
    range = mx - mn;
    % "g22" is Normalized "Principle Component 2"
    norm_wc_img = (img - mn)/range;
    cd('C:\Documents and Settings\student\Desktop\Research\wavelet analysis\wc_image');
    imwrite(norm_wc_img, ['wc_img_' int2str(i) '.png'],'png');
    [LL,HL,LH,HH]=wavelet(['wc_img_' int2str(i)],level, n);
    ELLP_nm(:,i)=LL;
    EHLP_nm(:,i)=HL;
    ELHP_nm(:,i)=LH;
    EHHP_nm(:,i)=HH;
end

for i=1:40
    cd('C:\Documents and Settings\student\Desktop\Research\wavelet analysis\ny_image');
    band_number=84;
    level=4;
    n=256;
    load(['Norm_ny_sample_' int2str(i)]);
    img=R_sample(:,:,band_number);
    mn = min(min(img));
    mx = max(max(img));
    range = mx - mn;
    % "g22" is Normalized "Principle Component 2"
    norm_wc_img = (img - mn)/range;
    cd('C:\Documents and Settings\student\Desktop\Research\wavelet analysis\wc_image');


    imwrite(norm_wc_img, ['wc_img_' int2str(i) '.png'],'png');
    [LL,HL,LH,HH]=wavelet(['wc_img_' int2str(i)],level, n);
    ELLP_ny(:,i)=LL;
    EHLP_ny(:,i)=HL;
    ELHP_ny(:,i)=LH;
    EHHP_ny(:,i)=HH;
end

for i=1:40
    cd('C:\Documents and Settings\student\Desktop\Research\wavelet analysis\zc_image');
    band_number=84;
    level=4;
    n=256;
    load(['Norm_zc_sample_' int2str(i)]);
    img=R_sample(:,:,band_number);
    mn = min(min(img));
    mx = max(max(img));
    range = mx - mn;
    % "g22" is Normalized "Principle Component 2"
    norm_wc_img = (img - mn)/range;
    cd('C:\Documents and Settings\student\Desktop\Research\wavelet analysis\wc_image');
    imwrite(norm_wc_img, ['wc_img_' int2str(i) '.png'],'png');
    [LL,HL,LH,HH]=wavelet(['wc_img_' int2str(i)],level, n);
    ELLP_zc(:,i)=LL;
    EHLP_zc(:,i)=HL;
    ELHP_zc(:,i)=LH;
    EHHP_zc(:,i)=HH;
end

for i=0:3
    wt_features_bm(1+(3*i),:)=EHLP_bm((i+1),:);
    wt_features_bm(2+(3*i),:)=ELHP_bm((i+1),:);
    wt_features_bm(3+(3*i),:)=EHHP_bm((i+1),:);
end
wt_features_bm(13,:)=ELLP_bm(1,:);

for i=0:3
    wt_features_gi(1+(3*i),:)=EHLP_gi((i+1),:);
    wt_features_gi(2+(3*i),:)=ELHP_gi((i+1),:);
    wt_features_gi(3+(3*i),:)=EHHP_gi((i+1),:);
end
wt_features_gi(13,:)=ELLP_gi(1,:);

for i=0:3


    wt_features_ir(1+(3*i),:)=EHLP_ir((i+1),:);
    wt_features_ir(2+(3*i),:)=ELHP_ir((i+1),:);
    wt_features_ir(3+(3*i),:)=EHHP_ir((i+1),:);
end
wt_features_ir(13,:)=ELLP_ir(1,:);

for i=0:3
    wt_features_mn(1+(3*i),:)=EHLP_mn((i+1),:);
    wt_features_mn(2+(3*i),:)=ELHP_mn((i+1),:);
    wt_features_mn(3+(3*i),:)=EHHP_mn((i+1),:);
end
wt_features_mn(13,:)=ELLP_mn(1,:);

for i=0:3
    wt_features_nm(1+(3*i),:)=EHLP_nm((i+1),:);
    wt_features_nm(2+(3*i),:)=ELHP_nm((i+1),:);
    wt_features_nm(3+(3*i),:)=EHHP_nm((i+1),:);
end
wt_features_nm(13,:)=ELLP_nm(1,:);

for i=0:3
    wt_features_ny(1+(3*i),:)=EHLP_ny((i+1),:);
    wt_features_ny(2+(3*i),:)=ELHP_ny((i+1),:);
    wt_features_ny(3+(3*i),:)=EHHP_ny((i+1),:);
end
wt_features_ny(13,:)=ELLP_ny(1,:);

for i=0:3
    wt_features_zc(1+(3*i),:)=EHLP_zc((i+1),:);
    wt_features_zc(2+(3*i),:)=ELHP_zc((i+1),:);
    wt_features_zc(3+(3*i),:)=EHHP_zc((i+1),:);
end
wt_features_zc(13,:)=ELLP_zc(1,:);
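The wavelet(...) call used throughout this appendix is a custom function that returns per-level sub-band values (LL, HL, LH, HH) for each image. A hedged sketch of how such sub-band energy features could be computed with the Wavelet Toolbox is given below; the 'db1' (Haar) wavelet, the sum-of-squares energy definition, and the function name are assumptions, since the original wavelet function is not listed in this dissertation.

% Hedged sketch of per-level wavelet sub-band energies (assumed form of wavelet.m).
function [LL, HL, LH, HH] = waveletEnergySketch(imageFile, level, n)
    img = im2double(imresize(imread([imageFile '.png']), [n n]));   % read and resize to n x n
    [C, S] = wavedec2(img, level, 'db1');              % 2-D wavelet decomposition (Haar assumed)
    LL = zeros(level, 1); HL = zeros(level, 1);
    LH = zeros(level, 1); HH = zeros(level, 1);
    for k = 1:level
        [H, V, D] = detcoef2('all', C, S, k);          % detail coefficients at level k
        HL(k) = sum(H(:).^2);                          % horizontal detail energy
        LH(k) = sum(V(:).^2);                          % vertical detail energy
        HH(k) = sum(D(:).^2);                          % diagonal detail energy
        A = appcoef2(C, S, 'db1', k);                  % approximation coefficients at level k
        LL(k) = sum(A(:).^2);                          % approximation energy
    end
end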


BIOGRAPHICAL SKETCH

Dae Gwan Kim was born in 1979 in Tong Young, South Korea. He graduated with a mechanical engineering degree in March 2006 from Dong A University, Busan, South Korea. He started his graduate studies in the Department of Mechanical and Aerospace Engineering at the University of Florida and transferred to the Department of Agricultural and Biological Engineering in August 2007. He was a member of the Agricultural Robotics and Mechatronics Group (ARMg) in the Department of Agricultural and Biological Engineering, where he worked as a research assistant under his advisor, Dr. Thomas F. Burks.