
Record for a UF thesis. Title & abstract won't display until thesis is accessible after 2011-08-31.

Permanent Link: http://ufdc.ufl.edu/UFE0024188/00001

Material Information

Title: Record for a UF thesis. Title & abstract won't display until thesis is accessible after 2011-08-31.
Physical Description: Book
Language: english
Creator: Kim, Dae
Publisher: University of Florida
Place of Publication: Gainesville, Fla.
Publication Date: 2009

Subjects

Subjects / Keywords: Agricultural and Biological Engineering -- Dissertations, Academic -- UF
Genre: Agricultural and Biological Engineering thesis, M.S.
bibliography   ( marcgt )
theses   ( marcgt )
government publication (state, provincial, territorial, dependent)   ( marcgt )
born-digital   ( sobekcm )
Electronic Thesis or Dissertation

Notes

Statement of Responsibility: by Dae Kim.
Thesis: Thesis (M.S.)--University of Florida, 2009.
Local: Adviser: Burks, Thomas F.
Local: Co-adviser: Schumann, Arnold W.
Electronic Access: INACCESSIBLE UNTIL 2011-08-31

Record Information

Source Institution: UFRGP
Rights Management: Applicable rights reserved.
Classification: lcc - LD1780 2009
System ID: UFE0024188:00001





Full Text

DETECTION OF CITRUS DISEASES USING MICROSCOPIC IMAGING

By

DAE GWAN KIM

A THESIS PRESENTED TO THE GRADUATE SCHOOL OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE

UNIVERSITY OF FLORIDA

2009

© 2009 Dae Gwan Kim

To my parents and my advisor Dr. Burks

ACKNOWLEDGMENTS

I would like to thank my advisor Dr. Thomas F. Burks, who provided invaluable guidance and encouragement during the course of this research. In addition, Dr. Howard W. Beck and Dr. Arnold W. Schumann provided valuable technical assistance while serving on my research committee. Likewise, I would like to express my appreciation to Dr. Arnold W. Schumann who, as Assistant Professor in the Department of Soil and Water Science, allowed me to pursue an M.S. and provided the financial resources necessary to conduct the research. I would also like to thank the following colleagues who provided invaluable assistance during the course of my research: Tony Qin, Xuhui Zhao, and Duke M. Bulanon. In addition, I wish to express my gratitude to several dear friends whose encouragement kept me going throughout this research.

Finally, I would like to dedicate this research to my parents in Korea. They always encouraged me to pursue education, and I am sure they would be pleased with the results of this research and the completion of my M.S. thesis. I want to give my parents all the glory for this work.

TABLE OF CONTENTS

ACKNOWLEDGMENTS
LIST OF TABLES
LIST OF FIGURES
ABSTRACT

CHAPTER

1 INTRODUCTION
    The Need for an Automated Disease Control System
    Citrus Industry Overview
    Computer Vision Techniques for Citrus Disease Control

2 OBJECTIVES OF THE RESEARCH

3 LITERATURE REVIEW
    Citrus Canker
    Citrus Greening
    Machine Vision Based Crop and Fruit Status Monitoring
    Image Processing Approaches
    Spectral Images Based Methods and Neural Network Classifiers
    Near Infrared Monitoring Methods

4 MATERIALS AND METHODS
    Citrus Canker Samples
    Citrus Leaf Samples
    Color Image Acquisition
    Preprocessing of Input Data
        Citrus Greening
        Citrus Canker
    Image Analysis
        Color Co-occurrence Methodology
        Texture Feature
        Statistical Method
        Pattern Recognition

5 DETECTION OF CITRUS CANKER DISEASE
    Introduction
    Objectives
    Materials and Methods
        Citrus Samples
        Color Image Acquisition
        Data Analysis for Color Images
        ROI Selection and Color Space Conversion
        SGDM Generation
        Texture Feature Calculation
        Texture Feature Selection
        Texture Classification
    Results and Discussion
        Selection of Texture Features
        Classification of Citrus Peel Conditions
        Stability Test of the Classification Model
    Summary and Conclusions

6 DETECTION OF CITRUS GREENING DISEASE
    Introduction
    Materials and Methods
        Citrus Leaf Samples
        Color Image Acquisition
        Texture Analysis
            Color Co-occurrence Methodology
            Features Extraction
            Statistical Analysis
    Results
        Classifications of Citrus Disease Conditions
        The Confusion Matrix for Greening Positive vs. Negative
        Stability Test of the Greening Classification Model
    Summary and Conclusions

7 DETECTION OF THE DISEASE USING PATTERN RECOGNITION METHODS
    Introduction
    Materials and Methods
        Citrus Canker and Greening Samples
        Color Image Acquisition
        Image Pre-Processing and Feature Extraction
        Statistical Analysis
        Input Data Preparation and Classification Using Pattern Recognition Methods
        Pattern Recognition
        Fisher's Linear Discriminant Method
        Neural Network Based on Back Propagation Network Method
        Support Vector Machine Method
    Classification Results
        Canker Disease Classification Based on Pattern Recognition Algorithms
        Citrus Greening Disease Classification Based on Pattern Recognition Algorithms
    Summary and Conclusions

8 SUMMARY AND FUTURE WORK
    Summary
    Future Work

APPENDIX

A MATLAB CODE FILES FOR EDGE DETECTION
B MATLAB CODE FILES FOR PATTERN RECOGNITION METHODS

LIST OF REFERENCES
BIOGRAPHICAL SKETCH

LIST OF TABLES

5-1 Intensity texture features
5-2 Texture features selected by stepwise discriminant analysis
5-3 Classification results using model HSI_13 in Table 5-2
5-4 Classification results in percent correct for all models in Table 5-2
5-5 Classification results for shuffled data models in percent correct
6-1 Intensity texture features
6-2 Texture feature models selected by stepwise discriminant analysis for fall season
6-3 Texture feature models for all conditions except young flush for fall season
6-4 Classification summary in percent correct for all models in Table 6-2
6-5 Classification summary in percent correct for all models in Table 6-2
6-6 Classification result in percent correct for HSI_18 model in Table 6-2
6-7 Classification result in percent correct for HSI_15 model in Table 6-2
6-8 Classification result in percent correct for HSI_14 model in Table 6-2
6-9 Confusion matrix in percent correct for HSI_18 model in Table 6-2
6-10 Confusion matrix in percent correct for HSI_15 model in Table 6-2
6-11 Confusion matrix in percent correct for HSI_14 model in Table 6-2
6-12 Classification results for shuffled data for the HSI_15 model in percent correct
7-1 Intensity texture features
7-2 Texture feature models selected by SAS stepwise analysis for citrus canker
7-3 Canker disease classification results in percent correct for all models using FDA
7-4 Canker disease classification results in percent correct for all models using BP neural network
7-5 Canker disease classification results in percent correct for all models using SVMs
7-6 Texture feature models selected by SAS stepwise analysis for citrus greening
7-7 Citrus greening classification results in percent correct for all models using FDA
7-8 Citrus greening classification results in percent correct for all models using BP
7-9 Citrus greening classification results in percent correct for all models using SVMs

LIST OF FIGURES

1-1 General disease detection system
4-1 Images of abnormal peel conditions
4-2 Images of nutritional deficiency
4-3 Images of citrus greening
4-4 Images of normal conditions
4-5 15 images of blotchy mottle conditions
4-6 Color image system for acquiring RGB images from citrus samples
4-7 Digital microscope system for acquiring RGB images from citrus leaf samples
4-8 Procedures for color image analysis for citrus greening
4-9 Procedures for edge detection
4-10 Converted images of a leaf sample
4-11 Edge-detected image of a leaf sample
4-12 Procedures for color image analysis for canker condition
4-13 Typical ROI images for normal and abnormal citrus peel conditions
4-14 15 ROI images for citrus canker condition
4-15 Nearest neighbor diagram
5-1 Typical normal and abnormal citrus peel conditions
5-2 Color image system for acquiring RGB images from citrus samples
5-3 Procedures for color image analysis
5-4 Typical ROI images for normal and diseased citrus peel conditions
5-5 Nearest neighbor mask for calculating spatial gray-level dependence matrices
6-1 Citrus leaf conditions
6-2 Digital microscope system for acquiring RGB images from citrus leaf samples
6-3 Nearest neighbor mask for calculating spatial gray-level dependence matrices
6-4 Gray-level dependence example: (a) SGDM for different orientations, (b) gray-level image
6-5 Procedures for color image analysis
6-6 Procedures for leaf edge detection
7-1 Images of citrus canker diseases
7-2 Images of citrus greening diseases
7-3 Color image system for acquiring RGB images from citrus samples
7-4 Digital microscope system for acquiring RGB images from citrus disease samples
7-5 Procedures for color image analysis for citrus canker
7-6 Procedures for color image analysis for citrus greening
7-7 Nearest neighbor diagram
7-8 Feature vector, feature space and scatter plot
7-9 Good feature and bad feature
7-10 Distribution plots with pattern types
7-11 The flow of a network
7-12 Illustration of the calculation
7-13 Possible linear classifiers for separating the data

Abstract of Thesis Presented to the Graduate School of the University of Florida in Partial Fulfillment of the Requirements for the Degree of Master of Science

DETECTION OF CITRUS DISEASES USING MICROSCOPIC IMAGING

By

Dae Gwan Kim

August 2009

Chair: Thomas F. Burks
Cochair: Arnold W. Schumann
Major: Agricultural and Biological Engineering

The citrus industry has led the agricultural economy of the state of Florida to prosperity. Florida has historically been the largest citrus-producing state in the USA. Citrus fruits such as oranges, grapefruit, tangelos, navels, limes, tangerines, and other specialty fruits are the chief crops of the state. The remarkable growth of the state economy has been largely based on the various citrus-related industries. This situation has brought job opportunities for many people and important potential for the economic growth of the state. To maintain the prosperity of the citrus industry, Florida has been concerned about disease control and natural disaster prevention. During the recent past, citrus greening and citrus canker have become serious threats to citrus in Florida. These diseases can result in tree decline, death, yield loss, and lost marketability. Likewise, farmers are concerned about the huge costs of profit loss, tree loss, scouting, and the chemicals used in attempts to control the disease. An automated detection system may help in citrus greening and citrus canker prevention and thus reduce the serious loss to the Florida citrus industry.

This thesis considers the development of detection approaches for these diseases (citrus greening and citrus canker) when found in the presence of various other citrus diseases. The detection approach consists of three major sub-systems, namely image acquisition, image processing, and pattern recognition. The image acquisition system consists of a digital camera (with a standard lens or a microscope), a high-resolution frame grabber embedded in the computer, and supplemental lighting. The image processing sub-system includes image preprocessing for background noise removal, leaf boundary detection, and image feature extraction. Pattern recognition approaches were used to classify samples among several different conditions on fruit for canker and on leaves for greening.

The canker-based detection studies were conducted on grapefruit samples collected during the 2006-2007 harvest season. Fruit samples exhibiting canker and other topical peel conditions were collected, and RGB images were taken of the various conditions. Color texture features were extracted, and discriminant analysis was used to classify grapefruit according to peel condition, achieving overall accuracies of 96.5%. The citrus greening-based detection studies were conducted on orange leaf samples collected during the spring of 2008. Leaf samples exhibiting greening and other common leaf conditions were taken from trees in an orange orchard, and microscopic RGB images were acquired. Color texture features were extracted at various levels of magnification, and preliminary results demonstrated that 5X magnification was optimal. To evaluate the classification approaches, results were compared between classification methods for citrus greening. Results demonstrated classification accuracy for citrus greening as high as 97.00%.

Thus, this research mainly focused on demonstrating the feasibility of citrus disease detection from visible symptoms on the fruit or leaf. These were offline approaches, not directly applicable to real-time technologies such as robotics and automated vehicles for agricultural application. However, they are a first step in demonstrating the potential for automated detection.

CHAPTER 1
INTRODUCTION

The Need for an Automated Disease Control System

Automation systems have been used in various industries, and numerous researchers have sought to automate agricultural production. The automation of disease detection is needed for two major reasons. First, commercial citrus is produced on large acreages with large distances between blocks, which creates significant demands for labor to manage diseases and pests. Consequently, labor-saving tools are necessary to effectively scout for and manage diseases and pests. In recent years, the number of large-scale farms has increased while the labor force has shrunk as laborers move to other occupations. These conditions have stimulated the development of automated scouts, which can operate effectively for long periods of time and can be more cost effective and accurate than human scouts. In addition, automated disease detection can help farmers reduce costs.

The second reason is the continual inflow of non-native diseases and insects. Every year, various crops, including citrus, come into U.S. commerce from foreign countries. Many of these countries are home to insects and diseases that do not occur in the United States. Non-native species can wreak havoc on local environmental and agricultural resources; citrus greening and canker entered Florida through these international sources. Scouting for infected plants and insects by human inspectors has limited usefulness. Therefore, automated disease detection offers several advantages, including lower production costs and improved scouting efficiency.

Citrus Industry Overview

Florida leads U.S. citrus production and accounts for a major part of the U.S. citrus industry. According to the citrus production report (USDA, 2005-06), Florida accounts for more than 70 percent of U.S. citrus production. Florida produces a number of different citrus products. Oranges, grapefruit, and lemons are the major citrus crops, with lesser production of tangerines, limes, and an increasing variety of specialties. Oranges account for more than 80 percent (Hodges et al., 2001), and oranges and grapefruit together account for about 90 percent of U.S. production (Jacobs et al., 1994). Florida, California/Arizona, and Texas are the major states for U.S. citrus production; Florida accounted for more than three-quarters of all U.S. citrus, while California/Arizona and Texas produced the remainder. Citrus is consumed not only as fresh fruit but also in products that use citrus fruit. Florida has 52 citrus processing plants (Hodges et al., 2001). They produce various orange juice items, such as chilled, canned, and concentrated juice. Processing also generates a number of byproducts, such as food additives, cattle feed, and cosmetics, as well as citrus pulp, molasses, and the essential oil limonene (Hodges et al., 2001).

Computer Vision Techniques for Citrus Disease Control

Various technologies can be employed to provide intelligent systems, such as robotics, autonomous reasoning, and machine learning, that have the potential to improve profitability in the agricultural and food industries. Computers have been applied in the sorting and grading of fresh products and in the detection of defects such as cracks, dark spots, and bruises on fresh fruits and seeds. Most symptoms of disease and defects on citrus appear on the leaves of the tree and/or the peel of the fruit. The eye can detect some of these symptoms, but a computer vision system has more accuracy and speed than the human eye. In outdoor scenes, illumination and color can change, which can confuse detection by both the human eye and computer vision.

Some image processing methods can eliminate these problems and help find the best detection results. Computer vision systems can be employed on autonomous robots to provide unmanned disease scouting systems.

The computer vision and image processing techniques for disease detection used in this study follow a similar approach. Figure 1-1 shows a general disease detection system. Image data are acquired by various image acquisition equipment, such as a charge-coupled device (CCD) camera, a hyperspectral system, or a microscope. These systems can be used in a laboratory and, in some cases, extended to the field. After collecting image data, proper image processing methods are applied. Original image data contain unwanted factors such as noise, overexposure or underexposure to the light, and changes of color; image processing techniques can correct for these and help increase classification accuracy. For classifying image data, image features are extracted by feature extraction methods such as the color co-occurrence method (CCM), the wavelet transform, the run-length matrix, and the fractal dimension. Classification methods likewise include various techniques, such as neural networks, the Mahalanobis minimum distance, self-organizing maps (SOM), and support vector machines (SVM). Each feature extraction and classification method has its strengths, and it is important that proper extraction and classification methods be used because they affect the classification accuracy. This thesis uses these procedures for general disease detection using image processing and pattern recognition algorithms. In the following chapters, these methods and steps are discussed in more detail. The objectives of this thesis are specified in the following chapter.

Figure 1-1. General disease detection system
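As an illustration of the pipeline in Figure 1-1, the following MATLAB sketch walks through the acquisition-to-classification chain on a single saved frame. It assumes the Image Processing Toolbox; the file name, the HSV color space (used here only as a stand-in for the HSI transform applied later in this thesis), the four co-occurrence statistics, and the nearest-mean classifier are illustrative choices rather than the exact methods developed in later chapters.

```matlab
% Minimal disease-detection pipeline sketch (illustrative, not the thesis method).
rgb = imread('leaf_sample.bmp');              % 1) image acquisition: load a saved frame
rgb = imresize(rgb, [480 640]);               % 2) preprocessing: force a common frame size
hsv = rgb2hsv(rgb);                           %    color-space conversion (HSV ~ HSI stand-in)
feat = zeros(1, 12);                          % 3) feature extraction: co-occurrence statistics
for c = 1:3                                   %    hue, saturation, value planes
    g = graycomatrix(hsv(:,:,c), 'NumLevels', 8, ...
                     'GrayLimits', [0 1], 'Symmetric', true);
    s = graycoprops(g, {'Contrast','Correlation','Energy','Homogeneity'});
    feat(4*c-3:4*c) = [s.Contrast s.Correlation s.Energy s.Homogeneity];
end
% 4) classification: nearest class mean, standing in for the discriminant,
%    neural-network, and SVM classifiers compared in later chapters.
classMeans = [0.9 * feat; 1.1 * feat];        % toy "trained" class means
d = sum((classMeans - repmat(feat, 2, 1)).^2, 2);
[~, label] = min(d);
fprintf('Assigned to class %d of %d\n', label, numel(d));
```

A real system replaces the toy class means with statistics learned from training images, as described in Chapters 4 through 7.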

CHAPTER 2
OBJECTIVES OF THE RESEARCH

The overall objective of this research was to develop a machine vision based method for detecting various diseases using color texture features under a controlled lighting condition. Specific objectives implemented to accomplish the overall objective were to:

- Collect image data sets of various infected and normal samples.
- Use a digital color microscope system to collect RGB images from orange leaves with seven conditions (i.e., young flush, normal mature, blotchy mottle, greening islands, zinc deficiency, iron deficiency, and manganese deficiency).
- Use an image acquisition system to collect RGB images of oranges with normal and five abnormal peel conditions (i.e., canker, copper burn, greasy spot, melanose, and wind scar).
- Determine image texture features based on the color co-occurrence method (CCM).
- Create a set of reduced feature data models through a stepwise elimination process and classify different citrus disease conditions.
- Compare the classification accuracies of various pattern recognition techniques and select the optimal classifier.

CHAPTER 3
LITERATURE REVIEW

The citrus market is one of the most important industries in the Florida economy. Citrus growers desire to exploit automation technology to help manage citrus groves and the crop more economically. Automation systems offer farmers a means to minimize agricultural chemicals and inputs and to increase overall crop yield and profits. Automated agricultural systems play an important part in managing the cost of production, since citrus is an "input intensive" crop and the Florida agricultural industry has a volatile trend of cost per unit area (Sevier and Lee, 2003). These agricultural systems require more knowledge of in-field variability and its relationship to crop yield, as well as the influence of disease and pests on crop damage and ultimately profitability. In the past decade, various researchers have used image processing and pattern recognition techniques in agricultural applications, such as detection of weeds in a field, sorting of fruits and vegetables, and pest and disease detection. This chapter introduces previous research in agriculturally based machine vision, image processing, and classification methods for agricultural applications, and also introduces background information on citrus greening and canker disease.

Citrus Canker

Polek (2006) described the origin of citrus canker disease and characteristics important for canker identification. Citrus canker occurs in various countries around the world: in Asia, the United States, the Indian Ocean and Pacific islands, the Middle East, and Australia. Citrus canker occurs not only in countries with warm and tropical climates but also in dry climates. The major cause of citrus canker is pathotypes or variants of the bacterium Xanthomonas axonopodis (formerly campestris) pv. citri (Xac). Many citrus-growing countries regulate this bacterium and citrus products infected with citrus canker.

Gottwald et al. (2002) presented a history and overview of citrus canker. Florida has a long history of fighting citrus canker, which first appeared around 1912 with outbreaks in the Florida peninsula. After about 80 years, citrus canker had spread over the state, and the Florida Department of Agriculture and Consumer Services (FDACS), the Division of Plant Industry (DPI), and the USDA Animal and Plant Health Inspection Service (APHIS) established a citrus canker detection program, the cooperative state/federal Citrus Canker Eradication Program (CCEP).

The conditions of citrus canker are described as follows. After becoming infected with citrus canker, the leaves and fruit show several symptoms. The surface of the leaf may have spots, and the peel becomes blemished with lesions. When the conditions are fully developed, the fruit drop and the tree dies. Leaf lesions appear about 7 to 10 days after infection. Lesions on fruit and twigs generally appear cork-like, and sometimes lesions appear like a blister or eruption on leaves and fruit. Once fruit becomes symptomatic, its marketability to many parts of the world becomes limited. This is most problematic for fresh-market citrus, while citrus for juice is not affected. The main driver of disease spread is spring and summer rains combined with wind speeds in excess of 18 mph.

Citrus Greening

In 2005, citrus greening, or Huanglongbing (HLB), was first discovered in Florida groves. This disease is a greater threat to Florida groves than citrus canker. Florida citrus growers are desperately fighting this plant disease, which has the potential to destroy the state's $9 billion commercial citrus industry (Defending Florida's Citrus Industry from an Emerging Disease, The American Phytopathological Society, 2008). Citrus greening has spread from eight to 23 counties since it was first found in Florida just a little more than three years ago.

Once citrus trees are infected, fruit yield and quality are greatly reduced. The trees also become susceptible to other diseases and health problems. HLB threatens the Florida citrus industry. The disease:

- Destroys the production, appearance, and value of citrus trees.
- Produces bitter, inedible, misshapen fruit.
- Is fatal to citrus trees.

While canker blemishes the fruit peel, affecting marketability, citrus greening can kill a whole grove in a matter of a few years. Crops in Asia and Africa have been severely impacted by HLB, and the disease has caused a loss in citrus production in Brazil; in some areas of Brazil, citrus greening has affected as much as 70 percent of the fruit yield (ScienceDaily, 2007). Citrus greening reduces the citrus production period from fifty years to fifteen years, and to date no cure for citrus greening has been found.

K. R. Chung and R. H. Brlansky (2006) discussed citrus greening, one of the most serious diseases that affect citrus. A number of countries in Asia have been affected by this disease in the past, while more recently Brazil (2004) and Florida (2005) were infected. Early symptoms of HLB are yellowing leaves, which may appear on only a single branch. A condition termed blotchy mottle appears on the leaf surface. Infected trees change into a non-productive state in 2 to 3 years. After the early stages of HLB, the leaves become small and frequently exhibit nutritional deficiency symptoms. The value of citrus fruit is also reduced as the fruit become small, sparse, and abnormal in appearance and color. Moreover, the affected citrus often contains aborted seeds, and the juice quality is low.

Polek (2007) discussed these exotic plant pathogens. Considering the various citrus diseases, citrus canker and huanglongbing (HLB, or citrus greening) are the most serious diseases for the U.S. citrus industry. The origin of HLB is China.

This disease was first described in 1991, and its name originated from its characteristic symptoms in China. HLB bacteria have several routes of infection, including plant propagation and insect vectors. The Asian psyllid and African psyllid are the primary insect vectors, and the Asian psyllid has been found in Florida by citrus growers. Citrus greening is very difficult to eradicate, but early detection and psyllid control are essential.

Machine Vision Based Crop and Fruit Status Monitoring

Schertz and Brown (1968) proposed fruit identification using machine vision systems nearly forty years ago. Using light reflectivity differences between leaves and fruits in the visible or infrared portion of the electromagnetic spectrum, information about the location of fruit might be determined.

Powell et al. (2005) developed machine vision for an autonomous vehicle. The system was evaluated on various test paths, such as straight, curved, and sine-wave lines, with the autonomous vehicle tested in a controlled environment. The results for straight and sweeping curved paths were satisfactory, and nonlinear tests such as sine-wave lines also gave good results, although the sine-wave path was less accurate. Overall, the results were satisfactory and the system could identify the test situations.

Ng et al. (1989) measured corn (Zea mays L.) kernel damage using machine vision algorithms. Damage was divided into two classes, mechanical damage and mold damage, and each had a different analysis method. For mechanical damage, both single-kernel and batch analysis were used to evaluate the damaged area using green dye; for mold damage, only single-kernel analysis was used. The machine vision system had high classification accuracy for both damage types (mechanical damage: 99.5%; mold damage: 98.7%). This research demonstrated that the batch analysis method was faster than single-kernel analysis.

Takashi et al. (2003) developed a machine vision system for crop status, a vital part of farm management and crop cultivation. The system used several specific functions, such as a growth curve (the Gompertz curve) and an exponential function, for estimating the vegetation cover area and the actual plant status. The result was not acceptable for every plant status, but the research demonstrated that image processing could be very useful for building machine vision systems for crop status detection.
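For reference, the Gompertz growth curve mentioned above has the common three-parameter form y(t) = a·exp(−b·exp(−c·t)). The short sketch below simply evaluates it; the parameter values are illustrative, not those fitted by Takashi et al.

```matlab
% Gompertz growth curve: a = upper asymptote, b = displacement, c = growth rate.
a = 1.0;  b = 5.0;  c = 0.15;                % illustrative parameters
t = 0:60;                                    % e.g., days after planting
cover = a * exp(-b * exp(-c * t));           % predicted vegetation cover fraction
plot(t, cover), xlabel('time (days)'), ylabel('predicted cover')
```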

Jimenez et al. (2000) surveyed several computer vision approaches for locating fruit in trees. Their work reported that most detection vision systems used three sensor types (spectral, intensity, and distance) and two analysis methods (local and shape). Vision systems can use different combinations of these sensing and analysis approaches, and each combination has its own advantages. The method using local analysis based on spectral images obtained the best detection rates (100%) but required known conditions for background color. Another method, using shape analysis based on intensity or spectral images, was found to be good for detecting fruit regardless of its color; however, there were frequent false detections. A third method, using range images and shape-based approaches, had good results (above 80%) and, in addition, did not generate false detections.

Regunathan et al. (2005) estimated fruit count and size using machine vision and an ultrasonic sensor. The ultrasonic sensor gave distance information, while color images were obtained for counting fruit. An image processing method converted red-green-blue (RGB) data to the hue-saturation-intensity (HSI) format. The detection system combined the two sensors to estimate the actual size of the fruit. Three different classification methods were used: Bayesian, neural network, and Fisher linear discriminant, all giving good results for fruit size and counts. In terms of root mean square error, the neural network method was better than the others for fruit size; the error for the neural network was only 2.6% for fruit count and 97.4% for size estimation (0.4 cm).

Ashish et al. (2007) detected citrus greening using a spectral method. To distinguish greening-infected and healthy trees, discriminability, spectral derivative analysis, and spectral ratio analysis were used. The discriminability of the wavelengths had a wide range; in the visible wavelengths of 695 to 705 nm, the discriminability had the best result of 0.83 to 0.86. In the spectral derivative analysis, the wavelengths of 480 nm, 590 nm, 754 nm, 1041 nm, and 2071 nm were suitable for differentiating greening. The spectral ratio analysis was better for understanding the spectral properties of greening infection in the citrus canopy; the reflectance at 530-564 nm was higher in the spectral ratio.

Image Processing Approaches

Dave et al. (1995) compared three image processing methods for identifying species of plants: the Fourier transform, fractal dimension, and co-occurrence matrix methods. The fast Fourier transform method converted spatial-domain image data to frequency-domain data; using the frequency-domain data, the power spectrum inside an annular band of radius R1 to R2 was calculated. The fractal dimension method used fractal geometry information from the input image; after converting the RGB image to a gray image, geometrical properties such as shape, line, and surface area were calculated. The co-occurrence matrix method created spatial dependence matrices calculated from the gray-scale image, and all texture features were obtained from these matrices. The co-occurrence matrix method gave the best accuracy, and its execution time was also less than that of the others.
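A minimal sketch of the annular power-spectrum measure described by Dave et al. is given below, using base MATLAB only; the file name and the annulus radii R1 and R2 are hypothetical.

```matlab
% Power spectrum inside an annular band of radius R1 to R2 (Dave et al., 1995 idea).
I = mean(double(imread('leaf.png')), 3) / 255;    % hypothetical RGB image -> grayscale
F = fftshift(fft2(I));                            % centered 2-D spectrum
P = abs(F).^2;                                    % power spectrum
[rows, cols] = size(P);
[x, y] = meshgrid(1:cols, 1:rows);
r = hypot(x - (cols + 1)/2, y - (rows + 1)/2);    % radial distance from the DC term
R1 = 10;  R2 = 40;                                % annulus radii in pixels (illustrative)
bandPower = sum(P(r >= R1 & r <= R2));            % energy captured by the band
fprintf('Power in band [%d, %d] px: %.3e\n', R1, R2, bandPower);
```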

Du et al. (2006) described five different texture feature methods to investigate the tenderness of cooked pork ham. In this work, first-order gray-level statistics (FGLS), the run-length matrix (RLM), the gray-level co-occurrence matrix (GLCM), the fractal dimension (FD), and the wavelet transform (WT) were investigated. After extracting texture features, statistical analysis was performed using the CORR procedure in the SAS package, and for further analysis a partial least squares regression (PLSR) technique was applied. The WT method provided a multi-scale representation, a characteristic that produced better results for the tenderness of cooked pork ham than the traditional methods (FGLS, GLCM, and RLM).

Burks et al. (2000) used the color co-occurrence method and a neural network for classifying weed species. Thirty-three texture features were extracted from the input image data using the co-occurrence matrix method (CCM). For analyzing texture features, the SAS discriminant procedure STEPDISC was used. The back-propagation neural network method gave good overall accuracy rates (96.7%) as well as high individual species classification accuracies (90.0%).

Pydipati et al. (2006) identified citrus disease using the co-occurrence matrix (CCM) texture feature method and discriminant analysis. Before extracting image features, the input image was converted from RGB to HSI; the HSI data consisted of three texture sets (hue, saturation, and intensity). This image processing method based on the CCM gave high accuracy rates (above 95.0%) in the identification of citrus disease.

Zhang et al. (2005) proposed a new method based on the wavelet packet transform for extraction of image texture features. The wavelet packet transform was compared with the color co-occurrence matrix. Original images (stone, door, brick, bark, noise, ripple, and leaf images) were preprocessed, and after conversion to gray-scale images of the same size, the image data sets were compared. From the classification results, the approach based on the wavelet packet transform had better classification accuracy than the co-occurrence matrix. In this paper, the wavelet transform methods also showed better execution efficiency than the traditional CCM-based method.
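To make the CCM idea concrete, the sketch below builds one spatial gray-level dependence matrix (SGDM) by hand for a single channel and derives a few representative texture statistics from it. The 8 gray levels, the one-pixel horizontal offset, and the random placeholder data are illustrative; the studies above compute a larger feature set (e.g., on the order of a dozen statistics per hue, saturation, and intensity channel).

```matlab
% Hand-rolled SGDM and a few co-occurrence texture features (base MATLAB only).
plane  = rand(64);                                % placeholder for one HSI channel in [0, 1]
levels = 8;
q = min(floor(plane * levels) + 1, levels);       % quantize to gray levels 1..8
sgdm = zeros(levels);
for r = 1:size(q, 1)
    for c = 1:size(q, 2) - 1                      % neighbor one pixel to the right
        sgdm(q(r, c), q(r, c + 1)) = sgdm(q(r, c), q(r, c + 1)) + 1;
    end
end
p = sgdm / sum(sgdm(:));                          % normalize to joint probabilities
[i, j] = ndgrid(1:levels, 1:levels);
contrast    = sum(sum((i - j).^2 .* p));          % local gray-level variation
energy      = sum(p(:).^2);                       % uniformity of the matrix
homogeneity = sum(sum(p ./ (1 + abs(i - j))));    % closeness to the diagonal
entropyVal  = -sum(p(p > 0) .* log(p(p > 0)));    % randomness of co-occurrences
```

Repeating this for the hue, saturation, and intensity planes and stacking the statistics yields the kind of texture feature vector that the classifiers discussed in this review operate on.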

Pydipati (2004) demonstrated that color texture can detect citrus disease on leaves. Algorithms based on image processing techniques for feature extraction and classification were developed. Manual feeding of data sets, in the form of digitized RGB color images, was used for feature extraction and training of the SAS statistical classifier. The whole procedure was replicated for three classification approaches, including a statistical classifier using the Mahalanobis minimum distance method and a neural-network-based classifier using radial basis functions. The results obtained from the three approaches were compared, and the best approach for the problem at hand was found to be the Mahalanobis-based classifier.

Spectral Images Based Methods and Neural Network Classifiers

In recent years, hyperspectral imaging has been used in food processing and inspection applications. Hyperspectral imaging gives useful information about the full spectral response of the target image across a broad spectral band. Support vector machines (SVM) are a set of supervised, generalized linear classifiers that have often been found to provide higher classification accuracies than other widely used pattern classification techniques, such as multilayer perceptron neural networks (Burges, 1998). The growing interest in SVM has been demonstrated by its successful implementation in food and grain analysis, such as classification of starch and wheat.
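A minimal Gaussian-kernel SVM sketch in the spirit of these classifiers is shown below. It assumes MATLAB's Statistics and Machine Learning Toolbox (fitcsvm); the two-dimensional toy data stand in for real texture or spectral feature vectors.

```matlab
% Toy Gaussian-kernel SVM with 5-fold cross-validation (illustrative data).
rng(1);                                         % reproducible toy features
X = [randn(30, 2) + 1.5; randn(30, 2) - 1.5];   % two well-separated classes
y = [ones(30, 1); -ones(30, 1)];                % class labels
model = fitcsvm(X, y, 'KernelFunction', 'rbf', 'KernelScale', 'auto');
cv = crossval(model, 'KFold', 5);               % cross-validated copy of the model
fprintf('Estimated accuracy: %.1f%%\n', 100 * (1 - kfoldLoss(cv)));
```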

Jiang et al. (2007) used hyperspectral fluorescence imaging to analyze the differences between walnut shells and meat. The hyperspectral fluorescence imaging system scanned samples at 79 different wavelengths ranging from 425 nm to 775 nm in 4.5 nm increments. Redundancy in the data was reduced by principal component analysis (PCA). The investigation used two statistical pattern recognition methods: a Gaussian mixture model (GMM) based Bayesian classifier and a Gaussian-kernel support vector machine (SVM). It was found that the SVM method with a Gaussian kernel performed more effectively than PCA and FDA in the classification of walnut meat and shells by hyperspectral imagery. The overall recognition rate of the Gaussian SVM was above 90%, about 3% higher than that of PCA and 5% higher than that of FDA.

Zhang et al. (2007) suggested a creative classification approach for distinguishing healthy and fungus-infected wheat kernels during storage. The research showed the potential use of NIR hyperspectral imaging in grain quality assessment, using NIR hyperspectral imaging and a support vector machine (SVM) to identify the fungi that caused the infection. In this study, 2,160 kernels were randomly selected, with 540 kernels from each group of A. niger, A. glaucus, and Penicillium spp. With the NIR hyperspectral imaging and SVM, the overall classification accuracy was 94.8%, with 531 kernels correctly classified and 29 kernels misclassified. The method could classify healthy wheat kernels from infected ones with 100% accuracy, and Penicillium spp. infected kernels with an accuracy of 99.3%.

Mehl et al. (2002) developed a simple multispectral detection system utilizing only three channels in the visible spectral range. They used hyperspectral imaging to design the multispectral image system for rapid detection of apple contamination. The apple cultivars selected were Red Delicious, Golden Delicious, and Gala; despite their color differences, it was possible to use the same configuration. In this study, through hyperspectral imaging, a rapid multispectral imaging analysis for food safety and food quality was designed.

Lee et al. (2005) used the hyperspectral imaging technique to detect defects on apple peel after harvest and developed a wavelength selection method for detecting the defects. They used principal component analysis (PCA) to detect fecal contamination on poultry or apples from hyperspectral images, and they used a correlation analysis method based on wavelength differences and band ratios. For the ratio, the correlation coefficient was 0.91 for the ratio of 683.8 nm and 670.3 nm. For the wavelength difference, a high correlation was found at 827.9 nm and 737.8 nm, with a correlation coefficient of 0.79. The band ratio from the combination of two spectral images was 0.8, the same as for the image analysis using the wavelengths selected from the correlation analysis method. They demonstrated that the correlation analysis method was feasible for selecting the wavelengths to detect defects on apples.

Cheng et al. (2003) presented a new approach combining principal component analysis (PCA) and Fisher linear discriminant (FLD) methods. This method maximized the representation and classification effects on the new feature bands extracted from hyperspectral imaging. In this research, the new projected features generated by the PCA method gave good results for pattern representation and performed well for clearly separated patterns. For similar patterns, however, the FLD method obtained better classification results, but FLD was more sensitive to noise and less stable than PCA. Therefore, they proposed an integrated PCA-FLD method to overcome the drawbacks of the two individual methods, which required more flexibility in dealing with different sample patterns by adjusting the K value properly.
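The PCA-then-FLD idea described by Cheng et al. can be sketched in a few lines of base MATLAB, as below; the two-class toy spectra and the choice of three principal components are illustrative, not the authors' actual data or K value.

```matlab
% PCA followed by Fisher's linear discriminant (FLD) on the PCA scores.
rng(2);
X1 = randn(40, 10) + 0.8;                    % toy class-1 spectra (samples x bands)
X2 = randn(40, 10) - 0.8;                    % toy class-2 spectra
X  = [X1; X2];
K  = 3;                                      % number of principal components kept
Xc = X - repmat(mean(X, 1), size(X, 1), 1);  % center the data
[V, D] = eig(cov(Xc));                       % eigenvectors of the covariance matrix
[~, order] = sort(diag(D), 'descend');
P = Xc * V(:, order(1:K));                   % PCA scores (reduced feature bands)
P1 = P(1:40, :);  P2 = P(41:80, :);
Sw = cov(P1) + cov(P2);                      % within-class scatter
w  = Sw \ (mean(P1, 1) - mean(P2, 1))';      % Fisher discriminant direction
scores = P * w;                              % 1-D projection used for thresholding
```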

Kim (2002) researched a method for using hyperspectral data to identify wavebands to be used in multispectral detection systems and evaluated the spatial and spectral responses of hyperspectral reflectance images of fecally contaminated apples. To detect contaminated apples, he presented a systematic method using the hyperspectral reflectance imaging technique in conjunction with PCA to define several optimal wavelength bands.

Fukagawa et al. (2003) developed a crop status monitoring system for precision farming. A multispectral imaging sensor, which can acquire three wavelength images (R, G, and NIR) simultaneously, was used as the imaging sensor for this system. Leaf height and the number of stems were estimated from the vegetation cover rate (VCR). The correlation coefficients between leaf height and VCR for the transplanted and seeded varieties were 0.64 and 0.69, respectively; consequently, the number of stems can be estimated by VCR. The SPAD value was estimated from the LCI, which is defined as the corrected R gray level, with corresponding correlation coefficients of 0.79 and 0.60 for the transplanted and seeded varieties. Additionally, an LCI map was generated using the position and attitude data of the helicopter.

Near Infrared Monitoring Methods

Shimada et al. (2008) developed a personal remote sensing system for an extremely small area, such as a Japanese paddy field (100 m × 100 m), consisting of the following components: a radio-controlled helicopter, two digital still cameras, a network camera board, a wireless LAN, a notebook computer for the ground station, and image processing software. The goal of this project was to develop a low-cost remote sensing system. Red-green-blue (RGB) and near-infrared (NIR) data were acquired with a four-band camera system that can calculate the normalized difference vegetation index (NDVI) and other indices. Using the R-sensitivity characteristics of the NIR camera, more accurate exposure value compensation became possible, and the precision of NDVI improved over former systems as a result.
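For reference, NDVI is computed per pixel from the red and near-infrared bands as NDVI = (NIR − R) / (NIR + R). A minimal sketch follows; the single-band image files and the vegetation threshold are hypothetical.

```matlab
% Per-pixel NDVI from co-registered red and NIR band images (hypothetical files).
red  = double(imread('band_red.png'));
nir  = double(imread('band_nir.png'));
ndvi = (nir - red) ./ (nir + red + eps);     % eps guards against division by zero
vcr  = mean(ndvi(:) > 0.3);                  % fraction of vegetated pixels (illustrative threshold)
fprintf('Mean NDVI %.2f, vegetation cover rate %.2f\n', mean(ndvi(:)), vcr);
```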

Yang et al. (2001) developed an infrared imaging and wavelet-based segmentation method for apple defect detection. They proposed that the reflectance spectrum of the apple surface in the near-infrared (NIR) region provided effective information for a machine vision inspection system. The differences in light reflectance of the apple surfaces caused the corresponding pixels of bruised areas and good areas to appear different in an NIR apple image. Segmenting the defective areas from the non-defective areas in apple images was a critical step for apple defect detection. In their work they used a 2-D multi-resolution wavelet decomposition to generate 'wavelet transform vectors' for each pixel in the NIR apple images. These vectors were combined and weighted by dynamic modification factors to produce the pixel vectors. Pixels with similar properties were labeled as one class to separate the defective areas from the good areas of apples in the NIR image. They reported 100% accuracy in detecting good apples and 94.6% accuracy in detecting defective apples.

Kawamura et al. (2003) constructed an on-line near-infrared (NIR) spectroscopic sensing system on an experimental basis. This system enables NIR spectra of unhomogenized milk to be obtained during milking over a wavelength range of 600 nm to 1050 nm. In this study, the NIR sensing system could be used to assess milk quality in real time during milking. The system can provide dairy farmers with information on milk quality and the physiological condition of individual cows and therefore give them feedback control for optimizing dairy farm management.

CHAPTER 4
MATERIALS AND METHODS

This chapter describes the citrus leaf and fruit samples that were collected for this study. It also discusses the features of the two optical hardware systems used to collect data. Finally, it describes the feature extraction and classification methods used to classify citrus disease conditions.

Citrus Canker Samples

Ruby Red grapefruit was selected for this study because it is one of the more popular citrus varieties and it is susceptible to canker and other common peel diseases. Fruit samples were handpicked from a grapefruit grove near Punta Gorda, Florida, during the spring 2007 harvest season. Grapefruit samples with normal market quality and five abnormal peel conditions were collected. The peel conditions considered were normal, canker, copper burn, greasy spot, melanose, and wind scar. Representative images for each peel condition are shown in Figure 4-1. Thirty samples of each condition were selected; therefore, a total of 180 grapefruit were tested in this study. All the grapefruit were washed and treated with chlorine and sodium o-phenylphenate (SOPP) at the Indian River Research and Education Center of the University of Florida in Fort Pierce, Florida. The samples were then stored in an environmental control chamber maintained at 4°C, and they were removed from cold storage about 2 hours before imaging to allow them to reach room temperature.

Citrus Leaf Samples

Citrus leaves can serve as indicators of common diseases and nutrient deficiencies. Various leaf colors present the symptoms and reflect tree health.


the appearance of the disease condition. Hence, in this study, samples were collected at optimal times for symptom detection. Eight different classes of citrus leaves were selected: greening blotchy mottle, greening island, iron deficiency, manganese deficiency, zinc deficiency, dead, normal young, and normal old. These leaf samples fall into three symptom categories:

Cultural conditions (nutritional deficiency): iron deficiency, manganese deficiency, and zinc deficiency
Citrus greening conditions: blotchy mottle and green islands
Normal conditions: normal young and normal old

Images of the leaf samples are shown in Figure 4-2, Figure 4-3, and Figure 4-4, and Figure 4-5 shows images of the blotchy mottle conditions used in this study. Each nutritional deficiency of citrus has a distinctive feature on the leaf surface. A leaf with iron deficiency has a dark green network of veins within the yellow leaf blade. A leaf with manganese deficiency has symmetrical yellowing across the mid-vein and a dark triangle at the leaf base. A leaf with zinc deficiency has fairly symmetrical yellowing across the mid-vein (Polek, 2007). The citrus greening leaf symptoms are similar to those of other cultural conditions and diseases, but there are differences. A blotchy mottle pattern is most typical, with light green and dark green patches and no symmetry. Green islands show a non-symmetrical pattern on opposite sides of the mid-vein. The leaf samples used in this study were collected from two orange groves near Immokalee in southwestern Florida. The samples were collected in late spring, summer, and fall of 2008, and the degree of symptoms varied between leaf samples. Leaf samples were clipped with


petioles intact and then sealed in Ziploc bags to maintain the moisture level of the leaves. Sixty samples of each of the eight classes of leaves were collected. The samples were brought to the laboratory, sealed in new bags with appropriate labels, and put in environmental control chambers maintained at 4 °C. The leaf samples were then taken to an imaging station, and images of the front of the leaf samples were acquired.

Color Image Acquisition

The image acquisition system used to collect canker images consisted of a computer installed with image capture software and a 24-bit color frame grabber board with 480 x 640 pixel resolution, as shown in Figure 4-6. This system consisted of the following major components.

Lighting system: Two 13 W cool white fluorescent bulbs with reflectors were used.
Color CCD camera: The camera consisted of a 3-CCD RGB color camera (CV-M90, JAI, San Jose, CA, USA) and a zoom lens (Zoom 7000, Navitar, Rochester, NY, USA).
Computer system: A Coreco PC-RGB 24-bit color frame grabber with 480 x 640 pixel resolution installed in the computer.

The lighting system was set up to minimize specular reflectance and shadow and to maximize the contrast of the images. The height of the camera and its focus were adjusted to contain the image of the whole fruit, with an approximate 100 mm x 100 mm field of view. Automatic white balance calibration was conducted using a calibrated white balance card before acquiring images from fruit samples. The digital color images were saved in uncompressed BMP format.

A digital microscope system for citrus greening has several benefits and advantages. This system can easily adjust the magnification using a zoom lens. The magnification can be changed without losing sight of the target being observed. In addition, the optimal magnification to provide the clearest image can be set easily. The integrated illumination system required no setup time. The


34 field view and focus of camera can be simply established. CCD camera system captu red images and saved on has 160 GB hard disk drive which can store 575,000 compressed images. In this research, a d igital m icroscope system (VHX 600K, Keyence, JAPAN) was used for acquiring RGB images from citrus leaf samples, and it is shown in Figure 4 7 These descriptions were extracted from a user`s manual book, released by the Keyence Corporation, Osaka, Japan. This system is made up of a CCD camera and a controller. The camera unit consist of a high pixel color CCD and light. The controller has vari ous functions such as display, record, measurement, and etc. The stand device offers the user quick observation, analysis and data processing. The imaging system consisted of several parts: H igh intensity h alogen lamp (12V, 100W) Zoom lens (C mount lens, OP 51479) 2.11 million pixel CCD image sensor (1/1.8 inch) 15 inch Color LCD monitor (TFT, 1600x1200, UXGA) Co nsole installed with a hard disk drive (Image format: JPEG and TIFF, Storage capacity: 700MB ) and CD RW drive units and Control Panel Color CCD Camera. The camera uses a pixel shift technology. This technology allows the maximum resolution to reach 18 million pixels. There are four options for the number of pixels (18 million/8 million/4 million/2.11 million) based on the type of observation to be selected. Lowest resolution (2.11 million, 1/1.8 inch) was used for capturing a target image in this research. Automatic white balance calibration was conducted using a calibrated white balance function before acquiring images from leaf samples. Har d Disk. The image data can be stored on the built in hard disk in the controller. It can store up to 575,000 pictures. The leaf sample images were saved in uncompressed JPEG format (1200x1600, 8bit). Built in Light. The build in light system irradiated ra ys of light directly from the lens. The lighting system was designed to maintain optimum illumination intensity and minimize specular reflectance and shadow. The light system was consisted of a 12V, 100W, Halogen lamp.


Zoom lens. Digital microscopes come with various types of zoom lenses allowing continuous adjustment of the magnification. The lens power and focus were easily adjusted to contain the image of the whole leaf, centered on the vein.

LCD monitor. A 15 inch, built-in, 1600 x 1200 pixel high resolution liquid crystal monitor displayed the magnified image.

Operation console. This console can be used to quickly and easily perform the major observation tasks, such as adjusting the brightness, color, and shutter.

Preprocessing of Input Data

Citrus Greening

Images of the front of each leaf specimen, centered on the vein, were taken for each leaf class. Digital images were stored in JPEG format. The 60 images from each class were divided into two datasets consisting of 30 samples for training and 30 samples for testing. The samples were first arranged in ascending order of the time the images were acquired. This approach minimizes negative time-dependent variability and reduces the potential for data selection bias between the training and test datasets. A detailed illustration of the image acquisition and classification algorithms is given in Figure 4-8. All algorithms for image segmentation and texture feature generation were developed in MATLAB.

In the initial step, the RGB images of all leaf samples were obtained. To reduce the computational burden with minimal loss of texture feature quality, the image resolution was reduced from 1600 x 1200 pixels to 800 x 600 pixels, and the reduced images were then converted from eight-bit to six-bit per channel RGB format. The subsequent steps were repeated for each image in the dataset. After the images were reduced, edge detection of the leaf was completed on each image of the leaf sample using a MATLAB program. Figure 4-9 exhibits the detailed edge detection process. First, as shown in Figure 4-10, each RGB image was converted to a gray image


and then to a binary image. Next, the edge of the binary image was detected using the MATLAB commands 'imerode' and 'imdilate'. Once the edge detection was finished, the image was scanned from left to right for each row in the pixel map, and the area outside the leaf was zeroed to remove any background noise. In the next step, the images were converted from RGB format to HSI format. A sample edge-detected image of a leaf sample is shown in Figure 4-11. The spatial gray level dependence matrices (SGDMs) were then generated for each color pixel map of the image, one each for hue, saturation, and intensity. It was decided during preliminary testing that the experiment would use the 0° CCM orientation angle and a one-pixel offset, where the smaller the offset, the finer the texture measured; thus a one-pixel offset is the finest texture measure. From the SGDM matrices, the 39 CCM texture statistics described earlier were generated for each image using the three color feature co-occurrence matrices, as each SGDM matrix provided 13 texture features.

Citrus Canker

Figure 4-12 shows the flow chart of the data analysis methods for analyzing the color images of the fruit samples based on the color co-occurrence method (CCM). The procedures are similar to those for citrus greening detection; the difference is the selection of a region of interest (ROI) instead of edge detection. From the original RGB color images with a dimension of 480 x 640 pixels, region of interest (ROI) images were extracted manually. Each ROI image was focused on an area of interest (i.e., normal peel or various diseases) on the fruit surface. The ROI selection was started manually by determining a point on the original image and was then finished by a MATLAB program that extracted a square portion with a dimension of 64 x 64 pixels centered on the determined point. This approach obtains the useful image data and significantly reduces the computational burden for the following data analysis procedures.
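A minimal MATLAB sketch of the two preprocessing steps described above (leaf edge detection with background removal, and ROI extraction for the fruit images) is given below. The threshold choice, structuring-element size, coordinates, and file names are assumptions for illustration rather than the exact settings used in this work.

```matlab
% Leaf segmentation sketch: gray -> binary -> morphological edge, then the
% background is zeroed. Threshold and structuring element are assumed.
rgb  = imresize(imread('leaf_sample.jpg'), [600 800]);  % hypothetical leaf image
gray = rgb2gray(rgb);
bw   = im2bw(gray, graythresh(gray));     % Otsu threshold
bw   = ~bw;                               % assume the leaf is darker than the background
bw   = imfill(bw, 'holes');               % solid leaf mask

se       = strel('disk', 3);
leafEdge = imdilate(bw, se) & ~imerode(bw, se);   % edge via dilation minus erosion

masked = rgb;
masked(repmat(~bw, [1 1 3])) = 0;         % zero everything outside the leaf

% ROI extraction sketch for the fruit images: a 64 x 64 block centered on
% a manually chosen point (r, c) in the 480 x 640 color image.
fruit = imread('grapefruit.bmp');         % hypothetical fruit image
r = 240; c = 320;                         % hypothetical point of interest
roi = fruit(r-31:r+32, c-31:c+32, :);     % 64 x 64 region of interest
```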


Representative ROI images for each fruit peel condition used in this study are shown in Figure 4-13, and Figure 4-14 shows 15 representative images of the canker condition. The ROI images were then converted from the original eight-bit per channel red, green, blue (RGB) color representation to a six-bit per channel hue, saturation, and intensity (HSI) color representation. The other procedures, such as generation of the spatial gray level dependence matrices, calculation of texture features, selection of useful texture features, and discriminant analysis for disease classification, are the same as those used for citrus greening.

Image Analysis: Color Co-occurrence Methodology

Image data includes large amounts of information, such as color, light, texture, and shape. These properties are used in image processing and computer vision algorithms. Among these properties, color, shape, and light can be changed by the surroundings. For example, chameleons have a special ability to match their skin color to the surroundings; this ability confuses predators and allows the chameleon to escape dangerous situations. If the background of an input image becomes too dark or too light, the object in the image may not be perceptible against its background. Images of real objects do not have uniform properties, but texture can give information about the image through repeating patterns. This characteristic makes texture segmentation very important in machine vision and image processing, and a variety of texture analysis methods have been applied in various fields of study. Tuceryan (1998) introduced various aspects of texture analysis. According to the paper, there are three main applications: inspection, medical image analysis, and document processing. "Inspection means the detection of defects in texture images or textile inspection. Medical image analysis has involved the automatic extraction of features from images which is used for a variety of classification, such as distinguishing normal tissues from abnormal tissue. Document processing has applications ranging from postal address recognition to analysis and interpretation


of maps." The paper also presented three groups of texture methods: statistical, geometrical, and model-based methods. Statistical methods were proposed early and are used widely; they rely on the spatial distribution of gray values. The key concept of geometrical methods is "texture elements," or primitives; after the texture elements are extracted from the image, texture features are computed from them. Model-based methods involve "the construction of an image model that can be used not only to describe texture, but also to synthesize it."

In this thesis, the image analysis method was the color co-occurrence matrix (CCM) statistical method. Several researchers have applied this method to agricultural applications. Burks (2000) used the color co-occurrence method (CCM) for weed detection, Pydipati et al. (2005) applied it to disease detection in citrus trees, and Ondimu et al. (2008) compared plant water stress in sunagoke moss using the color co-occurrence matrix. These papers showed good results and high classification accuracies.

Before the CCM method is applied to input images, the original images in the red, green, and blue (RGB) color space are converted to the HSI color space, which separates an image into hue, saturation, and intensity components. Many image processing methods are based on the HSI color space because it is relatively tolerant of changes in illumination and reflection, which helps make image processing less sensitive to the lighting of the surroundings. After the HSI image was completed, each pixel map was used to generate a color co-occurrence matrix, resulting in three CCM matrices, that is, one CCM matrix for each of the HSI pixel maps. The color co-occurrence texture analysis method was developed through the use of the spatial gray level dependence matrices (SGDMs), which are related to the gray level co-occurrence matrix (GLCM) because both are based on second-order statistics.
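Before moving on to the co-occurrence statistics themselves, the RGB-to-HSI conversion described above can be sketched in MATLAB as follows. Note that this sketch uses the common arithmetic HSI formulas; the thesis derives hue and saturation geometrically from the chromaticity diagram, so the exact values may differ slightly, and the input file name is a placeholder.

```matlab
% Illustrative RGB -> 6-bit HSI conversion (arithmetic formulas; the
% geometric chromaticity-based derivation used in this work may differ).
rgb = im2double(imread('roi.png'));       % hypothetical input image
R = rgb(:,:,1); G = rgb(:,:,2); B = rgb(:,:,3);

I = (R + G + B) / 3;                                  % intensity: mean of R, G, B
S = 1 - min(min(R, G), B) ./ max(I, eps);             % saturation

num   = 0.5 * ((R - G) + (R - B));
den   = sqrt((R - G).^2 + (R - B).*(G - B)) + eps;
theta = acos(min(max(num ./ den, -1), 1));            % hue angle, 0..pi
H = theta;
H(B > G) = 2*pi - H(B > G);                           % extend to 0..2*pi
H = H / (2*pi);                                       % normalize to 0..1

hsi6 = uint8(floor(cat(3, H, S, I) * 63));            % quantize to 6 bits (0..63)
```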


Haralick et al. (1979) proposed the use of the gray level co-occurrence matrix (GLCM) method. In that work, the matrix $P_a(i,j)$ is defined by a displacement vector $a$ and a pair of gray levels, where the operator $a$ specifies the displacement from a pixel with gray level $i$ to a pixel with gray level $j$. As an example, consider the following 4 x 4 image containing 4 different gray values:

$$I(x,y) = \begin{bmatrix} 0 & 1 & 1 & 0 \\ 2 & 3 & 3 & 3 \\ 0 & 0 & 0 & 0 \\ 2 & 3 & 2 & 2 \end{bmatrix}$$  (4-1)

This 4 x 4 image is converted to the co-occurrence matrix $P_a$ for $a = (1,0)$ as follows:

$$I(x,y) = \begin{bmatrix} 0 & 1 & 1 & 0 \\ 2 & 3 & 3 & 3 \\ 0 & 0 & 0 & 0 \\ 2 & 3 & 2 & 2 \end{bmatrix}, \qquad P_a(i,j) = \begin{bmatrix} 2 & 1 & 1 & 0 \\ 2 & 1 & 1 & 0 \\ 0 & 0 & 0 & 0 \\ 1 & 2 & 0 & 1 \end{bmatrix}$$  (4-2)

The entry $P_{(1,0)}(0,0)$ gives the number of times a pixel with gray level 0 is separated by the vector (1, 0) from another pixel with gray level 0. Shearer (1986) expressed the SGDM through the function $P(i,j,d,\theta)$, which is similar to $P_a(i,j)$ but uses a distance $d$ and an orientation angle $\theta$ instead of a displacement vector. As shown in Figure 4-15, all the neighbors from 1 to 8 are numbered in a clockwise direction. Equation 4-3 presents an example image matrix and its SGDM:

$$I(x,y) = \begin{bmatrix} 0 & 0 & 0 & 0 \\ 2 & 3 & 3 & 3 \\ 3 & 2 & 2 & 2 \\ 1 & 1 & 1 & 1 \end{bmatrix}, \qquad P(i,j,1,0^{\circ}) = \begin{bmatrix} 4 & 0 & 0 & 4 \\ 0 & 0 & 0 & 0 \\ 1 & 2 & 1 & 0 \\ 1 & 2 & 2 & 1 \end{bmatrix}$$  (4-3)

Haralick and Shanmugam (1974) developed a set of 16 texture features associated with the SGDMs. After these features were established, Shearer (1986) extended them to the hue, saturation, and intensity color features and, using the method developed by Haralick and Shanmugam, reduced the 16 features to 11.
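The counting operation behind $P(i,j,d,\theta)$ can be sketched in MATLAB as below for a one-pixel offset at a 0° orientation. Whether pairs are counted in one direction only or in both directions (giving a symmetric matrix) varies by convention, so this sketch should be read as illustrative rather than as the exact definition used above.

```matlab
function P = sgdm0(A, Ng)
% SGDM0  Co-occurrence counts for a 1-pixel offset at a 0-degree angle.
%   A  : matrix of attribute (gray) levels in the range 0 .. Ng-1
%   Ng : total number of attribute levels
% Counts left-to-right neighbor pairs only (one possible convention).
P = zeros(Ng, Ng);
for r = 1:size(A, 1)
    for c = 1:size(A, 2) - 1
        i = A(r, c) + 1;          % +1 because MATLAB indexing starts at 1
        j = A(r, c + 1) + 1;
        P(i, j) = P(i, j) + 1;    % tally the (i, j) pair
    end
end
end
```

For the 4 x 4 example image in equation (4-1), `sgdm0(I, 4)` tallies how often each ordered pair of gray levels occurs one pixel apart along a row.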


For classifying cancer tissue, Donohue et al. (2001) suggested adding image contrast and modus texture features to the original 11 texture features. Therefore, the color co-occurrence method uses 13 texture features, computed once each for the hue, saturation, and intensity matrices, resulting in 39 color texture features.

Texture Feature

Shearer and Holmes (1990) defined the related equations, with brief descriptions, as they pertain to intensity; Shearer (1986) also applied them to saturation with similar descriptions. Hue values differ from intensity and saturation values, so they are treated somewhat differently in the analysis by Shearer (1986). The descriptions of the texture feature equations below follow Burks (1997). The CCM matrices are first normalized using the following equations.

Matrix normalization:

$$p(i,j) = \frac{P(i,j,1,0^{\circ})}{\sum_{i=0}^{N_g-1}\sum_{j=0}^{N_g-1} P(i,j,1,0^{\circ})}$$  (4-4)

Marginal probability matrix:

$$p_x(i) = \sum_{j=0}^{N_g-1} p(i,j)$$  (4-5)

Sum and difference matrices:

$$p_{x+y}(k) = \sum_{i=0}^{N_g-1}\sum_{j=0}^{N_g-1} p(i,j)$$  (4-6)

$$\text{for } i + j = k; \quad k = 0, 1, \ldots, 2(N_g-1)$$  (4-7)

$$p_{x-y}(k) = \sum_{i=0}^{N_g-1}\sum_{j=0}^{N_g-1} p(i,j)$$  (4-8)

$$\text{for } |i - j| = k; \quad k = 0, 1, \ldots, N_g-1$$  (4-9)

where $P(i,j)$ is the image attribute matrix and $N_g$ is the total number of attribute levels.
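A short MATLAB sketch of equations (4-4) through (4-9), starting from an SGDM `P` such as the one produced by the `sgdm0` sketch above:

```matlab
% Normalize an Ng-by-Ng SGDM and form the marginal, sum, and difference
% distributions used by the texture features (equations 4-4 to 4-9).
Ng = size(P, 1);
p  = P / sum(P(:));                        % (4-4) normalized matrix p(i,j)
px = sum(p, 2);                            % (4-5) marginal probability p_x(i)

pSum  = zeros(2*Ng - 1, 1);                % p_{x+y}(k), k = 0 .. 2(Ng-1)
pDiff = zeros(Ng, 1);                      % p_{x-y}(k), k = 0 .. Ng-1
for i = 1:Ng
    for j = 1:Ng
        pSum((i-1) + (j-1) + 1) = pSum((i-1) + (j-1) + 1) + p(i, j);  % (4-6), (4-7)
        pDiff(abs(i - j) + 1)   = pDiff(abs(i - j) + 1)   + p(i, j);  % (4-8), (4-9)
    end
end
```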


After the CCM matrices are normalized, the texture features are extracted using the following equations.

The angular second moment ($I_1$) is a measure of image homogeneity:

$$I_1 = \sum_{i=0}^{N_g-1}\sum_{j=0}^{N_g-1} [p(i,j)]^2$$  (4-10)

The mean intensity level ($I_2$) is a measure of image brightness derived from the co-occurrence matrix:

$$I_2 = \sum_{i=0}^{N_g-1} i \, p_x(i)$$  (4-11)

Variation of image intensity is identified by the variance textural feature ($I_3$):

$$I_3 = \sum_{i=0}^{N_g-1} (i - I_2)^2 \, p_x(i)$$  (4-12)

Correlation ($I_4$) is a measure of the linear dependence of intensity in the image:

$$I_4 = \frac{\sum_{i=0}^{N_g-1}\sum_{j=0}^{N_g-1} i \, j \, p(i,j) - I_2^2}{I_3}$$  (4-13)

The product moment ($I_5$) is analogous to the covariance of the intensity co-occurrence matrix:

$$I_5 = \sum_{i=0}^{N_g-1}\sum_{j=0}^{N_g-1} (i - I_2)(j - I_2) \, p(i,j)$$  (4-14)

Contrast of an image can be measured by the inverse difference moment ($I_6$):

$$I_6 = \sum_{i=0}^{N_g-1}\sum_{j=0}^{N_g-1} \frac{p(i,j)}{1 + (i-j)^2}$$  (4-15)

The entropy feature ($I_7$) is a measure of the amount of order in an image:

$$I_7 = -\sum_{i=0}^{N_g-1}\sum_{j=0}^{N_g-1} p(i,j) \ln p(i,j)$$  (4-16)

The sum and difference entropies ($I_8$ and $I_9$) are not easily interpreted, yet low entropies indicate high levels of order:

$$I_8 = -\sum_{k=0}^{2(N_g-1)} p_{x+y}(k) \ln p_{x+y}(k)$$  (4-17)

$$I_9 = -\sum_{k=0}^{N_g-1} p_{x-y}(k) \ln p_{x-y}(k)$$  (4-18)
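Continuing the sketch above, several of the features in equations (4-10) to (4-18) can be computed directly from `p`, `px`, `pSum`, and `pDiff`; zero entries are skipped so that 0·ln(0) does not produce NaN.

```matlab
% A few CCM texture features computed from the normalized distributions.
lev = (0:Ng-1)';                                   % attribute levels as a column
I1  = sum(p(:).^2);                                % (4-10) uniformity
I2  = sum(lev .* px);                              % (4-11) mean intensity
I3  = sum((lev - I2).^2 .* px);                    % (4-12) variance
nz  = p(p > 0);
I7  = -sum(nz .* log(nz));                         % (4-16) entropy
s   = pSum(pSum > 0);   I8 = -sum(s .* log(s));    % (4-17) sum entropy
d   = pDiff(pDiff > 0); I9 = -sum(d .* log(d));    % (4-18) difference entropy
```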


The information measures of correlation ($I_{10}$ and $I_{11}$) do not exhibit any apparent physical interpretation:

$$I_{10} = \frac{I_7 - HXY1}{HX}$$  (4-19)

$$I_{11} = \left[ 1 - \exp\left(-2\,(HXY2 - I_7)\right) \right]^{1/2}$$  (4-20)

where

$$HX = -\sum_{i=0}^{N_g-1} p_x(i) \ln p_x(i)$$  (4-21)

$$HXY1 = -\sum_{i=0}^{N_g-1}\sum_{j=0}^{N_g-1} p(i,j) \ln\left[ p_x(i)\,p_x(j) \right]$$  (4-22)

$$HXY2 = -\sum_{i=0}^{N_g-1}\sum_{j=0}^{N_g-1} p_x(i)\,p_x(j) \ln\left[ p_x(i)\,p_x(j) \right]$$  (4-23)

The contrast feature ($I_{12}$) is a measure of salience in object recognition:

$$I_{12} = \sum_{n=0}^{N_g-1} n^2 \left[ \sum_{i=0}^{N_g-1}\sum_{j=0}^{N_g-1} p(i,j) \right]_{|i-j|=n}$$  (4-24)

The modus feature ($I_{13}$) is the maximum value in the normalized matrix:

$$I_{13} = \max_{i,j} \; p(i,j)$$  (4-25)

Statistical Method

After the color co-occurrence matrices (CCM) are generated for the input images, each input image has 39 texture features. Because a properly reduced number of texture features lowers the computational requirements, such as time, computer hardware, and classifier complexity, eliminating redundant texture features is an important step. SAS offers a procedure for this, as well as a discriminant analysis between image classes after the statistical classification model is trained. Burks (1997) introduced the procedure for accomplishing these tasks. PROC STEPDISC is a useful procedure for reducing the number of texture features through a stepwise selection process. In the stepwise procedure, the main assumption is that all of the classes included in the data set are to


43 be multi variate normal with a common covariance matrix. The process has two different conditions related with variance. Burks (1997) explained as follows. "First, the variable within the model which contributes least to the model, as determined by the Wil ks' lambda method, does not pass the test to stay it is removed from the model. Secondly, the variable outside the model which contributes most to the model and passes the test to be admitted is added. When no more steps can be taken the model is reduced t o its final form." In PROC DISCRIM procedure, the classification accuracy may be determined by a disciminant function established using a measure of the generalized squared distance between the image texture variables and the class texture variable means, and the posterior probability. The pooled covariance matrix of the training set texture variables and the prior probabilities of the classification groups may affect the classification criterion. Pattern Recognition Pattern recognition is a part of artifi cial i ntelligence A rtificial intelligence is an approach that mimics humans intelligent to build learning ability, reasoning, and perception. The research about artificial intelligence has developed into a discipline known as intelligent system. Pattern recognition is consisted of features and patterns. A feature is specific aspect, quality and characteristic of some objects. The feature can be color, a symbol sign, and numerical value, such as distance, height, and weight. An object can be expressed as p oints modeled by a feature vector. Such as, a plot of features expressed as points in space of a scatter plot. This plot can be expressed visually to 3 dimension a space. Pattern means traits or features of an individual object, and is defined as a set of feature together. The feature and pattern is similar concepts, but features form pattern. In pattern recognition, the pattern is expressed by {x, }, x is feature vector observed and is the unique class of the feature vector. This class is also called cat egory, group, or label. The feature vector selected to represent the class is very


important and affects the selection of pattern recognition algorithms and the statistical approach. Hence, the features should represent characteristics of the classes that make them distinguishable. In other words, samples from one class should have similar feature values within the class, while samples from another class should have different feature values. There are various approaches to pattern recognition, and these approaches have demonstrated success in a variety of research areas such as aerospace, defense, medicine, neurobiology, and linguistics. In this study, three pattern recognition approaches were compared to find an optimal approach (a brief MATLAB sketch follows this list):

Linear models for classification: Fisher's linear discriminant analysis method
Neural networks for classification: back-propagation neural network method
Nonlinear kernel for classification: support vector machine (SVM)
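As one possible MATLAB realization of these three approaches (using functions from the Statistics and Machine Learning and Deep Learning toolboxes, which may differ from the implementation actually used in this work), a 39-column texture feature matrix X with class labels y could be handled as follows:

```matlab
% Hedged sketch: training the three classifier types on CCM texture data.
% X is n-by-39 (texture features), y is an n-by-1 vector of class labels.
ldaModel = fitcdiscr(X, y);                         % linear discriminant analysis

net = patternnet(10);                               % back-propagation network, 10 hidden units (assumed)
net = train(net, X', dummyvar(categorical(y))');    % network inputs/targets are column-oriented

svmModel = fitcecoc(X, y, ...                       % multi-class SVM via error-correcting output codes
    'Learners', templateSVM('KernelFunction', 'rbf'));
```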


Figure 4-1. Images of abnormal peel conditions. Figure 4-2. Images of nutritional deficiency.


46 Figure 4 3. Images of citrus greening Figure 4 4. Images of normal conditions


Figure 4-5. Fifteen images of blotchy mottle conditions.


48 Figure 4 6 Color image system for acquiring RGB images from citrus samples Figure 4 7 Digital microscope system for acquiring RGB images from citrus leaf samples


49 Figure 4 8 Procedures for color image analysis for citrus greening Figure 4 9 Procedures for edge detection


Figure 4-10. Converted images of a leaf sample. Figure 4-11. Edge-detected image of a leaf sample. Figure 4-12. Procedures for color image analysis for the canker condition.


Figure 4-13. Typical ROI images for normal and abnormal citrus peel conditions. Figure 4-14. Fifteen ROI images for the citrus canker condition.


Figure 4-15. Nearest neighbor diagram.


53 CHAPTER 5 DETECTION OF CITRUS CANKER DISEASE Introduction Citrus trees can exhibit a host of symptoms reflecting various disorders that can adversely impact their health, vigor, and productivity to varying degrees. In some cases, disease control actions or remedial measures can be undertaken if the symptoms are identified early. Additional opportunities for disease control e xist when precision agriculture techniques are involved, which could use early detection along with a global positioning system to map diseases in the grove for future control actions. Environmental pollution is another concern throughout the world. Indisc riminate use of fungicides, pesticides, and herbicides to disease control has led to problems, such as deteriorating ground water quality, and health hazards for operators and the general public. Increased pressures to reduce chemical applications have led researchers to study new ways for early detection of various diseases on citrus trees with an aim to reduce chemical usage and maintain cost effective crop production. This study explored machine vision based techniques that can visually differentiate com mon citrus peel dis orders using individual fruit color texture features. Citrus samples were collected in the field and evaluated under laboratory conditions. Future studies will expand the technologies to in field inspections. In the past decade, various researchers have used image processing and pattern recognition techniques in agricultural applications, such as detection of weeds in the field, and sorting of fruits and vegetables. The underlying approach for all of these techniques is the same. First, i mages are acquired from the environment using analog, digital, or video cameras. Then, image processing techniques are applied to extract useful features that are necessary for further analysis of the images. Afterwards, discriminant techniques, such as pa rametric or non parametric statistical classifiers and neural networks, are employed to classify the images. The selection of


54 the image processing techniques and the classification strategies are important for the successful implementation of any machine v ision system. Object shape matching functions, color based classifiers, reflectance based classifiers, and texture based classifiers are some of the common methods that have been tried in the past. A number of techniques have been studied to detect defects and diseases related to citrus. Gaffney (1973) obtained reflectance spectra of citrus fruit and some surface defects. Edwards and Sweet (1986) developed a method to assess damages due to citrus blight disease on citrus plants using reflectance spectra of the entire tree. Miller and Drouillard (2001) collected data from Florida grapefruit, orange, and tangerine varieties using a color vision system. They used various neural network classification strategies to detect blemish related features for the citrus fruit. Aleixos et al. (2002) developed a multispectral camera system that could acquire visible and near infrared images from the same scene, and used it on a real time system for detecting defects on citrus surface. Blasco et al. (2007) reported the appli cation of near infrared, ultraviolet and fluorescence computer vision systems to identify the common defects of citrus fruit. They proposed a fruit sorting algorithm that combines the different spectral information to classify fruit according to the type o f defect. Their results showed that non visible information can improve the identification of some defects. Most recently, Qin et al. (2008) developed an approach for citrus canker detection using hyperspectral reflectance imaging and PCA based image class ification method. Their results demonstrated that hyperspectral imaging technique could be used for discriminating citrus canker from other confounding diseases. This research was aimed to develop a method to detect citrus peel diseases using color texture features. The use of color texture features in classical gray image texture analysis was first reported by Shearer (1986). Shearer and Holmes (1990) reported a study for classifying


55 different types of nursery stock by the color co occurrence method (CCM). This method had the ability to discriminate between multiple canopy species and was insensitive to leaf scale and orientation. The use of color features in the visible light spectrum provided additional image characteristic features over traditional gray scale texture representation. The textural methods employed were statistical based algorithms that measured image features, such as smoothness, coarseness, graininess, and so on. The CCM method involv es three major mathematical processes briefly described in the following A complete discussion of the color co occurrence method could be found in Shearer and Holmes (1990). Transformation of a red, green, blue (RGB) color representation of an image to an equivalent hue, saturation, and intensity (HSI) color r epresentation; Generation of color co occurrence matrices from the HSI pixel maps. Each HSI matrix is used to generate a spatial gray Calculation of Burks et al. (2000) developed a method for weed species classification using color texture features and discriminant analysis. In their study, CCM texture feature data models for six classes of ground cover (giant foxtails, crabgrass, velvet leaf, lambs qu arter, ivy leaf morning glory, and soil) were developed and stepwise discriminant analysis techniques were utilized to identify combinations of CCM texture feature variables, which have the highest classification accuracy with the least number of texture v ariables. A discriminant classifier was trained to identify weeds using the models generated. Classification tests were conducted with each model to determine their potential for classifying weed species. Pydipati et al. (2006) utilized the color co occurr ence method to extract various textural features from the color RGB images of citrus leaves. The CCM texture statistics were used to identify three diseased conditions and normal citrus leaves using discriminant analysis.


56 Objectives The overall objective o f this research was to develop a machine vision based method for detecting various diseases on citrus peel using color texture features under a controlled lighting condition. Specific objectives implemented to accomplish the overall objective were to: U se a color imaging system to collect RGB images from grapefruits with normal and five peel conditions (i.e., canker, copper burn, greasy spot, melanose, and wind scar); D etermine image texture features based on the color co occurrence method (CCM); and D evelo p algorithms for selecting useful texture features and classifying the citrus peel conditions based on the reduced texture feature sets. Materials and Methods Citrus Samples Grapefruit is one of the citrus varieties that are susceptible to common peel diseases. Ruby Red grapefruits were used in this study. Fruit samples were handpicked from a grapefruit grove near Punta Gorda, Florida, during the harvest season in spring of 20 07. The grapefruits with normal and five peel conditions (i.e., canker, copper burn, greasy spot, melanose, and wind scar) were collected. Representative images for each peel condition are shown in Figure 5 1. Thirty samples for each condition were selecte d, hence a total of 180 grapefruits were tested in this study. All the grapefruits were washed and treated with sodium o phenylphenate (SOPP) at the Indian River Research and Education Center of University of Florida in Fort Pierce, Florida. The samples we re then stored in an environmental control chamber maintained at 4 C and they were removed from cold storage about 2 hours before imaging to allow them to reach room temperature. Color Image Acquisition A color image acquisition system was assembled for a cquiring RGB images from citrus samples, and it is shown in Figure 5 2. The imaging system consisted of two 13 W high


57 frequency sealed fluorescent lights (SL Series, StockerYale, Salem, NH, USA), a zoom lens (Zoom 7000, Navitar, Rochester, NY, USA), a 3 CC D RGB color camera (CV M90, JAI, San Jose, CA, UDA), a 24 bit color frame grabber board with 480640 pixel resolution (PC RGB, Coreco Imaging, St. Laurent, Quebec, CA), and a computer installed with an image capture software The setup of the lighting syst em was designed to minimize specular reflectance and shadow and to maximize the contrast of the images. The height of the camera and its focus were adjusted to contain the image of the whole fruit, with an approximate 100 mm100 mm field of view. Automati c white balance calibration was conducted using a calibrated white balance card before acquiring images from fruit samples. The digital color images were saved in uncompressed BMP format. Data Analysis for Color Images The data analysis methods for analyz ing the color images of the fruit samples based on the color co occurrence method (CCM) are illustrated in the flow chart shown in Figure 5 3, which involve the procedures for selection of region of interest (ROI), transformation from RGB format to HSI for mat, generation of spatial gray texture features, selection of useful texture features, and discriminant analysis for disease classification. All image processing and data analysis procedures were executed using programs developed in Matlab 7.0 (MathWorks, Natick, MA, USA) and SAS 9.1 (SAS Institute Inc., Cary, NC, USA). Detailed methods and procedures for each step are described in the following sections. ROI Selection and Color Space Conversion ROI image s were first extracted from the original RGB color images with the dimension of 480640 pixels, generating small images covering the interested areas (i.e., normal peels or various diseases) on the fruit surface. The ROI selection was started manually by d etermining a


58 point on the original image, and then was finished by a Matlab program for extracting a square portion with the dimension of 6464 pixels centered on the determined point. This approach obtains the useful image data and significantly reduces t he computational burden for the following data analysis procedures. Representative ROI images for each fruit peel condition used in this study are shown in Figure 5 4. The ROI images were then converted from the original eight bit per channel red, green, blue (RGB) color representation to a six bit per channel hue, saturation, and intensity (HSI) color representation to facilitate the SGDM calculation. Intensity is calculated using the mean value of the three RGB values. The hue and saturation values are d etermined using a geometrical (Ohta, 1985). In this process, the CIE chromaticity diagram represents a two dimensional hue and saturation space (Wyszecki and Stiles 1992). The RGB values determine the chromaticity coordinates on the hue and saturation space, which are then used to geometrically calculate the value of hue and saturation. SGDM Generation The color co occurrence texture analysis method was developed t hrough the use of the spatial gray pixel map of the ROI HSI images, one each for hue, saturation and intensity. These matrices measure the probability that a pixel at one particul ar gray level will occur at a distinct distance and orientation from any pixel given that pixel has a second particular gray level (Shearer and gray level of the pi xel at (x 1 ,y 1 ) in the image, and j represents the gray level of the pixel at (x 2 y 2 1 ,y 1 ) (Shearer, 1986). The matrix is constructed by counting the number of pixel pairs of (x 1 y 1 ) and (x 2 y 2 ) with the grey value i


and j, as shown in Figure 5-5, where the reference pixel is shown as an asterisk. All eight neighbors one pixel away are numbered in a clockwise direction from one to eight. The neighbors at positions one and five are both considered to be at an orientation angle equal to 0°, while positions eight and four are considered to be at an angle of 45°. It was determined from the preliminary test that the calculation would use a 0° orientation angle and an offset distance of one pixel. The offset represents the coarseness of the texture evaluation, where the smaller the offset, the finer the texture measured. Thus the one-pixel offset is the finest texture measure.

Texture Feature Calculation

The SGDMs were used to calculate the texture features. Shearer and Holmes (1990) reported a reduction of the 16 gray scale texture features through elimination of redundant variables, resulting in 11 texture features. Donohue et al. (2001) added two more texture features (i.e., image contrast and modus) to those used by Shearer and Holmes (1990). In this study, the combined 13 texture features proposed by Shearer and Holmes (1990) and Donohue et al. (2001) were used for citrus peel disease classification, and they included (1) uniformity, (2) mean intensity, (3) variance, (4) correlation, (5) product moment, (6) inverse difference, (7) entropy, (8) sum entropy, (9) difference entropy, (10) information correlation #1, (11) information correlation #2, (12) contrast, and (13) modus. The equations for calculating the 13 texture features can be found in Pydipati et al. (2006). The calculations were performed for each of the HSI components, giving 13 texture features per component and thereby a total of 39 texture statistics.
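A sketch of how the 39 statistics could be assembled in MATLAB is shown below; `sgdm0` refers to the co-occurrence sketch in Chapter 4, and `cooc_features` is a hypothetical helper that evaluates the 13 feature equations for one normalized matrix.

```matlab
% Assemble the 39-element feature vector: 13 texture features from each of
% the 6-bit hue, saturation, and intensity pixel maps (hue6, sat6, int6).
% cooc_features is a hypothetical helper implementing the 13 equations.
channels = {hue6, sat6, int6};
features = zeros(1, 39);
for c = 1:3
    P = sgdm0(double(channels{c}), 64);              % SGDM at 0 degrees, 1-pixel offset
    features((c-1)*13 + (1:13)) = cooc_features(P / sum(P(:)));
end
% features(1:13) = H1..H13, features(14:26) = S1..S13, features(27:39) = I1..I13
```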


The texture features were identified by a coded variable name, where the first letter represents whether it is a hue (H), saturation (S), or intensity (I) feature and the number following represents one of the thirteen texture features described above. The intensity texture feature equations are presented in Table 5-1. As an example, the feature $I_7$ is a measure of the entropy in the intensity CCM matrix, which represents the amount of order in an image and is calculated by equation (5-1):

$$I_7 = -\sum_{i=0}^{N_g-1}\sum_{j=0}^{N_g-1} p(i,j) \ln p(i,j)$$  (5-1)

The $p(i,j)$ matrix represents the normalized intensity co-occurrence matrix, and $N_g$ represents the total number of intensity levels. The equation for normalizing the co-occurrence matrix is given in equation (5-2), where $P(i,j,1,0^{\circ})$ is the intensity co-occurrence matrix:

$$p(i,j) = \frac{P(i,j,1,0^{\circ})}{\sum_{i=0}^{N_g-1}\sum_{j=0}^{N_g-1} P(i,j,1,0^{\circ})}$$  (5-2)

A physical representation of entropy (uncertainty) may be visualized by comparing a checkerboard-like image to an image where one half is black and the other half is white. The latter image is highly ordered, having all pixels of the same intensity segregated into two distinct pixel groups, which gives greater certainty of the value of adjacent pixels. The checkerboard image has a lower amount of order due to the intermixing of black and white squares, which results in a greater level of uncertainty of neighboring pixel values. The lower-order image would therefore have more uncertainty and thus a higher entropy measure.

Texture Feature Selection

After the texture statistics were obtained for each image, feature selection was conducted to reduce the redundancy in the texture feature set. The SAS procedure STEPDISC can reduce the size of the variable set and find the variables that are important for discriminating samples in different classes, and it was used for the texture feature selection. The stepwise discriminant analysis begins with no variables in the classification model. At each step of the process,


the variables within and outside the model are evaluated. The variable within the model that contributes least to the model and does not pass the test to stay is removed from the model. Likewise, the variable outside the model that contributes most to the model and passes the test to be admitted is added. A test significance level of 0.0001 for the SLS (test for a variable to stay) and SLE (test for a variable to enter) options in the STEPDISC procedure was chosen for the stepwise discrimination of the variable list (SAS, 2004). When no more steps can be taken, the number of variables in the model is reduced to its final form. Burks et al. (2000) had shown that classification performance was poor if only hue or saturation information was used in the classification models. Thus three color feature combinations, hue, saturation, and intensity (H, S, I), hue and saturation (H, S), and intensity (I) only, were used to perform the texture feature selections. These three color combinations have demonstrated high classification accuracies in applications for other plant discriminations (Burks et al., 2000; Pydipati et al., 2006).

Texture Classification

The classification models were developed using the SAS procedure DISCRIM, which creates a discriminant function based on a measure of the generalized squared distance between a specific test image texture variable input set and the class texture variable means, with an additional criterion being the posterior probability of the classification groups (Rao, 1973). Each sample in the testing set was placed in the class for which it had the smallest generalized squared distance between the test observation and the selected class, or the largest posterior probability of being in the selected class. The DISCRIM procedure utilized a likelihood ratio test for homogeneity of the within-group covariance matrices at a 0.1 test significance level. The 30 samples from each peel condition were divided into two datasets consisting of 20 samples for training and 10 samples for testing.
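The classification criterion described above can be illustrated with the following MATLAB sketch, which assigns each test vector to the class whose mean is nearest in generalized squared (Mahalanobis-type) distance computed with a pooled covariance matrix. Equal prior probabilities are assumed, and Xtrain, ytrain, and Xtest are placeholder variable names, not from the SAS implementation.

```matlab
% Sketch: classify test samples by the smallest generalized squared
% distance to each class mean, using a pooled within-class covariance.
classes = unique(ytrain);                   % numeric or categorical labels assumed
K  = numel(classes);
p  = size(Xtrain, 2);
mu = zeros(K, p);
Sp = zeros(p);
for k = 1:K
    Xk = Xtrain(ytrain == classes(k), :);
    mu(k, :) = mean(Xk, 1);
    Sp = Sp + (size(Xk, 1) - 1) * cov(Xk);
end
Sp = Sp / (size(Xtrain, 1) - K);            % pooled covariance matrix

ypred = classes(ones(size(Xtest, 1), 1));   % preallocate with a valid label
for n = 1:size(Xtest, 1)
    d  = mu - repmat(Xtest(n, :), K, 1);    % deviations from each class mean
    D2 = sum((d / Sp) .* d, 2);             % generalized squared distances
    [~, idx] = min(D2);
    ypred(n) = classes(idx);
end
```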


The samples were first arranged in ascending order of the time the images were acquired; the first two samples were selected for training and the third sample for testing. This approach minimizes negative time-dependent variability and reduces the potential for data selection bias between the training and test datasets. A training data set and a test data set were created for each of the subsets of the texture features selected by the stepwise discriminant analysis described above. The training sets were used to train the classification models and the testing sets were used to evaluate the accuracies of the different classification models.

Results and Discussion

Selection of Texture Features

The texture feature selection results are summarized in Table 5-2. Four classification models were developed using the texture feature sets selected from the three color combinations. The reduced feature sets shown in Table 5-2 were generated by the SAS STEPDISC procedure, and they are arranged in descending order of their importance for the classification models. The subscript numbers indicate the texture statistics as follows: (1) uniformity, (2) mean intensity, (3) variance, (4) correlation, (5) product moment, (6) inverse difference, (7) entropy, (8) sum entropy, (9) difference entropy, (10) information correlation #1, (11) information correlation #2, (12) contrast, and (13) modus. As an example, H9 represents the difference entropy of hue, and it was selected as the most important texture feature for the first two classification models developed using two different color combinations [(H, S, I) and (H, S)]. The classification models were named using the color features involved in the texture feature selections followed by the total number of selected texture features. For example, model HSI_13 consists of a reduced set of hue, saturation, and intensity texture features, and 13 texture features in total were used to construct the model.


As shown in Table 5-2, significant eliminations of redundant texture features were accomplished through the stepwise discriminant analysis. Nine and eleven texture features were selected for model HS_9 and model I_11, respectively. The simplification of the texture features largely reduces the computational burden due to redundant data, and it also helps improve the performance of the classification models. In addition to the three models described above, a classification model that used all 39 HSI texture features was developed for the purpose of comparison with the other models. Thus there are four classification models that were used to differentiate the citrus peel diseases, and they were independently evaluated for classification performance.

Classification of Citrus Peel Conditions

The SAS procedure DISCRIM was used to test the accuracies of the classification models. Table 5-3 summarizes the classification results for differentiating the citrus peel conditions using model HSI_13 listed in Table 5-2. As shown in Table 5-3, four peel conditions (normal, canker, copper burn, and wind scar) among the total of six conditions tested in this study were perfectly classified into the appropriate categories. For the other two conditions (greasy spot and melanose), there was one misclassified sample for each case. One greasy spot sample was misclassified as copper burn, and one melanose sample was misclassified as wind scar. The classification accuracies for greasy spot and melanose were 90.0%. In total, only two of the 60 samples in the testing set were misclassified, and the overall classification accuracy for model HSI_13 was 96.7%. The same procedures were applied to the other three classification models listed in Table 5-2, and the classification results, along with those from model HSI_13, are summarized in Table 5-4. Using nine selected hue and saturation texture features, model HS_9 provided classification accuracies of 90.0% for normal, canker, copper burn, greasy spot, and wind scar, and 70.0% for melanose. The average accuracy of model HS_9 was 86.7%. Model


64 I_11 used 11 selected intensity texture features alone. Although it achieved two perfect classification results (100 .0 %) for copper burn and greasy spot, the performances for the other four conditions were poor, especially for melanose (70 .0 %) and wind scar (50 .0 %). The overall accuracy of the model I_11 was 81.7 0 %, which is the worst among the four models tested in this stu dy. When all 39 texture features were used by model HSI_39, the classification accuracy was achieved as 88.3%, which was higher than those of the models HS_9 and I_11, but lower than that of the model HSI_1 3 (96 .7 %). Based on the results shown in Table 5 4 we could find that classification model using intensity texture features only (model I_11) gave the worst performance when compared to the other models. It is likely that the poor performance of the intensity texture features only is due to the variation s of the light intensity during the image acquisition. On the other hand, classification model that used hue and saturation texture features (model HS_9) outperforms the model that used intensity texture features only (model I_11). When intensity texture f eatures were added to hue and saturation features, the performance of the classification models was further improved. Texture feature selection is necessary for obtaining better classification accuracy, and this is confirmed by the fact that the model usin g 1 3 selected hue, saturation and intensity texture features (model HSI_1 3 ) achieved better accuracy than the one that using all 39 HSI texture features (model HSI_39). The model HSI_1 3 emerged as the best one among the four classification models tested in this study, suggesting that it would be best to use a reduced hue, saturation and intensity texture feature set to differentiate different citrus peel conditions. Stability Test of the Classification Model It is important to test high classification accuracy using various texture features because this classification results presented in above section is established statistically. Moreover, if such stability can be demonstrated by this test, this image analys is method and procedures will be


more useful for predicting citrus diseases. In the evaluation above, the image samples had a fixed order (i.e., one from every three samples arranged in ascending order of the time the images were acquired was assigned to testing), and the training and testing samples were separated using that order. To test the stability of the classification model, 20 training samples and 10 testing samples were instead randomly chosen from the 30 samples for each peel condition, and they were used to train and test model HSI_13, which gave the best classification performance, following the same procedures described earlier. Ten runs of training and testing were repeated. The average accuracy and standard deviation in Table 5-5 were 96.0% and 2.3%, respectively. These results indicate that the classification model developed using the 13 selected hue, saturation, and intensity texture features is robust in performance, and thus is able to classify new fruit samples according to their peel conditions.
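The repeated random splitting can be sketched in MATLAB as follows; `conditionIdx`, `X`, `y`, and `classifyPeel` are hypothetical placeholders (the last standing in for training and testing one classification model and returning its accuracy as a fraction).

```matlab
% Stability-test sketch: 10 runs with 20 training and 10 testing samples
% drawn at random from the 30 samples of each peel condition.
nRuns = 10;
acc   = zeros(nRuns, 1);
for run = 1:nRuns
    trainIdx = []; testIdx = [];
    for c = 1:numel(conditionIdx)           % conditionIdx{c}: 30 row indices of condition c
        perm = conditionIdx{c}(randperm(30));
        perm = perm(:);                      % force column orientation
        trainIdx = [trainIdx; perm(1:20)];   %#ok<AGROW>
        testIdx  = [testIdx;  perm(21:30)];  %#ok<AGROW>
    end
    acc(run) = classifyPeel(X(trainIdx,:), y(trainIdx), X(testIdx,:), y(testIdx));
end
fprintf('mean accuracy %.1f%%, std %.1f%%\n', 100*mean(acc), 100*std(acc));
```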


Summary and Conclusions

Color imaging coupled with texture feature analysis based on the color co-occurrence method provides a useful means for identifying common diseases on citrus fruit. A color imaging system was assembled to acquire RGB images from grapefruits with normal peel and five peel disorders including canker, copper burn, greasy spot, melanose, and wind scar. Small images covering the areas of interest on the fruit surface were extracted from the original RGB images, and they were then transformed to the hue, saturation, and intensity color representation. Spatial gray level dependence matrices were generated from the hue, saturation, and intensity images, and a total of 39 texture features were calculated for each image. Algorithms for selecting useful texture features were developed based on a stepwise discriminant analysis for three color combinations: hue, saturation, and intensity (HSI), hue and saturation (HS), and intensity (I). Classification models were constructed using the reduced texture feature sets through a discriminant function based on a measure of the generalized squared distance.

Significant eliminations of redundant texture features were accomplished through the stepwise discriminant analysis: 13, 9, and 11 texture features were selected for the color combinations of HSI, HS, and I, respectively. The simplification of the texture features largely reduces the computational burden, and it also helps improve the performance of the classification models. The classification model using intensity texture features only gave the worst accuracy (81.7%), and the model using the 13 selected HSI texture features achieved the best classification accuracy (96.7%) among the four classification models, including the one using all 39 HSI texture features. The results suggested that it would be best to use a reduced hue, saturation, and intensity texture feature set to differentiate the citrus peel conditions. A stability test for the classification model with the best performance was accomplished by 10 runs using randomly selected training and testing samples. The average classification accuracy and standard deviation in Table 5-5 were 96.0% and 2.3%, respectively, indicating that the classification model is robust for classifying new fruit samples according to their peel conditions.

This research demonstrated that color imaging and texture feature analysis could be used for differentiating citrus peel diseases under controlled laboratory lighting conditions. Future studies will explore the utility of these algorithms in outdoor conditions and develop pattern recognition methods such as the self-organizing map (SOM) or support vector machines (SVM) for real-time application. The most significant challenge will be created by the inherent variability of color under natural lighting conditions. By eliminating intensity-based texture features, this variability can be significantly reduced. However, hue and saturation can be somewhat influenced by low lighting conditions. This may point to the need to use cameras with light-availability color compensation, supplemental lighting, or night-time applications where lighting levels can be controlled.


67 Figure 5 1. Typical normal and abnormal citrus peel conditions. Figure 5 2 Color image system for acquiring RGB images from citrus samples


Figure 5-3. Procedures for color image analysis. Figure 5-4. Typical ROI images for normal and diseased citrus peel conditions (normal, canker, copper burn, greasy spot, melanose, and wind scar).


69 Figure 5 5. Nearest neighbor mask for calculating spatial gray level dependence matrices


Table 5-1. Intensity texture features.

Feature   Description                 Equation
I1        Uniformity (2nd moment)     $I_1 = \sum_i \sum_j [p(i,j)]^2$
I2        Mean intensity              $I_2 = \sum_i i\,p_x(i)$
I3        Variance                    $I_3 = \sum_i (i - I_2)^2\,p_x(i)$
I4        Correlation                 $I_4 = \left[\sum_i \sum_j i\,j\,p(i,j) - I_2^2\right] / I_3$
I5        Product moment              $I_5 = \sum_i \sum_j (i - I_2)(j - I_2)\,p(i,j)$
I6        Inverse difference          $I_6 = \sum_i \sum_j p(i,j) / \left[1 + (i-j)^2\right]$
I7        Entropy                     $I_7 = -\sum_i \sum_j p(i,j) \ln p(i,j)$
I8        Sum entropy                 $I_8 = -\sum_k p_{x+y}(k) \ln p_{x+y}(k)$
I9        Difference entropy          $I_9 = -\sum_k p_{x-y}(k) \ln p_{x-y}(k)$
I10       Information correlation 1   $I_{10} = (I_7 - HXY1)/HX$
I11       Information correlation 2   $I_{11} = [1 - \exp(-2(HXY2 - I_7))]^{1/2}$
I12       Contrast                    $I_{12} = \sum_n n^2 \left[\sum_i \sum_j p(i,j)\right]_{|i-j|=n}$
I13       Modus                       $I_{13} = \max_{i,j} p(i,j)$

where $HX = -\sum_i p_x(i)\ln p_x(i)$, $HXY1 = -\sum_i\sum_j p(i,j)\ln[p_x(i)p_x(j)]$, and $HXY2 = -\sum_i\sum_j p_x(i)p_x(j)\ln[p_x(i)p_x(j)]$ (see equations 4-10 to 4-25).


Table 5-2. Texture features selected by stepwise discriminant analysis.

Classification Model   Color Feature   Texture Feature Set
HSI_13                 H, S, I         H9, H10, I12, S7, I3, I2, S12, I11, I1, I8, S1, H2, H5
HS_9                   H, S            H9, H10, S7, H5, H11, S12, S11, H7, H13
I_11                   I               I2, I3, I5, I10, I6, I13, I8, I9, I1, I11, I7
HSI_39                 H, S, I         All 39 texture features (H1-H13, S1-S13, I1-I13)

Table 5-3. Classification results using model HSI_13 in Table 5-2.

Actual Peel    Classified Peel Condition                                             Accuracy
Condition      Normal   Canker   Copper Burn   Greasy Spot   Melanose   Wind Scar    (%)
Normal         10       0        0             0             0          0            100.0
Canker         0        10       0             0             0          0            100.0
Copper Burn    0        0        10            0             0          0            100.0
Greasy Spot    0        0        1             9             0          0            90.0
Melanose       0        0        0             0             9          1            90.0
Wind Scar      0        0        0             0             0          10           100.0
Total          10       10       11            9             9          11           96.7


Table 5-4. Classification results in percent correct for all models in Table 5-2.

Peel Condition         HSI_13   HS_9    I_11    HSI_39
Normal                 100.0    90.0    80.0    80.0
Canker                 100.0    90.0    90.0    100.0
Copper Burn            100.0    90.0    100.0   90.0
Greasy Spot            90.0     90.0    100.0   90.0
Melanose               90.0     70.0    70.0    70.0
Wind Scar              100.0    90.0    50.0    100.0
Overall Accuracy (%)   96.7     86.7    81.7    88.3

Table 5-5. Classification results for the randomly shuffled data models in percent correct.

Run    Canker   Copper   Greasy Spot   Normal   Melanose   Wind Scar   Total
1      100.0    90.0     100.0         100.0    100.0      100.0       98.33
2      100.0    100.0    100.0         80.0     90.0       100.0       95.00
3      100.0    100.0    100.0         100.0    90.0       100.0       98.33
4      80.0     100.0    100.0         90.0     90.0       100.0       93.33
5      100.0    100.0    90.0          100.0    90.0       90.0        95.00
6      100.0    100.0    100.0         100.0    100.0      100.0       100.00
7      100.0    90.0     100.0         100.0    100.0      90.0        96.67
8      100.0    100.0    100.0         90.0     100.0      80.0        95.00
9      100.0    90.0     100.0         90.0     90.0       100.0       95.00
10     80.0     100.0    100.0         100.0    90.0       90.0        93.33
Average Accuracy (%)                                                   96.50


73 CHAPTER 6 DETECTION OF CITRUS GREENING D ISEASE Introduction Huanglongbing (HLB), commonly known as citrus greening, is one of the most dangerous diseases that affect citrus production, and citrus greening has threatened to destroy an estimated 60 million trees in Africa and Asia (Ruangwong et al., 2006) C itrus greening was found in Miami Dade County Florida in August 2005. Florida citrus growers are fighting this disease w hic h has the potential to destroy the state's $9 billion commercial citrus industry (The American Phyto pathological Society, 2008). Citrus greening is a bacterial disease that affects the phloem system of citrus trees and causes the leaves of infected trees to become yellow, the trees to become unproductive, decline and possibly die within a few years. The bacterium is spread by an insect, the citrus psyllid. Citrus greening infects all types of citrus species, cultivars, and hybrids and some citrus relatives. The symptoms of citrus greening usually include blotchy, chlorotic mottling of leaves, yellow shoo ts, misshapen or lopsided small fruit that fail to color properly and stay green, hence the descriptive of the yellow sectors of infected trees (Gottwald et al., 2007) C urrently, there is no cure for citrus greening, but early detection of the disease and appropriate management of the insect vector should alleviate the severity of the greening disease and minimize its spread. To reach this goal, many image processing and computer vision technologies have been developed to achieve the automatic identification of disease symptoms. The design and implementation of these technologies will greatly aid in scouting for the disease, selective chemical application, reducing costs a nd thus leading to improved productivity and fruit quality.


74 The identification of various plants and crops using image processing techniques has been attempted by several researchers. Haralick et al (1973) used gray level co occurrence features to analyze remotely sensed images. They computed gray level co occurrence matrices for a pixel offset equal to one and with four directions(0, 45, 90, 135). For a seven class classification problem, they obtained approximately 80% classification accuracy using te xture features. Tang, L., et al (1999) developed a texture based weed classification method using Gabor wavelets and neural networks for real time selective herbicide application. The method comprised a low level Gabor wavelet based feature extraction algo rithm and a high level neural network based pattern recognition algorithm. The model was specifically developed to classify images into broadleaf and grass categories for real time herbicide application. Their analyses showed that the method is capable of performing texture based broadleaf and grass classification accurately with 100% classification accuracy. Burks et al. (2000) developed a method for classification of weed species using color texture features and discriminate analysis. The image analysis technique used for this method was the color co occurrence (CCM) method. The method had the ability to disc r iminate between multiple canopy species and was insensitive to leaf scale and orientation. The use of color features in the visible light spectrum p rovided additional image characteristic features over traditional gray scale representation. The CCM method involved three major mathematical processes: Transformations of an RGB color representation of an image to an equivalent HSI color representation. Generation of color co occurrence matrices from the HSI pixels Generation of texture features from the CCM matrices. CCM texture feature data models for six classes of ground cover (giant foxtails, crabgrass, velvet leaf, lambs quarter, ivy leaf morning gl ory, and soil) were developed and stepwise


75 discriminant analysis techniques were utilized to identify combinations of CCM texture feature variables, which have the highest classification accuracy with the least number of texture variables. A discriminant c lassifier was trained to identify weeds using the models generated. Classification tests were conducted with each model to determine their potential for classifying weed species. Overall classification accuracies above 93% were achieved when using hue and saturation features alone. A complete discussion of the CCM approach is found in Shearer and Holmes (1990) Pydipati et al. (2006) analyz ed detection in citrus leaves using machine vision. The image data of the leaves selected for disease monitoring was co llected. Then, a lgorithms based on image processing techniques for feature extraction and classification were designed. Manual feeding of datasets, in the form of digitized RGB color photographs was conducted for feature extraction and training the SAS sta tistical classifier. After training the SAS classifier, the test data sets were used to analyze the performance of accurate classification. The overall objective of this research was to develop a machine vision based method for detecting citrus greening on leaves This approach would use color texture features under controlled lighting in order to discriminate between greening and leaf conditions that are commonly confused with greening This preliminary approach used low level magnification to enhance f eatures. As a result, this would be conducted in a laboratory setting. Future studies would use field based detection. Specific objectives implemented to accomplish the overall objective were to: U se a digital color microscope system to collect RGB images from orange leaves with eight conditions (i.e., young flush normal mature blotch y mottle green island s zinc deficiency iron deficiency manganese deficiency and dead) D etermine image texture features based on the color co occurrence method (CCM) Create a set of reduced feature data models through a stepwise elimination process and classify different citrus leaf conditions


76 Compare the classification accuracies. Materials and Methods Citrus L eaf Samples The leaf samples used in this research were collected from two orange groves near Immokolee in southwest Florida, during summer and fall of 2008. Eight different classes of citrus leaves were selected for this study, and were graded manually in to classes by an expert extension agent. The leaf sample conditions were blotchy mottle, green islands, iron deficiency, manganese deficiency, zinc deficiency, young flush and normal mature. Images of leaf samples are shown in Fig. 6 1 The visual symptom ob served varied between leaf samples. Leaf samples were removed from t rees with petioles intact and then sealed in Ziploc bags to maintain the moisture level of the leaves. Sixty samples of each of the eight classes of leaves were collected. The sample s wer e brought to a laboratory The leaf samples were then sealed in new bags with appropriate labels and put in environmental control chambers maintained at 4 C T hey were removed from cold storage about 2 hours before imaging to allow them to reach room temperature. The leaf samples were then taken to an imaging station where images of the upper side of the leaf were acquired. Color Image Acquisition A Digital Microscope system (VHX 600K, Keyence, JAPAN) was used for acquiring RGB images from citrus leaf samples, as shown in Figure 6 2. The imaging system consisted of a halogen lamp (12V, 100W) a zoom lens ( C mount lens, OP 51479) a 2.11 million pixel CCD image sensor ( 1/1.8 inch) a 15 inch Color LCD monitor (TFT, 1600x1200, UXGA) and a computer insta lled with an image capture function and a hard disk drive unit ( image format: JPEG and TIFF, Storage capacity: 700MB ). The setup of the light source was designed to minimize s p ecular reflectance and shadow and to maximize the contrast of the images. The


height of the camera and its focus were adjusted to contain the whole leaf, centered on the main leaf vein. Automatic white balance calibration was conducted using the calibrated white balance function in this system before acquiring images from leaf samples. The digital color images were saved in uncompressed JPEG format (1200x1600, 8-bit).

Texture Analysis: Color Co-occurrence Methodology

The image analysis technique selected for this study was the CCM method. The use of color image features in the visible light spectrum provides additional image characteristic features over the traditional gray-scale representation. The CCM procedure consists of three primary mathematical processes. First, the RGB images of leaves are converted to a hue, saturation, and intensity (HSI) color space representation. Intensity is calculated as the mean value of the three RGB values. The hue and saturation values are determined using a geometrical transformation of the CIE chromaticity diagram (Ohta, 1985). In this process, the CIE chromaticity diagram represents a two-dimensional hue and saturation space (Wyszecki et al., 1992). The pixel RGB values determine the chromaticity coordinates in the hue and saturation space, which are then used to geometrically calculate the values of hue and saturation. This process has been documented by Shearer (1986).
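To make the first step of the CCM procedure concrete, the following MATLAB sketch converts an RGB leaf image into hue, saturation, and intensity planes, with intensity taken as the mean of the three channels as described above. The hue and saturation expressions are the common trigonometric HSI formulas rather than the exact geometric chromaticity-diagram construction of Shearer (1986), so this should be read as an illustrative approximation; the function name hsi_from_rgb is ours.

    function [H, S, I] = hsi_from_rgb(rgb)
    % Convert an RGB image (uint8 or double) to HSI planes scaled to [0,1].
    % Intensity is the mean of R, G and B, as used in the CCM procedure.
    rgb = im2double(rgb);
    R = rgb(:,:,1); G = rgb(:,:,2); B = rgb(:,:,3);

    I = (R + G + B) / 3;                        % intensity = mean of RGB

    minRGB = min(min(R, G), B);
    S = 1 - minRGB ./ max(I, eps);              % saturation (zero for gray pixels)

    num   = 0.5 * ((R - G) + (R - B));
    den   = sqrt((R - G).^2 + (R - B).*(G - B)) + eps;
    theta = acos(min(max(num ./ den, -1), 1));  % angle in [0, pi]
    H = theta;
    H(B > G) = 2*pi - H(B > G);                 % wrap hue to [0, 2*pi)
    H = H / (2*pi);                             % normalize hue to [0,1]
    end

Each plane can then be requantized (for example to 64 levels) before the co-occurrence matrices are built.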


Each pixel map was used to generate a color co-occurrence matrix after the HSI image was completed, resulting in three CCM matrices, that is, one CCM matrix for each of the HSI pixel maps. The color co-occurrence texture analysis method was developed through the use of spatial gray-level dependence matrices (SGDMs). The gray-level co-occurrence methodology is a statistical method to describe shape by statistically sampling the way certain gray levels occur in relation to other gray levels. Shearer and Holmes (1990) explained that these matrices measure the probability that a pixel at one particular gray level will occur at a distinct distance and orientation from any pixel, given that pixel has a second particular gray level. For a position operator p, we can define a matrix P'(i,j) that counts the number of times a pixel with gray level i occurs at position p from a pixel with gray level j. For example, if we have four distinct gray levels 0, 1, 2 and 3, an example matrix is shown in Figure 6-4(a), where i is the row indicator and j is the column indicator in the SGDM matrix. If we normalize the matrix P by the total number of pixel pairs so that each entry is between 0 and 1, we get a gray-level co-occurrence matrix. The SGDMs are represented by the function P(i, j, d, theta), where i represents the gray level of location (x,y) in the image I(x,y), and j represents the gray level of the pixel at a distance d and an orientation angle theta from location (x,y). The nearest neighbor mask is shown in Figure 6-3, where the reference pixel (x,y) is shown as an asterisk. All eight neighbors shown are one pixel distant from the reference pixel and are numbered in a clockwise direction from one to eight. The neighbors at positions one and five are both considered to be at an orientation angle equal to zero degrees, while positions eight and four are considered to be at an angle of 45 degrees. An example image matrix I(x,y) with a gray-scale range from zero to three is shown in Figure 6-4(b). The hue, saturation and intensity CCM matrices are then used to generate the texture features described by Haralick and Shanmugam (1974). Shearer and Holmes (1990) reported a reduction of the 16 gray-scale texture features through elimination of redundant variables; the resulting 11 texture feature equations are defined by Shearer and Holmes (1990). Donohue et al. (1985) added image contrast and modus texture features to those used by Ohta (1985), for a total of thirteen features when classifying cancer tissue. The same equations are used for each of the three CCM matrices, producing 13 texture features for each HSI component and thereby a total of 39 CCM texture statistics.
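A short MATLAB sketch of the SGDM generation step is given below. It builds the 0-degree, one-pixel-offset co-occurrence matrix for one quantized HSI plane using the Image Processing Toolbox function graycomatrix; the 64-level quantization and the symmetric pair counting are assumptions made for illustration, not a statement of the exact settings used in this work.

    % Sketch: 0-degree, one-pixel-offset SGDM for a single HSI plane.
    % huePlane is assumed to be a double image scaled to [0,1].
    numLevels = 64;                                % assumed quantization level
    offset    = [0 1];                             % 0-degree angle, one-pixel distance

    sgdmHue = graycomatrix(huePlane, ...
        'GrayLimits', [0 1], ...
        'NumLevels',  numLevels, ...
        'Offset',     offset, ...
        'Symmetric',  true);                       % count pairs in both directions

    % Normalize so the entries form the probability matrix p(i,j) used by the
    % texture feature equations (each entry between 0 and 1, summing to 1).
    pHue = sgdmHue / sum(sgdmHue(:));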


The texture features are identified by a coded variable name, where the first letter represents whether it is a hue (H), saturation (S) or intensity (I) feature and the number following represents one of the thirteen texture features described in Shearer (1990). Intensity texture feature equations are presented in Table 6-1. As an example, the feature I7 is a measure of the entropy in the intensity CCM matrix, which represents the amount of order in an image and is calculated by equation 6-1:

    I_7 = -\sum_{i=1}^{N_g} \sum_{j=1}^{N_g} p(i,j) \, \ln p(i,j)      (6-1)

The p(i,j) matrix represents the normalized intensity co-occurrence matrix and N_g represents the total number of intensity levels. The equation for normalizing the co-occurrence matrix is given in equation 6-2, where P(i,j,1,0) is the intensity co-occurrence matrix:

    p(i,j) = \frac{P(i,j,1,0)}{\sum_{i=1}^{N_g} \sum_{j=1}^{N_g} P(i,j,1,0)}      (6-2)

A physical representation of entropy (uncertainty) may be visualized by comparing a checkerboard-like image to an image where one half is black and the other half is white. The latter image is highly ordered, having all pixels of the same intensity segregated into two distinct pixel groups, which gives greater certainty of the pixel value of the adjacent pixels. The checkerboard image has a lower amount of order due to the intermixing of black and white squares, which results in a greater level of uncertainty of neighboring pixel values. The lower-order image would therefore have more uncertainty and thus a higher entropy measure.

Features Extraction

Sixty images were taken of the top surface of each leaf class, centered on the mid-vein. Digital images were stored in uncompressed JPEG format. The three classification models discussed previously were treated as separate classification problems.
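As a worked illustration of equations 6-1 and 6-2, the following MATLAB fragment normalizes an intensity co-occurrence matrix and computes the entropy feature I7. The variable name sgdmInt is assumed to hold the raw intensity SGDM P(i,j,1,0) produced in the previous step.

    % Normalize the intensity co-occurrence matrix (equation 6-2).
    pInt = sgdmInt / sum(sgdmInt(:));

    % Entropy feature I7 (equation 6-1); zero entries contribute nothing,
    % so they are excluded to avoid log(0).
    nz = pInt(pInt > 0);
    I7 = -sum(nz .* log(nz));

The remaining twelve features of Table 6-1 are computed from the same normalized matrix in an analogous way.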


The 60 images from each class were divided into two datasets consisting of 30 samples for training and 30 samples for testing. The samples were first arranged in ascending order by the time the images were acquired. This approach minimizes negative time-dependent variability and reduces the potential for data selection bias between the training and test datasets. A detailed illustration of the image acquisition and classification process is given in Figure 6-5. Algorithms for image segmentation and texture feature generation were developed in MATLAB. In the initial step, the RGB images of all leaf samples were obtained. To reduce the computational burden with minimal loss of texture feature quality, the image resolution was reduced from 1600x1200 pixels to 800x600 pixels, and the reduced images were then converted from eight-bit to six-bit per channel RGB format. The subsequent steps were repeated for each image in the dataset. After the images were reduced, edge detection of the leaf was completed on each image of the leaf sample using a MATLAB program; Figure 6-6 illustrates the detailed edge detection process. First, each RGB image was converted to a gray image and then to a binary image. Next, the edge of the binary image was detected using the commands 'imerode' and 'imdilate' in MATLAB. Once the edge detection was finished, the image was scanned from left to right for each row in the pixel map, and the area outside the leaf was zeroed to remove any background noise. In the next step, the images were converted from RGB format to HSI format. The spatial gray-level dependence matrices (SGDMs) were then generated for each color pixel map of the image, one each for hue, saturation and intensity. It was decided during preliminary testing that the experiment would use the 0-degree CCM orientation angle and a one-pixel offset; the smaller the offset, the finer the texture measured, so a one-pixel offset is the finest texture measure. From the SGDM matrices, the 39 CCM texture statistics described earlier were generated for each image using the three color feature co-occurrence matrices, as each SGDM matrix provided 13 texture features.
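A minimal MATLAB sketch of the leaf segmentation step described above is shown below. It converts the reduced RGB image to a binary mask, cleans the mask with 'imerode' and 'imdilate', and zeroes the background. The threshold choice and the structuring-element size are illustrative assumptions, not the exact values used in this study, and the mask may need complementing depending on the background brightness.

    % Sketch: isolate the leaf and zero the background before the HSI conversion.
    rgbSmall = imresize(rgbImage, [600 800]);      % reduced resolution, as in the text

    gray = rgb2gray(rgbSmall);
    bw   = im2bw(gray, graythresh(gray));          % global Otsu threshold (assumed)

    se = strel('disk', 3);                         % assumed structuring element
    bw = imdilate(imerode(bw, se), se);            % morphological cleanup of the mask
    bw = imfill(bw, 'holes');                      % keep the full leaf region

    leafOnly = rgbSmall;
    leafOnly(~repmat(bw, [1 1 3])) = 0;            % zero everything outside the leaf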


A more complete description of this technique can be found in Haralick and Shanmugam (1974) and Shearer and Holmes (1990).

Statistical Analysis

Once the texture statistics were generated for each image, SAS statistical analyses were conducted using procedure STEPDISC to reduce redundancy in the texture feature set. The training image dataset was used for the variable reduction analysis. SAS offers procedures for reducing variable set size and for discriminating between classes (SAS, 1985). PROC STEPDISC is used to reduce the number of texture features by a stepwise selection process. The stepwise selection procedure begins with no variables in the classification model (SAS, 1985). At each step of the process, the variables within and outside the model are evaluated. The variable within the model, at that particular step, that contributes least to the model, as determined by the Wilks' lambda method, is removed from the model. Likewise, the variable outside the model that contributes most to the model is added. When no more steps can be taken, the number of variables in the model is reduced to its final form. Based on these analyses, several data models were created, which are shown in Table 6-2: model HSI_18 consisted of all conditions, model HSI_15 consisted of all conditions except young flush, and model HSI_14 consisted of all conditions except blotchy mottle and young flush leaves. In the PROC DISCRIM procedure, a discriminant function is established using a measure of the generalized squared distance between the image texture variables and the class texture variable means, and the posterior probability determines the classification. The classification criterion may be affected by the pooled covariance matrix of the training set texture variables and the prior probabilities of the classification groups.
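The SAS STEPDISC/DISCRIM steps have a rough MATLAB counterpart, sketched below with the Statistics Toolbox functions sequentialfs and classify. This is offered only as an analogy for readers without SAS: the selection here is driven by cross-validated misclassification rather than Wilks' lambda, so the selected subsets will not match Table 6-2 exactly, and the variable names are illustrative.

    % X: n-by-39 matrix of CCM texture features; y: n-by-1 numeric class labels.
    cv = cvpartition(y, 'kfold', 5);               % stratified 5-fold CV (assumed)

    % Criterion: misclassification count of a linear discriminant classifier.
    critfun = @(Xtr, ytr, Xte, yte) ...
        sum(classify(Xte, Xtr, ytr, 'linear') ~= yte);

    keep = sequentialfs(critfun, X, y, 'cv', cv, 'direction', 'forward');
    Xred = X(:, keep);                             % reduced texture feature set

    % trainIdx/testIdx: logical indices of the 30/30 split (assumed given).
    predicted = classify(Xred(testIdx,:), Xred(trainIdx,:), y(trainIdx), 'linear');
    accuracy  = mean(predicted == y(testIdx));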


82 Result Classifications of c itrus disease conditions The texture feature dataset was generated by containing 39 texture features for each image. The dataset had 420 rows each, representing 6 0 samples from each of the seven classes of leaves Each row had 39 columns representing the 39 texture features extracted for a particular sample image. Each row had a unique number (1, 2, 3, 4, 5, 6 or 7 ) representing which class the blotchy mottle leaves 2 green island leaves 3 leaves deficient in iron 4 d leaves deficient in manganese 5 young flush leaves, 6 normal mature leaves and '7' represented leaves deficient in zinc To compare classification accuracies under various disease conditions, three models were created which are shown in Table 6 2. These models represent the compilation of three different leaf conditions sets, which isolate leaf conditions that are difficult to discriminate. Table 6 3 shows four different models which have all leaf conditions except young flus h, but have various combinations of color texture features. This set of models was selected to isolate crucial color texture features which can lead to more efficient feature generation. The training and test ing sets for each model mentioned in T able s 2 an d 3 were obtained by selecting either, intensity, hue and saturation or all three HSI features from the total 39 texture features in the original data files. Once several data models were formed, SAS procedure STEPDISC was used to reduce the number of te xture features included in the models. As can be seen in T able 6 2 significant elimination of redundant variables was accomplished. For instance, HSI_18 model had 39 texture features in the unreduced form, and was reduced to 18 features through the stepwi se linear reduction process. The simplification of the data model serves several important purposes : 1) it reduces the computational burden of the redundant features, 2) it tends to improve the


83 performance of classification algorithms, and 3) it reduces me mory and storage demands. The most significant variable reduction was found in I_8 model, which were reduced from 3 9 to 8 texture features after using STEPDISC. SAS procedure DISCRIM was used to test the various data model classification accuracies. Each of the models was trained and tested using the appropriate image data set. The classification results were recorded on an individual disease category basis using the SAS procedure output listing. The results shown in T able 6 4 are the classification summar y from the HSI_18, HSI_15 and HSI_14 given in Table 6 2. As previously indicated, the test data consisted of 3 0 images from each category. The overall performance of HSI_18 model was 86.67 % which is the lowest accuracy among the three models shown in Table 6 2. HSI_15 and HSI_11 models had high classification accuracies (95.60% and 97.33%) Based on the results shown in Table 6 5 the classification model using only intensity texture featur es presented the worst performance at 81.11 % for the I_11 model. When compared with other models, HS_10 model had 87.78%, HSI_15 model had 95.60% and HSI_39 model had 95.60%. Therefore, other models provided better performance than the model that used only intensity texture features. In Table 6 5, the highest overall performance was 95.60% for HSI_15 and HSI_39 This demonstrates that significant classification improvement occurs when intensity features are used, and there is no loss in accuracy when using the reduced HSI data set or the unreduced data set. As shown in T able 6 6 most images were correctly classified into the appropriate category ; however, young flush leaves had a very low classification at 23% The negative influence of young flush leaves was further demonstrated in the results from Table 6 4 where the classification accuracy is 86.67 % while other leaf condition models that exclude young flush leaves have accuracy above 95%. Table 6 7 show improved classification accuracy of 95.56% and thus proved that young


84 flush leaves affected overall performance result. Table 6 8 demonstrates that HSI_14 model was the best accuracy (97.33%) in three leaf condition models in Table 6 2. However, HSI_14 model excluded citrus greening blotchy mottle, and thu s ignored the most important greening identifier. Moreover, there was no significant difference in the classification results between HSI_15 and HSI_14 models. These effec ts can be seen in confusion matrices, where a model exhibits the classification betwe en positive vs. negative greening symptoms. The Confusion Matrix for Greening Positive vs. Negative A consistent analysis of classifier behavior can be provided by the semi global performance matrix, known as Confusion Matrix. This matrix provides a quan titative performance representation for each classifier in terms of class recognition. One benefit of a confusion matrix is that it is easy to see if the system is confusing two classes, citrus greening symptom and non greening symptoms The classification results for the confusion matrix obtained under the positive vs. negative greening model are shown in T able s 6 9, 6 10 and 6 11 In general the HSI _18 model had the lowest classification accurac y among all symptom models. However, in the confusion matrix shown in Table 6 9, the discrimination of citrus greening symptoms had high success rate (96.7%). On the other hand, the accuracy for greening was only 82.67%, giving an overall accuracy of 86.67%. In the HSI_15 model the young flush lea ves were removed and a 95.6% overall classification accurac y was achieved. This model also has good classification accuracies between positive (91.67%) and negative (97.50%) as shown in Table 6 10. Model HSI_14 excluded young flush leaves and blotchy mott le, and achieved an increase in classification performance when compared with the HSI_18 model (97.3% versus 86.7%) In the confusion matrix shown in Table 6 11, HSI_14 used all disease conditions except young flush and blotchy mottle and had the same posi tive greening accuracy as HSI_18 model which used all disease condition. However, the overall accuracy was much higher 97.3%. When


85 comparing each model, i t is likely that the similarity between young flush leaves and other conditions affected the detection accuracy of citrus greening disease Stability Test of the Greening Classification Model From the results stated above, leaf condition models were evaluated to determine which scenario would perform the best in distinguishing greening symptoms. It was al so important to evaluate various texture feature combinations to determine which would provide high classification accuracy and demonstrate model stability, under varying training and testing conditions. The classification results presented above were obta ined using test samples selected in a fixed order In order to test the stability of the classification model, 3 0 training samples and 3 0 testing samples were randomly chosen from the 6 0 samples for each condition They were used to train and test a selected model using the same procedures described earlier. Ten runs were repeated for training and testing. In this research, stability tests were provided for the HSI_15 model, since this model had demonstrated good performance on the leaf conditions of most significant interest. The average value shown in Table 6 11 was 94.06 %. These result s demonstrated that the classification model excluding young flush leaves, was robust under varying leaf sample conditions, and therefore should provide a viable clas sification of greening conditions. Summary and Conclusions Data analysis based on the color co occurrence method is useful for detection of citrus greening disease. A color imaging system was selected to obtain RGB images from citrus leaves consisting of two normal leaf conditions, young flush and mature. In addition, five leaf conditions including greening blotchy mottle green islands, manganese deficiency, iron deficiency, and zinc deficiency were collected I mages of the leaf surface were extracted fro m the original RGB images, and then converted into hue, saturation, and intensity (HSI) color space


86 representation. Each HSI image was used to generate s patial gray level dependence matrices. Once SGDMs were generated, a total of 39 image texture features were obtained from each citrus leaf sample. Algorithms for selecting useful texture features were developed based on a stepwise discriminant analysis for three disease combinations including all conditions all conditions excluding young flush and conditi ons excluding blotchy mottle and young flush T hrough a discriminant function based on a measure of the generalized squared distance c lassification models were constructed using the reduced texture feature sets. Beneficial elimination of redundant texture features were accomplished through the stepwise discriminant analysis. Various texture features models were selected from the color combinations of HSI. The elimination of redundant texture features significantly reduces t he computation burden, and it also helps improve the performance of classification models. The classification model excluding blotchy mottle and young flush (HSI_14) gave the best accuracy ( 97.33 %), while HSI_18 model achieved the worst classification accu racy ( 86.67 %). When excluding only young flush condition, the classification had high accuracy of 95.60%. T he results suggested that young flush samples collected in fall created confusion between normal mature leaves This fact also can be seen in the con fusion matrix accuracies in Table 6 9 and table 6 10. HSI_18 model had the lowest classification accuracy, but the success rate of positive for greening disease was 96.67%. It was the same or higher than others. A stability test for the classification mode l with the best performance was accomplished by 10 runs using randomly selected training and testing samples. Average classification accuracy was 94.06% indicating that the classification model is robust for classifying new citrus leaf samples according t o their conditions For further study of the influence of young flush, it is suggested that a new model


87 consisting of a mixed data set of young flush and mature normal leaves should be evaluated to see how it compares to the model which excluded young flu sh leaves This research demonstrated that color imaging and texture feature analysis could be used at low magnification for differentiating citrus greening symptoms from other leaf conditions Future studies will explore the utility of these algorithms in outdoor conditions
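A sketch of the stability test of the preceding section (ten random 30/30 splits per condition, averaged) is given below in MATLAB. The Statistics Toolbox function classify stands in for the SAS DISCRIM step, and the variable names are illustrative.

    % feats:  360-by-m matrix of reduced texture features (6 conditions x 60 leaves)
    % labels: 360-by-1 numeric class labels (1..6)
    nRuns = 10;
    acc = zeros(nRuns, 1);

    for r = 1:nRuns
        trainIdx = false(size(labels));
        for c = 1:6                              % each leaf condition
            idx  = find(labels == c);
            pick = idx(randperm(numel(idx)));
            trainIdx(pick(1:30)) = true;         % 30 random training samples per class
        end
        testIdx = ~trainIdx;

        pred   = classify(feats(testIdx,:), feats(trainIdx,:), labels(trainIdx), 'linear');
        acc(r) = mean(pred == labels(testIdx));
    end

    fprintf('Average accuracy over %d runs: %.2f%%\n', nRuns, 100*mean(acc));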


88 Figure 6 1. Citrus Leaf C onditions. Figure 6 2 Digital microscope system for acquiring RGB images from citrus leaf samples


Figure 6-3. Nearest neighbor mask for calculating spatial gray level dependence matrices.

(a) P(i,j,1,0) =
    2 1 1 0
    2 2 3 2
    2 3 2 2
    0 1 1 0

(b) I(x,y) =
    0 0 2 1
    3 1 0 0
    3 2 1 2
    0 3 1 3

Figure 6-4. Gray level dependence example: (a) SGDM for different orientations, (b) gray level image.


90 Figure 6 5 Procedures for color image analysis. Figure 6 6. Procedures for leaf edge detection


Table 6-1. Intensity texture features.

Feature   Description
I1        Uniformity (2nd Moment)
I2        Mean Intensity
I3        Variance
I4        Correlation
I5        Product Moment
I6        Inverse Difference
I7        Entropy
I8        Sum Entropy
I9        Difference Entropy
I10       Information Correlation 1
I11       Information Correlation 2 (HX, HXY1, HXY2)
I12       Contrast
I13       Modus

The defining equations for these features are given by Shearer and Holmes (1990).


92 Table 6 2.Texture feature models selected by stepwise discriminant analysis for fall season Classification Condition Classification Model 1 Color Feature 2 Texture Feature Set 3 All disease condition HS I 18 H, S I S 4 I 2 H 7 S 13 H 2 H 9 S 5 I 7 S 7, I 9, S 8, I 1, I 10, H 4, I 6, S 6, H 8, I 13 All conditions except young flush HSI 15 H, S, I S 5 I 2 H 7, H 2 S 6 S 4 H 9 S 8 I 6, S 13, H 4, I 4, I 13, S 7, I 7 All conditions except blotch mottle and young flush HSI_ 14 H, S, I S 5 I 2 H 7, H 2 S 4 H 9 S 13 S 7, I 7, I 1, I 9, S 8, I 10, I 6 1. Classification model designation based color features in model and the total number of variable selected by STEPDISC. 2. Color texture features included in initial data set prior to reduction. 13 variables for color texture feature set. 3. Find texture features selected, given in order of discriminant power. Table 6 3. Texture feature models to all conditions except young flush for fall season Classification Model 1 Color Feature 2 Texture Feature Set 3 HSI 15 H, S, I S 5 I 2 H 7, H 2 S 6 S 4 H 9 S 8 I 6, S 13, H 4, I 4, I 13, S 7, I 7 HS_ 10 H, S S 5 H 7 H 5 H 12 S 4 S 7 H 8 S 8 H 3, S 11 I_ 8 I I 2 I 8 I 9 I 6 I 5 I 7 I 10 I 1 HSI_39 H, S, I All 39 texture features (H 1 H 13 S 1 S 13 I 1 I 13 ) 1. Classification model designation based color features in model and the total number of variable selected by STEPDISC. 2. Color texture features included in initial data set prior to reduction. 13 variables for color texture feature set. 3. Find texture features selected, given in order of discriminant power.


93 Table 6 4. Classification summary in percent co rrect for all models in t able 6 2 Disease Condition Classification Model HSI_18 HSI_ 15 HSI _11 Blotchy mottle 96.67 90.00 Islands 96.67 93.33 96.67 Iron deficiency 90.00 100.00 90.00 MN deficiency 100.00 96.67 100.00 Zinc deficiency 100.00 100.00 100.00 Normal 100.00 93.33 100.00 Young flush 23.33 Overall Accuracy (%) 86.67 95.60 97.33 Table 6 5. Classification summary in percent correct for all models in t able 6 2 Disease Condition Classification Model HSI_ 15 HS_ 10 I_11 HSI_39 Blotchy mottle 90.00 70.00 70.00 96.67 Islands 93.33 76.67 73.33 93.33 Iron deficiency 100.00 100.00 86.67 93.33 MN deficiency 96.67 93.33 93.33 96.67 Zinc deficiency 100.00 100.00 83.33 96.67 Normal 93.33 86.67 80.00 96.67 Overall Accuracy (%) 95.60 87.78 81. 11 95.60


94 Table 6 6. Classification result in percent correct for HSI_18 model in t able 6 2 Actual Leaf Condition Classified leaf Condition Blotch y mottle Island s Zinc deficiency Iron deficiency MN deficiency Normal Young flush Accuracy (%) Blotch y mottle 29 0 0 0 0 1 0 96.67 Island s 0 29 1 0 0 0 0 96.67 Zinc deficiency 0 0 30 0 0 0 0 100.00 Iron deficiency 0 0 1 27 0 0 2 90.00 MN deficiency 0 0 0 0 30 0 0 100.00 Normal 0 0 0 0 0 30 0 100.00 Young flush 4 0 0 3 1 15 7 23.33 Total 33 29 32 27 31 46 9 86.67


95 Table 6 7. Classification result in percent correct for HSI_15 model in t able 6 2 Actual Leaf Condition Classified leaf Condition Blotch y mottle Island s Zinc deficiency Iron deficiency MN deficiency Normal Accuracy (%) Blotch y mottle 27 0 0 0 0 3 90.00 Island s 0 28 1 1 0 0 93.33 Zinc deficiency 0 0 30 0 0 0 100.00 Iron deficiency 0 0 0 30 0 0 100.00 MN deficiency 1 0 0 0 29 0 96.67 Normal 2 0 0 0 0 28 93.33 Total 30 28 31 31 29 31 95.56


96 Table 6 8. Classification result in percent correct for HSI_14 model in t able 6 2 Actual Leaf Condition Classified leaf Condition Island s Zinc deficiency Iron deficiency MN deficiency Normal Mature Accuracy (%) Island s 29 1 0 0 0 96.67 Zinc deficiency 0 30 0 0 0 100.00 Iron deficiency 0 3 27 0 0 90.00 MN deficiency 0 0 0 30 0 100.00 Normal Mature 0 0 0 0 30 100.00 Total 29 34 27 30 30 97.33 Table 6 9 Confusion m atrix in percent correct for HSI_18 model in t able 6 2 Prediction outcome Positive for greening (Blotch mottle Island s ) Negative for greening ( Young flush, Normal MN, IR, ZN) Total Actual value True (success rate) 58 / 6 0 ( 96.67 %) 124 / 15 0 ( 82.67 %) 182 / 21 0 ( 86.67 %) False (fail rate) 26 / 15 0 ( 17.33 %) 2 / 6 0 ( 3.33 %) 28 / 18 0 ( 13.33 %)


97 Table 6 10 Confusion m atrix in percent correct for HSI_15 model in t able 6 2 Prediction outcome Positive for greening (Blotch mottle Island s ) Negative for greening ( Normal MN, IR, ZN) Total Actual value True (success rate) 55 / 6 0 ( 91.67 %) 117 / 12 0 ( 97.50 %) 172 / 18 0 ( 95.56 %) False (fail rate) 3 / 12 0 ( 2.50 %) 5 / 6 0 ( 8.33 %) 8 / 18 0 ( 4.44 %) Table 6 11 Confusion m atrix in percent correct for HSI_14 model in t able 6 2 Prediction outcome Positive for greening (Island s ) Negative for greening ( Normal MN, IR, ZN) Total Actual value True (success rate) 29 / 3 0 ( 96.67 %) 117 / 12 0 ( 97.5 %) 146 / 150 ( 97.33 %) False (fail rate) 3/12 0 ( 2.50 %) 1 / 3 0 ( 3.33 %) 4 / 15 0 ( 2.67 %)


98 Table 6 12. Classification results for shuffle data about HSI_1 5 model in percent correct Number of random data Blotch y mottle (%) Island s (%) Normal (%) MN (%) Zinc (%) Iron (%) Total (%) 1 83.33 83.33 93.33 100.00 86.67 100.00 91.11 2 90.00 96.67 96.67 93.33 96.67 100.00 95.56 3 73.33 100.00 96.67 96.67 90.00 100.00 92.78 4 83.33 90.00 96.67 86.67 96.67 90.00 90.56 5 83.33 100.00 96.67 100.00 100.00 100.00 96.67 6 8 3.33 96.67 96.67 100.00 100.00 90.00 94.44 7 93.33 96.67 96.67 96.67 90.00 96.67 95.00 8 80.00 83.33 100.00 96.67 90.00 93.33 90.56 9 9 0 .00 100.00 83.33 100.00 93.33 100.00 94.44 10 90 .67 100.00 96.67 96.67 96.67 100.00 97.78 Average Accuracy (%) 86.67 94.67 95.34 96.67 94.00 97.00 94.06


99 CHAPTER 7 DETECTION OF THE DIS EASE USING PATTERN R ECOGNITION METHODS Introduction Early detection of citrus diseases is important for citrus fruit production and quality. In Florida the citrus groves are under attack from two major diseases, namely citrus canker and greening. Citrus greening diseases affect fruit quality. The negative impacts reduce the profit ability of the citrus industry, and threaten the agricultural economy of the state of Florida. Hence, early detection of the disease can reduce its spread and minimize losses for citrus growers. For reducing labor cost and improving detection accuracy, pattern recognition method have demonstrated success in a variety of r esearch areas such as aerospace, defense, medical, neurobiology, and linguistics etc. The artificial intelligence is an approach that mimics humans intelligent to build learning ability, reasoning, and perception. The research about artificial intelligence has developed into a discipline known as intelligent system. In agricultural engineering, many researchers have applied pattern recognition methods to agricultural management. B. Park et al (2007) detected fecal contamination in the visceral cavity of br oiler carcasses using a pattern recognition method. The method comprised fisher linear discriminant analysis. Images of poultry carcasses were collected using hyperspectral imaging processing. Their analysis showed that the method is capable of detecting f ecal contamination on the surface of broiler carcasses with 98.9 % classification accuracy. Xuemei Cheng et al (2003) developed an approach for fruit and vegetable defect inspection. The hyperspectral imaging classification techniques used for this study were the p rincipal c omponent a nalysis (PCA) and f l inear d iscriminant (FLD) method These methods had the ability to maximize the representation and classification effects on the extracted new feature bands. The use of hyperspectral image features provided high dimension feature


100 space and reflection properties. PCA were EMPLOYED for reducing the hyperspectral dimension. FLD techniques were utilized to classify wholesome and unwholesome objects. Overall classification accuracy using only FLD soluti on was 90% but the combined PCA FLD solution had the accuracy of 93.3%. When the PCA and FLD methods were integrated, classification accuracy was better. H. Zhang et al (2007) analyzed fungal infected wheat kernels using support vector machine (SVM). The image data of wheat kernels was collected by a near infrared refectance hyper spectral imaging system. Then, after four features were extracted from input images, algorithms based on principal component analysis (PCA) technique for reducing the dimensionality of pattern vectors were designed. The NIR hyperspectral image datasets was used for the SVM classifier. After classifying the datasets, the overall cla ssification accuracy was 94.8%, with 531 kernels correctly classified and 29 kernels not. Pydipati et al (2005) developed detection in citrus leaves using statistical and neural network classifiers. He used co occurrence matrices to extract texture feature s from HSI and the SAS classifier was used to train HSI feature dataset. After reducing image datasets, the classification results using neural network method and SAS classifier was compared. SAS classifier achieved an accuracy above 95% for all classes, w hile neural network algorithms achieved the accuracy of above 90% for all classes. There are various approaches in pattern recognition. In this study, three pattern recognition approaches were used for finding an optimal pattern recognition approach. Linear models for classification : Fisher's linear discriminant analysis method Neural Networks for classification : Back propagation based on neural network method Nonlinear Kernel for classification : Support Vector Machine (SVM)


101 The objec tive of this stud y is to find a pattern recognition method for detection of citrus diseases. For preliminary study, image analysis technique s based on color co occurrence method will be developed and three pattern recognition methods will be compared for best disease image classification I mages were acquired under controlled lighting conditions and low level magnification to enhance features. Specific objectives were to: Collect two image data sets of citrus canker and citrus greening diseases. Evaluate the color co occurr ence method for disease detection Develop various pattern recognition algorithms for classification of the citrus disease conditions based on the features obtained from the color co occurrence method. Compare the classification accuracies of the various p attern recognition classifiers. Materials and Methods Citrus Canker and Greening Samples In this study, two different disease scenarios were evaluated. First Citrus canker samples were collected from a grapefruit grove near Punta Gorda, Florida, during the harvest season in spring of 2007. The grapefruits were infected with six peel conditions including canker, copper burn, greasy spot, melanose, wind scar and normal Figure 7 1 shows r epresentative images for each peel condition. Thirty samples for each c ondition were selected, hence a total of 180 grapefruits were tested in this study. In the second study, leaf samples were collected at two orange groves near Immokolee in Southwest Florida, t he leaf samples used in this research were collected during sum mer and fall of 2008 Eight different classes of citrus leaves were selected for this study consisting of greening blotchy mottle, green ing islands, iron deficiency, manganese deficiency, zinc


deficiency, and normal mature. Images of leaf samples are shown in Figure 7-2. Sixty samples of each of the seven classes of leaves were collected. The citrus canker and greening samples were brought to a laboratory, sealed in new bags with appropriate labels, and put in environmental control chambers maintained at 4°C. They were removed from cold storage about 2 hours before imaging to allow them to reach room temperature.

Color Image Acquisition

Different imaging systems were used for canker and greening in order to acquire the best RGB images. First, canker images were acquired with the color image acquisition system shown in Figure 7-3. The imaging system consisted of two 13 W high-frequency sealed fluorescent lights (SL Series, StockerYale, Salem, NH, USA), a zoom lens (Zoom 7000, Navitar, Rochester, NY, USA), a 3-CCD RGB color camera (CV-M90, JAI, San Jose, CA, USA), a 24-bit color frame grabber board with 480x640 pixel resolution (PC-RGB, Coreco Imaging, St. Laurent, Quebec, CA), and a computer installed with image capture software. The setup of the lighting system was designed to minimize specular reflectance and shadow and to maximize the contrast of the images. The height of the camera and its focus were adjusted to contain the image of the whole fruit, with an approximate 100 mm x 100 mm field of view. Automatic white balance calibration was conducted using a calibrated white balance card before acquiring images from fruit samples. The digital color images were saved in uncompressed BMP format. Second, a digital microscope system (VHX-600K, Keyence, Japan) was used for acquiring RGB images from the greening leaf samples, as shown in Figure 7-4. This imaging system consisted of a halogen lamp (12 V, 100 W), a zoom lens (C-mount lens, OP-51479), a 2.11-million-pixel CCD image sensor (1/1.8 inch), a 15-inch color LCD monitor (TFT, 1600x1200, UXGA), and a computer installed with an image capture function and a hard disk drive unit (image


103 format: JPEG and TIFF, Storage capacity: 700MB ). The setup of the light source was designed to minimize s p ecular reflectance and shadow and to maximize the contrast of the images. The height of the camera and its focus were adjusted to contain the whole samples Automatic white balance calibrati on was conducted using a calibrated white balance function in this system before acquiring images from samples. The digital color images were saved in uncompressed JPEG format (1200x1600, 8bit and 480x640, 8bit ) Image Pre Processing and Feature Extraction The methodology employed for citrus canker and greening classification is very similar. Figure 7 5 and figure 7 6 illustrated citrus canker and greening classification procedures. However, there are differences in pre processing since the citrus canker approach focuses on specific sub region, while the citrus greening approach examines the whole leaf surface. From the original RGB color images (480x640 pixel), citrus canker region of interest (ROI) images were extracted This ROI was focused o n peel conditions of interest The ROI selection was started manually by establishing the center point of a 64x64 pixel window on the original image Once extracted, e ach image was converted from RGB (red, green, blue) to HSI (hue, saturation, intensity) c olor format This approach obtains the useful image data and significantly reduces the computational burden for the following data analysis procedures. Citrus greening has other pre processing step. For reducing the computational burden with minimal loss of texture feature quality, the image resolution was reduced from 1600x1200 pixels to 800x600 pixels and the reduced images were then converted from eight bit to six bit per channel RGB format. The subsequent steps were repeated for each image in the datas et. After the images were reduced, edge detection of the leaf was completed on each image of the leaf sample using command 'imerode' and 'imdilate' in MATLAB Once the edge detection was finished, the image was scanned from left to right for each row in th e pixel map, and the area outside the leaf


104 was zeroed to remove any background noise. Then the images were converted from RGB format to HSI format. For extracting features from digitized HSI images, Color Co occurrence Method (CCM) is used Shear and Holm es (1990) described 39 CCM texture statistics using the Spatial Gray level Dependence Matrices (SGDM). CCM texture statistics were generated from the SGDM of each HSI color feature. Each of the three matrices is evaluated by thirteen texture statistic meas ures resulting in 39 texture features per image. The SGDM is a measure of the probability that a given pixel at one particular gray level will occur at a distinct distance and orientation angle from another pixel, given that pixel has a second particular g ray level. The SGDM presented by the function P ( i j d ) This function use vector (i,j) but distance (d) and an orientation angle ( ) Figure 7 7 illustrates all the neighbors from 1 to 8 numbered in a clockwise direction and orientation angle. Table 7 1 gives a list of the texture statistics. Haralick and Shanmugam (1974) and Shearer and Holmes (1990) made detailed paper about this technique. Statistical Analysis After obtain ing 39 texture features in each image, feature sel ection was used to eliminate the redundancy in the texture feature set. The SAS procedure STEPDISC can reduce the size of the variable set and find the most significant variables for discriminating samples in to different classes. The stepwise discriminant analysis begins with no variables in the classification model. At each step of the process, the variables within and outside the model are evaluated. The Lambda met hod is deleted from the model while the variable outside the model that contributes most to the model and passes the test to be admitted is included When steps can not be taken more the number of variables in the model is reduced to its final form. Burks et al. (2000) had


105 shown that when only hue or saturation information was used in the classification models classification accuracies were reduced Thus three color feature combinations including 1) hue, saturation, and intensity (H, S, I), 2) hue and sat uration (H, S), and 3) intensity (I) only were used to perform the texture feature selections. These three color combinations have demonstrated high classification accuracies in the applications for other plant discriminations (Burks et al. 2000; Pydipati et al. 2006). Input D ata P reparation and Classification Using Pattern Recognition methods Once reduced texture features were obtained from STEPDISC procedure, two datasets were created training and test ing was obtained. Table 7 2 represented two datasets for citrus canker disease and citrus greening disease. For canker disease classification, the rows of training datasets consisted of 20 samples from each of the six classes of peel condition as discussed e arlier T he columns represented the reduced texture features. For each image sample, t est datasets had 10 samples and reduced texture features. Datasets for citrus greening had seven classes with 30 samples for training and 30 samples for testing The se d atasets were analyzed with linear fisher discriminant analysis (FDA), neural network based on back propagation, and support vector classification (SVC). All analysis was done using Matlab (The Mathworks, Inc., Natick, Mass.) and the Matlab PRTools toolbox (Faculty of Applied Physics, Delft University of Technology, The Netherlands). PRTools is a toolkit for a Matlab. This package is released for the developed and evaluation of pattern recognition algorithms. Pattern Recognition Pattern recognition techniqu es consist s of features and patterns. A feature is specific aspect, quality and characteristic of some objects. The feature can be color, a symbol sign, and numerical value, such as distance, height, and weight. If features have two or more numerical


106 valu es, the features can express d dimension row features called as a feature vector, and this d dimension space defined as a feature vector is a feature space. An object can be expressed as points modeled by a feature vector. Such as, a plot of features expre ssed as points in space of a scatter plot. This plot can be expressed visually to 3 dimentsion a space. If the feature space is 4 or more dimension, the feature vector cannot be plotted, but still exists in a n dimensional space. Figure 7 8 below shows exa mples of various features. Pattern means traits or features of an individual object, and is defined as a set of feature together. F eature s and pattern s are similar concepts, but features form pattern. In pattern recognition, the pattern is expressed by {x, }, x is feature vector observed and is the unique class of the feature vector. This class is also called category, group, or label. The feature vector selected to represent the class is very important and affects the selection of pattern recognition alg orithms and the cognitive approach. Hence, the feature represents characteristics of the classes that make them distinguishable. In other words, samples from one should have similar features within the class, while samples another class should have differe nt feature values. Figure 7 9 presents good feature separation on the left and poor separation on the right. The feature vector with pattern can be classified by its distribution type as follows. Linear distribution Nonlinear distribution High corre lation distribution Multi class distribution Figure 7 10 shows distribution plots of pattern types. When feature patterns are distributed by linear distribution type, the class can be classified more easily.


Fisher's Linear Discriminant Method

The central idea of Fisher's linear discriminant method is to project high-dimensional data onto a line so that classification can be performed in a one-dimensional space. M. Welling (2006) introduced Fisher's linear discriminant analysis, where the criterion for calculating Fisher's linear discriminant is given by:

    J(w) = \frac{w^{T} S_{B} w}{w^{T} S_{W} w}      (7-1)

where S_B is the "between classes scatter matrix" and S_W is the "within classes scatter matrix". The definitions of the scatter matrices are:

    S_{B} = \sum_{c} (\mu_{c} - \bar{x})(\mu_{c} - \bar{x})^{T}      (7-2)

    S_{W} = \sum_{c} \sum_{i \in c} (x_{i} - \mu_{c})(x_{i} - \mu_{c})^{T}      (7-3)

    \mu_{c} = \frac{1}{N_{c}} \sum_{i \in c} x_{i}      (7-4)

where

    \bar{x} = \frac{1}{N} \sum_{i} x_{i} = \frac{1}{N} \sum_{c} N_{c} \mu_{c}      (7-5)

and N_c is the number of cases in class c.
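A compact two-class MATLAB sketch of equations 7-1 through 7-5 is given below; it computes the scatter matrices from two feature matrices and obtains the projection direction w that maximizes J(w). The variable names X1 and X2 (rows are samples, columns are texture features) are illustrative.

    % X1, X2: n1-by-d and n2-by-d feature matrices for two classes.
    n1 = size(X1,1);  n2 = size(X2,1);
    mu1 = mean(X1, 1)';  mu2 = mean(X2, 1)';
    xbar = (n1*mu1 + n2*mu2) / (n1 + n2);

    % Within-class scatter (equation 7-3).
    D1 = X1 - repmat(mu1', n1, 1);
    D2 = X2 - repmat(mu2', n2, 1);
    Sw = D1'*D1 + D2'*D2;

    % Between-class scatter (equation 7-2).
    Sb = (mu1 - xbar)*(mu1 - xbar)' + (mu2 - xbar)*(mu2 - xbar)';

    % Direction maximizing J(w) in equation 7-1; for two classes this reduces
    % to w proportional to inv(Sw)*(mu1 - mu2).
    w = Sw \ (mu1 - mu2);

    % A new sample x (1-by-d) is assigned by comparing x*w with the midpoint
    % of the two projected class means.
    midpoint = (mu1 + mu2)' * w / 2;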


Neural Network Based on the Back-Propagation Method

The main idea of a back-propagation network is that the connecting link weights between hidden layers are found by back-propagating the errors from the output layer. Back-propagation networks typically use multilayer networks. Figure 7-11 illustrates a network diagram for a multilayer network. Nodes are used to represent the input, hidden, and output variables, and links between the nodes describe the weight parameters. The link contributions from the additional input and hidden variables, X0 and Z0, represent the bias parameters. The process flow of this network follows the arrows shown in Figure 7-12. C. M. Bishop (2006) described the derivation of the back-propagation algorithm in his book "Pattern Recognition and Machine Learning". The weight update can be written as:

    w_{ji}(n+1) = w_{ji}(n) + \eta \, \delta_{j} \, x_{i}      (7-6)

where \eta is the learning rate, w_{ji} is the weight from unit j to unit i, \delta_j is the error of the output at unit j, and x_i is the input pattern. Using equation (7-6), we can calculate the weights during forward propagation as shown in Figure 7-11. The output error is updated after each forward propagation and is used to recalculate the hidden layer weights using the back-propagation formula, which can be described by the following equation:

    \delta_{j} = h'(a_{j}) \sum_{k} w_{kj} \, \delta_{k}      (7-7)

where the output-layer errors are obtained from the sum-of-squares error

    E_{n} = \frac{1}{2} \sum_{k} (y_{k} - t_{k})^{2}      (7-8)

Figure 7-12 tells us that the value of \delta for a particular hidden unit can be obtained by propagating the \delta's backwards from units higher up in the network. The neural network application in this research was designed with the MATLAB PRTools toolbox (Faculty of Applied Physics, Delft University of Technology, The Netherlands). This application used the back-propagation formulation of the neural network shown above. Appendix B shows the detailed MATLAB routine. First, the data files were loaded; the data files contained the reduced texture features for each condition. Second, the data files were divided into 30 training samples and 30 test samples per class using the PRTools command 'dataset'. A dataset consists of a set of m objects, each given by k features; an m-by-k matrix represents such a dataset in this MATLAB routine.


Third, after importing the training matrix and the testing matrix, the network was trained using the function 'bpxnc'. The syntax for this function is as follows:

    [W,HIST] = BPXNC(A, UNITS, ITER, W_INI, T, FID)

where:
A      Dataset
UNITS  Array indicating the number of units in each hidden layer (default: [5])
ITER   Number of iterations to train (default: inf)
W_INI  Weight initialisation network mapping (default: [], meaning initialisation by Matlab's neural network toolbox)
T      Tuning set (default: [], meaning use A)
FID    File descriptor to report progress to (default: 0, no report)

After training, the test data for each class was evaluated using the function 'testc'.
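The routine described above can be summarized by the following PRTools sketch. The calls to dataset, bpxnc and testc follow the syntax quoted in this section, while fisherc and svc are recalled from the PRTools interface and should be checked against its documentation; the variable names, the 10 hidden units, and the 500 iterations are illustrative.

    % featTrain, featTest: 30*C-by-k matrices of reduced texture features;
    % labTrain, labTest:   matching lists of class labels.
    trainSet = dataset(featTrain, labTrain);   % PRTools labelled dataset
    testSet  = dataset(featTest,  labTest);

    % Back-propagation neural network with one hidden layer of 10 units.
    wNet = bpxnc(trainSet, 10, 500);

    % Classifiers used for comparison in this chapter.
    wFld = fisherc(trainSet);                  % Fisher linear discriminant
    wSvm = svc(trainSet);                      % support vector classifier

    % testc reports the classification error of a mapped (classified) test set.
    errNet = testc(testSet * wNet);
    errFld = testc(testSet * wFld);
    errSvm = testc(testSet * wSvm);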


Support Vector Machine Method

The support vector machine is a very useful approach for classification. In recent years, the SVM method has been developed by many researchers and applied in various fields. In general, the SVM method handles two-class problems; however, multiclass SVMs have been developed by various researchers, and the multiclass solution is based on the two-class support vector machine. Gunn (1998) presented the support vector classification. Figure 7-13 shows numerous possible linear classifiers for separating the data, but only one line maximizes the distance between it and the nearest data point of each class. This linear classifier is called the optimal separating hyperplane (or separating line in two dimensions). He formalized the main SVM problem as how to find the optimal separating hyperplane in n-dimensional space; the main formulation for the mathematical analysis is equation (7-14). A separating hyperplane in canonical form must satisfy the following constraints:

    y_{i} (\langle w, x_{i} \rangle + b) \geq 1, \quad i = 1, \ldots, l      (7-9)

The distance d(w,b;x) of a point x from the hyperplane (w,b) is:

    d(w, b; x) = \frac{| \langle w, x \rangle + b |}{\lVert w \rVert}      (7-10)

The optimal hyperplane is obtained by maximizing the margin subject to the constraints of equation (7-9), where the margin is given by:

    \rho(w, b) = \min_{x_{i}: y_{i} = -1} d(w, b; x_{i}) + \min_{x_{j}: y_{j} = 1} d(w, b; x_{j})      (7-11)

    = \min_{x_{i}: y_{i} = -1} \frac{| \langle w, x_{i} \rangle + b |}{\lVert w \rVert} + \min_{x_{j}: y_{j} = 1} \frac{| \langle w, x_{j} \rangle + b |}{\lVert w \rVert}      (7-12)

    = \frac{2}{\lVert w \rVert}      (7-13)

Hence, the hyperplane that optimally separates the data is the one that minimizes

    \Phi(w) = \frac{1}{2} \lVert w \rVert^{2}      (7-14)

Through this analysis, we can find the maximum-margin hyperplane.

Classification Results

Canker Disease Classification Based on Pattern Recognition Algorithms

The texture feature selection results are summarized in Table 7-2. Four classification models were developed using the selected texture feature sets from the three color combinations [(H, S, I), (H, S), and (I)]. From the SAS STEPDISC procedure, the variables listed in Table 7-2 were selected for the first three models, with variables arranged in descending order of importance for the classification models.
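For completeness, the hard-margin problem of equations 7-9 and 7-14 can be solved directly with MATLAB's quadprog, as sketched below for a two-class training set. This is a conceptual illustration of the margin formulation rather than the PRTools support vector classifier actually used in this study, and it assumes the two classes are linearly separable.

    % Xtr: n-by-d feature matrix; ytr: n-by-1 labels in {-1, +1}.
    [n, d] = size(Xtr);

    % Decision variables z = [w; b]; minimize (1/2)*||w||^2 (equation 7-14).
    H = blkdiag(eye(d), 0);
    f = zeros(d + 1, 1);

    % Constraints y_i*(w'*x_i + b) >= 1 (equation 7-9), rewritten as A*z <= -1.
    A = -[bsxfun(@times, ytr, Xtr), ytr];
    bvec = -ones(n, 1);

    z = quadprog(H, f, A, bvec);
    w = z(1:d);
    b = z(end);

    margin  = 2 / norm(w);                     % equation 7-13
    predict = @(X) sign(X * w + b);            % classify new samples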


111 The classification models were named using the color features involved in the texture feature selections followed by the total numbers of the selected texture feat ures. For example, model HSI_1 3 consists of a reduced set of 13 hue, saturation and intensity texture features. As shown in Table 7 2 significant eliminations of redundant texture features were accomplished through the stepwise discriminant analysis. Nine and eleven texture features were selected for model HS_9 and model I_11, respectively. The simplification of the texture features reduces the computation burden due to the redundant data, and it also helps improve the performance of classification models. In addition to the three models described above, a classification model that used all 39 HSI texture features was used for the purpose of comparisons with other models. Thus four classification models were used to differentiate citrus peel diseases, each model was independently evaluated for classification performance. The results shown in Table 7 3 were obtained using a fisher discriminant analysis (FDA) classifier. In particular, better overall classification rates were achieved by models HSI_13 and HSI _39. Models HSI_13 and HSI_39 had overall accuracy of 91.67% and 93.33%. The results using neural network based on back propagation algorithm in table 7 4 also had good overall accuracy. Model HSI_13 achieved an overall accuracy of 95.00% and model HSI_39 an accuracy of 93.33%. Table 7 5 showed classification results using support vector method (SV M ) algorithm. This approach showed lower overall accuracy than other methods where only model HSI_13 achieved good accuracy at 95.00%. These results proved tha t model HSI_13 was best model for all classifier of canker disease Model HSI_39 also obtained good classification accuracy in FDA and the neural network methods, but it is not useful in real world applications, since Model HSI_39 may require large calcula tion time and high computation system to many texture features. The best overall


112 performance was the SVM using model HSI_13, which had overall classification accuracies of 95.00%, and no individual class below 80%. C itrus G reening D isease C lassification ba sed on P attern R ecognition A lgorithms. The dataset for citrus greening consisted of 39 texture features for each image. The dataset had 36 0 rows each, representing 6 0 samples from each of the seven classes of leaves Each row had 39 columns representing the 39 texture features extracted for a particular sample image. Each row had a unique number (1, 2, 3, 4, 5, or 6) representing which class the particular row of greening blotchy mottle leaves 2 green ing island leaves 3 leaves deficient in iron 4 leaves deficient in manganese 5 represented normal mature leaves and 6 represented leaves deficient in zinc Table 7 6 shows the classificatio n models. The classification models have a name with a total numbers of selected texture features like citrus canker models. A classification models using all 39 HSI texture features was selected for comparison with the other reduced data models. The resu lts shown in Table 7 7 7 8 and 7 9 were obtained using a fisher discriminant analysis (FDA) classifier neural network based on back propagation algorithm, and support vector machine (SV M ) In table 7 7 and 7 8 models HSI_1 5 and HSI_39 achieved excellent overall classification rates. Models HSI_1 5 and HSI_39 in table 7 7 had an overall accuracy of 9 5.55 % and 9 3.89 % while Models HSI_1 5 and HSI_39 in table 7 8 had an overall accuracy of 9 3.33 % and 9 3.89 %. The results using SV M method in table 7 9 showed lower overall accuracy than other results. In table 7 8 using SV M method, model HSI_15 and HS_9 only had above accuracy of 80%. Model HS_9 had the best overall accuracy at 84.44%, yet was lower than other classification Therefore, SV M method was n ot as good for detection of citrus greening disease, since blotchy mottle confused the SVM classifier and significantly reduced the accuracy of this


113 modified two class classifier If blotchy mottle and other disease class have very similar pattern, it resu lted in low accuracy. Hence, the FDA classifier using m odel HSI_15 was best model for detection of citrus greening disease. Although model HSI_39 also performed well on both cases, model HSI_39 may increase computation time for training and classification As with the canker disease classifier above, the reduced HSI texture model was best for greening detection. However, in the case, the FDA classifier outperformed the SVM Summary and Conclusions Pattern recognition and color co occurrence texture method is a useful approach for detection of citrus disease. Digitized RGB images from citrus disease consisting of various conditions were obtained by image acquisition system Texture features containing useful information for diseases classification were extracted from the pre processed RGB images that had been converted into hue, saturation, and intensity (HSI) color space representation. For each HSI image, three s patial gray level dependence matrices (SGDMs) were generated, and a total of 39 image texture features were obtained from each image sample. A stepwise discriminant analysis was used finding useful texture features from three color combinations including 1) h ue, saturation, and intensity (HSI), 2) hue and saturation (HS), and 3) intensity ( I). Classification models were constructed using the reduced texture feature sets through a discriminant function based on a measure of the generalized squared distance. Various texture features models were selected from the color combinations of HSI. The elimination of redundant texture features significantly reduces the computation burden, and it also improve d the performance of classification models. For canker detection, the reduced classification model (HSI_1 3 ) gave best accuracy ( 95.00 %) for back prop agation based neural network method and support vector machine In citrus greening, the reduced models (HSI_15)


For canker detection, the reduced classification model (HSI_13) gave the best accuracy (95.00%) for both the back-propagation neural network and the support vector machine. For citrus greening, the reduced model (HSI_15) showed an overall accuracy above 93% with the back-propagation neural network and the Fisher discriminant analysis methods, while the same model used with the support vector machine had a lower accuracy of 82.78%. These results suggest that reduced HSI models are useful for building a citrus disease detection system. In general, the back-propagation neural network performed well, above 93%, for both disease conditions. The support vector machine gave a good classification result of 95.00% for citrus canker but a poor accuracy of 82.78% for citrus greening. The back-propagation method is therefore a good candidate for disease detection: it showed good performance, and the algorithm is also simple to implement in a detection system. In future studies, these methods will be evaluated on other citrus disease conditions as well as under outdoor lighting conditions, and the stability of the algorithms will be tested. In conclusion, this research demonstrated that color image texture feature analysis and pattern recognition can be used in the laboratory to classify citrus disease conditions.

Figure 7-1. Images of citrus canker diseases
Figure 7-2. Images of citrus greening diseases
Figure 7-3. Color image system for acquiring RGB images from citrus samples
Figure 7-4. Digital microscope system for acquiring RGB images from citrus disease samples
Figure 7-5. Procedures for color image analysis for citrus canker
Figure 7-6. Procedures for color image analysis for citrus greening
Figure 7-7. Nearest neighbor diagram
Figure 7-8. Feature vector, feature space and scatter plot
Figure 7-9. Good feature and bad feature
Figure 7-10. Distribution plots with pattern types
Figure 7-11. The flow of a network
Figure 7-12. Illustration of the calculation
Figure 7-13. Possible linear classifiers for separating the data

Table 7-1. Intensity texture features

Feature   Description
I1        Uniformity (2nd moment)
I2        Mean intensity
I3        Variance
I4        Correlation
I5        Product moment
I6        Inverse difference
I7        Entropy
I8        Sum entropy
I9        Difference entropy
I10       Information correlation 1
I11       Information correlation 2
I12       Contrast
I13       Modus

Table 7-2. Texture feature models selected by SAS stepwise analysis for citrus canker

Classification Model   Color Features   Texture Feature Set
HSI_13                 H, S, I          H9, H10, I12, S7, I3, I2, S12, I11, I1, I8, S1, H2, H5
HS_9                   H, S             H9, H10, S7, H5, H11, S12, S11, H7, H13
I_11                   I                I2, I3, I5, I10, I6, I13, I8, I9, I1, I11, I7
HSI_39                 H, S, I          All 39 texture features (H1-H13, S1-S13, I1-I13)

Table 7-3. Canker disease classification results in percent correct for all models using FDA

Peel Condition         HSI_13    HS_9      I_11      HSI_39
Normal                 90.00     90.00     80.00     80.00
Canker                 90.00     90.00     90.00     100.00
Copper Burn            100.00    60.00     100.00    100.00
Greasy Spot            90.00     90.00     100.00    90.00
Melanose               80.00     70.00     70.00     90.00
Wind Scar              100.00    90.00     10.00     100.00
Overall Accuracy (%)   91.67     81.67     75.00     93.33

Table 7-4. Canker disease classification results in percent correct for all models using the BP neural network

Peel Condition         HSI_13    HS_9      I_11      HSI_39
Normal                 100.00    90.00     80.00     100.00
Canker                 100.00    70.00     90.00     90.00
Copper Burn            100.00    70.00     100.00    100.00
Greasy Spot            100.00    90.00     100.00    90.00
Melanose               70.00     60.00     70.00     90.00
Wind Scar              100.00    100.00    20.00     90.00
Overall Accuracy (%)   95.00     80.00     76.67     93.33

Table 7-5. Canker disease classification results in percent correct for all models using SVMs

Peel Condition         HSI_13    HS_9      I_11      HSI_39
Normal                 100.00    90.00     40.00     90.00
Canker                 100.00    90.00     90.00     50.00
Copper Burn            100.00    80.00     100.00    30.00
Greasy Spot            90.00     70.00     100.00    80.00
Melanose               80.00     50.00     50.00     90.00
Wind Scar              100.00    10.00     30.00     100.00
Overall Accuracy (%)   95.00     65.00     68.33     73.33

Table 7-6. Texture feature models selected by SAS stepwise analysis for citrus greening

Classification Model   Color Features   Texture Feature Set
HSI_15                 H, S, I          S5, I2, H7, H2, S6, S4, H9, S8, I6, S13, H4, I4, I13, S7, I7
HS_10                  H, S             S5, H7, H5, H12, S4, S7, H8, S8, H3, S11
I_8                    I                I2, I8, I9, I6, I5, I7, I10, I1
HSI_39                 H, S, I          All 39 texture features (H1-H13, S1-S13, I1-I13)

Table 7-7. Citrus greening classification results in percent correct for all models using FDA

Leaf Condition         HSI_15    HS_9      I_11      HSI_39
Blotchy mottle         83.33     63.33     53.33     93.33
Islands                93.33     80.00     76.67     90.00
Iron deficiency        100.00    100.00    86.67     93.33
MN deficiency          100.00    83.33     86.67     96.67
Zinc deficiency        100.00    100.00    86.67     96.67
Normal                 96.67     93.33     73.33     93.33
Overall Accuracy (%)   95.55     86.67     77.22     93.89

Table 7-8. Citrus greening classification results in percent correct for all models using the BP neural network

Leaf Condition         HSI_15    HS_9      I_11      HSI_39
Blotchy mottle         83.33     73.33     66.67     80.00
Islands                86.67     70.00     70.00     100.00
Iron deficiency        100.00    96.67     90.00     100.00
MN deficiency          96.67     86.67     86.67     96.67
Zinc deficiency        100.00    96.67     83.33     93.33
Normal                 93.33     83.33     73.33     93.33
Overall Accuracy (%)   93.33     84.45     78.33     93.89

Table 7-9. Citrus greening classification results in percent correct for all models using SVMs

Leaf Condition         HSI_15    HS_9      I_11      HSI_39
Blotchy mottle         60.00     63.33     46.67     60.00
Islands                90.00     80.00     66.67     36.67
Iron deficiency        100.00    100.00    86.67     66.67
MN deficiency          80.00     83.33     66.67     43.33
Zinc deficiency        70.00     93.33     50.00     76.67
Normal                 96.67     86.67     80.00     36.67
Overall Accuracy (%)   82.78     84.44     66.11     53.34
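Note: In Tables 7-3 through 7-9, the overall accuracy is the unweighted mean of the six per-class rates, since each class contributed an equal number of test samples. For example, for the HSI_13 model with the SVM classifier in Table 7-5, (100.00 + 100.00 + 100.00 + 90.00 + 80.00 + 100.00) / 6 = 95.00%.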


CHAPTER 8
SUMMARY AND FUTURE WORK

Summary

Image processing and computer vision techniques are fundamentally important in agricultural engineering. Data analysis based on the color co-occurrence texture method is a useful approach for the detection of citrus canker and greening diseases. For the greening study, a color imaging system was assembled to obtain original RGB images from citrus leaves. The leaf samples included young flush, mature leaves, greening blotchy mottle, greening islands, and manganese, iron, and zinc deficiencies. RGB images of grapefruit with normal peel and five peel diseases, including canker, copper burn, greasy spot, melanose, and wind scar, were also acquired with a color imaging system. The RGB images were converted to an HSI color space representation, and each HSI image was used to generate spatial gray-level dependence matrices. Once the SGDMs were generated, a total of 39 image texture features were extracted from each image sample, and reduced sets of useful texture features were selected using a stepwise discriminant analysis. Through a discriminant function based on a measure of the generalized squared distance, classification models were constructed using the reduced texture feature sets. The HSI model emerged as the best data model for classifying both the citrus canker and citrus greening conditions, providing good results averaging above 95% overall classification accuracy for the two diseases. Similarly, the HSI models gave the best classification accuracy (above 90%) in the pattern recognition method comparison. Through a series of random tests, the stability of the classification accuracy was demonstrated. This research provides an efficient laboratory-based detection method for citrus canker and greening diseases, and shows that the above-mentioned techniques, such as image processing and pattern recognition, have the potential to be used in many agricultural applications.
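The stepwise reduction described above was carried out in SAS; a roughly analogous selection could be sketched in MATLAB with the Statistics Toolbox as shown below, where X (a 120 x 39 matrix of texture features) and y (a vector of class labels) are hypothetical inputs and a linear discriminant error is used as the selection criterion. This is only an illustration of the idea, not the procedure used in this thesis.

% Illustrative sequential feature selection (analogous to, not identical with, SAS STEPDISC).
critfun = @(Xtr, ytr, Xte, yte) sum(yte ~= classify(Xte, Xtr, ytr, 'linear'));
opts = statset('Display', 'iter');
[selected, history] = sequentialfs(critfun, X, y, 'cv', 10, 'options', opts);
Xreduced = X(:, selected);     % reduced feature set, analogous to HSI_13 or HSI_15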


Moreover, this research demonstrated that color imaging and texture feature analysis could be used to differentiate citrus diseases under controlled laboratory lighting conditions. The most significant challenge will come from the inherent variability of color under natural lighting. By eliminating intensity-based texture features, this variability can be significantly reduced; however, hue and saturation can also be somewhat influenced by low lighting conditions. This may point to the need for an image acquisition approach that uses light-availability color compensation, supplemental lighting, or night-time applications where lighting levels can be controlled.

Future Work

This thesis developed an efficient laboratory-based detection method for citrus canker and greening diseases. Although the overall performance in this research was good, significant future work is needed to develop outdoor-based algorithms that could be applied to autonomous scouting. In addition, the leaf samples used here were collected in the fall season; a spring-season condition was also considered before all objectives were completed. Spring samples were collected in March 2008, but they were not suitable for the research in this thesis because the number of samples was small and some samples were dead. An extensive study of citrus canker and greening detection is planned with new spring samples. That work will build on the research data presented here for the spring and fall seasons, and it is anticipated that it will establish an improved image processing approach and classification method.


APPENDIX A
MATLAB CODE FILES FOR EDGE DETECTION

%% Matlab code for Edge Detection
%% Program developed by Dae Gwan Kim, Research Assistant

clear;

for i = 1:62   % the number of image samples

    cd('C:\Documents and Settings\student\Desktop\Research\greening\greening_0903\mn');

    % read the image name (load the image numbers)
    % ex) image_1.bmp, image_2.bmp, image_3.bmp, image_4.bmp, .....
    original_name = 'mn_st_f_';
    % [X,map] = imread([original_name '.jpg']);

    % Preprocessing images for edge detection
    X = double(imread([original_name int2str(i) '.jpg']))/255;
    X = imresize(X, 0.5);

    % converting RGB images to binary images
    hsvIm = rgb2hsv(X);
    im   = hsvIm(:,:,2);
    oim  = imadjust(im);
    oim  = oim/max(max(oim));
    mim2 = immultiply(im, oim);
    mim3 = imadjust(mim2);
    oim3 = mim3/max(max(mim3));
    mim4 = immultiply(mim3, oim3);
    mim5 = imadjust(mim4);
    BW   = double(im2bw(im, 0.5));

    % Create morphological structuring elements
    se1 = strel('diamond', 2);
    se2 = strel('diamond', 2);

    % control the image edge using 'imdilate' and 'imerode' commands
    BW = imdilate(BW, se1);
    for k = 1:8, BW = imerode(BW, se2); end
    for k = 1:7, BW = imdilate(BW, se1); end
    for k = 1:6, BW = imerode(BW, se2); end

    % Fill image regions and holes
    BW = imfill(BW, 'holes');

    fim(:,:,1) = immultiply(X(:,:,1), BW);
    fim(:,:,2) = immultiply(X(:,:,2), BW);
    fim(:,:,3) = immultiply(X(:,:,3), BW);

    figure(3)
    imshow(fim)

    cd('C:\Documents and Settings\student\Desktop\Research\greening\greening_0903\mn\edge');

    % write resized image
    % ex) 64x64_image_1.png, 64x64_image_2.png, ...
    imwrite(fim, ['edge_' original_name int2str(i) '.jpg'], 'jpg');
end


APPENDIX B
MATLAB CODE FILES FOR PATTERN RECOGNITION METHODS

%% Code Files for Pattern Recognition Methods
%% Citrus Greening Identification Project for Models
%% Program developed by Dae Gwan Kim, Research Assistant
% Class labels:
%   '1' represents Blotchy Mottle     '2' represents Iron Deficiency
%   '3' represents Islands            '4' represents MN Deficiency
%   '5' represents Normal leaf        '6' represents Zinc Deficiency

clear;

%% Load the texture feature data and build the training/test feature matrices
% Each .mat file stores 13 x 60 matrices of hue (_h), saturation (_s) and
% intensity (_i) texture features: columns 1:30 are training samples and
% columns 31:60 are test samples for that class.
files  = {'blotch_0907', 'iron_0907', 'island_0907', 'mn_0907', 'no_0907', 'zinc_0907'};
nClass = numel(files);

% Reduced texture feature sets selected by the SAS stepwise analysis.
% Each row is {channel, feature index}; channels are 'h', 's' or 'i'.
hsi15 = {'s' 5; 'i' 2; 'h' 7; 'h' 2; 's' 6; 's' 4; 'h' 9; 's' 8; 'i' 6; 's' 13; 'h' 4; 'i' 4; 'i' 13; 's' 7; 'i' 7};
hs    = {'s' 5; 'h' 7; 'h' 5; 'h' 12; 's' 4; 's' 7; 'h' 8; 's' 8; 'h' 3; 's' 11};
ionly = {'i' 2; 'i' 8; 'i' 9; 'i' 6; 'i' 5; 'i' 7; 'i' 10; 'i' 1};

models     = {hsi15, hs, ionly, []};          % [] selects all 39 features (HSI_39)
modelNames = {'HSI_15', 'HS_9', 'I_11', 'HSI_39'};

tr = cell(numel(models), nClass);             % training samples (30 per class)
ts = cell(numel(models), nClass);             % test samples (30 per class)

for c = 1:nClass
    data = load([files{c} '.mat']);
    ch.h = data.([files{c} '_h']);
    ch.s = data.([files{c} '_s']);
    ch.i = data.([files{c} '_i']);
    for m = 1:numel(models)
        if isempty(models{m})                 % HSI_39: all 39 texture features
            f = [ch.h; ch.s; ch.i];
        else                                  % reduced model: pick the selected rows
            f = zeros(size(models{m}, 1), 60);
            for k = 1:size(models{m}, 1)
                f(k, :) = ch.(models{m}{k, 1})(models{m}{k, 2}, :);
            end
        end
        tr{m, c} = f(:, 1:30)';               % 30 training samples x features
        ts{m, c} = f(:, 31:60)';              % 30 test samples x features
    end
end

%% PRTools for Pattern Recognition
% For each model, train the three classifiers on the pooled training set and
% report the classification error of each test class.
labels = genlab(repmat(30, nClass, 1), (1:nClass)');
for m = 1:numel(models)
    trainSet = dataset(vertcat(tr{m, :}), labels);
    w_fisher = fisherc(trainSet);             % Fisher discriminant analysis (FDA)
    w_bp     = bpxnc(trainSet);               % back-propagation neural network
    w_svc    = svc(trainSet, 'p', 2);         % SVM, 2nd-order polynomial kernel
    for c = 1:nClass
        testSet = dataset(ts{m, c}, genlab(30, c));
        fprintf('%s  class %d:  FDA %.4f   BP %.4f   SVM %.4f\n', modelNames{m}, c, ...
            testc(testSet * w_fisher), testc(testSet * w_bp), testc(testSet * w_svc));
    end
end



BIOGRAPHICAL SKETCH

Dae Gwan Kim was born in 1979 in Tong Young, South Korea. He graduated with a mechanical engineering degree in March 2006 from Dong-A University, Busan, South Korea. He began his graduate studies in the Department of Mechanical Engineering at the University of Florida and transferred to the Department of Agricultural and Biological Engineering in August 2007. He was a member of the Agricultural Robotics and Mechatronics Group (ARMg) in the Department of Agricultural and Biological Engineering, where he worked as a research assistant under his advisor, Dr. Thomas F. Burks.