
Vision Based Robotic Convoy

Permanent Link: http://ufdc.ufl.edu/UFE0025035/00001

Material Information

Title: Vision Based Robotic Convoy
Physical Description: 1 online resource (76 p.)
Language: english
Creator: Merritt, Brandon
Publisher: University of Florida
Place of Publication: Gainesville, Fla.
Publication Date: 2009

Subjects

Subjects / Keywords: autonomous, computer, convoy, gps, ground, infrared, jaus, servo, targets, vehicles
Mechanical and Aerospace Engineering -- Dissertations, Academic -- UF
Genre: Mechanical Engineering thesis, M.S.
bibliography   ( marcgt )
theses   ( marcgt )
government publication (state, provincial, territorial, dependent)   ( marcgt )
born-digital   ( sobekcm )
Electronic Thesis or Dissertation

Notes

Abstract: This thesis covers the design and implementation of a computer vision based vehicle tracking and following system. Using multiple infrared targets arranged into an array on a vehicle, an infrared camera located on a following vehicle tracks the vehicle. The position of the leading vehicle can be determined based on the location of the targets in the image. To improve the operational angle of the system, a panning camera mechanism was designed and implemented. To improve the range of the system, a second infrared camera with a zoom lens was also used for tracking. The sensor was integrated with a vehicle controller, allowing for convoy operation without any additional sensor input. Typical accuracies for position data from the vision sensor were plus or minus one meter in each direction, with resolutions of 0.25 meter.
General Note: In the series University of Florida Digital Collections.
General Note: Includes vita.
Bibliography: Includes bibliographical references.
Source of Description: Description based on online resource; title from PDF title page.
Source of Description: This bibliographic record is available under the Creative Commons CC0 public domain dedication. The University of Florida Libraries, as creator of this bibliographic record, has waived all rights to it worldwide under copyright law, including all related and neighboring rights, to the extent allowed by law.
Statement of Responsibility: by Brandon Merritt.
Thesis: Thesis (M.S.)--University of Florida, 2009.
Local: Adviser: Crane, Carl D.

Record Information

Source Institution: UFRGP
Rights Management: Applicable rights reserved.
Classification: lcc - LD1780 2009
System ID: UFE0025035:00001




Full Text

PAGE 1

1 VISION BASED ROBOTIC CONVOY By BRANDON T. MERRITT A THESIS PRESENTED TO THE GRADUATE SCHOOL OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE UNIVERSITY OF FLORIDA 2009

PAGE 2

2 2009 Brandon T. Merritt

PAGE 3

3 To Stephanie, for her loving support and constant encouragement

PAGE 4

4 ACKNOWLEDGMENTS

I would like to thank the entire ASI team for their help with the integration of the project with their system. I am especially grateful to Dr. Crane for his support and guidance with the project and with my education. I thank Steven Velat for his major contributions to the development of the project concept. Finally, I appreciate the contributions made by Shannon Ridgeway toward the system design and the advancement of my abilities as an engineer.

PAGE 5

5 TABLE OF CONTENTS

ACKNOWLEDGMENTS 4
LIST OF TABLES 7
LIST OF FIGURES 8
ABSTRACT 10

CHAPTER
1 INTRODUCTION 11
   Purpose 11
   Design Specifications 11
   Approach 12
   Background Research and Review 13
2 INFRARED TARGET DESIGN 21
   Design Requirements 21
   Illuminator Selection and Capabilities 21
   Prototype Omni-directional Target Design 22
   Target Design I 23
   Target Design II 24
3 IMAGE PROCESSING AND COMPUTER VISION 31
   Camera Setup 31
   Theoretical and Empirical Vision Models 32
   Image Processing Methodology and Coordinate Transformations 35
   JAUS Software Design 38
4 PANNING CAMERA MECHANISM DESIGN 46
   Requirements 46
   Mechanism Designs 46
   Motor Control and Software Design 49
5 RESULTS AND DISCUSSION 52
   Static Testing 52
   Testing of System with Ground Truth 53

PAGE 6

6  Sensor Data Qualitative Results with ASI Controller 54
   Conclusions 54
   Future Work 55
APPENDIX: MECHANICAL DRAWINGS 62
LIST OF REFERENCES 73
BIOGRAPHICAL SKETCH 76

PAGE 7

7 LIST OF TABLES

Table                                                                        page
2-1  Target I cost analysis 28
2-2  Target II cost analysis 30
3-1  Summary of camera and lens setup and corresponding calibration coefficients 44
3-2  Summary of uncertainties for camera models 44
3-3  Scoring parameters 44
4-1  Cost analysis of smart motor panning mechanism design 50
4-2  Cost analysis of stepper motor panning mechanism design 51

PAGE 8

8 LIST OF FIGURES

Figure                                                                       page
1-1  Vehicle test platforms 20
2-1  Infrared target concept 26
2-2  Prototype infrared target design 26
2-3  Target I design and fabrication 27
2-4  Target I design 27
2-5  Power LED voltage vs. current data 29
2-6  Infrared target II design and fabrication 29
2-7  Target design visual comparison from CCD camera 30
3-1  Projection of targets onto CCD 39
3-2  Geometrically calculated range and angle from pixel distances 39
3-3  Calibration data and corresponding residuals for 25mm lens and 648x488 camera 40
3-4  Calibration data and corresponding residuals for 4mm lens and 1032x776 camera 42
3-5  Example of software-identified target array 44
3-6  Parameters and coordinate systems used for point transformation 45
4-1  Stand-alone vision sensor used for military convoy 50
4-2  Stepper motor torque vs. rotational speed 51
4-3  Stepper motor camera mechanism used with Urban Navigator 51
5-1  Theoretical model range error for 4mm focal length, 1032x776 camera 56
5-2  Theoretical model angle error for 4mm focal length, 1032x776 camera 56
5-3  Theoretical model range error for 25mm focal length, 648x488 camera 57
5-4  Theoretical model angle error for 25mm focal length, 648x488 camera 57
5-5  Image processing resolution versus target separation distance 58
5-6  Error introduced from relative pitch between vehicles 58

PAGE 9

9
5-7   Setup of leader vehicle test platform with GPS aligned with targets 59
5-8   Recorded vision sensor range vs. GPS range driving test 59
5-9   Range error versus time 60
5-10  Leader vehicle angle error versus time 60
A-1   Stepper motor assembly drawing, sheet 1 62
A-2   Stepper motor assembly drawing, sheet 2 63
A-3   Motor stage part for stepper motor assembly 64
A-4   Housing base for stepper motor assembly 65
A-5   Smart-motor mechanism assembly drawing, sheet 1 66
A-6   Smart-motor mechanism assembly drawing, sheet 2 67
A-7   Gear housing for smart-motor mechanism 68
A-8   Top bearing plate for smart-motor mechanism 69
A-9   Bearing input shaft for smart-motor assembly 70
A-10  Camera stage for smart-motor assembly 71
A-11  Bearing output shaft for smart-motor assembly 72

PAGE 10

10 Abstract of Thesis Presented to the Graduate School of the University of Florida in Partial Fulfillment of the Requirements for the Degree of Master of Science

VISION BASED ROBOTIC CONVOY

By Brandon T. Merritt

August 2009

Chair: Carl Crane
Major: Mechanical Engineering

This thesis covers the design and implementation of a computer vision based vehicle tracking and following system. Using multiple infrared targets arranged into an array on a vehicle, an infrared camera located on a following vehicle tracks the vehicle. The position of the leading vehicle can be determined based on the location of the targets in the image. To improve the operational angle of the system, a panning camera mechanism was designed and implemented. To improve the range of the system, a second infrared camera with a zoom lens was also used for tracking. The sensor was integrated with a vehicle controller, allowing for convoy operation without any additional sensor input. Typical accuracies for position data from the vision sensor were plus or minus one meter in each direction, with resolutions of 0.25 meter.

PAGE 11

11 CHAPTER 1
INTRODUCTION

Purpose

Military convoys in hostile environments pose a high safety risk to soldiers. Reducing the number of soldiers occupied with driving duties would improve safety. In order to reduce the manpower necessary for these operations, an automated system was designed that allows multiple unmanned vehicles to follow a leading manned vehicle. To eliminate the possibility of a third party disrupting the convoy, radio communication between vehicles is not used. Radio frequencies may be jammed to prevent insurgents from detonating improvised explosive devices, and as such no radio communications are used in this project. Instead, a tracking methodology using night vision infrared targets was developed. The path of the manned vehicle is measured, and the system plans a path for the remaining convoy vehicles to follow. Communication between vehicles could also be achieved by flashing the infrared targets in a unique coded sequence.

To provide a smarter convoy system, an implementation of the project was also carried out on a system outfitted with more sensor intelligence, including GPS, LADAR obstacle detection, and lane tracking. This system was designed to track and follow a leading vehicle without following its path explicitly, thus allowing intersection behavior as well as observation of highway safety requirements.

Design Specifications

Several requirements were outlined to ensure the system would perform reliably in a variety of environmental conditions. The system must have an operational range of 5 to 50 meters, with an error of no more than one meter in the calculated result. It must determine the relative location of a point on the lead vehicle, but measurement of the lead vehicle orientation is

PAGE 12

12 not required. Also, the convoy must be able to operate in day or night conditions as well as in the presence of precipitation. The infrared target array must be invisible to the human eye and consume no more than 100 watts of power. Both the array and the tracking camera must be easily mounted to vehicles in the convoy. In addition, the design must be modular, allowing quick installation and removal from convoy vehicles. This modularity reduces setup time and transition time to manual operation.

Approach

To minimize system complexity and cost, a CCD camera system is used to measure the relative position of the leading vehicle. An array of infrared targets is mounted onto the leading vehicle. Then, after initial calibration, a range is calculated based on the pixel distances between targets. Also, based on the position of the center of the array in the image and the angle of the panning motor, the angle of the vehicle point from the camera is calculated. The offset from the center of the image is subsequently used to control a servo motor, keeping the targets in the center of the image. The x and y locations relative to this sensor are then calculated based on a calibration with the camera lens setup, and transformed to the follower vehicle coordinate system based on the sensor position on the vehicle.

Two drive-by-wire vehicle platforms were used for the design. The first vehicle was a military truck actuated and automated by Autonomous Solutions Inc. (ASI) for the Convoy project. This truck was also equipped with a physical tether sensor used for convoy operations. The tether would attach to a leader vehicle and measure the relative range and angle from the follower. Engineers at ASI also developed a controller to perform basic path following using this data. The task here was to replace the physical tether with the vision based system. The other platform used for testing was the Urban Navigator, developed by CIMAR for the DARPA urban

PAGE 13

13 challenge. This vehicle is fully automated and allowed for testing of a smart convoy system. Figure 1-1 shows the two vehicles used for the project.

Software was written to track the targets with either one wide angle camera or with a wide angle camera and a telephoto camera. With the dual-camera setup, each solution is given a score based on comparing the area of the targets, the array geometry, and the average score of previous solutions. The solution with the higher score is then used to track and report position to the controller. The addition of a telephoto camera significantly extends the stable range of the system from 30 m to 60 m. Target array setups of two and three targets have been designed and implemented for tracking the leader vehicle and creating a working convoy system. The use of three targets greatly improved the environmental noise rejection, further extending system range and preventing almost any noise from being recognized as a solution. Multiple infrared targets were also designed and tested for use. The orientation of the vehicle is not calculated, in order to simplify the overall system design. Using a more complex array with additional infrared targets would allow for orientation measurement but is not necessary for a working vision based convoy.

Background Research and Review

Developing autonomous vehicle control and implementation has been an ongoing area of interest as technology becomes more advanced. Likewise, as technology has improved, user friendly and sensible integration of robotic vehicles has become more common in a variety of fields. Convoy based autonomous systems have been explored in a variety of ways. These implementations can range from communicative methods with relays working in non-field-of-view environments to target creation and visual camera servoing.

Early attempts in vehicle tracking have included creating models of vehicles. This required that the tracking system be very observant of a high number of details. However, the

PAGE 14

14 details were accurate for only a limited number of vehicles [1]. The clear problem with this is that vehicle tracking and subsequent following is most applicable on a large scale when there are numerous known details of the leading vehicle. Looking at such specific details creates a problem, since it is difficult to acquire specific data from all vehicles in existence.

To adjust the tracking ability to meet the needs of more generic target sizes and shapes, a group of researchers from France began trying to track pedestrians. Though this did not involve tracking an object in a way that another vehicle or robot would follow the lead, their experimentation did provide valued results. This group found that by using a pan-tilt camera system, they could track an object with no specific distinguishing features. Though they still observed minimal tracking errors, most of which can be attributed to reaction time, the findings demonstrate a level of success through a visual servoing system [2]. The addition of a servoing system allows a wider field of view for object tracking.

Zielke used symmetry to detect and track a leader vehicle. "Symmetry is a powerful concept that facilitates object detection and recognition in many situations [since] many objects display some degree of overall symmetry" [3]. In this system, symmetry provides a way in which to measure the distance between the lead and follower as well as a specific means to ensure target acquisition. In this example, "to exactly measure the image size of the car in front, a novel edge detector has been developed which enhances pairs of edge points if the local orientations at these points are mutually symmetric with respect to a known symmetry axis" [3]. In other words, symmetry, like shape recognition, can be used to aid in reducing environmental noise when selecting the target.

Another way to complete a convoy maneuver is through measuring the GPS position of the lead and the following vehicle. This idea has existed since the "1939 World's Fair [when] General Motors introduced the concept of automated highways with vehicles controlled

PAGE 15

15 longitudinally and laterally" [4]. Using GPS has advantages that radar sensors cannot provide. GPS data can identify vehicle location in a measurable variable (latitude and longitude), is low cost, and can be used to measure indirect distance [5]. GPS can also be important "to compensate for sensing limitations caused from having a limited (directional) view of the world with an imprecise localization of other vehicles" [4]. The problem with using GPS to locate and track a vehicle is that the "receiver must maintain its lock on each satellite's signal for a period of time that is long enough to receive the information encoded in the transmission" [6]. From research and experience, GPS data has little reliability when the signal is obscured by cloud cover, bends, overpasses, and tunnels.

Other research has taken the convoy concept and the issue of communication along a different route. The reason for developing in a different manner comes from the fundamental question regarding out-of-sight maneuvers. Some researchers are exploring the acquisition and use of relay nodes. The case follows that "the relay robots follow the lead robot and automatically stop where needed to maintain a solid communication network between the lead robot and remote operator" [7].

Recently, infrared beacons have been used as a point of reference in convoy tracking. In this example, "a series of IR beacons were developed to emit a specific band of IR to which the camera filter was matched" [8]. Based on this implementation, the following vehicle has no direct need for communication with the lead vehicle. In this way, it is pivotal that the following convoy vehicle be able to collect images with some camera structure and filter out unnecessary information. Similarly, on a smaller scale, others have used "real time target tracking to track the movement of the vehicle immediately ahead of itself in convoy" [9]. By using similar real time data acquired through camera systems, engineers have been able to create

PAGE 16

16 a small scale scenario where remote control vehicles can "follow its leader down a straight hallway and around corners in real time," thus showing that "convoy utilizing this architecture can work with at least two vehicles" [9]. In creating such systems, the convoy, through the potential use of targets and a following camera device to record images, is part of "the simplest and most efficient strategy"; in this way, "the lead robot is to perform no explicit communication and simply perform its actions in the most efficient manner possible" [10]. These approaches provide simple integration and show an improvement over earlier strategies.

A connecting feature between both convoy systems is the data recorded and its reliability in visual tracking. Most notably, "the visual tracking algorithm must maintain tracking in the presence of outdoor lighting conditions, including various forms of shadowing on the target and the roadway" [11]. It is important to note that "the leader vehicle may have to be more intelligent and sophisticated to explore the unknown environment and communicate this information to the followers. Followers may have less sophisticated and sensory capabilities," all of which helps justify the expense and rationale of using a robotic convoy [12].

Problems in visual tracking can seriously affect the likelihood of a working convoy system. Of these problems, there are three main categories that need to be addressed. First, one must address the actual object being tracked. As shown in previous research, the item being tracked needs to be recognizable. This was shown through symmetry, infrared beacons, and other characteristics. Several sources of literature have taken a different approach to address the object itself and its motion. Second, "the lighting in the environment can change causing the intensity of the object and all surrounding objects to change" [12]. Similarly, "if another object in the scene has features similar to the object being tracked, false alarms are generated" [12]. Finally, the last area of concern is the camera itself. When considering the camera, attention is paid to the

PAGE 17

17 focus and field of view. All three of the considerations listed above must be judged when creating a working tracking system for convoy operations [12]. Once addressed, this leads to the next obstacle of extracting real time images. In some systems, "the visual information that we want to extract from the camera images is the size (in image coordinates) of a vehicle driving ahead of us" [13]. From this model and camera integration, convoy vehicles will be controlled based on an ability to visually pick out the correct symmetry of the leading vehicle. By this, "the detection and tracking system exploits the symmetry property of the rear view of normal vehicles" [13].

Others have investigated more model based tracking systems. In this way, "visual servoing, which consists of controlling the motion of the robot based on visual features is among the most used techniques for tracking moving objects" [14]. In implementing a visual servoing system, many researchers have found the convoy task, or mere vehicle tracking, to be quite challenging. Many experts have found this "task is particularly challenging to accomplish due to the wide range of situations that must be taken into account, i.e. moving camera, cluttered background, partially occluded vehicles, different vehicle colors and textures" [15].

In order to address these challenges, one can look at the most recent inclusion of infrared emitters and tracking. Infrared tracking involves using a camera to find, filter, and organize infrared images in a way that the convoying vehicle can find an acceptable following solution. From this development, infrared tracking extends systems into a new realm of data acquisition. The "IR cameras are able to extend vision beyond the usual limitations"; they can work beyond those systems that "simply mimic human vision" [15]. In addition to target data collection, particular emphasis has been placed on distance estimation. In calibrating and recording valid distance data, "a mere monocular calibration is too sensitive to vehicle (and thus camera) pitch and roll and to a non-flat road slope" [15]. As

PAGE 18

18 previously found on small scale models not using infrared technology to track, there is a need "to avoid the risk of collision while maintaining assured linkage of the convoy" system [10]. In this way, as technology improves, we are still bound by some of the more basic constraints that have existed for an extended time in developing a convoy system.

Further supporting distance estimation through effective measurement, researchers from Graz University of Technology in Austria used more than calibration and slope to determine the location of beacons. These researchers investigated beacon tracking by using "two CCD-cameras capturing the position of several red light emitting diodes (LEDs) [where] the position of the beacons in 3-d space is reconstructed out of the two 2-d images provided by the cameras" [16]. This camera system is similar to the one created at the University of Tennessee, where the "image created by the camera is a 2-D interpretation of a 3-D environment" [12]. In Madritsch and Gervautz's proposed tracking system, the use of two CCD cameras allows for creating a geometric relationship between all variables and determining the actual distance of the LED beacons [16]. This is later facilitated for use in convoys since "LEDs can be easily recognized and separated from the background without the need of defined lighting conditions [and] a further advantage is that rotationally symmetric LED's do not change their appearance much when viewed from different directions" [16]. Overall, the approach with infrared beacons that Madritsch and Gervautz applied created very positive and reliable results with position data. This is highly applicable to further investigation and study in the area of autonomous convoy solutions.

Once the variables of communication, selecting beacons or targets to follow, camera image, and following distance have been decided upon, research has also shown three main considerations for the formation of the convoy. First, the following vehicle needs to make a

PAGE 19

19 decision to join a convoy or leave a convoy [17]. This can be completed in a convoy where the following vehicle has advanced sensory ability and communication. Second, "once the vehicle has joined the convoy, its driving is influenced by the presence of the other vehicles in the convoy" [17]. In this way, the follower needs to be responsive to the changes in speed and trajectory in the path. Finally, some suggest that advanced ability following vehicles should have a direct impact on the convoy itself. This is to say that the following vehicle is "negotiating with the other agents participating in the coalition" [17]. However, it is important to note that for efficiency and economic value, the following vehicle does not have to be as advanced as the leading vehicle [12]. This decision would then simplify the convoy and dismiss some of the communication variables.

Based on the completed background research, there is considerable opportunity for further research and investigation in the field of convoy systems. With the more recent advancement in infrared technology and the availability of powerful infrared LED emitters, one can take prior findings and move forward to generate a more responsive and robust autonomous convoy system. These new investigations are highly applicable to current world and lifestyle situations. First, "military applications of convoy driving are the most obvious [also] other applications can be found, for example, in flexible factories, where an automated robotic convoy is used for product transportation" [14]. In fact, actual "target classification and tracking is one of the key battlefield tactical applications" of the convoy system [18]. Taking all of these factors, limitations, and developments into account has led to the following experimental development of a convoy system using infrared targets and a servoing camera to follow a fully autonomous lead vehicle's beacons at all times.

PAGE 20

20 Figure 1-1. Vehicle test platforms

PAGE 21

21 CHAPTER 2
INFRARED TARGET DESIGN

Design Requirements

In order to make the system more covert, infrared markers were chosen over traditional visible spectrum lights. The designed targets must be at least twice as bright as ambient light during daytime use within the operating range of the system. In order to ensure safety for operators, the targets must be eye safe and not distracting to a vehicle operator. Also, to extend battery life and keep the array mobile, each target must not use more than 50 watts of power. In addition, the light output from the targets must be uniform, with a horizontal cone angle of at least 180 degrees and a vertical cone angle of at least 45 degrees. Finally, the targets must be water resistant and able to withstand ambient temperatures of 100 degrees Fahrenheit or more. Several revisions of targets were designed and built in order to meet these rigid requirements.

Illuminator Selection and Capabilities

For proof of concept, commercial off-the-shelf Lorex VQ2121 12VDC infrared illuminators were chosen. This minimized complexity and system design time. The emitters performed reliably and with low power consumption. However, the illuminators proved to be too directional and did not provide the required horizontal viewing angle. The cone angle for these emitters was rated at 100 to 120 degrees, but was actually closer to 90 degrees after testing. To meet the required 180 degree horizontal cone, a mount was designed to position three emitters at 45 degree angular separation for each target. Figure 2-1 shows the basic design and fabrication of the target.

As shown in Figure 2-1, the infrared light is visible to the digital camera. However, looking at the target solely with human eyes, the infrared light appears very dim. Although

PAGE 22

22 the target was visible for the required 180 degree horizontal cone angle, the light from the target was not uniform and presented problems with the image processing and tracking. Due to classification as more than one target within the image, the system experienced dropouts around turns and at intermediate angles. This problem would hamper the convoy application in real life situations.

Prototype Omni-directional Target Design

To improve tracking around corners and at all angles, a marker with a more continuous light distribution was designed and tested. The infrared LEDs from the LOREX emitters were removed from their boards and tested in a rapid prototyped housing. The resulting light distribution was improved significantly without a loss in operational range. The LEDs were bench tested to find the ideal operating voltage by measuring the brightness in the CCD camera compared to the power consumption. The best brightness relative to power consumption was found at a forward voltage of 1.7 V. Using a 12 VDC battery for power, sets of 7 LEDs were combined in series. The series strings were then wired in parallel to reach the approximate desired forward voltage drop for each LED. Because a battery was used, the supply voltage can vary between 10.5 and 14 volts depending on the state of the vehicle. However, under normal operating conditions, the voltage is approximately 12.5V. For this variation in supply voltage, the individual LEDs would need to handle voltages between 1.5 and 2.0V. This variation in voltage seemed to be within the safe range for the LEDs. Thus, to reduce power consumption, resistors were not added in series with the LEDs. Figure 2-2 shows the prototype target design and fabrication.

The improved distribution of light around the entire target greatly improved system performance. The targets functioned well during initial testing; however, higher ambient temperatures during operation later caused reliability problems. The effect of increased junction

PAGE 23

23 temperature on the maximum forward voltage was not considered in the design. Without a regulated source, some LEDs were damaged. Due to the lack of a circuit board design and the large quantity of LEDs in the housing, repair was difficult, which motivated the design of a more reliable and robust target prototype.

Target Design I

The next design of the infrared target required the omni-directional light output of the prototype target. The target should be the same size, with a radius of approximately three inches. This was completed by focusing on the reliability of the LOREX illuminators. As an independent study project, two electrical engineering students designed the circuit board for the illuminator. The housing design was also improved significantly from the prototype. This change included protecting the LEDs from water damage and allowing easy access to the boards for repair and maintenance. Figure 2-3 shows this target system.

This target revision provided excellent brightness and light output, but consumed significantly more power than the LOREX illuminators. Power consumption data was measured using a variable DC power supply and measuring the current and voltage of the target. At a normal battery voltage of 12.5V, the target consumed 50W of power. From testing with the CCD camera, the relative brightness compared to that at 12.5V was measured qualitatively. Figure 2-4 summarizes the power consumption and brightness of the target. The relative brightness was calculated by taking a snapshot with the infrared CCD camera, defining a rectangular boundary region for each target, and then integrating the pixel values across the bounded target area using code developed in MATLAB. Higher pixel values indicated a brighter target. From these plots, the target demonstrated the best efficiency between 20 and 30 watts (10.6-11.3V). The target was approximately half as efficient near the maximum operating voltage. Operation at maximum voltage for extended periods could cause thermal failure of the

PAGE 24

24 LEDs and resistors. However, at voltages of 12.5V or less, extended running periods did not cause a mechanical problem. A cost analysis of the materials needed to manufacture the target is presented in Table 2-1. As shown in the table, each target has a material cost of 327 dollars. The high cost and efficiency problems at higher voltages motivated the design of yet another infrared target.

Target Design II

The second target design was carried out using high power infrared LEDs. Unlike the 5mm LEDs generally used for indicators or electronic remote controls, these illuminators are designed for illumination. These LEDs generally range from 0.5 to 5 watts and are designed to be driven via constant current control. Most power LEDs also come mounted on a heatsink to improve thermal dissipation and increase operational efficiency as well as lifespan. Also, the smaller number of LEDs necessary for a sufficiently bright target to meet range requirements simplifies wiring and eliminates the need for more expensive circuit boards. This improves the overall system application. Inexpensive infrared LEDs rated at 0.5W were selected to test out the new type of LEDs. However, no data sheet was available to provide general specifications of the LEDs. Bench testing was then carried out in order to select a proper constant current driver. The graph in Figure 2-5 shows the resulting forward voltage versus forward current for the LEDs. From testing, the most efficient operation was found at 0.6 amps, making the actual LED power consumption approximately 1.2W each. To reach the necessary brightness, it was estimated that a power consumption of 20 watts was necessary; thus a quantity of 16 LEDs was chosen. Finally, a higher design input voltage of 24VDC was chosen to lower the required current for the driver.

PAGE 25

25 To power the LEDs, a 1 amp constant current driver with internal adjustment between 40 and 110% was chosen to power two series groups of 8 LEDs in parallel. Each series set would then be driven at a little over 0.55A. The selected driver was a LuxDrive BuckPuck 3021D-I-1000, which requires an input voltage higher than the total forward voltage drop of the LEDs plus a 2 to 4V input margin. A datasheet is provided in Appendix A. In order to mount and cool the LEDs, a computer heatsink and fan was selected. To protect the system against rain and impact, an inexpensive clear Sterilite brand plastic container was selected and used as an encasement. Angle brackets and a spacer were fabricated to mount the components to the array pole. Figure 2-6 shows the design and fabrication of the second infrared target.

After testing the selected driver, the maximum current was close to 1.2A, which provided 0.6A for each series set of LEDs. The total forward voltage drop for eight LEDs at 0.6A is 15V, requiring an input voltage of at least 17VDC. The maximum permissible input voltage for the driver is 32VDC. For cooling, a 12V (nominal) fan is also powered off the input voltage. The fan was tested and worked at up to 30V. In order to test this design and fabrication, a 19VDC laptop power supply was used. Upon testing, the targets performed with high reliability and efficiency. The total power consumption was 22W per target. Brightness was reduced from the previous design but still sufficient for the required operating range. Improved thermal dissipation and air circulation, coupled with reduced heat generation, eliminated the thermal issues experienced with previous target designs. Due to constant current control, variations in supply voltage did not affect the brightness of the target. From testing with the CCD camera, the brightness in the camera is approximately 75% of the brightness of the first design (at 12.5V) but uses 50% less power,

PAGE 26

26 making the new design 50% more efficient than the previous target. A snapshot taken with the CCD camera with an 850nm band pass filter is shown in Figure 2-7. Table 2-2 summarizes the cost of materials for the second target design. As shown, the cost was reduced to 30% of the cost of the first design. Both targets functioned well in operation, but the second target design provided a less expensive alternative with lower power consumption than the first design while maintaining comparable brightness.

Figure 2-1. Infrared target concept

Figure 2-2. Prototype infrared target design
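As a quick sanity check of the electrical sizing described above, the numbers reported in the text can be combined as shown below. This is a minimal worked calculation; the per-LED forward drop of roughly 1.9 V is inferred from the stated 15 V string total at 0.6 A rather than taken from a datasheet.

```latex
% Worked check of the Target II sizing, using values stated in the text.
\begin{align*}
V_{string} &\approx 8 \times 1.9\,\mathrm{V} \approx 15\,\mathrm{V}
  && \text{(eight LEDs in series at 0.6 A)}\\
V_{in,min} &\approx 15\,\mathrm{V} + 2\,\mathrm{V} = 17\,\mathrm{V}
  && \text{(driver requires a 2--4 V margin; 32 V maximum input)}\\
I_{total}  &= 2 \times 0.6\,\mathrm{A} = 1.2\,\mathrm{A}
  && \text{(two parallel strings of eight LEDs)}\\
P_{LED}    &\approx 16 \times 1.2\,\mathrm{W} \approx 19\,\mathrm{W}
  && \text{(compared with the 22 W measured per target, including driver and fan)}
\end{align*}
```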

PAGE 27

27 Figure 2-3. Target I design and fabrication

Figure 2-4. Target I design. A) Power consumption. B) Relative brightness.
[Figure 2-4A plot: Target I power consumption; power consumption (W) versus supply voltage (VDC).]

PAGE 28

28 Figure 2-4. Continued
[Figure 2-4B plot: Target I relative brightness; relative brightness versus power consumption (W).]

Table 2-1. Target I cost analysis
Component        Quantity   Rate            Cost (USD)
Material         25 in^3    5 USD/in^3      $125
LEDs             168        0.25 USD/unit   $42
Circuit boards   6          25 USD/unit     $150
Hardware                                    $10
TOTAL                                       $327

PAGE 29

29 Figure 2-5. Power LED voltage vs. current data
[Figure 2-5 plot: power IR LED voltage vs. current; voltage (V) versus current (amps).]

Figure 2-6. Infrared target II design and fabrication

PAGE 30

30 Figure 2-7. Target design visual comparison from CCD camera

Table 2-2. Target II cost analysis
Component              Quantity   Rate (USD/unit)   Cost (USD)
BuckPuck 1000 driver   1          18.00             $18
LEDs                   16         2.65              $48
Heat sink and fan      1          15.00             $15
Hardware                                            $15
TOTAL                                               $96

PAGE 31

31 CHAPTER 3
IMAGE PROCESSING AND COMPUTER VISION

In order to calculate the position of a leading vehicle, two infrared markers are separated vertically and positioned on the leading vehicle, allowing for the calculation of distance and angle in the image. Matrix Vision Blue Fox USB CCD cameras were selected for the project due to availability, simple connectivity, excellent image quality, and fast frame rates.

Camera Setup

For the implementation of the project with Autonomous Solutions Inc., one camera equipped with a 4mm focal length wide angle lens was used. To improve system range, a second camera with a 25mm telephoto lens was added. For the project implementation with ASI, a 4mm wide angle lens was selected in order to allow operation at close ranges. An additional camera was considered but not included, to reduce the cost and complexity of the system. To improve accuracy and reduce noise, a high resolution 1024x768 CCD was paired with the lens. A large target separation distance of 1.5 meters was chosen to increase the resolution of the image processing solution. In daylight conditions, the maximum system range was limited to approximately 30 meters. This was sufficient for tracking and following with the Autonomous Solutions controller at speeds up to 20 mph; however, increased range was desired. A second, lower resolution camera equipped with a 25mm lens was then added to the system. The lower resolution camera was selected to improve image processing speed. The lower resolution is also acceptable because less accuracy is needed at longer ranges. This camera setup extended system range to approximately 60 meters, and improved stability and accuracy at ranges near the maximum range limits of the 4mm lens setup.

PAGE 32

32 Theoretical and Empirical Vision Models

In order to derive the relationship between the separation distance between two targets fixed at a known spacing, the offset in the image, and the actual relative position of the target array, a geometric model was developed using SOLIDWORKS 2009. This model approximates that all light reaching the CCD is directed through the lens focal point and that aberrations caused by variation in wavelength are negligible. These assumptions should be valid since only light within a small bandwidth inside the infrared spectrum is observed. Figures 3-1 and 3-2 define the measured pixel distances and show the geometry of the projection of light from the infrared emitters onto the CCD. Using geometry from the three dimensional model and unit conversion between pixels and length, the following relationships describing the angle and range of the target array were formed:

\theta\,[\mathrm{deg}] = \frac{180}{\pi}\,\tan^{-1}\!\left(\frac{P_{length}\,[\mathrm{mm/pixel}]\;P_{offset}\,[\mathrm{pixels}]}{f\,[\mathrm{mm}]}\right)   (3-1)

Range\,[\mathrm{m}] = \frac{f\,[\mathrm{mm}]\;D_t\,[\mathrm{m}]}{P_{length}\,[\mathrm{mm/pixel}]\;P_{separation}\,[\mathrm{pixels}]\;\cos(\theta)}   (3-2)

where Plength is the length in mm of one pixel on the CCD, Poffset is the number of pixels between the center of the image and the center of the targets, f is the focal length of the camera, Dt is the actual distance between the top and bottom targets in the array, and Pseparation is the number of pixels between the top and bottom targets in the image. Because of servo tracking, the target array will be centered in the image, with viewing angles limited to plus or minus 10 degrees. For small angles, the inverse tangent can be approximated by its argument. At 10 degrees, this approximation introduces an error of 0.10 degrees. This error is acceptable since it is much smaller than other errors in the system, such as

PAGE 33

33 the 0.5 degree resolution of the stepper motor used for tracking. Therefore a simplified model for theta is derived as:

\theta\,[\mathrm{deg}] = \frac{180}{\pi}\,\frac{P_{length}\,P_{offset}}{f}   (3-3)

Assuming the lens is well constructed, the target separation distance is measured without error, and the camera lens setup produces a rectilinear image without distortion, the model should relate the defined pixel distances in the image to the actual position of the vehicle with good accuracy. Further, since the camera model is generic to the lens focal length, a variable focal length motorized lens could be used to increase system range and accuracy without the addition of another camera.

To verify the accuracy of the theoretical model, empirical data was collected and analyzed. Using an existing grid for measurement, x and y distances from a fixed origin to different intersections on the grid were measured with a tape measure. The camera lens setup was then located at the origin, and the angle was calibrated to assure that no horizontal pixel offset from center was measured with the target array positioned at zero degrees. The actual position of the targets was recorded along with the measured horizontal pixel offset from the center of the image and the spacing between the top and bottom targets. The measurements were repeated for different target separation distances, Dt, to assure that the range was linear with target separation as derived through the theoretical model. Because of the capability of servo tracking of the target array, the targets will be kept in the middle of the recorded image a majority of the time. For small angles, the inverse tangent is linear; thus a linear calibration model for the vehicle angle was estimated:

\theta\,[\mathrm{deg}] = C_1\,P_{offset}\,[\mathrm{pixels}]   (3-4)

PAGE 34

34
Range\,[\mathrm{m}] = C_2\,\frac{D_t\,[\mathrm{m}]}{P_{separation}\,[\mathrm{pixels}]\,\cos(\theta\,[\mathrm{deg}])}   (3-5)

Values of the constants C1 and C2 were calculated using a least squares regression analysis. Plots of the calibration data and residuals are given in Figures 3-3 and 3-4. The figures show the collected data followed the derived model with very little deviation. For the range of the collected angle data, the approximation of a linear angle calibration did not add significant error to the system. To verify that the calibrated model matches the theoretical model without error, predicted values of the calibration coefficients were determined using the lens focal length, camera CCD size, and camera resolution. The calculated values of the regression coefficients, theoretical coefficients, and camera properties are presented in Table 3-1. The standard deviations of the residuals were analyzed to predict the error in the models. From the residual plots, the accuracy of range predictions increases with increasing pixel separation in the image. Although the calibration model seemed to predict the position with less error, there are other small errors in measurement. For example, errors in the dimensions of the grid and the measured target separation could be built into the coefficients of the calibrated model, falsely improving the results of the calibrated model. To make the system more flexible to adjustments in camera and lens setup, the theoretical model is preferred. The estimated uncertainties in the models are shown in Table 3-2. Because of the uncertainty in data collection measurements, it is concluded that the prediction errors of the two models are not significantly different. The theoretical model was chosen because of its adaptability to different camera setups without recalibration. This makes it highly modular and applicable to any convoy camera lens setup. However, any new camera and lens setup should be checked to assure the optics have predictable characteristics and follow the model before usage.
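To make the theoretical model concrete, the sketch below evaluates Equations 3-1 through 3-3 for a single set of pixel measurements. It is an illustrative sketch only: the pixel pitch, focal length, target spacing, and measured pixel values are assumed placeholder numbers, not the calibrated parameters reported in Table 3-1.

```cpp
#include <cmath>
#include <cstdio>

// Theoretical camera model (Equations 3-1 to 3-3): converts the measured
// pixel offset and pixel separation of the target array into the leader's
// angle and range.  All parameter values here are illustrative placeholders.
namespace {
const double kPi = 3.14159265358979;

struct CameraModel {
    double pixelLengthMm;   // P_length: size of one CCD pixel [mm/pixel]
    double focalLengthMm;   // f: lens focal length [mm]
    double targetSpacingM;  // D_t: spacing between top and bottom targets [m]
};

// Angle of the target array from the optical axis [deg], Eq. 3-1.
double targetAngleDeg(const CameraModel& c, double offsetPixels) {
    return (180.0 / kPi) *
           std::atan(c.pixelLengthMm * offsetPixels / c.focalLengthMm);
}

// Small-angle approximation of the same quantity, Eq. 3-3.
double targetAngleDegApprox(const CameraModel& c, double offsetPixels) {
    return (180.0 / kPi) * c.pixelLengthMm * offsetPixels / c.focalLengthMm;
}

// Range to the target array [m], Eq. 3-2.
double targetRangeM(const CameraModel& c, double separationPixels,
                    double angleDeg) {
    return (c.focalLengthMm * c.targetSpacingM) /
           (c.pixelLengthMm * separationPixels *
            std::cos(angleDeg * kPi / 180.0));
}
}  // namespace

int main() {
    // Hypothetical wide-angle setup: 4 mm lens, 0.005 mm pixels, 1.5 m targets.
    CameraModel cam{0.005, 4.0, 1.5};
    double offsetPx = 40.0, separationPx = 120.0;   // example measurements

    double theta = targetAngleDeg(cam, offsetPx);
    double range = targetRangeM(cam, separationPx, theta);
    std::printf("angle = %.2f deg (approx %.2f), range = %.2f m\n",
                theta, targetAngleDegApprox(cam, offsetPx), range);
    return 0;
}
```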

PAGE 35

35 Image Processing Methodology and Coordinate Transformations

Image processing software written in C++ using the OpenCV library was used to process real time camera data. A single channel grayscale image is acquired from the Matrix Vision Blue Fox CCD cameras. With the addition of infrared band pass filters, ambient noise is greatly reduced. Thresholding to a binary image is then carried out to identify bright areas within the image. Lastly, continuous contours from the binary image are identified and saved into an array of structures containing the x location, y location, and contour area within the image.

For an array containing two vertically spaced targets, comparing potential targets within an image is only a matter of checking if the contours are close to vertical and then comparing the areas and aspect ratios to see if a matching pair of targets is found from the contour data. A score is given to each possible solution based on the area ratio, angle from vertical, comparison of the aspect ratios, and the variation from the previous three solution sets. The highest scoring solution from the group is then selected. From the x and y locations of the contours, the pixel separation and offset from center are calculated. The pixel offset is then driven to zero using the panning mechanism.

To improve noise rejection and to extend range, a third target located equidistant in a line between the top and bottom targets was added. With the third target, sorting of the contours from highest to lowest y position must be performed before checking if the set is a match. Several additional checks are performed to reject sets of contours that are not a match. To check if the three contours are in a line, the distance between the top and bottom contours is compared to the sum of the distances between the top and middle and between the bottom and middle contours. Also, the distance between the top and middle and the distance between the bottom and middle are

PAGE 36

36 compared to check if the contours are equally spaced. These checks eliminated random environmental noise, allowing acceptance of more possible targets.

The overall solution scoring function results in a non-dimensional value ranging from 0 to 1. This confidence value is calculated for the best solution in each camera. For comparing the areas, the following function generates a score associated with how well the areas match, where aave is the average area of the potential targets, ai is the area of an individual contour, and n is the number of contours to be compared:

s_i = \begin{cases} a_i / a_{ave}, & a_i < a_{ave} \\ a_{ave} / a_i, & a_i \ge a_{ave} \end{cases}   (3-5)

a_{score} = \prod_{i=1}^{n} s_i   (3-6)

To score the current solution based on the previous three iterations, the following function was designed:

p_{score} = \left(1 - \frac{|\Delta Range|}{R_{max}}\right)\left(1 - \frac{|\Delta\theta_{vehicle}|}{360}\right)\left(1 - \frac{|\Delta\theta_{tilt}|}{360}\right)   (3-7)

In the above function, Rmax is the maximum allowable system range, θvehicle is the angle of the leading vehicle measured in the sensor coordinate frame, and θtilt is the relative roll angle of the array in the image, with each difference taken between the current solution and the average of the previous three solutions. If there is no change between the current solution and the average of the last three solutions, the pscore function will be equal to one. The more the solution differs from the last three, the closer the score gets to zero.

For the system using three targets, the contours are numbered from one to three, sorted from highest to lowest y position, and a score, based on how closely the geometry of the array matches the actual geometry, is calculated. Also, the total solution score is calculated from the three sub-scores.

g_{score} = \left(1 - \frac{(d_{12} + d_{23}) - d_{13}}{\max(d_{12} + d_{23},\; d_{13})}\right)^{e_1}\left(1 - \frac{|d_{12} - d_{23}|}{\max(d_{12},\; d_{23})}\right)^{e_2}   (3-8)

where dij is the pixel distance between contours i and j, and e1 and e2 are tuned exponents.
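The thesis' detection code is not reproduced here; the sketch below is a simplified illustration of the steps just described (thresholding, contour extraction, vertical sorting, and the collinearity and equal-spacing checks for a three-target array) using the OpenCV C++ API. The threshold value and tolerances are placeholder numbers, not the values actually used in the system.

```cpp
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
#include <vector>
#include <algorithm>
#include <cmath>

struct Blob { double x, y, area; };   // contour centroid and area

// Extract bright blobs from a single-channel infrared image.
std::vector<Blob> findBrightBlobs(const cv::Mat& gray, double threshValue) {
    cv::Mat binary;
    cv::threshold(gray, binary, threshValue, 255, cv::THRESH_BINARY);

    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(binary, contours, cv::RETR_EXTERNAL,
                     cv::CHAIN_APPROX_SIMPLE);

    std::vector<Blob> blobs;
    for (const auto& c : contours) {
        cv::Moments m = cv::moments(c);
        if (m.m00 <= 0.0) continue;
        blobs.push_back({m.m10 / m.m00, m.m01 / m.m00, cv::contourArea(c)});
    }
    return blobs;
}

// Check whether three blobs look like the vertical, equally spaced target
// array: sort by y, then apply the collinearity and equal-spacing tests.
// Tolerances (5% and 10%) are illustrative placeholders.
bool isTargetTriple(Blob a, Blob b, Blob c) {
    std::vector<Blob> s = {a, b, c};
    std::sort(s.begin(), s.end(),
              [](const Blob& p, const Blob& q) { return p.y < q.y; });

    auto dist = [](const Blob& p, const Blob& q) {
        return std::hypot(p.x - q.x, p.y - q.y);
    };
    double dTopMid = dist(s[0], s[1]);
    double dMidBot = dist(s[1], s[2]);
    double dTopBot = dist(s[0], s[2]);

    // Collinearity: the two half-spans should sum to the full span.
    bool collinear =
        std::fabs((dTopMid + dMidBot) - dTopBot) < 0.05 * dTopBot;
    // Equal spacing: the two half-spans should be nearly the same length.
    bool equallySpaced =
        std::fabs(dTopMid - dMidBot) < 0.10 * std::max(dTopMid, dMidBot);
    return collinear && equallySpaced;
}
```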


The individual exponents were selected based on the sensitivity of each parameter used for scoring; Table 3-3 summarizes the selected values of the scoring parameters. These scoring functions proved to be very effective in selecting the appropriate target from the image. Figure 3-5 shows a screenshot of the software and the scoring functions' ability to reject noise.

If a solution is found in more than one camera, the solution with the higher score is selected, used for servo tracking, and then transformed from the sensor coordinate system to the vehicle coordinate system. This relative position can be used for basic convoy behaviors such as path following. To find the absolute position of the leader vehicle, the latitude, longitude, and yaw of the follower vehicle are measured using GPS and inertial measurement sensors on the vehicle, which allows the UTM position of the follower vehicle to be calculated. Using the calculated relative position data, a transformation to the UTM position of the leader vehicle is carried out. Figure 3-6 shows the coordinate systems and parameters for the transformations. The position of the leader in the UTM coordinate system and the position of the leader in the follower coordinate system are formulated in Equations 3-10 through 3-14: the homogeneous transformation from the follower (vehicle) frame to the UTM frame is a rotation about the vertical axis by the follower yaw together with a translation by the follower UTM position (Equation 3-10), and the transformation from the sensor frame to the follower frame is a pure translation by the sensor mounting offset (Equation 3-11).


The leader position measured in the sensor frame is written as a homogeneous point (Equation 3-12). Premultiplying this point by the two transformations gives the leader position in the UTM frame in terms of the follower UTM coordinates, the follower yaw, the sensor mounting offsets, and the measured relative position (Equation 3-13), while the leader position in the follower coordinate system follows from the sensor-to-follower translation alone (Equation 3-14).

JAUS Software Design

In order to make the system more modular, the Joint Architecture for Unmanned Systems (JAUS) standards were followed for the software design. This design allows for plug-and-play setup with other sensors and subsystems; further information is available on the OpenJAUS website [19]. It also enabled integration with existing components such as the Global Position and Orientation Sensor (GPOS). With the components running on separate computers, or nodes, the convoy sensor establishes a service connection with the GPOS component and receives a JAUS message containing the latitude, longitude, and yaw of the follower vehicle. For the components to communicate, the OpenJAUS node manager must also be running on each machine. UTM initialization and conversion from latitude and longitude are performed using functions built into the CIMAR core library. Image capture from the CCD cameras is carried out in separate threads, increasing component speed significantly. Stepper motor homing and control are also handled inside the software. Using one camera, an update rate of approximately 30 Hz is typical on the current computer setup; using two cameras drops the update rate to around 15 Hz.
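As a concrete illustration of the relative-to-absolute transformation of Equations 3-10 through 3-14, the sketch below converts a leader position measured in the sensor frame into UTM coordinates. The frame conventions (yaw measured clockwise from UTM north), the symbol names, and the single forward sensor offset are assumptions for illustration, not the thesis's exact formulation.

#include <cmath>

// Hypothetical types for illustration.
struct Utm  { double northing; double easting; };   // meters
struct Pose { Utm position; double yawRad; };        // follower state from GPOS
struct Rel  { double forward; double right; };       // leader in the sensor frame (m)

// Assumed convention: yaw measured clockwise from UTM north; the sensor is
// mounted a fixed distance ahead of the vehicle origin along the vehicle x-axis.
Utm leaderUtm(const Pose& follower, const Rel& leaderInSensor,
              double sensorForwardOffsetM)
{
    // Leader in the follower (vehicle) frame: sensor offset plus measurement.
    const double xV = leaderInSensor.forward + sensorForwardOffsetM;  // ahead
    const double yV = leaderInSensor.right;                           // to the right

    // Rotate into the UTM frame by the follower yaw and add the follower position.
    const double c = std::cos(follower.yawRad);
    const double s = std::sin(follower.yawRad);
    Utm out;
    out.northing = follower.position.northing + c * xV - s * yV;
    out.easting  = follower.position.easting  + s * xV + c * yV;
    return out;
}

In practice the follower pose would come from the GPOS service connection each iteration, and the relative measurement from the image processing solution.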


Figure 3-1. Projection of targets onto the CCD.

Figure 3-2. Geometrically calculated range and angle from pixel distances.


Figure 3-3. Calibration data and corresponding residuals for the 25 mm lens and 648x488 camera. A) Angle calibration. B) Angle calibration residual. C) Range calibration. D) Range calibration residual.


Figure 3-3. Continued.


Figure 3-4. Calibration data and corresponding residuals for the 4 mm lens and 1032x776 camera. A) Angle calibration. B) Angle calibration residual. C) Range calibration. D) Range calibration residual.


Figure 3-4. Continued.


Table 3-1. Summary of camera and lens setups and corresponding calibration coefficients.

Parameter                                             Camera 1 setup   Camera 2 setup
Horizontal length (pixels)                            648              1032
Vertical length (pixels)                              488              776
Actual pixel length (mm/pixel)                        7.407E-03        4.651E-03
Rated lens focal length (mm)                          25               4
Theoretical value of C1 (pixels)                      3375.0           860.0
Theoretical value of C2 (deg/pixel, small angles)     0.01698          0.06662
Calibrated value of C1 (pixels)                       3395.6           874.8
Calibrated value of C2 (deg/pixel, small angles)      0.01729          0.06467
Percent error in C1 calibration (%)                   0.61             1.73
Percent error in C2 calibration (%)                   1.85             2.94
Predicted focal length from calibrated C1 (mm)        25.15            4.07
Predicted focal length from calibrated C2 (mm)        24.55            4.12

Table 3-2. Summary of uncertainties for camera models.

Parameter                            1032x776, f=4mm    1032x776, f=4mm    648x488, f=25mm    648x488, f=25mm
                                     (theoretical)      (calibration)      (theoretical)      (calibration)
Range error (m)                      0.3                0.2                0.15               0.1
Percent full-scale range error (%)   3.3                2.7                1.5                1.4
Angle error (deg)                    0.5                0.4                0.2                0.1

Table 3-3. Scoring parameters.

Rmax [m]   R_exp   tv_exp   tt_exp   l_exp1   l_exp2   e_exp1   e_exp2
120        6       4        4        800      1.75     40       2.5

Figure 3-5. Example of a software-identified target array.


Figure 3-6. Parameters and coordinate systems used for the point transformation.


CHAPTER 4
PANNING CAMERA MECHANISM DESIGN

Requirements

In order to allow operation of the convoy around tighter turns and to allow for the use of a zoom lens, a panning mechanism was added to the system design. The design of the mechanism had several requirements. The mechanism must be accurate, controllable, and capable of a minimum speed of one rotation per second to allow sufficient tracking around corners. To prevent damage from rain or debris, the housing for the mechanism must be splash proof and protect against incidental contact. Two implementations of the mechanism were designed and used: for development with the ASI system, a servo motor and gearbox were designed; later, for use on future convoy systems, a more compact, lower-cost stepper motor solution was developed. Both designs are outlined below.

Mechanism Designs

For the system used with Autonomous Solutions Inc., an Animatics smart motor equipped with a 30:1 worm gearbox was designed. The smart motor was chosen to reduce software development time and to improve mechanism controllability, and because of the low torque requirements and small packaging space, the SM2315D smart motor model was selected. With 27 oz-in of continuous torque at 5000 rpm, the addition of a gear reducer was essential. For controllability during small angle adjustments, increased output torque, low-speed tracking, and a non-back-drivable output, a worm gear reducer was chosen. Because the system was packaged in one enclosure, the form factor of the reducer was a design constraint; to reduce the footprint of the gearbox and to provide accurate, low-backlash positioning, a custom reducer was designed and fabricated for the project. To achieve the approximate desired maximum output speed, a gear reduction of 30:1 was chosen.


This reduction gives a maximum output speed of 2.7 rotations per second and a maximum output torque of 810 oz-in, or 4.2 ft-lb; however, this output torque was limited by the gearbox design, which did not need to support torques of this magnitude. The output load was also balanced such that the center of mass was very close to the axis of rotation. Because little output loading would be present, keyways were not used to lock components onto the input and output shafts; simple set-screw connections were used instead. For the worm wheel and worm gear, gears from WM Berg were selected because of their compact size and low-backlash properties. A 60-tooth anti-backlash worm wheel paired with a double-threaded precision worm gear formed the desired 30:1 reduction, and with precise positioning of the gearing, backlash can be neglected. Ball bearings were used on each side of the gear pair to handle shaft loading, and a limit switch was incorporated into the mechanism for angular reference. Mechanical drawings and a bill of materials for the gearbox design are attached in Appendix A for reference, and a summary of the mechanism cost is given in Table 4-1.

In order to make the entire system modular, the power system, computer, and camera mechanism were packaged in a single, easily installed housing. A NEMA 4X aluminum enclosure was modified through the addition of an acrylic window for the camera mechanism, external heat sinks to improve thermal dissipation, power switches, power plugs, and data output plugs. Figure 4-1 shows the system used with the ASI controller.

For the implementation of the project on the Urban Navigator, existing computing and power resources were located inside the car. Because of the size and weight of the system used for Autonomous Solutions, a new, more compact panning mechanism was designed in a separate housing. To reduce the cost of the mechanism, a stepper motor was chosen because of its capability of high torque at low speeds without the need for additional gearing.


Also, because open-loop control is possible by keeping track of motor steps from a homed position, an encoder was not needed, and the selection of an appropriate driver was the only other requirement. Many stepper motor drivers were considered for the project. Because of its simple USB interface, low cost, and software libraries in a variety of programming languages, a Phidget 1063 bipolar stepper motor driver was selected. The driver also supports 1/16th-step microstepping for smooth operation and high-resolution position control, and it accepts a wide 9-30 VDC input voltage. Example code written in the C++ and C# programming languages was provided for the driver, decreasing the software development time.

A compact, high-torque bipolar stepper motor was paired with the selected driver. Several surplus NEMA 23 Lin Engineering stepper motors were compared because of their low cost. With a typical accuracy of half a step for well-constructed stepper motors and a desired accuracy of 0.5 degrees, a minimum of 360 steps per revolution was required. After comparing many models, the 5709M-05S was selected because of its high holding torque of 175 oz-in, high low-speed torque, and relatively high resolution of 400 steps per revolution; the motor torque curve is shown in Figure 4-2. Again, because of the high low-speed torque, no additional gearing was needed, and the cameras could be driven directly from the motor output shaft, allowing for low cost and a compact housing design. Limit switches were also positioned in the housing for homing, and to protect against rain and incidental contact, a transparent acrylic pipe was used as a window for the housing. The stepper motor consumes a maximum of 30 W of power from the driver, most of which is converted into heat that must be dissipated out of the housing to prevent damage to the CCD cameras.


Since the stepper motor is bolted against the base, aluminum was used for the base of the housing to aid in heat transfer from the stepper motor. In operation, heat was not an issue, and the required power for the motor was closer to 10 W. Several parts in the housing were manufactured using a rapid prototyping machine, providing accuracy at low cost. Only two components in the housing were machined, and the required tolerances were large enough for the parts to be made in house, making the overall cost of the assembly very low in comparison to the smart motor assembly. The costs of the stepper motor mechanism and housing are shown in Table 4-2, and prints for the stepper motor assembly and machined parts are attached in Appendix A. The stepper motor mechanism was mounted on the front bumper of the Urban Navigator using shock absorbers, and wiring for the motor and cameras was run through the firewall to the bumper. Because the distance between the computer and the cameras was greater than 15 feet, a USB 2.0 hub was added to boost the signal for the cameras and prevent data corruption. Figure 4-3 shows the manufactured stepper motor mechanism as mounted on the Urban Navigator.

Motor Control and Software Design

For the smart motor mechanism, a serial interface was developed so that motor commands could be sent from the software. The smart motor also has the capability to read digital inputs; a roller limit switch was positioned at a known angle and used for homing during start-up. The angle of the motor was read using a built-in optical encoder and reported through the serial interface. An integrated PID controller provided robust position control of the motor, and the motor gains were tuned to provide an optimal response without significant steady-state error. A separate homing routine was written and saved in the smart motor memory. After homing, real-time motor position commands were sent to correct the angle of the leader vehicle in the image for tracking. Automatic searching was never implemented in this system.
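The tracking correction itself is straightforward: the horizontal pixel offset of the target array from the image center is converted to an angle and sent to the pan motor as a relative command. The sketch below illustrates the idea using the calibrated small-angle constant C2 from Table 3-1; the function name, the unity gain, and the sign convention are assumptions, not the project's actual control law.

// Convert the horizontal pixel offset of the target array from the image
// center into a relative pan correction, using the calibrated small-angle
// constant C2 (deg/pixel) from Table 3-1.
double panCorrectionDeg(double offsetPixels, double c2DegPerPixel,
                        double gain = 1.0)
{
    // Driving the offset toward zero keeps the array centered in the image.
    return -gain * c2DegPerPixel * offsetPixels;
}

// Example: a 50-pixel offset through the 648x488, 25 mm setup
// (C2 = 0.01729 deg/pixel) requests a pan of about 0.86 degrees.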


For stepper motor positioning, open-loop control allows precise positioning of the motor as long as the motor does not stall after homing. To prevent stalling, the motor current limit, desired velocity, and acceleration were tuned to provide optimal performance; once properly tuned, stalling was not an issue in the design. At start-up, the motor must be homed using the limit switches in the assembly. The maximum update rate of the digital inputs on the stepper motor driver is 62 Hz according to the specifications, so to ensure an accurate measurement of the motor angle upon hitting the limit switches, the speed of the motor is greatly reduced until the switch is hit. Event-driven functions included in the Phidget library were used to detect switch depression or release. After homing, relative angle commands are sent to the motor to center the target array, and another event-driven function is called whenever the position of the motor changes. If a solution is not found by the image processing software for a tuned number of iterations, a searching algorithm is executed to find the leader vehicle.

Table 4-1. Cost analysis of the smart motor panning mechanism design.

Component                              Cost (USD)
Smart Motor 2315D                      $1000
Hardware (bearings, gears, locknuts)   $125
Machined parts                         $1100
Total cost                             $2225

Figure 4-1. Stand-alone vision sensor used for the military convoy.
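The home-then-track sequence described at the start of this section can be sketched as follows. A hypothetical StepperDriver interface stands in for the Phidget driver calls, and the homing speed, polling loop, and step bookkeeping are illustrative assumptions; the actual component used the Phidget library's event-driven callbacks rather than polling.

// Hypothetical abstraction over the stepper driver, for illustration only.
struct StepperDriver {
    virtual void setVelocityLimit(double stepsPerSec) = 0;
    virtual void moveRelative(long microsteps) = 0;      // non-blocking relative move
    virtual bool limitSwitchPressed() const = 0;          // driver digital input
    virtual void zeroPosition() = 0;                      // define current angle as home
    virtual void stop() = 0;
    virtual ~StepperDriver() = default;
};

// 400 full steps/rev with 1/16 microstepping gives 6400 microsteps per revolution.
constexpr double kMicrostepsPerDegree = 6400.0 / 360.0;

// Creep toward the limit switch slowly so the 62 Hz input update rate still
// yields an accurate home angle, then zero the position counter.
void homePanAxis(StepperDriver& motor)
{
    motor.setVelocityLimit(200.0);        // slow homing speed (illustrative)
    while (!motor.limitSwitchPressed())   // simplified polling; the real code
        motor.moveRelative(-1);           // reacted to input-change events
    motor.stop();
    motor.zeroPosition();
}

// After homing, each image-processing iteration sends a relative command that
// re-centers the target array (see the pan-correction sketch above).
void trackTargets(StepperDriver& motor, double panCorrectionDeg)
{
    motor.moveRelative(static_cast<long>(panCorrectionDeg * kMicrostepsPerDegree));
}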


Figure 4-2. Stepper motor torque vs. rotational speed [20].

Table 4-2. Cost analysis of the stepper motor panning mechanism design.

Component                              Cost (USD)
Lin Engineering 5709M-05S              $15
Phidget 1063 bipolar stepper driver    $65
Acrylic window                         $30
Material cost for machined parts       $35
Cost for rapid prototype parts         $93
Miscellaneous hardware                 $22
Total stepper motor mechanism cost     $260

Figure 4-3. Stepper motor camera mechanism used with the Urban Navigator.


CHAPTER 5
RESULTS AND DISCUSSION

Static Testing

To verify the accuracy and precision of the sensor, static data were recorded at various x, y locations on a grid whose dimensions were measured with a measurement wheel. For this test, a target separation distance of one meter was chosen. The resulting errors between the collected and predicted positions for each camera setup are shown in Figures 5-1 through 5-4.

Errors from the static test were well within the requirements for the system. Generally, the camera setup equipped with the 25 mm zoom lens provided accurate measurements, with a precision of plus or minus 0.5 m and plus or minus 0.25 degree and very little bias. The wide-angle lens setup provided slightly less accurate range results, with a slight bias of approximately 0.15 m. This can be attributed to errors in lens construction such as variations in focal length, runout, and angular misalignment between the camera and lens; the small bias is still within the limits of the system requirements. Overall, the 4 mm lens camera setup results in expected errors of plus or minus one meter and 0.5 degrees. A slight improvement in accuracy can be achieved by extending the target separation distance, thus increasing the pixel separation at a fixed range. Figure 5-5 shows the variation in range for a change of one pixel in separation distance.

The largest source of error occurs when a large relative pitch exists between the leader and follower vehicles. Equation 5-1 gives the range error as a function of the pitch angle p, where C1 is the range calibration coefficient, Dt is the target separation distance, Psep is the measured pixel separation, and theta is the angle of the array off the camera axis:

Range Error [m] = C1[pixels]*Dt[m] / (Psep[pixels]*cos(theta)) - C1[pixels]*Dt[m]*cos(p) / (Psep[pixels]*cos(theta))   (5-1)

For most highway driving situations, this pitch angle was predicted to be within 15 degrees. Figure 5-6 shows the effect of varying pitch angles at a range of 20 meters.


As seen in the figure, the error increases significantly at larger pitch angles. Removing this error would require additional image processing techniques to determine the orientation of the leading vehicle in addition to its position; in order to reduce complexity, this error was accepted for the operational conditions of the system.

Testing of System with Ground Truth

To further verify the accuracy of the system, to assure that the parameters are correctly modeled, and to test the effect of small pitch-angle changes under driving conditions, image processing position data were logged and compared to recorded GPS data from the leader and follower vehicles. The follower vehicle was equipped with a North Finding Module, giving a precise determination of yaw, latitude, and longitude, with typical maximum errors of 10 cm in the north and east directions and one degree in yaw. The leader vehicle was equipped only with a NovAtel GPS receiver, also with a typical position error of 10 cm. With a total accuracy on the same order of magnitude as the vision sensor, the GPS data provided a good benchmark for validating the accuracy of the image processing data. The setup with the GPS unit is shown in Figure 5-7.

Figure 5-8 shows the recorded range between the two vehicle coordinate systems for the vision component and the ground truth. From the GPS data, the range is calculated by converting each vehicle's latitude and longitude to UTM Northing and Easting coordinates. The range and angle are then given by geometry in Equations 5-2 and 5-3:

R = sqrt((N_leader - N_follower)^2 + (E_leader - E_follower)^2)   (5-2)

theta = arctan((E_leader - E_follower) / (N_leader - N_follower))   (5-3)

Figures 5-9 and 5-10 show the difference between the ground truth and the vision solution.
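A minimal sketch of this ground-truth comparison, computing the range and bearing between the two UTM positions as in Equations 5-2 and 5-3; the type and function names are assumptions, and the bearing sign convention would need to match the follower yaw convention used by the sensor.

#include <cmath>

struct UtmPoint { double northing; double easting; };   // meters

// Range between the leader and follower UTM positions (Equation 5-2).
double rangeBetween(const UtmPoint& leader, const UtmPoint& follower)
{
    const double dN = leader.northing - follower.northing;
    const double dE = leader.easting  - follower.easting;
    return std::sqrt(dN * dN + dE * dE);
}

// Bearing from the follower to the leader, measured from UTM north (Equation 5-3).
// std::atan2 keeps the quadrant correct even when the northing difference is small.
double bearingRad(const UtmPoint& leader, const UtmPoint& follower)
{
    const double dN = leader.northing - follower.northing;
    const double dE = leader.easting  - follower.easting;
    return std::atan2(dE, dN);
}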


The recorded errors in range and angle were very similar to those from the static tests. More telling of the system performance is the continuity of the vision data: to demonstrate the low noise and stability of the system, Figure 5-11 shows the UTM Northing position versus time for the vision solution and the actual GPS data. Very little noise was present in the vision solution, and the data followed the GPS measurements with errors close to the expected values.

Sensor Data Qualitative Results with ASI Controller

The Autonomous Solutions Inc. controller, originally designed to work with the physical tether sensor, was used for unmanned operation with the vision data. The physical tether provided highly accurate range and angle data, using encoders to calculate the range and angle. Based on the velocity of the leader vehicle, the controller calculates a desired following distance, and filtering is carried out by the controller to prevent small fluctuations in the position data from producing incorrect velocities. Details of the controller design are outlined in ASI's final report [21]. Because the vision solution had very little noise, the transition to using vision data with the vehicle went smoothly, and the path controller and vehicle-spacing controller performed well with the vision data. From testing, the higher-resolution image processing data provided better results; for the best system performance, a high-resolution camera, a large target separation distance, and a lens with the largest allowable focal length should be selected. Velocities up to 20 mph were tested autonomously without issue.

Conclusions

After research and experimentation, the convoy system showed the ability to operate reliably under autonomous control at ranges up to 60 meters. The selected high-powered infrared LEDs proved to be sufficient targeting beacons at this range.


The protective housings designed for the beacons and the panning camera mechanism were also effective, providing appropriate thermal dissipation and protecting the equipment from basic weather elements. In addition, it was found that the target design and implementation allowed for robust tracking of the leader vehicle and target array without deviating from the correct solution. Overall, the convoy system showed a level of effectiveness that supports further inquiry into this area. The system could be implemented easily on a vehicle already equipped with drive-by-wire capability; all that is needed is a panning camera, image processing intelligence, and infrared targets on the leader vehicle. The simplicity of the design would allow for low production cost and simple installation.

Future Work

The addition of other intelligence, such as obstacle detection, lane sensing, and prior knowledge of the road network, would further enhance the capabilities of the system, allowing observance of highway traffic rules and intersection behavior. Although additional sensors add cost and complexity, these behaviors allow for safer navigation among manually driven vehicles, and the concept would be easily implemented on the current vehicle platform. Using the current architecture of the autonomous vehicle fielded in the DARPA Urban Challenge, which is already equipped with a full sensor package, the additional data from the tracking sensor would be used to determine the direction of travel of the follower vehicle without explicitly controlling the vehicle path. The system would be capable of obstacle avoidance, deviation of the follower from the desired lane of travel, and observance of intersection rules. Essentially, the tracking software would only report the current road, lane of travel, direction, and velocity. Because the tracking software was written using JAUS, this could be implemented without much additional development.


Figure 5-1. Theoretical model range error for the 4 mm focal length, 1032x776 camera.

Figure 5-2. Theoretical model angle error for the 4 mm focal length, 1032x776 camera.


Figure 5-3. Theoretical model range error for the 25 mm focal length, 648x488 camera.

Figure 5-4. Theoretical model angle error for the 25 mm focal length, 648x488 camera.


Figure 5-5. Image processing resolution versus target separation distance.

Figure 5-6. Error introduced from relative pitch between vehicles.


Figure 5-7. Setup of the leader vehicle test platform with GPS aligned with the targets.

Figure 5-8. Recorded vision sensor range vs. GPS range during the driving test.


Figure 5-9. Range error versus time.

Figure 5-10. Leader vehicle angle error versus time.


Figure 5-11. UTM ground truth and vision solution versus time.


APPENDIX A
MECHANICAL DRAWINGS

Figure A-1. Stepper motor assembly drawing, sheet 1.


Figure A-2. Stepper motor assembly drawing, sheet 2.


Figure A-3. Motor stage part for the stepper motor assembly.


Figure A-4. Housing base for the stepper motor assembly.


Figure A-5. Smart-motor mechanism assembly drawing, sheet 1.


Figure A-6. Smart-motor mechanism assembly drawing, sheet 2.


Figure A-7. Gear housing for the smart-motor mechanism.


Figure A-8. Top bearing plate for the smart-motor mechanism.


Figure A-9. Bearing input shaft for the smart-motor assembly.


Figure A-10. Camera stage for the smart-motor assembly.


Figure A-11. Bearing output shaft for the smart-motor assembly.


LIST OF REFERENCES

1. B. Coifman, D. Beymer, P. McLauchlan, and J. Malik, "A real-time computer vision system for vehicle tracking and traffic surveillance," Transportation Research Part C: Emerging Technologies, vol. 6, no. 4, pp. 271-288, Aug. 1998. Last accessed Jul. 2009 <http://www.ceegs.ohio-state.edu/~coifman/documents/TRCrw.pdf>.

2. A. Crétual, F. Chaumette, and P. Bouthemy, "Complex object tracking by visual servoing based on 2D image motion," Proc. of the Fourteenth International Conference on Pattern Recognition, Brisbane, Qld., Australia, vol. 2, pp. 1251-1254, 16-20 Aug. 1998. Last accessed Jul. 2009.

3. T. Zielke, M. Brauckmann, and W. von Seelen, "CARTRACK: Computer vision-based car-following," Proc. IEEE Workshop on Applications of Computer Vision, Palm Springs, CA, pp. 156-163, 30 Nov.-2 Dec. 1992. Last accessed Jul. 2009 <http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=240316&isnumber=6177>.

4. F. Michaud, P. Lepage, P. Frenette, D. Létourneau, and N. Gaubert, "Coordinating maneuvering of automated vehicles in platoons," IEEE Transactions on Intelligent Transportation Systems, vol. 7, no. 4, pp. 437-447, Dec. 2006. Last accessed Jul. 2009 <http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=4019445&isnumber=4019425>.

5. J. Wu, M. McDonald, M. Brackstone, Y. Li, and J. Guo, "Vehicle to vehicle communication based convoy driving and potential applications of GPS," The 2nd International Workshop on Autonomous Decentralized System, pp. 212-217, 6-7 Nov. 2002. Last accessed Jul. 2009 <http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=1194673&isnumber=26870>.

6. E. Abbott and D. Powell, "Land-vehicle navigation using GPS," Proc. of the IEEE, vol. 87, no. 1, pp. 145-162, Jan. 1999. Last accessed Jul. 2009 <http://www.ece.uwaterloo.ca/~ndpbucha/MSCI442/LandVehicle%20Navigation%20Using%20GPS.pdf>.

7. H. G. Nguyen, N. Pezeshkian, M. Raymond, and A. Gupta, "Autonomous communications relays for tactical robots," Proc. 11th ICAR, Coimbra, Portugal, pp. 35-40, 30 Jun.-3 Jul. 2003. Last accessed Jul. 2009 <http://www.dtic.mil/cgi-bin/GetTRDoc?AD=ADA422031&Location=U2&doc=GetTRDoc.pdf>.

8. S. J. Velat, A. Dani, C. D. Crane, and W. Dixon, "Vision based object tracking for robotic assisted convoy operations," unpublished journal document, CIMAR, University of Florida, pp. 1-6, 2008.

9. J. D. Anderson, D. J. Lee, B. Tippetts, and R. Schoenberger, "Using real-time vision to control a convoy of semi-autonomous unmanned vehicles," AUVSI's Unmanned Systems North America, Orlando, FL, 29-31 Aug. 2006. Last accessed Jul. 2009.


10. G. Dudek, M. Jenkin, E. Milios, and D. Wilkes, "Experiments in sensing and communication for robot convoy navigation," Proc. of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Human Robot Interaction and Cooperative Robots, Pittsburgh, PA, vol. 2, pp. 268-273, 5-9 Aug. 1995. Last accessed Jul. 2009 <http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=526171>.

11. H. Schneiderman, M. Nashman, A. J. Wavering, and R. Lumia, "Vision-based robotic convoy driving," Machine Vision and Applications, vol. 8, no. 6, pp. 359-364, Nov. 1995. Last accessed Jul. 2009 <http://www.springerlink.com/content/u518334141426711/fulltext.pdf?page=1>.

12. M. W. Eklund, G. Ravichandran, M. M. Trivedi, and S. B. Marapane, "Real-time visual tracking using correlation techniques," Proc. of the Second IEEE Workshop on Applications of Computer Vision, Sarasota, FL, pp. 256-263, Dec. 1994. Last accessed Jul. 2009 <http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=341319&isnumber=7985>.

13. M. Schwarzinger, T. Zielke, D. Noll, M. Brauckmann, and W. von Seelen, "Vision-based car following: detection, tracking, and identification," Proc. of the Intelligent Vehicles '92 Symposium, Detroit, MI, pp. 24-29, Jul. 1992. Last accessed Jul. 2009 <http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=252228&isnumber=6442>.

14. F. Belkhouche and B. Belkhouche, "Modeling and controlling a robotic convoy using guidance laws strategies," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 35, no. 4, pp. 813-825, Aug. 2005. Last accessed Jul. 2009 <http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=01468252>.

15. L. Andreone, P. C. Antonello, M. Bertozzi, A. Broggi, A. Fascioli, and D. Ranzato, "Vehicle detection and localization in infra-red images," The IEEE 5th International Conference on Intelligent Transportation Systems, Singapore, pp. 141-146, Sept. 2002. Last accessed Jul. 2009 <http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=1041203&isnumber=22289>.

16. F. Madritsch and M. Gervautz, "CCD-camera based optical beacon tracking for virtual and augmented reality," Computer Graphics Forum, vol. 15, no. 3, pp. 207-216, 1996. Last accessed Jul. 2009 <http://www3.interscience.wiley.com/cgi-bin/fulltext/120705981/PDFSTART>.

17. M. A. Khan and L. Bölöni, "Convoy driving through ad-hoc coalition formation," 11th IEEE Real Time and Embedded Technology and Applications Symposium (RTAS), San Francisco, CA, pp. 98-105, 7-10 Mar. 2005. Last accessed Jul. 2009 <http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=1388377>.

18. C. Meesookho, S. Narayanan, and C. S. Raghavendra, "Collaborative classification applications in sensor networks," Proc. Sensor Array and Multichannel Signal Processing Workshop, Washington, D.C., pp. 370-374, 4-6 Aug. 2002. Last accessed Jul. 2009 <http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=1191063&isnumber=26597>.


19. OpenJAUS, Open Joint Architecture for Unmanned Systems Reference Architecture, version 3.3.0. [Internet website]. www.openjaus.com, 2008. Last accessed Jun. 2009.

20. Lin Engineering, High Torque Motor. [Internet website]. http://www.linengineering.com/line/contents/stepmotors/5709.aspx, 2009. Last accessed Jun. 2009.

21. M. Hornberger, B. Thayn, B. Silver, R. Carta, B. Turpin, C. Schenk, and J. Ferrin, "Scientific and technical report: robotic convoy phase II final report," unpublished document, Autonomous Solutions Inc., pp. 1-61, Jun. 2008.


BIOGRAPHICAL SKETCH

Brandon Merritt was born in 1985 and raised in Pensacola, Florida. After graduating as valedictorian of the B.T. Washington High School class of 2004, he earned a Bachelor of Science degree in mechanical engineering at the University of Florida, graduating in December of 2007. He continued his education at the university, entering the graduate school in January of 2008 to pursue a master's degree in mechanical engineering. He intends to apply his acquired skills to work in industry as a robotics design engineer.