Implementation of Laser Range Sensor and Dielectric Laser Mirrors for 3D Scanning of Glove Box Environment

Material Information

Title:
Implementation of Laser Range Sensor and Dielectric Laser Mirrors for 3D Scanning of Glove Box Environment
Creator:
SOLANKI, SANJAY CHAMPALAL ( Author, Primary )
Copyright Date:
2008

Subjects

Subjects / Keywords:
Boxes ( jstor )
Coordinate systems ( jstor )
Dielectric materials ( jstor )
Laser beams ( jstor )
Lasers ( jstor )
Mirrors ( jstor )
Reflectance ( jstor )
Sensors ( jstor )
Servomotors ( jstor )
Three dimensional modeling ( jstor )

Record Information

Source Institution:
University of Florida
Holding Location:
University of Florida
Rights Management:
Copyright Sanjay Champalal Solanki. Permission granted to the University of Florida to digitize, archive and distribute this item for non-profit research and educational purposes. Any reuse of this item in excess of fair use or other copyright exemptions requires permission of the copyright holder.
Embargo Date:
8/1/2004
Resource Identifier:
80246470 ( OCLC )


IMPLEMENTATION OF LASER RANGE SENSOR AND DIELECTRIC LASER MIRRORS FOR 3D SCANNING OF GLOVE BOX ENVIRONMENT

By

SANJAY CHAMPALAL SOLANKI

A THESIS PRESENTED TO THE GRADUATE SCHOOL OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE

UNIVERSITY OF FLORIDA 2003

Copyright 2003 by Sanjay Champalal Solanki

This document is dedicated to my family for their ever-extending support and encouragement.

ACKNOWLEDGMENTS

I would like to thank Dr. Carl Crane for his support and guidance throughout the process of my project work. I would also like to thank Dr. Michael Nechyba and Dr. John Schueller for their guidance and for serving on my committee. I would like to thank all of the personnel of the Center for Intelligent Machines and Robotics for their support and expertise. Finally, I thank my wife and my parents for their love and help throughout my career.

TABLE OF CONTENTS

ACKNOWLEDGMENTS
LIST OF TABLES
LIST OF FIGURES
ABSTRACT

CHAPTER

1 INTRODUCTION

2 LITERATURE REVIEW
   Research Trends
      Data Registration and Integration
      Data Visualization
   Principle of Operation of a Laser Sensor
   Laser Mirrors

3 EXPERIMENTAL SET UP
   Objective
   System Components
      Dielectric Laser Mirrors
      Stepper Motor
      Servo Motor and Controller
      Visual Camera System
   Mechanical Design
   Hardware Interfacing
   Implementation

4 VISUALIZATION
   Software
   Implementation

5 RESULTS AND CONCLUSION
   Results
   Conclusion
   Future Work

APPENDIX

A DIAGRAMS

B FLOWCHARTS

LIST OF REFERENCES

BIOGRAPHICAL SKETCH

LIST OF TABLES

2-1 Description of connections to parallel port
2-2 Description of motor commands

LIST OF FIGURES

1-1 General purpose glove box
2-1 The 3D imaging sensor LMS-Z210
2-2 Stanford large-statue scanner
2-3 IBM research
2-4 Virtuoso scanner augmented by IBM
2-5 System for 3D surface acquisition and reconstruction
2-6 3D scanning device
2-7 Optical Triangulation Measurement
2-8 Time of flight measurement
2-9 Reflection from a polished mirror surface
2-10 Principle of interference
2-11 Reflectance of a multilayer coating
2-12 Reflection of a three layer dielectric mirror
3-1 Concept of 3D scanning of glove box
3-2 Graph of reflectivity vs. wavelength for laser mirrors from Thorlabs, Inc.
3-3 Theory of operation of servo
3-4 Glove box prototype
3-5 Free body diagram of linear motion system
3-6 Laser mounting on centre guide block
3-7 Bottom mirror mounting

3-8 Coordinate systems transformations
3-9 Bottom mirror beam reflectance angle
4-1 Image from CCD camera
4-2 Display of data points obtained directly from laser
4-3 Display of data points obtained from the beam deflected by the mirrors
4-4 Display of complete set of data points
5-1 Data points registered for plane surface
5-2 Conceptual set up for 3D scanning
5-3 Surface reconstruction using cocone
A-1 A 3D model of the glove box scanning system
A-2 Assembly drawing of the glove box scanning system
A-3 Electrical layout of the system
A-4 Layout of the Mini SSC II circuit board
B-1 Flowchart for readLaserMirror.c
B-2 Flowchart for subroutine motor.c
B-3 Flowchart for glovebox.c

Abstract of Thesis Presented to the Graduate School of the University of Florida in Partial Fulfillment of the Requirements for the Degree of Master of Science

IMPLEMENTATION OF LASER RANGE SENSOR AND DIELECTRIC LASER MIRRORS FOR 3D SCANNING OF GLOVE BOX ENVIRONMENT

By Sanjay Champalal Solanki

December 2003

Chair: Carl D. Crane III
Major Department: Mechanical and Aerospace Engineering

A glove box is a sealed environment used to perform experiments on hazardous substances or experiments that must be performed in a controlled atmosphere. Such experiments can be performed by an automated robotic system to avoid human contact with the substances. To use such a system inside the glove box, the environment of the box must be modeled. This thesis presents the design and implementation of a 3D scanning and modeling system for a glove box. A laser range scanner and dielectric laser mirrors are used to scan the glove box from multiple views. A different approach to acquiring multiple views with the same laser scanner is put forth: the system deflects the laser beam with dielectric laser mirrors, thereby shifting the viewpoint to a different location. Since the laser scans distances in a plane, a stepper motor drive moves the laser scanner and mirrors together along the length of the glove box, and a servo motor provides the rotary motion of the laser mirrors. Range data are registered from

two different viewpoints. The data are aligned and visualized in a common world coordinate system using OpenGL. A visual image of the environment is captured for color information. The whole process of registering data points, capturing the visual image, and controlling the drive systems is automated under the Linux operating system. To check the accuracy of the data points registered from the beam deflected by the laser mirrors, a plane surface was scanned. The relative accuracy of the system, not accounting for gross displacement error, was checked by fitting a best-fit plane to these data and then comparing the actual data points with that plane. The root mean square of the error values for this set of data points was 6.281 mm; for data points registered for the same surface from the direct beam of the laser, the result was 5.758 mm. Thus the range readings obtained from the deflected beam were comparable to those obtained from the direct beam.

CHAPTER 1
INTRODUCTION

Many organizations must handle hazardous substances, which may include plutonium, highly reactive substances, and other radioactive materials. The glove box is a sealed environment used to examine such substances. Figure 1-1 (Labconco Corporation) shows an example of such a glove box. A pair of gloves is mounted and sealed to the front panel of the glove box for the user to perform various operations. Glove boxes are versatile and can be used for handling chemical, pharmaceutical, or biological materials in controlled atmospheres (at laboratories, research institutes, schools, semiconductor firms, chemical firms, pharmaceutical firms, etc.). Sometimes the experiments performed are so hazardous that human interaction must be kept to a minimum. Use of an automated robotic system inside the glove box to perform the various activities would be ideal for minimizing human interaction.

Figure 1-1. General purpose glove box

However, in order

to use an automated robotic system, the environment of the box must be modeled so that any object that could obstruct the robot’s path is known. The objective was to design a scanning and modeling system to provide the automated system with input about the glove box environment. The robot could then be programmed to use this information to recognize the basic objects with which it must interact, and to confirm each object’s initial position, final placement, and the path of the robot. In 1998, the Center for Intelligent Machines and Robotics (CIMAR) at the University of Florida started a project under the sponsorship of the U.S. Department of Energy (DOE) to design a scanning and modeling system for use in a glove box environment. The objective of the scanning and modeling system was to generate a 3-dimensional model of the glove box and its contents. This project is an enhancement of that glove box 3D scanning and modeling system. The earlier system ran on three different computers, one each for the laser scanner, the CCD camera, and the visualization system; one objective of this project was to integrate the whole system under one operating system. In the previous work the glove box was scanned only from the top, leaving many details unrevealed. This information was not sufficient for generating an accurate 3D model of the glove box environment. This thesis puts forth a new concept whereby the glove box can be scanned from different viewpoints using the same laser scanner, thus revealing more details.

CHAPTER 2
LITERATURE REVIEW

Research Trends

To develop the scanning and modeling system for a glove box, a study of various research activities regarding digitization of the real world was conducted. Realistic models of the real world may be built for various applications, such as obstacle detection for autonomous systems, digitizing of artifacts, computer graphics, reverse engineering, land surveying, and biomedical applications. Research in this area may be divided into two broad categories: data registration and integration, and data visualization.

Data Registration and Integration

Data registration is the process of acquiring multiple views of an object or scene. For this purpose the sensor registers the data from one viewpoint and is then moved to the next position. The data may be in the form of 3D data points or color information about the object. Various commercial sensors are available for this purpose; many research institutes develop their own sensors or use a combination of available sensors. The sensor collects and registers the data from different vantage points, and all of the views are integrated to form a consistent model of the world. The Imaging, Robotics and Intelligent Systems (IRIS) Laboratory at the University of Tennessee, Knoxville, has been conducting research in the field of building 3D imaging systems. The RIEGL LMS-Z210 laser range sensor (Figure 2-1) is used for data

acquisition. The sensor has a typical measuring range of 350 m (up to 700 m) with an accuracy of 25 mm. The field of view of the sensor is 80 degrees by 333 degrees.

Figure 2-1. The 3D imaging sensor LMS-Z210

The optional True Color Channel, integrated in the LMS-Z210, provides the color of the target’s surface as additional information with each laser measurement. Color data are included in the binary data stream of the LMS-Z210. The color channel allows straightforward texturing of 3D models through unequivocal correspondence of color pixels and range measurements. The Stanford computer graphics laboratory is conducting research in various areas of 3D digitization and graphics. One of the many important projects of Professor Marc Levoy was the Digital Michelangelo project, for which a special-purpose large-statue scanner (Figure 2-2) was custom built by Cyberware. It consists of an 8 foot vertical truss, a 3 foot horizontal arm that translates vertically on the truss, a pan-tilt head that translates horizontally on the arm, and a scanner head. The

entire assembly rests on a rolling base. To maximize flexibility, the pan-tilt assembly can be mounted atop or below the horizontal arm and can be turned to face in virtually any direction. To facilitate scanning deep crevices, the scanner head can also be rolled 90 degrees from its vertical configuration to a horizontal configuration. To protect statues from contact with the gantry, there is an elaborate system of automatic shutoffs and interlocks. Finally, the entire scanner head and pan-tilt assembly are encased in foam rubber. The scanner head consists of a laser, a range camera, a fiber optic white-light source, and a high-resolution color camera.

Figure 2-2. Stanford large-statue scanner

The laser and range camera permit digitization of 3D points with a depth resolution of 0.1 mm, a typical sample spacing of 0.29 mm, and a standoff of 1120 mm. The working volume is 1.1 m wide by 7.6 m high. The light source and color camera permit measurement of surface color with a pixel size of 0.125 mm over the same working volume. To reduce vibration, the pan and tilt motions employ precision ball screw drives. To reduce deflection of the gantry during these motions, the centers of rotation and the centers of mass are made to coincide. To reduce deflection of the gantry during horizontal motion, any translation of the pan-tilt assembly in one direction is counterbalanced by translation in the opposite direction of a lead counterweight that slides inside the horizontal arm. Using some of the ancillary components, the gantry can reach a maximum height of 24 feet. Data integration, the process of aligning multiple 3D data sets in a common coordinate system, is the next step towards building a 3D model: it computes the transformations required to place the data into a unique world coordinate system. The integration process is achieved either by hand or through the use of an external position measurement device such as a Global Positioning System. One mechanical approach is to mount the scanner on a robot arm equipped with an absolute positioning sensor, or to keep the sensor fixed and move the object on a calibrated platform. The iterative closest point technique discussed by Besl and McKay (1992) is a widely used method for data registration and integration. The algorithm requires a procedure to find the closest point on a geometric entity to a given point. The method is well suited for applications such as evaluating the closest point on a given shape to a

given digitized point, or for alignment of two different views by applying the method locally at the overlap of the two views. Huber and Hebert (2001), from the Robotics Institute of Carnegie Mellon University, have presented a technique for automatically registering multiple 3-dimensional data sets. In another work, Dias et al. (2000) discuss a way to generate complete high-resolution models of real-world scenes from passive intensity images and active range sensors. The paper discusses a point-based matching technique in which a set of control points is found on the reflectance image from the laser sensor. Displaced frame difference correlation is used for matching pairs in the reflectance and intensity images. Finally, a color blending technique is discussed, since in most cases several video images are necessary to cover the whole reflectance image. The IBM research lab’s Visual Technologies Department is developing scanning systems (Bernardini et al. 2002) for producing virtual objects that can be rendered with high visual quality. Unlike traditional applications, the end product of scanning for computer graphics is a model that can be used to render a realistic image of the object under novel conditions, i.e., in a location or under lighting conditions that may exist only in computer simulation. The emphasis is on visual, rather than metric, accuracy: a systematic error in the shape may be less important than errors in the color or apparent shininess of the object. The research is focused on the development of scanning systems that capture surface properties as well as shape, use commodity digital cameras, and produce texture-mapped triangle meshes that can be efficiently rendered by graphics hardware.
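The iterative closest point technique mentioned above alternates two steps: pair each point with its nearest neighbor in the other data set, then solve for the rigid transform that minimizes the squared distance between the pairs. A minimal sketch of one such iteration (illustrative Python using the SVD-based least-squares alignment; this is not the thesis's implementation):

```python
import numpy as np

def icp_step(src, dst):
    """One iteration in the spirit of Besl and McKay (1992): match each source
    point to its nearest destination point, then find the rigid transform
    (R, t) minimizing the least-squares error of the pairs via SVD."""
    # brute-force nearest-neighbor correspondence
    d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(axis=2)
    matched = dst[d2.argmin(axis=1)]
    # optimal rotation from the cross-covariance of the centered pairs
    cs, cm = src.mean(axis=0), matched.mean(axis=0)
    H = (src - cs).T @ (matched - cm)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # guard against an improper (reflected) solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cm - R @ cs
    return R, t                   # apply as: src @ R.T + t
```

Iterating this step, each time replacing the source points with `src @ R.T + t`, converges to a local minimum of the alignment error, which is why ICP works best when the two views already overlap substantially.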

A schematic of the 3D capture methodology used to create a detailed 3-dimensional model of Michelangelo’s Florentine Pieta is shown in Figure 2-3.

Figure 2-3. IBM research: (a) multiple digital photos are taken; (b) surface shape, color and detail are computed for each; (c) scans are aligned and merged into a single model

The 3D scanner was built on a multi-baseline stereo system, supplemented by a photometric system. The scanner is a customized version of the Virtuoso shape camera from Visual Interface, Inc. A photographic flash projects a pattern of vertical stripes on the subject; at the same time, six b/w digital cameras photograph the illuminated area from different angles. An additional color digital picture, registered with the b/w cameras, provides a texture image. A multiview stereo algorithm (part of the software suite that comes with the Virtuoso system) computes a triangle mesh approximating the scanned area. Each scan typically covered a 200 mm by 200 mm area and comprised on average about 10,000 measured points; the typical intersample distance for these scans is about 2 mm. The Virtuoso scanner was augmented with a photometric system (Figure 2-4) consisting of five light sources and the built-in color camera plus some control electronics. For each camera pose, five additional color pictures are taken, each with one of the five light sources on while all other lights are turned off. Low-power laser sources are used to project red light dots onto the statue. The projectors generate an eleven by eleven grid of rays. Used at a distance of

about 1 meter, they produce an irregular pattern of red dots on the statue, with an average spacing of 20 to 40 mm between dots.

Figure 2-4. Virtuoso scanner augmented by IBM

An additional picture of the dots was taken to help in the alignment of overlapping meshes. The color pictures have a resolution of 1280 by 960 pixels, with 24-bit RGB per pixel. The acquired raw data consist of six b/w stripe images plus six color images per camera pose. The Virtuoso Developer software was used to compute triangle meshes for each single scan from the six stripe images. A number of algorithms were then applied to the data to arrive at the final model.

Data Visualization

Data visualization is the process of building object-based representations of large data sets and displaying photorealistic models. This allows the user to interact with the data through input devices such as a mouse or a cyber-glove.

In this area, the Imaging, Robotics and Intelligent Systems Laboratory at the University of Tennessee is conducting research in the fields of scene segmentation, data reduction, and object modeling. Dey et al. (2001) from the Ohio State University have developed the Cocone software for reconstructing a surface from its sample points. The input is the coordinates of the point cloud in 3D, and the output is a piecewise linear approximation of the surface made of Delaunay triangles whose vertices lie only in the input points. The software is based on Voronoi/Delaunay computation and works on the assumption that the sample is dense and is obtained from a smooth surface. In a different approach to 3D scanning, Tognola et al. (2003) presented a prototype 3D scanning system together with a surface reconstruction algorithm that obtains an explicit reconstruction of both open and closed surfaces, focused particularly on biomedical applications. Figure 2-5 shows a block diagram of the proposed architecture for acquisition (3D scanning + registration). The scanning device is composed of a visible laser source, two CCD cameras, and a real-time video processor.

Figure 2-5. System for 3D surface acquisition and reconstruction
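Cocone itself builds on the 3D Voronoi/Delaunay diagram of the samples and then selects the subset of Delaunay triangles that approximates the surface. The underlying Delaunay computation is available in standard libraries; a toy 2D illustration (Python, assuming SciPy's Qhull bindings are available — this is not the Cocone code itself):

```python
import numpy as np
from scipy.spatial import Delaunay

# Four sample points in the plane, one lying inside the hull of the other three,
# so any triangulation must fan three triangles around the interior point.
pts = np.array([[0.0, 0.0], [2.0, 0.0], [1.0, 1.5], [0.8, 0.5]])
tri = Delaunay(pts)            # Qhull-based Delaunay triangulation
triangles = tri.simplices      # index triples into pts
```

In 3D the same call yields Delaunay tetrahedra rather than triangles; a surface reconstructor like Cocone must then pick out the boundary triangles that best approximate the sampled surface.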

Figure 2-6 shows the experimental set up during laser scanning. The two digital cameras and the video processor are commonly used to measure the 3D coordinates of properly illuminated retro-reflective targets called markers. In this study, instead of acquiring the 3D coordinates of passive reflective markers, the system was used to measure the 3D coordinates of an ‘active marker’, i.e., the laser spot projected onto the surface to be scanned. The surface can thus be entirely digitized by manually sweeping the laser over the object surface. The 3D surface reconstruction procedure is based on a triangular mesh model: the mesh approximating the surface is obtained through adaptive deformation of a geometric model that minimizes an error function. The error function is chosen to produce a non-uniform triangular mesh with a greater density of triangles in regions of the surface with higher spatial frequency content.

Figure 2-6. 3D scanning device

Another interesting research activity is the virtual glove box, which started in 2000 and is currently under development at the Fraunhofer Applications Centre for Computer Graphics in Chemistry and Pharmaceuticals (Kromker and Seiler 2001) in Frankfurt, Germany. The core of the project is a new Virtual Reality I/O device in which virtual

objects are shown in a stereoscopic display and can be manipulated with both hands by the user through haptic feedback devices.

Principle of Operation of a Laser Sensor

Optical distance measurement is used in a wide variety of industrial, commercial, and research applications. Most sensors use a visible or infrared laser beam to project a spot of light onto a target, the surface to which the distance is to be measured; the distance from the spot back to the light-detecting portion of the sensor is then measured in one of several ways. The general factors to consider when specifying a laser distance sensor include maximum range, sensitivity, target reflectance and specularity, accuracy and resolution, and sample rate. The two methods used to measure distance are optical triangulation and time-of-flight distance measurement (Optical Metrology Centre). Optical triangulation is used to measure distance with accuracies from a few microns to a few millimeters, over ranges from a few millimeters to several meters, at rates of 100 to 60,000 measurements per second. A single-point optical triangulation system uses a laser light source, a lens, and a linear light-sensitive sensor; the geometry of such a system is illustrated in Figure 2-7. A light source illuminates a point on an object, and an image of this light spot is formed on the sensor surface. As the object moves, the image moves along the sensor; by measuring the location of the light spot image, the distance of the object from the instrument can be determined, provided the baseline length and the angles are known. Laser time-of-flight instruments offer very long range distance measurement with a trade-off between accuracy and speed. As shown in Figure 2-8, a short pulse of light is emitted and the delay until its reflection returns is timed very accurately, or a phase

difference between the emitted and reflected waves is measured. If the speed of light is known, the distance to the reflecting object can be calculated.

Figure 2-7. Optical Triangulation Measurement

Figure 2-8. Time of flight measurement

Many commercial laser sensors are available that work on one of these two principles. To name one, the VI-910 is available from Minolta. The scanner has been designed mainly for reverse engineering and CAD applications. It uses a 640 by 480 pixel CCD and four rotary filters for R, G, B, and 3D measurements to produce 3D data and color images. The instrument comes with Polygon Editing Software designed for automatic data registration, data processing, and conversion to various formats. The instrument uses the laser-beam light-cut method, whereby the object is

scanned by a slit-shaped laser beam. The light reflected from the object then enters the CCD camera, and the distance to the object is obtained by the triangulation method to form 3D data. In addition to the distance data, color images can also be captured: by dispersing the received light through a rotary filter, color image data are captured by the same CCD as that used for the distance data.

Laser Mirrors

Various approaches have been used to register data points from multiple viewpoints; however, in all these cases either the object or the sensor has been moved relative to the other so as to obtain a different view. The idea of using a combination of laser mirrors to deflect the laser beam and register data points from a different viewpoint is unique. The following paragraphs briefly describe the theory of laser mirrors. When light strikes the surface of any object, some of the light is reflected, some is absorbed by the object, and some is transmitted through the object. A mirror surface reflects most of this light, and the reflection from a mirrored surface is specular: the angle of incidence is equal to the angle of reflection (Figure 2-9).

Figure 2-9. Reflection from a polished mirror surface
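The specular law has a compact vector form: an incoming direction d reflected off a surface with unit normal n leaves as d − 2(d·n)n. A small sketch (illustrative Python; the 45-degree mirror in the example is a hypothetical case, chosen because a 45-degree fold is the natural way to redirect a beam by 90 degrees):

```python
import numpy as np

def reflect(d, n):
    """Specular reflection of direction d off a surface with normal n:
    angle of incidence equals angle of reflection."""
    n = n / np.linalg.norm(n)          # normalize the surface normal
    return d - 2.0 * np.dot(d, n) * n

# A beam travelling straight down hits a mirror tilted 45 degrees;
# the reflected beam leaves horizontally, folded by 90 degrees.
r = reflect(np.array([0.0, 0.0, -1.0]), np.array([0.0, 1.0, 1.0]))
```

Note that the reflected direction has the same magnitude as the incident one; only the component along the normal is flipped.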

The amount of light reflected by a mirror depends on the nature of the reflecting surface (composition, structure, density, color, and so on), the texture of the reflecting surface (smooth or rough, regular or irregular, dull or polished, etc.), the wavelength and polarization of the light, and the angle at which the light strikes the surface. A mirror for reflecting a laser beam needs extremely high reflectivity, greater than 99%, particularly over the specified band of wavelengths. Such mirrors are generally composed of several dielectric materials and use the principle of interference to produce high reflectance values. Figure 2-10 illustrates this principle. The incident light is divided into two different beams by a partially reflecting surface; each beam travels a different distance, and interference is observed when the two rays recombine. As shown in the figure, light from the source impinges upon surface S1. Part of this light is reflected, and the remainder penetrates the medium between surfaces S1 and S2 and reflects back and forth between them. The thickness of the dielectric medium is h1, and θ1 is the angle of incidence in the medium of index of refraction n1.

Figure 2-10. Principle of interference

If the reflectance of the surface is low, the beam may reflect only once before it is greatly attenuated. If the reflectance is close to unity, it may reflect as many as 30 times.
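The geometry just described fixes the extra optical path traveled by a ray that crosses the film and returns, relative to the ray reflected at the top surface: 2·n1·h1·cos θ1 (a standard thin-film result). A quick numerical check (illustrative values, not taken from the thesis):

```python
import math

def optical_path_difference(n, h, theta_deg):
    """Extra optical path for a ray that traverses a film of refractive
    index n and thickness h twice, at angle theta measured inside the film:
    OPD = 2 * n * h * cos(theta)."""
    return 2.0 * n * h * math.cos(math.radians(theta_deg))

# e.g. a low-index film (n ~ 1.46, roughly SiO2) 155 nm thick at normal incidence
opd = optical_path_difference(1.46, 155.0, 0.0)   # in nm
```

With the thickness chosen so that n·h is a quarter of the design wavelength, the recombining reflections add in phase, which is exactly the quarter-wave condition exploited by multilayer coatings.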


In either case the beams that reflect from either side of the film have traveled different optical paths, and interference is observed when they recombine. If several dielectric films, each with a quarter-wave thickness and with alternately high and low indices of refraction, are stacked, the beams reflected from all the interfaces are in phase upon leaving the uppermost boundary, as shown in Figure 2-11. Modern coating technology produces reflectance greater than 99.99% using approximately 20 layers of dielectric stacks.

Figure 2-11. Reflectance of a multilayer coating

Figure 2-12 shows that the reflectance of a dielectric mirror is a strong function of the wavelength of the incident light. The graph is the reflectance of a three-layer dielectric mirror as a function of wavelength. As the number of dielectric layers is increased, the wavelength band becomes narrower and the percentage reflectivity increases. The curve would be even narrower and the peak higher if more dielectric layers were used.


Figure 2-12. Reflection of a three-layer dielectric mirror


CHAPTER 3
EXPERIMENTAL SET UP

Objective

An aluminum frame of size 1016 mm × 1524 mm × 1574 mm was erected to simulate the glove box. The goal was to design and implement a 3D scanning and modeling system for this glove box. Earlier work had been done on the system wherein the laser sensor could register data points from the top of the glove box.

The sensor used for data acquisition was the indoor version of the Laser Measurement System (LMS) 200-30106 made by SICK. An infrared laser beam is generated by the scanner's internal diode. If the beam strikes an object, the reflection is received by the scanner and the distance is calculated from the time of flight. The pulsed laser beam is deflected by an internal rotating mirror so that a fan-shaped scan is made of the surrounding area. The laser scans a range of either 180 or 100 degrees. Although three angular resolutions are possible (1, 0.5, or 0.25 degree), the resolution of 0.25 degree can only be achieved with a 100 degree scan. Because of the smaller beam width, the laser is more susceptible to false echoes due to small particles; a dust storm could register as an obstacle to the laser scanner. The accuracy of the laser measurement is ±15 mm for a range of 1 to 8 meters.

However, it was realized that the data obtained from a stationary sensor was not enough and did not reveal all the details of the glove box. Hence it was necessary to collect data from the side views of the glove box, and different approaches were considered for collecting data from the different sides.


One of the possibilities was the conventional approach of registering data from one viewpoint and then moving the sensor to the next viewpoint. However, it would not be economical to design a system for moving the whole sensor system from one viewpoint to another. A better approach was developed in which the whole glove box could be scanned by moving the sensor system in only one direction and implementing a set of mirrors so as to register data from the side view. Figure 3-1 shows a schematic of the concept.

Figure 3-1. Concept of 3D scanning of glove box

The sensor can scan data in only one plane, and it has to be moved in a direction perpendicular to the plane of scanning so as to cover the entire length of the glove box. As shown in the schematic, with the implementation of a set of mirrors it is possible to register data from a different viewpoint at the same time as the sensor registers data from the top. After the sensor has scanned the glove box from the top, the beam pointing at 180 degrees is deflected by the set of mirrors and the viewpoint of the registered data is shifted to the position of the bottom mirror. The whole system can then be moved ahead to scan the next plane.

System Components

The following paragraphs explain in detail the system design and the various components used.

Dielectric Laser Mirrors

The mirrors used for the system are made by Thorlabs, Inc. They are made of fused silica as the substrate material with a coating of dielectric layers, and they offer excellent reflectivity over the stated wavelength ranges. Figure 3-2 is a graph of percentage reflectivity as a function of wavelength in nanometers.

Figure 3-2. Graph of reflectivity vs. wavelength for laser mirrors for angles of incidence up to 45 degrees

The wavelength of the SICK laser range sensor is 950 nanometers. As seen from the graph, the laser mirror of type E03 has a reflectivity greater than 99% for a wavelength range of 700 to 1200 nanometers and hence best suits this purpose. The mirrors are 25.4 mm in diameter. Mountings were procured along with the mirrors to hold them in the required positions.

Stepper Motor

The stepper motor is used for controlling the linear motion of the whole sensor system. The motor, from Servo Systems, is provided along with its controller. It is a synchronous stepping motor with 200 steps per revolution and produces a torque of 2.12 newton-meters. The motor is energized by a 24 volt DC power supply.

The stepper motor has several windings which need to be energized in the correct sequence before the motor's shaft will rotate. Reversing the order of the sequence turns the motor the other way. If the control signals are not sent in the correct order, the motor will not turn properly. The circuit responsible for converting the step and direction signals into winding energization patterns is the stepper motor controller. The controller used is model number CMD50 from American Precision Industries.

Servo Motor and Controller

The servo motor 'SO3TFX 2BB' from 'GWS' is used for controlling the rotary motion of the mirror. The servo generates more than 0.48 newton-meters of torque at 4.8 volts, which is enough for the application.

The Mini SSC II serial servo controller (Scott Edwards Electronics, Inc.), shown in Appendix A, is used for controlling the servo motor. The default configuration of the controller is set to receive serial data at a baud rate of 2400 bits per second and to control servos over a range of 90 degrees. Servo positions are expressed in units from 0 to 254, so each unit corresponds to a 0.36 degree change in the servo position. However, the controller can also be set to control servos over a range of 180 degrees, with each unit corresponding to a 0.72 degree change in position, and the baud rate can be set to 9600 bits per second. The servo motor connections are shown in Figure A-4. A maximum of 8 servo motors can be connected on one board. The connect power for the servos is 4.8 to 6.0 volts DC, and the power input required for the controller is 9 volts DC. The controller requires only two connections to the computer, the serial data and the serial ground, and can be interfaced to the computer through the DB9 serial port.

Figure 3-3 explains the theory of operation of the servo. As shown, the signal to the servo consists of pulses ranging from 1 to 2 milliseconds long, repeated 60 times a second. The servo positions its output shaft in proportion to the width of the pulse. In the default configuration of the controller, a position value of 0 corresponds to 1 millisecond and 254 corresponds to 2.016 milliseconds. A one unit change in position value corresponds to a 4 microsecond change in pulse width, which gives a resolution of 0.36 degree per unit.

Figure 3-3. Theory of operation of servo


To command a servo to a new position requires sending three bytes: byte 1 is the sync marker (255), byte 2 is the servo number (0-254), and byte 3 is the position (0-254). These must be sent to the controller as individual byte values.

Visual Camera System

The purpose of the visual camera system is to obtain a visual image of the glove box from an overhead vantage point, which can later be used as a texture on the 3D surface for visualization. The Hitachi KPD-50 CCD color camera was used along with the Comiscar 6.0 mm f1.4 manual lens. The system uses an Integral Technologies MV Pro frame grabber, chosen because of its RGB capabilities. Unlike composite cables, where color signals are mixed, the RGB system divides information into separate red, green, and blue color signals and transfers them through separate wires. This feature can later expand signal processing capabilities.

Initially a software developer's kit had been purchased along with the frame grabber. It contained a set of C++ libraries that allowed incorporating the frame grabber instructions into any custom program. However, the software was based on the Windows operating system, since at that time a Linux version of the software had not yet been developed. Integral Technologies later developed frame grabber drivers for Linux, and one was obtained. The software contained the Linux video capture driver and shared library for the Integral Technologies flash bus video capture card, along with many sample programs. One of the sample programs, "offscrn.c", was used to grab the image of the glove box contents. The program uses the "Imlib.h" image library to help with image display. The program was modified so that it would automatically capture an image and save it in a file "offscreen.tga."


Mechanical Design

Figure 3-4 shows the set up for the glove box. The present set up is designed for scanning the glove box from the top and one of the sides.

Figure 3-4. Glove box prototype

Figures A-1 and A-2 show the 3D model of the scanning system designed for the glove box. The linear drive system consists of an Acme screw coupled to the stepper motor through a flexible coupling. The screw diameter is 0.5 inches with a pitch of 10 threads per inch. The screw is supported by bearings at both ends. A round plastic nut housed inside the nut flange provides the linear motion as the motor rotates the screw. Three guides are mounted on the frame. The vertical and horizontal arms connect the three guide blocks, thus forming an 'L' shaped structure. The nut on the Acme screw is attached to the centre guide block.

The free body diagram of the 'L' shaped structure is shown in Figure 3-5. As seen, the force 'F' is exerted by the motor on the centre guide block.

Figure 3-5. Free body diagram of linear motion system

Initially, sliding contact linear bearings from '/20 Inc' were used. The bearing pads were made from self-lubricating plastic. However, the friction between the sliding pairs was very high and caused a moment to act on the horizontal arm, thus preventing the linear motion of the 'L' shaped structure. This linear motion system was replaced by 'HSR25 LM' guides from 'THK'. The guide blocks from 'THK' have rolling elements placed between raceways. The rolling motion of these elements reduces the frictional resistance to 1/40th to 1/20th of that in a slide guide. This system ensured precise simultaneous linear motion of all three guide blocks. The laser range scanner and the camera system are mounted on the centre guide block as shown in Figure 3-6. The laser mirror mounted on the top corner guide block is fixed at an angle of 45 degrees.


Figure 3-6. Laser mounting on centre guide block

The bottom guide block supports the other mirror, coupled to the servo motor as shown in Figure 3-7. The rotary motion of this mirror is controlled by the servo motor. The mirror mountings are provided with setting screws for fine adjustment of the mirrors.

Figure 3-7. Bottom mirror mounting


Hardware Interfacing

Figure A-3 shows the electrical layout of the system. A Pentium II 350 MHz processor with the Red Hat 7.1 Linux operating system is used as the central processing unit. The stepper motor drive and limit switches are interfaced to the parallel port. The laser scanner is connected to serial port 1, the servo motor controller is connected to serial port 2, and the CCD camera to the frame grabber. Table 2-1 describes the connections of the stepper motor and the limit switches. In order to avoid overloading the computer's power source, the limit switches were connected to the parallel port using five Potter & Brumfield IDC-5 optical relays and a separate 24 volt power supply that also energizes the stepper motor. Two hardware stops were installed at the extreme ends of the rail. These stops cut off power to the motor before the cradle hits the structure of the box. The laser is interfaced to serial port 1 of the computer with an RS-232 cable. The laser requires three connections: the signal ground connected to pin 7, request to send data connected to pin 4, and data received on pin 3.

Table 2-1. Description of connections to parallel port

  Description        Parallel Port Bit    Parallel Port Pin
  Motor Run/Reset          D0                    2
  Motor Half/Full          D1                    3
  Motor Step In            D2                    4
  Motor CW/CCW             D3                    5
  Limit Switch 1           S3                   15
  Limit Switch 2           S4                   13
  Limit Switch 3           S5                   12
  Limit Switch 4           S6                   10
  Limit Switch 5           S7                   11


The servo motor controller is connected to serial port 2 of the computer. The controller requires only two connections to the computer, serial data and signal ground. The serial data is connected to pin 3 and ground is connected to pin 5 of the DB9 serial connector.

Implementation

The code "readLaserMirror.c" was written to control the whole process and create a file of data points obtained from the laser. The data points are written to a file as a list of Cartesian coordinates in units of centimeters with reference to a world coordinate system. The origin of the world coordinate system is shown in Figure 3-8; this point is the first scan point registered by the laser.

Subroutine "motor.c" handles the stepper motor and the switch control. The motor function sets the direction, designates the motor for half or full step, enables the bits of the motor, and sends the desired number of steps the motor should rotate. The number of pulses sent to the motor, the direction, the length of the pulse, and the delay between pulses are passed to the function as parameters. Signals sent to the parallel port to energize the motor are described in Table 2-2. The subroutine is exited when the motor has turned the desired number of steps or when the cradle triggers a limit switch at the end of the rail. The motor control software flowchart is listed in Appendix B. The motor function sends commands to the stepper motor to control lateral movement of the range sensor.

Table 2-2. Description of motor commands

  Command                       Hex Byte to Parallel Port    Binary Equivalent
  Pulse ON clockwise                      03                     00000011
  Pulse OFF clockwise                     07                     00000111
  Pulse ON counterclockwise               0B                     00001011
  Pulse OFF counterclockwise              0F                     00001111


The software package provided with the laser worked under the Windows environment. This package was designed to obtain distances to the objects in the laser's field of view and compare them to the actual locations of specified zones. The zone locations could be defined by simple menu-driven commands. The program would take and use distance measurements continuously, but because of its intended simplicity there was no way of using these measured values in an external program. A new Laser Measuring System control program had to be written to obtain distances to objects in the glove box. The program now being used is a modified version of an LMS control program written by Dr. David Novick of CIMAR, University of Florida. That program was written for the autonomous vehicle project and contains an extensive library of functions to communicate with the laser in the Linux environment. The program "readLaserMirror.c" makes use of these functions to communicate with the laser. A flowchart of the code readLaserMirror.c is shown in Appendix B.

To begin scanning, the laser port is initialized to 19200 baud transmission and the servo port is initialized to 9600 baud transmission. Next, the motor function sends a command to the stepper motor for the homing position. Then the laser is set for a 100 degree scanning angle and a resolution of 0.25 degree. The "readLaser" function reads the values scanned by the laser over the 100 degree scanning range and stores them in an array in the order of measurement. The location of a value in the array denotes at which angle of the laser diode the measurement was obtained. Since only the objects lying on the floor of the glove box are of interest, only the middle of the array, representing the centre range of 35 degrees of the scan from 70 degrees up to 105 degrees, is recorded in the file.

Before the data is written to the file, it is converted to Cartesian coordinates and the required transformations are performed to represent the data in the world coordinate system. Figure 3-8 shows the world and local coordinate systems with the laser and the bottom mirror as viewpoints. To align the laser coordinate system with the world coordinate system, the laser coordinate system is first translated in the Y and Z directions and then rotated about the X axis by 180 degrees.

Figure 3-8. Coordinate systems transformations

The coordinates of a point measured in the local coordinate system with the laser as the viewpoint can be transformed into the world coordinate system by the following transformation:


[x y z 1]^T = Trans(0, y_laser, z_laser) · Rot(X, 180°) · [x_local1 y_local1 z_local1 1]^T

which, written out, gives

x = x_local1
y = y_laser + y_local1 cos(180°) − z_local1 sin(180°) = y_laser − y_local1          (3.1)
z = z_laser + y_local1 sin(180°) + z_local1 cos(180°) = z_laser − z_local1

where

x_local1 = (scan number) × (lateral movement of the laser between two scans)
y_local1 = (measured distance) × cos(laser beam angle)
z_local1 = (measured distance) × sin(laser beam angle)

and y_laser and z_laser are the y and z coordinates of the laser position measured with respect to the world coordinate system. The origin of the XYZ coordinate system is located at the first point read by the scanner. A total of 141 readings is obtained from the laser in a single scan.

After the laser scans the glove box from the top and the values are written to the file, the laser is initialized to register data points with the viewpoint shifted to the bottom mirror. For this, the laser is set to a scanning range of 180 degrees with 1 degree resolution. The laser beam corresponding to the 180 degree angle points at the centre of the top mirror. All other range values in the array are discarded except for the 180 degree angle value. Since the top mirror is placed at an angle of 45 degrees, the beam from the top mirror is reflected onto the centre of the bottom mirror. The bottom mirror is rotated by the servo motor in steps of 0.85 degrees between two scans. The function "writeBuf" sends the command to the servo motor controller; the arguments passed to the function are the three individual byte values discussed earlier.

Figure 3-9. Bottom mirror beam reflectance

As the bottom mirror rotates, the normal to its surface changes, and hence the angle of incidence and the angle of reflectance also change. As shown in Figure 3-9, a change in the mirror angle by Δ causes both the angle of incidence and the angle of reflection to change by an equal amount. As a result, the angle of the reflected beam with respect to the horizontal changes by 2Δ, which is 1.7 degrees, and the total scanning range is 52 degrees. At each step the beam reflects from the bottom mirror, hits the object, and travels the same path back. The laser measures the total distance traveled by the beam. The distance traveled by the beam from the laser to the bottom mirror is fixed.


After subtracting this fixed distance from the total distance, the distance of the point from the bottom mirror is obtained. Since the range data and the angle of the beam reflected from the bottom mirror are available at every step, the local Cartesian coordinates of the point can be computed. The local coordinate system of the bottom mirror is then aligned with the world coordinate system of the glove box: as with the laser coordinate system, the mirror coordinate system is first translated in the Y and Z directions and then rotated about the X axis by 180 degrees. The coordinates of a point measured in the mirror coordinate system can be determined in the world coordinate system as follows:

x = x_local2
y = y_mirror + y_local2 cos(180°) − z_local2 sin(180°) = y_mirror − y_local2          (3.2)
z = z_mirror + y_local2 sin(180°) + z_local2 cos(180°) = z_mirror − z_local2

where

x_local2 = (scan number) × (lateral movement of the laser between two scans)
y_local2 = (range distance from bottom mirror) × cos(reflected beam angle)
z_local2 = (range distance from bottom mirror) × sin(reflected beam angle)

and y_mirror and z_mirror are the y and z coordinates of the bottom mirror measured with respect to the world coordinate system.

The stepper motor controls the linear motion. After the data points are collected in one plane from both viewpoints, the stepper motor moves the scanning system linearly by 0.25 cm. Thus the linear distance between two scans is 0.25 cm, which is equivalent to 200 steps of the stepper motor. The same procedure is repeated for the entire length of the glove box. The total number of scans and the distance between two scans can be varied as required.


CHAPTER 4
VISUALIZATION

The purpose of the visualization system is to display the range sensor measurements as a 3D model of the glove box. The 3D model is represented as a point cloud, which can be rotated and viewed from different vantage points by the user. The visual camera system captures an image of the glove box environment and provides the color information.

Software

The visualization program uses the OpenGL graphics package operating under the Linux environment. OpenGL is a software interface to graphics hardware. The interface consists of about 250 distinct commands that can be used to specify the objects and operations needed to produce interactive three dimensional applications. It does not provide high level commands for complicated three dimensional shapes; objects have to be built from a small set of geometric primitives: points, lines, and polygons. OpenGL does not contain any commands for managing windows, hence the OpenGL Utility Toolkit (GLUT) is used for tasks such as opening windows and reading events from the mouse and keyboard.

The program glovebox.c was written to accomplish the task of visualization. The code reads the data points from the data file and displays them as individual points to form a point cloud. The flowchart of the code is given in Appendix B. The program starts with GLUT routines that specify the initial window size, window position, and display mode (single buffer and RGBA color model). The function "init()" sets the background color and reads the data points from the file. Callback functions are registered to display graphics and handle the input events. The callback function "display()" is executed whenever GLUT determines that the contents of the window are to be redisplayed; it performs the viewing and modeling transformations and draws the point cloud. Callback "reshape(int w, int h)" is executed whenever the window is resized; it sets up the projection matrix to resize the whole display. Callback "keyboard(unsigned char key, int x, int y)" is executed whenever a keyboard button is pressed; it rotates the view about the X, Y, or Z axis when the respective keys are pressed. This is accomplished by making the necessary changes in the modeling transformation matrix and redisplaying the view.

Implementation

Two boxes were placed in the glove box and scanned by the range sensor by executing the code "readLaserMirror.c". When the center guide block hits the limit switch at the center of the glove box, the image of the boxes is captured by the visual camera system and saved in the file "offscreen.tga". Figure 4-1 shows the image captured from the camera. Three different data files were created by the code "readLaserMirror.c": the file ptcloudlaser.iv holds the data points available from the direct view of the laser, the file ptcloudmirror.iv holds the data points available from the deflected beam, and the file ptcloudlasermirror.iv holds all the data points. The program glovebox.c is executed to display the data points from each of these files.

Figure 4-2 displays the points collected directly from the laser for the two boxes. The viewpoint for these data points is the laser source. The sides of the boxes are hidden from the direct view of the laser source, hence the range sensor does not register any data from the sides. Figure 4-3 shows the data points registered from the beam deflected by the laser mirrors. The viewpoint for these data points is the bottom mirror, hence the laser is able to register data points from the side facing the bottom mirror. Figure 4-4 is the display of the complete set of data points after aligning them in the world coordinate system. Thus the use of laser mirrors to register data points from a different viewpoint gives more information about the glove box environment.

Figure 4-1. Image from CCD camera

Although it is not possible to have complete information about the object from just two views, the same principle of using laser mirrors could be applied on each side so as to obtain complete details of the object from all sides.


Figure 4-2. Display of data points obtained directly from laser


Figure 4-3. Display of data points obtained from the beam deflected by the mirrors


Figure 4-4. Display of complete set of data points


CHAPTER 5
RESULTS AND CONCLUSION

Results

To check the accuracy of the data points registered from the beam deflected by the laser mirrors, a plane surface was scanned by executing the code "readLaserMirror.c". The total number of data points registered for the plane surface was 4089. Figure 5-1 shows the display of these data points. The relative accuracy of the system, not accounting for gross displacement error, can be checked by finding a best fitting plane to this data and then comparing the actual data points with the best fitting plane.

Figure 5-1. Data points registered for plane surface


Let 'r' be any point on the plane, given by

r = x i + y j + z k

and let 's' be the vector perpendicular to the plane, given by

s = A i + B j + C k

Writing the equation of the plane,

r · s + D0 = 0          (5.1)

Let 'G' be the matrix of 'n' points, given by

G = | x1  y1  z1 |
    | x2  y2  z2 |
    | ..  ..  .. |
    | xn  yn  zn |

and let 'b' be a vector of length 'n', given by

b = [ -D0  -D0  ...  -D0 ]^T

Equation 5.1 for the plane can then be written as

G s = b          (5.2)

The matrix 'G' and the vector 'b' are both known, and a least squares solution technique will be used to obtain a solution for 's', called s_optimum, such that the sum of the squares of the components of the length-n vector 'e' is minimized, where the error 'e' is defined as

e = b − G s_optimum          (5.3)


In other words, s_optimum will be determined such that e1² + e2² + ... + en² is minimized, where e1, e2, ..., en are the components of 'e'. The sum of the squares of the components of vector 'e' may be written as

e^T e = e1² + e2² + ... + en²          (5.4)

where e_i = −D0 − (A x_i + B y_i + C z_i)

The minimum occurs when the derivatives of equation 5.4 taken with respect to A, B, and C all equal zero. Evaluating the partial derivatives,

∂(e^T e)/∂A = −2 (x1 e1 + x2 e2 + ... + xn en)
∂(e^T e)/∂B = −2 (y1 e1 + y2 e2 + ... + yn en)
∂(e^T e)/∂C = −2 (z1 e1 + z2 e2 + ... + zn en)

The derivatives will equal zero when

x1 e1 + x2 e2 + ... + xn en = 0
y1 e1 + y2 e2 + ... + yn en = 0
z1 e1 + z2 e2 + ... + zn en = 0

that is, when G^T e = 0. Therefore

G^T e = G^T (b − G s_optimum) = 0
G^T G s_optimum = G^T b
s_optimum = (G^T G)^-1 G^T b          (5.5)


Assuming 'D0' equal to 1, equation 5.5 is used to find the optimum value of 's' for the data points registered by the deflected beam:

s_optimum = [ 0.0000  0.0148  0.0000 ]^T

Converting this to a unit vector,

s_optimum = [ 0  1  0 ]^T

The components of vector 'e' are found using equation 5.3. The root mean square value of the elements of the vector 'e' is 6.281 mm. The same plane was scanned by the direct beam from the laser; the root mean square value of the elements of the vector 'e' obtained in this case is 5.758 mm. Thus the results obtained from the beam deflected by the laser mirrors are comparable to the results obtained from the direct beam.

Conclusion

The scanning and modeling system consists of three modules: the range scanning system, the visual camera system, and the visualization system. All three operate on the Linux platform on a single computer. The use of dielectric laser mirrors to obtain data points from a different vantage point has been successful.

Certain critical issues were observed while using the mirrors. A change in the ambient light, such as the presence of a bright light source near the glove box, affects the measurements, as does the presence of highly reflective surfaces near the mirrors. During the initial set up the mirror readings were not found to be accurate, the reason being the aluminum surface of the vertical arm connecting the mirrors; after painting the aluminum surface black the readings were satisfactory. Objects with different reflectivities were scanned, and it was observed that the measurements were less accurate for objects with low reflectivity compared to those with high reflectivity.

Future Work

The mirrors were used to scan the glove box from only one side, which is not enough to reveal all the details, as discussed before. It would be ideal to have the same set up of mirrors on all sides to scan the glove box completely. At present the system takes a long time, since only one beam of the 180 degree scan is used; each time the laser scans the whole plane, only one reading is registered from the deflected beam. Probably the best approach would be to have a single stationary laser beam source and mirrors on the top and sides. Figure 5-2 shows a schematic of such a system.

Figure 5-2. Conceptual set up for 3D scanning


The laser beam could be deflected by the top and side mirrors to register data from different viewpoints. Since the mirrors are light in weight, it would be easier to translate and rotate them. Information from the range sensor and the camera could later be used for data segmentation to identify individual objects. Some work has been done in this area: the data points of the floor of the glove box and the objects were separated, simply by separating all the data points with Z-value in a particular range. Finally, a surface fitting technique and texture mapping could be applied to the segmented data for better visualization.

Figure 5-3. Surface reconstruction using cocone


The cocone software developed by Dey et al. (2001) at the Ohio State University was tried for surface reconstruction. However, since the data was incomplete and sparse, it was not possible to generate a good surface. Figure 5-3 shows an example output from cocone for one of the readings; as can be seen, because the data is incomplete and sparse, the surface is not properly constructed. The next step would be to collect data from all the sides, segment the data for individual objects, and apply surface reconstruction techniques.


APPENDIX A
DIAGRAMS

Figure A-1. A 3D model of the glove box scanning system


Figure A-2. Assembly drawing of the glove box scanning system


Figure A-3. Electrical layout of the system


Figure A-4. Layout of the Mini SSC II circuit board


APPENDIX B
FLOWCHARTS

This appendix contains the flowcharts for the programs described in the text.

Figure B-1. Flowchart for readLaserMirror.c


Figure B-1. Continued


Figure B-1. Continued


Figure B-1. Continued


Figure B-2. Flowchart for subroutine motor.c


Figure B-3. Flowchart for glovebox.c


Figure B-3. Continued


Figure B-3. Continued (subroutine reshape(int w, int h))


Figure B-3. Continued (subroutine keyboard(unsigned char key, int x, int y))
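The keyboard flowchart of Figure B-3 maps the lowercase keys x, y, and z to clockwise rotation about the corresponding axis and their uppercase counterparts to counterclockwise rotation, then redisplays the scene. A minimal sketch of that dispatch logic (the rotation-state structure and step size are assumptions, not the thesis code):

```c
#include <assert.h>

/* Accumulated view rotation about each axis, in degrees
   (state layout is illustrative; the actual glovebox.c may differ). */
typedef struct { double rx, ry, rz; } ViewRot;

#define ROT_STEP 5.0  /* degrees per key press (assumed value) */

/* Apply one key press following the Figure B-3 keyboard flowchart:
   lowercase rotates clockwise, uppercase counterclockwise.
   Returns 1 if the key was handled, 0 otherwise. */
static int handle_key(ViewRot *r, unsigned char key)
{
    switch (key) {
    case 'x': r->rx += ROT_STEP; return 1;
    case 'X': r->rx -= ROT_STEP; return 1;
    case 'y': r->ry += ROT_STEP; return 1;
    case 'Y': r->ry -= ROT_STEP; return 1;
    case 'z': r->rz += ROT_STEP; return 1;
    case 'Z': r->rz -= ROT_STEP; return 1;
    default:  return 0;
    }
}
```

In the GLUT program this handler would be registered with glutKeyboardFunc, and a redisplay would be requested after every handled key, matching the "display" step at the end of the flowchart.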


LIST OF REFERENCES

Besl P J, McKay N D. 1992. A method for registration of 3-D shapes. IEEE Transactions on Pattern Analysis and Machine Intelligence. 14(2):239-255.

Bernardini F, Rushmeier H, Martin I, Mittleman J, Taubin G. 2002. Building a digital model of Michelangelo's Florentine Pieta. IEEE Computer Graphics and Applications. 22(1):59-67.

Center for Occupational Research and Development. n.d. Laser Electro-optics Technology tutorial. http://www.dewtronics.com/tutorials/lasers/leot/index.html. Site last visited August 2003.

Dey T K, Giesen J, Goswami S, Hudson J, Zhao W. 2001. Tight Cocone [computer program]. Available from: http://www.cis.ohio-state.edu/~tamaldey/cocone.html. Site last visited August 2003.

Dias P, Sequeira V, Goncalves J G M, Vaz F. 2000. Automatic registration of laser reflectance and color intensity images for 3D reconstruction. Proceedings of SIRS2000, Reading, UK.

Huber D F, Hebert M. 2001. Fully automatic registration of multiple 3D data sets. IEEE Computer Society Workshop on Computer Vision Beyond the Visible Spectrum (CVBVS 2001). http://www.ri.cmu.edu/pub_files/pub3/huber_daniel_f_2001_1/huber_daniel_f_2001_1.pdf. Site last visited August 2003.

Imaging, Robotics and Intelligent Systems Laboratory, University of Tennessee. n.d. http://imaging.utk.edu. Site last visited August 2003.

Kromker D, Seiler C. 2001. The Virtual Glove Box - a new I/O device. ERCIM News. http://www.ercim.org/publication/Ercim_News/enw46/seiler.html. Site last visited August 2003.

Owczarz M. 2000. 3D plutonium glove box scanning and modelling system. Master's thesis, University of Florida, Gainesville.

Optical Metrology Centre. n.d. http://www.optical-metrology-centre.com/index.html. Site last visited August 2003.

Scott Edwards Electronics, Inc. n.d. Serial servo controllers. http://www.seetron.com. Site last visited August 2003.


Stanford Computer Graphics Laboratory. 1999. The Digital Michelangelo Project. http://graphics.stanford.edu/projects/mich. Site last visited August 2003.

Thorlabs, Inc. n.d. http://www.thorlabs.com. Site last visited August 2003.

Tognola G, Parazzini M, Svelto C, Ravazzani P, Grandori F. 2003. A fast and reliable system for 3D surface acquisition and reconstruction. Image and Vision Computing. 21:295-305.

Woo M, Neider J, Davis T, Shreiner D. 2000. OpenGL Programming Guide. 3rd ed. Addison Wesley, Reading, MA.


BIOGRAPHICAL SKETCH

Sanjay Solanki was born in India. He completed his bachelor's degree in mechanical engineering in 1998. After graduation he worked as a research and development engineer in the automobile industry. After working for 2 years, he realized that his knowledge was not sufficient to bring about a breakthrough in the industrial world, and he decided to pursue higher education. He came to the United States in 2001 to pursue a master's degree in mechanical engineering at the University of Florida, choosing to specialize in robotics. For the next 2 years, the Center for Intelligent Machines and Robotics at the University of Florida was his home. There he worked on various research projects under the guidance of Dr. Carl Crane III.