Design, Characterization, and Controls for a Surgeon-Interactive Robotic Imaging System


Material Information

Title:
Design, Characterization, and Controls for a Surgeon-Interactive Robotic Imaging System
Physical Description:
1 online resource (81 p.)
Language:
english
Creator:
Elmore, Timothy Zane
Publisher:
University of Florida
Place of Publication:
Gainesville, Fla.
Publication Date:

Thesis/Dissertation Information

Degree:
Doctorate (Ph.D.)
Degree Grantor:
University of Florida
Degree Disciplines:
Mechanical Engineering, Mechanical and Aerospace Engineering
Committee Chair:
BANKS,SCOTT ARTHUR
Committee Co-Chair:
FREGLY,BENJAMIN J
Committee Members:
CRANE,CARL D,III
BOVA,FRANK J

Subjects

Subjects / Keywords:
computed -- imaging -- robotic -- surgical -- tomography
Mechanical and Aerospace Engineering -- Dissertations, Academic -- UF
Genre:
Mechanical Engineering thesis, Ph.D.
bibliography   ( marcgt )
theses   ( marcgt )
government publication (state, provincial, territorial, dependent)   ( marcgt )
born-digital   ( sobekcm )
Electronic Thesis or Dissertation

Notes

Abstract:
A surgeon-interactive robotic imaging system can provide improvements in the field of surgical imaging, such as reduced X-ray dose, improved reconstruction quality, and reduced overall size and bulk, improving surgeon access. However, using robotic manipulators to perform surgical imaging is not trivial. A new robotic manipulator must be designed, and there are dynamic and control challenges that must be addressed. Finally, there are path planning challenges: finding the optimum trajectory through a cloud of desired views while avoiding collisions and meeting other safety constraints. Design of a novel manipulator and static identification of the kinematic and stiffness properties of that manipulator are performed. Coordination of a serial-link manipulator and a prismatic/revolute manipulator is demonstrated. Optimal trajectory planning is accomplished using a variation of methods developed for the classic traveling salesman problem. The prototype system demonstrates feasibility for all aspects of a surgeon-interactive robotic imaging system to provide a suite of novel and useful capabilities for image-guided surgical procedures.
General Note:
In the series University of Florida Digital Collections.
General Note:
Includes vita.
Bibliography:
Includes bibliographical references.
Source of Description:
Description based on online resource; title from PDF title page.
Source of Description:
This bibliographic record is available under the Creative Commons CC0 public domain dedication. The University of Florida Libraries, as creator of this bibliographic record, has waived all rights to it worldwide under copyright law, including all related and neighboring rights, to the extent allowed by law.
Statement of Responsibility:
by Timothy Zane Elmore.
Thesis:
Thesis (Ph.D.)--University of Florida, 2013.
Local:
Adviser: BANKS,SCOTT ARTHUR.
Local:
Co-adviser: FREGLY,BENJAMIN J.

Record Information

Source Institution:
UFRGP
Rights Management:
Applicable rights reserved.
Classification:
lcc - LD1780 2013
System ID:
UFE0045930:00001




Full Text

PAGE 1

DESIGN, CHARACTERIZATION, AND CONTROLS FOR A SURGEON-INTERACTIVE ROBOTIC IMAGING SYSTEM

By

TIMOTHY Z. ELMORE

A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY

UNIVERSITY OF FLORIDA

2013

PAGE 2

© 2013 Tim Elmore

PAGE 3

I dedicate this dissertation to my family, friends, and coworkers, for without their support such work would not have been possible.

PAGE 4

ACKNOWLEDGMENTS

I would like to thank my advisor, Dr. Scott Banks, for his endless wisdom, support, and dedication towards this research. He was without a doubt the most vital part of this research. I also thank my committee, Dr. B.J. Fregly, Dr. Carl Crane, and Dr. Frank Bova, for their guidance and input. Finally, I would like to thank my fellow lab members Ira Hill, Nick Dunbar, Jared Jones, David Walker, and Brian Park for their support, intellectual contributions, and constant encouragement.

PAGE 5

TABLE OF CONTENTS

                                                                          page
ACKNOWLEDGMENTS .......... 4
LIST OF TABLES .......... 7
LIST OF FIGURES .......... 8
ABSTRACT .......... 10

CHAPTER

1 INTRODUCTION .......... 11
   Background .......... 11
   The Need for Image Guidance .......... 11
   History of Image Guided Surgery .......... 11
   Current State of Image Guided Surgery .......... 12
   A New Approach: Two Robotic Manipulators .......... 14
   Manipulator Design .......... 15
   Trajectory Planning .......... 16
   Reconstruction Sensitivity .......... 16

2 MANIPULATOR DESIGN .......... 18
   The Goal .......... 18
   Introduction .......... 18
   Methods .......... 19
      Implementation and Experimental Assessment .......... 24
      Experimental Setup .......... 27
   Results .......... 30
   Discussion .......... 34
   Conclusion .......... 35

3 MANIPULATOR CONTROL AND COORDINATION .......... 36
   The Goal .......... 36
   Introduction .......... 36
   Methods .......... 38
      Pre-planned Control .......... 38
      Haptic Control .......... 41
      Haptic Study .......... 44
      Protection of Human Subjects .......... 46
   Results .......... 47
   Discussion .......... 49
   Conclusion .......... 49

PAGE 6

4 TRAJECTORY PLANNING .......... 51
   The Goal .......... 51
   Description of Reconstruction Algorithm .......... 51
   Problem Description .......... 52
   Branch and Bound Algorithm .......... 55
   Conclusion .......... 60

5 CT RECONSTRUCTION SENSITIVITY ANALYSIS .......... 62
   The Goal .......... 62
   Introduction .......... 62
   Methods .......... 63
   Results .......... 66
   Discussion .......... 70
   Conclusion .......... 71

6 CONCLUSION .......... 73

APPENDIX: BACK PROJECTION CONFIGURATION .......... 74
LIST OF REFERENCES .......... 76
BIOGRAPHICAL SKETCH .......... 81

PAGE 7

LIST OF TABLES

Table                                                                     page

2-1  Datasets collected with the experimental system .......... 28
2-2  Changes to as a result of calibration .......... 31
2-3  Original D-H parameters for the UTR .......... 31
2-4  Modifications to the D-H parameters for the UTR after calibration, unloaded .......... 31
2-5  Modifications to the D-H parameters for the UTR after calibration, 5 kg load .......... 31
2-6  Modifications to the D-H parameters for the UTR after calibration, 28 kg load .......... 32
2-7  Error of calibrated model .......... 34
3-1  Settling times for a typical 150 mm move .......... 47
3-2  Time and force data for the nine-position study .......... 48
4-1  Time and path length for brute force .......... 54
4-2  Time and number of unique paths for branch and bound .......... 57
4-3  The UTR's dynamic constraints for path planning purposes .......... 57
4-4  Time to complete both the random and optimally sorted paths .......... 59
5-1  Error types for cases evaluated .......... 64
5-2  RMS error and correlation coefficient for cases 1-5 .......... 67

PAGE 8

LIST OF FIGURES

Figure                                                                    page

1-1   The human spine, with vertebrae labeled .......... 12
1-2   Current intraoperative imaging solutions: A) the C-arm, B) O-arm .......... 13
1-3   Siemens Artis Zeego .......... 13
1-4   Proposed configuration with X-ray source attached to under-table robot (UTR) and x-ray image detector connected to Mitsubishi PA10-6CE robotic arm .......... 14
2-1   SolidWorks model of UTR .......... 20
2-2   Available torque as a function of velocity .......... 22
2-3   Joint 2's position, velocity, acceleration, and torque as a function of time .......... 22
2-4   Labeled joint axes of the UTR .......... 23
2-5   A set of 50 unique positions .......... 26
2-6   Experimental setup showing PA10 and UTR .......... 28
2-7   SolidWorks FEA model of typical UTR arm loading during acceleration .......... 30
2-8   Position error for unloaded case. Red points are not included in the optimization, and are present to verify the calibration. All points are included in reported error metrics .......... 32
2-9   Position error for 5 kg load case. Red points are not included in the optimization, and are present to verify the calibration .......... 33
2-10  Position error for 28 kg full payload case. Red points are not included in the optimization, and are present to verify the calibration .......... 33
3-1   Block diagram of PA10 control .......... 38
3-2   Spine with light sensor positions noted .......... 44
3-3   Actual apparatus with light sensors .......... 45
3-4   Controller performance for a typical 150 mm move .......... 47
3-5   Controller performance for a near collision .......... 48
4-1   An example trajectory .......... 53
4-2   An example tree .......... 55

PAGE 9

4-3   Branch and bound pseudocode .......... 56
4-3   First 10 rows and columns of the cost matrix C .......... 58
5-1   MTF as a function of spatial frequency for cases 1-6 .......... 67
5-2   Modified Shepp-Logan phantom with white horizontal line indicating location of 1-D intensity plot .......... 67
5-3   Intensity plots for a cross section of the modified Shepp-Logan phantom reconstruction .......... 68
5-4   Reconstructions of cranial CT scan .......... 69

PAGE 10

Abstract of Dissertation Presented to the Graduate School of the University of Florida in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy

DESIGN, CHARACTERIZATION, AND CONTROLS FOR A SURGEON-INTERACTIVE ROBOTIC IMAGING SYSTEM

By

Timothy Z. Elmore

December 2013

Chair: Scott Banks
Cochair: B.J. Fregly
Major: Mechanical Engineering

A surgeon-interactive robotic imaging system can provide improvements in the field of surgical imaging, such as reduced X-ray dose, improved reconstruction quality, and reduced overall size and bulk, improving surgeon access. However, using robotic manipulators to perform surgical imaging is not trivial. A new robotic manipulator must be designed, and there are dynamic and control challenges that must be addressed. Finally, there are path planning challenges: finding the optimum trajectory through a cloud of desired views while avoiding collisions and meeting other safety constraints. Design of a novel manipulator and static identification of the kinematic and stiffness properties of that manipulator are performed. Coordination of a serial-link manipulator and a prismatic/revolute manipulator is demonstrated. Optimal trajectory planning is accomplished using a variation of methods developed for the classic traveling salesman problem. The prototype system demonstrates feasibility for all aspects of a surgeon-interactive robotic imaging system to provide a suite of novel and useful capabilities for image-guided surgical procedures.

PAGE 11

CHAPTER 1
INTRODUCTION

Background

Back pain costs more than $100 billion annually in the US. Over 650,000 surgical procedures are performed annually, with costs exceeding $20 billion [1]. Spine surgery benefits from some form of navigation, or image-guided surgery (IGS), in order to properly place hardware in particular vertebrae. As of the late 1990s, four approaches were used: live fluoroscopic guidance, pre-surgical CT scans, virtual fluoroscopy, and 2D-3D registration. Although these technologies resulted in significant patient benefits, their success was limited by inaccurate registrations, increased surgeon radiation exposure, and/or the unrealistic assumption of rigid alignment between the CT scan and the operating room.

The Need for Image Guidance

The success of spine surgery is dependent upon the placement of the hardware. Studies have been performed on the accuracy requirements for pedicle screw placement [2]. It was reported that maximum permissible translational/rotational error tolerances ranged from 0.0 mm/0.0° at T5 to 3.8 mm/12.7° at L5. The human spine is shown in Figure 1-1. Therefore, with zero permissible error at some vertebrae, improvements in navigation during spine surgery promise to improve patient outcomes. Furthermore, there are other areas for improvement, such as patient radiological dose, equipment size, and equipment cost.

History of Image Guided Surgery

Over the past 10 years, a new technology has emerged that has expanded utilization of IGS. Intraoperative cone beam computed tomography (CBCT) has provided significant advances, reducing the drawbacks associated with the aforementioned approaches while providing useful views and measurements to the surgeon. However, one of the biggest

PAGE 12

drawbacks to the current CBCT systems is the bulk of the machine and its interference with the surgeon's approach to the surgical field. Additionally, current technologies have limited mechanical degrees of freedom, resulting in increased dose to the patient to provide the same quality reconstruction.

Figure 1-1. The human spine, with vertebrae labeled [3]

Current State of Image Guided Surgery

Current navigation methods have limitations, namely cost, limited imaging capabilities, and bulk, resulting in crowding an already overcrowded operating room environment. Currently, the most popular imaging tools for the operating room are the C-arm and O-arm, as shown in Figure 1-2.

PAGE 13

Figure 1-2. Current intraoperative imaging solutions. A) The C-arm [4]. B) The O-arm [taken by investigator].

A robotic imaging system can overcome these limitations, but development of such a system presents many challenges that have not been addressed. The Siemens Artis Zeego, a new imaging system, attempts to address these issues by mounting a conventional C-arm on a robot arm, but the required arm is 2-3 times bulkier than existing O-arm technology.

Figure 1-3. Siemens Artis Zeego [5]

Additionally, the available views are still limited, as the SID (source-to-image distance) is fixed, and the alignment of the source and sensor is also fixed. Betz et al. [6] suggested in 2002 there

PAGE 14

might be advantages to a dual-robot imaging system, but no surgical imaging platform to date has put the x-ray source and x-ray detector on separate robotic manipulators.

A New Approach: Two Robotic Manipulators

The solution shown in Figure 1-4 seeks to overcome all existing limitations without compromise. By attaching the x-ray source and detector directly to robots, the bulk of the manipulators can be greatly reduced, while allowing 6-DOF control over the image detector and 5-DOF control over the x-ray source.

Figure 1-4. Proposed configuration with X-ray source attached to under-table robot (UTR) and x-ray image detector connected to Mitsubishi PA10-6CE robotic arm

Until recently, CT reconstruction algorithms were limited by computational complexity to methods requiring equal spacing of views, typically rotating about a single fixed axis. Recent work has shown a constellation of views to be more advantageous, in terms of reconstruction quality and reduced number of radiographic images [8], [9]. These advanced techniques are made possible by GPU-accelerated reconstruction [10]. However, existing imaging systems rigidly fix the source and sensor, using either a curved metal arm or putting both on a circular sliding track. These solutions are bulky and inflexible in terms of medical imaging and desired views/dose

PAGE 15

concerns. To expand the possible viewing gamut, it is advantageous to move the source and sensor to separate robotic manipulators, for which there is precedent [11], [12].

Manipulator Design

No manipulator exists with the degrees of freedom and payload capacity necessary to position the x-ray source under a surgical operating table without excessive bulk. Thus, a custom manipulator was designed using off-the-shelf motors and actuation stages. This under-table robot (UTR) had to be carefully designed, minimizing moment arms and necessary motor torques, while still retaining the flexibility required by the imaging algorithms.

The manipulator was characterized for stiffness and overall positional accuracy. The accuracy of the robots is important because it contributes to the 3D reconstruction error, and reconstruction error contributes to pedicle screw placement error. The Mitsubishi PA10 robot has already been the subject of an accuracy study [11]. This study showed that after correcting robot geometry and stiffness parameters, the mean PA10 position error drops to roughly 0.33 mm. A validation protocol for the UTR has been evaluated using a Coordinate Measuring Machine (CMM). The results of the validation protocol are reported in Chapter 2. The results show an RMS position error of 0.248 mm and peak error of 0.507 mm for the fully loaded UTR with x-ray tube payload. Errors of this magnitude are considered to have little to no effect on the visualization of fine details post-reconstruction [13].

Manipulator end-effector speeds are low in this application, as the target does not move appreciably. Maximum accelerations are estimated at 2.5 m/s², with maximum end-effector velocities of 0.25 m/s. Thus, dynamic constraints are less of a concern, and static accuracy is paramount. This new under-table robot must be moved in coordination with a Mitsubishi PA10 manipulator, with end effectors locked together by a virtual rigid link. Control of the Mitsubishi

PAGE 16

PA10 has already been performed and validated [11]. Thus, Chapter 3 will focus on the control of the UTR and PA10, and the coordination of the two robots. A validation protocol has been performed using a CMM.

Trajectory Planning

Research at Stony Brook University has resulted in a reconstruction algorithm that allows for 3-D image reconstruction without full 180-degree sweeps of the x-ray equipment [8]. Instead, the algorithm uses prior knowledge of the anatomy to choose views that provide the greatest information for the reconstruction. The order in which these views are collected is unimportant to the reconstruction algorithm, but can make a significant impact on imaging time. For example, if one needed to visit New York, Chicago, and San Francisco, it would not make sense to visit Chicago first or last, as it lies in the middle of the other two cities. This problem is trivial for the example with 3 locations, but the reconstruction algorithm requires 30 or more views. This is the classic traveling salesman problem, and a framework to calculate the cost matrix based on the manipulator dynamics and sort the views such that imaging time is minimized has been completed and tested, as reported in Chapter 4.

Reconstruction Sensitivity

In Chapter 5, the sensitivity of CBCT reconstruction algorithms to source/sensor positioning errors is evaluated using common imaging metrics, including relative error, correlation coefficient, modulation transfer function, and intensity plots. These metrics will be evaluated for nominal reconstructions and for several reconstructions whose forward projection positions and orientations have been modified by random error of varying magnitudes before reconstruction. Some example cases are provided for the reconstruction of a head phantom. Finally, the sensitivity of this reconstruction algorithm will be compared against the results of the UTR

PAGE 17

positioning accuracy study performed in Chapter 2 to evaluate the feasibility of the manipulator/algorithm combination.
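The view-ordering problem described under Trajectory Planning can be illustrated with a small sketch. This is not the dissertation's Chapter 4 implementation (there, the cost matrix is built from the UTR's dynamic constraints); here the travel times are a made-up symmetric cost matrix, and the search is an exhaustive branch and bound over open paths:

```python
def path_cost(order, cost):
    """Total travel time along an ordered open path of view indices."""
    return sum(cost[a][b] for a, b in zip(order, order[1:]))

def best_view_order(cost):
    """Order views to minimize total travel time (open-path TSP).

    cost[i][j] is the time to move from view i to view j. Exhaustive
    search with branch-and-bound pruning: a partial path is abandoned
    as soon as it costs at least as much as the best complete path.
    """
    n = len(cost)
    best = {"order": list(range(n)), "cost": path_cost(list(range(n)), cost)}

    def extend(order, so_far, remaining):
        if so_far >= best["cost"]:   # prune: no better than incumbent
            return
        if not remaining:
            best["order"], best["cost"] = order, so_far
            return
        # Try nearest views first so good incumbents are found early.
        for v in sorted(remaining, key=lambda v: cost[order[-1]][v]):
            extend(order + [v], so_far + cost[order[-1]][v], remaining - {v})

    for start in range(n):
        extend([start], 0.0, set(range(n)) - {start})
    return best["order"], best["cost"]

# Toy example mirroring the New York / Chicago / San Francisco analogy:
# three "views" on a line at 0, 10, and 5. The optimal order visits the
# middle view between the two outer ones.
positions = [0.0, 10.0, 5.0]
cost = [[abs(a - b) for b in positions] for a in positions]
order, total = best_view_order(cost)
```

The same pruning idea scales to the 30-plus views the reconstruction algorithm requires, with the cost matrix derived from joint travel times under the manipulator's velocity and acceleration limits.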

PAGE 18

CHAPTER 2
MANIPULATOR DESIGN

As described previously, no robotic manipulator exists that meets the size and performance goals for the x-ray source portion of this imaging system. Thus, a 5-DOF custom robotic manipulator was designed to minimize motor torque demands, while meeting stiffness and accuracy requirements. This manipulator has to fit under a standard Jackson-style operating table, while carrying a 25 kg payload and being able to extend above the operating table for lateral-view imaging.

The Goal

The goal of this section is to design, build, and evaluate a new manipulator with the purpose of accurately positioning an x-ray source under static and dynamic conditions.

Introduction

The importance of geometric calibration of radiographic systems has been well demonstrated [14]. Jaffray et al. demonstrated that geometric calibration of errors as small as 2 mm peak magnitude can result in improvements to the reconstruction. Many researchers have developed nonlinear geometric calibration models for robotic systems. In fact, the PA10 robot used in this application was thoroughly evaluated by a prior U.F. PhD student [11]. This model was significantly more complicated due to the presence of harmonic drives, higher motor torques, errors propagating through serial links, etc. A UTR based upon stepper motors and drives will not have nonlinearity due to gearbox backlash, but will lack high-accuracy knowledge of link geometries and joint flexibilities [15]. Link geometries are CAD modeled and machined to exacting tolerances, but assembly tolerances will yield robot geometric parameters that deviate from nominal values. Additionally, links can be modeled using FEA (finite element analysis) to provide expected link stiffnesses, but bearing and joint flexibilities are only specified

PAGE 19

to be less than a maximum tolerance. For these reasons, it is equally important to characterize actual geometric uncertainties and joint flexibilities.

Research performed on a prismatic/revolute (PR) gantry assembly [16] deals with quantifying repeatable error due primarily to deflection and manufacturing tolerances in a 6-DOF positioner similar to the 5-DOF positioner being considered. The 6-DOF apparatus uses a prismatic/prismatic/prismatic/revolute/revolute/revolute (PPPRRR) configuration, while the UTR uses a prismatic/prismatic/revolute/prismatic/revolute (PPRPR) configuration. However, the approach provides useful guidance because it focuses specifically on errors resulting from linear positioners.

Additional consideration of error sources showed positional error due to thermal expansion is not negligible. Consider a robotic manipulator with 1 meter reach constructed entirely of aluminum. If the room temperature increases from 20°C to 25°C, a change in length of approximately 115 microns will be observed (assuming a thermal expansion coefficient of 23×10⁻⁶/°C for aluminum). This is not insignificant, as overall error is desired to be 1 mm or less for both manipulators [13]. The body of research dealing with thermal expansion modeling in robots is limited, but authors have addressed the issue [17]. Thus, temperature sensors are necessary for best accuracy in application, but are ignored here, as all measurements will be conducted under climate-controlled conditions.

Methods

A 5-DOF custom robotic manipulator was designed to minimize motor torque demands, while meeting stiffness and accuracy requirements. The basic design for the manipulator was built on a standard X-Y stage, composed of two prismatic joints. A rotary stage was placed on top of that table, and a vertical, or Z-translation, prismatic stage on top of that. Finally, the tube mount was made to be able to rotate about the tube's long axis. The entire manipulator can be seen in Figure 2-1. Only 5 degrees of freedom are necessary to point the x-ray source, as the

PAGE 20

detector robot has the ability to rotate the image plane in register with the x-ray aperture/collimator. Other goals include fitting under a standard Jackson-style operating table, while carrying a 25 kg payload and being able to extend above the operating table for lateral-view imaging. In order to keep eccentric loads on the rotary stage within limits, a counterbalance is necessary, but is not shown in Figure 2-1. This brings the total load increase on joints 1-3 to 56 kg.

Figure 2-1. SolidWorks model of UTR

Since it was identified that structural stiffness/encoder accuracy is paramount, extra attention was focused on these aspects. The linear actuators chosen have accuracy ratings of 154 µm (x-axis) and 114 µm (y- and z-axes), and quadrature encoder resolution of 5 microns. The target is less than 1 mm of positional error, to be verified by a CMM (Coordinate Measuring

PAGE 21

Machine). Static identification is most important, as the manipulator will be operating in quasi-static conditions, with relatively low accelerations and velocities.

Stepper motors were chosen due to low cost and suitable torque curves. Servo drive motors produce significantly less low-speed torque for their size, instead spinning at high RPMs to produce similar power outputs. Thus, for this application, servo drives would require expensive low-backlash gear reduction drives and complicated control algorithms to counteract the nonlinearity of backlash [18]. Additionally, stepper drive motors can be run open loop, and are tolerant of encountering unexpected obstacles. High rotational resistance results in missed steps, which are flagged in the controller by their signature back EMF sensed by the drive electronics. This does not damage the motor, while servomotors are prone to damage from output shaft lockup. Even though these stepper motors can be run in open loop with guaranteed accuracies due to their inherent design, the apparatus was designed with encoders at each joint to ensure joint position during experimental validation.

One drawback to stepper motors is that they must be sized appropriately and controlled with an acceleration profile developed from their speed/torque curves. That is, the acceleration limit for the motor is a function of the motor's current velocity. Figure 2-2 shows this phenomenon for the smaller of the two types of motors used. The motors were powered with 48 VDC, for which torque curves were not provided by the manufacturer. They can be estimated from the 24 VDC and 36 VDC curves provided.
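The velocity-dependent acceleration limit can be made concrete with a minimal sketch. The torque-speed table, lead-screw pitch, and moving mass below are invented placeholders, not the manufacturer's data (which, as noted, had to be estimated for 48 VDC); only the structure of the calculation is illustrated:

```python
import math

# Hypothetical stepper torque-speed curve: (speed in rev/s, torque in N*m).
# Placeholder numbers only; not the actual motor data.
CURVE = [(0.0, 2.0), (5.0, 1.8), (10.0, 1.2), (20.0, 0.5)]
PITCH = 0.010   # m of travel per motor revolution (assumed lead screw)
MASS = 40.0     # kg moved by the axis (assumed payload + carriage)

def torque_available(speed_rev_s, curve=CURVE):
    """Linearly interpolate available torque (N*m) at a given motor speed."""
    if speed_rev_s <= curve[0][0]:
        return curve[0][1]
    for (s0, t0), (s1, t1) in zip(curve, curve[1:]):
        if speed_rev_s <= s1:
            return t0 + (speed_rev_s - s0) / (s1 - s0) * (t1 - t0)
    return curve[-1][1]   # beyond the table, hold the last value

def simulate_max_accel_move(distance_m, dt=1e-3):
    """Integrate a commanded max-acceleration move (Figure 2-3 style).

    At every time step the acceleration is capped by the torque available
    at the *current* speed, so the limit shrinks as the motor spins up.
    Accelerate-only sketch; a real profile also plans the deceleration.
    """
    x = v = t = 0.0
    while x < distance_m:
        # Ideal lead screw: axial force F = 2*pi*T / pitch.
        force = torque_available(v / PITCH) * 2.0 * math.pi / PITCH
        v += (force / MASS) * dt
        x += v * dt
        t += dt
    return t, v
```

Repeating this kind of calculation per axis, with a margin on the assumed payload, gives position, velocity, and acceleration traces of the sort shown in Figure 2-3.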

Figure 2-2. Available torque as a function of velocity [19]

These calculations have been performed for all 5 axes, with an assumed payload of 1.5 times that which is expected in the design. Motors were selected to give end-effector velocities similar to those of the PA10 manipulator, though clinical imaging will occur at speeds less than 1/10th of the maximum end-effector velocity.

Figure 2-3. Joint 2's position, velocity, acceleration, and torque as a function of time

A sample result of these dynamics calculations is shown in Figure 2-3. This calculation was performed to find the position, velocity, and acceleration as a function of time if maximum acceleration is commanded. Although significant work was applied in the initial design of the UTR, there is still link flexibility and manufacturing/assembly tolerances that can be modeled and quantified to improve end-effector accuracy.

Figure 2-4. Labeled joint axes of the UTR

The forward kinematics for the UTR are straightforward:

    T_EE^BASE = | c3·c5  −s3  c3·s5  d1 + ℓ·c3·s5 |
                | s3·c5   c3  s3·s5  d2 + ℓ·s3·s5 |
                |  −s5     0     c5  d4 + ℓ·c5    |
                |    0     0      0       1       |    (2.1)

where d1, d2, and d4 are the x, y, and z prismatic joint positions shown in Figure 2-4, θ3 and θ5 are the revolute joint angles about their respective joint axes, also shown in Figure 2-4, and ℓ is the offset from the joint-5 center to the x-ray focal point along the central ray. Finally, si and ci are shorthand for the sine and cosine of joint i. Similarly, we find the inverse kinematics are also straightforward:

    v = T_IP^BASE · [0 0 1 0]ᵀ              (2.2)
    θ3 = tan⁻¹(v_y / v_x)                   (2.3)
    θ5 = tan⁻¹(√(v_x² + v_y²) / v_z)        (2.4)
    d1 = x_d − ℓ·c3·s5                      (2.5)
    d2 = y_d − ℓ·s3·s5                      (2.6)
    d4 = z_d − ℓ·c5                         (2.7)

The transformation for the desired image plane, T_IP^BASE, is known. Using that transformation matrix, a vector v is projected that is the desired central ray. From there, it is a matter of evaluating the tube elevation and azimuth, then finding the appropriate positions for the prismatic joints so that the focal point lands at the desired position (x_d, y_d, z_d).

Implementation and Experimental Assessment

In order to measure actual end-effector positioning, a FaroArm Silver (Faro Technologies, Lake Mary, FL) is rigidly mounted to the robot frame. A FaroArm is a multi-link measurement device with an arm and tooling ball, or probe, which is freely positioned by hand. Typically, the tooling ball is run along a groove or other marker on the object, creating a point cloud that can be later analyzed. Although the FaroArm uncertainty is higher than that of the

other methods (127 µm), it offers the ability to check multiple links at once, narrowing down the source of uncertainty and providing useful feedback for design improvements.

The transformation from the robot base to the robot end-effector is defined as follows:

    T_EE^BASE,UTR = T_2^1 · T_3^2 · T_4^3 · T_5^4 · T_EE^5    (2.8)

Each transformation matrix represents the kinematic change from joints 1 through 5. The product of these 5 transformations provides the forward kinematics for the UTR [20]. This transformation can be approximated by the design parameters for each link, but cannot be known exactly. The deviations from design parameters are the desired outputs from the calibration process [21].

The FaroArm is defined similarly, with 6 revolute links. The joint space to task space conversion is handled by the FaroArm itself; only a point in XYZ coordinates is reported. This point is given in the FaroArm coordinate system, which is not aligned with the coordinate system of the UTR. Thus, another unknown transformation exists:

    T_BASE,UTR^FAROARM = | R  t |
                         | 0  1 |    (2.9)

This transformation matrix is initialized by moving the robot to 7 positions along its x axis; a least-squares line fit defines x̂_BASE^CMM. Similarly, the robot is moved to 7 positions along its y axis, and a least-squares line fit defines ŷ_TEMP^CMM. Then, the cross product of these two vectors is defined as the robot z axis in the CMM system, ẑ_BASE^CMM = x̂_BASE^CMM × ŷ_TEMP^CMM. In the last step, the y axis of the robot, orthogonal to the basis vectors ẑ_BASE^CMM and x̂_BASE^CMM, is defined as ŷ_BASE^CMM = ẑ_BASE^CMM × x̂_BASE^CMM. The rotation matrix from FAROARM to BASE,UTR is initialized from these three vectors. The translation vector from FAROARM to BASE,UTR is constructed by direct measurement of the robot while in the home position (all joint values equal to zero). Thus, the transformation is initialized.
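The axis-sweep initialization above can be sketched in a few lines. In this sketch (function and variable names are illustrative), the least-squares line fit is performed as an SVD of the centered point cloud, one common way to fit a 3D line.

```python
import numpy as np

def base_frame_from_axis_sweeps(x_points, y_points, origin):
    """Initialize the FaroArm-to-UTR transform from two axis sweeps.

    `x_points` / `y_points` are (N, 3) arrays of CMM measurements taken
    while the robot moves along its x and y axes; `origin` is the measured
    home position. Mirrors the procedure in the text: line fits give x-hat
    and a temporary y-hat, the cross product gives z-hat, and y-hat is
    re-orthogonalized as z-hat cross x-hat.
    """
    def fit_direction(points):
        pts = np.asarray(points, dtype=float)
        centered = pts - pts.mean(axis=0)
        # Principal direction of the point cloud = least-squares line fit.
        _, _, vt = np.linalg.svd(centered)
        d = vt[0]
        # Orient the direction along the direction of increasing travel.
        return d if np.dot(d, pts[-1] - pts[0]) > 0 else -d

    x_hat = fit_direction(x_points)
    y_tmp = fit_direction(y_points)
    z_hat = np.cross(x_hat, y_tmp)
    z_hat /= np.linalg.norm(z_hat)
    y_hat = np.cross(z_hat, x_hat)          # complete the orthogonal basis
    T = np.eye(4)
    T[:3, :3] = np.column_stack([x_hat, y_hat, z_hat])
    T[:3, 3] = origin
    return T
```

Because the two fitted axes are generally not exactly perpendicular, recomputing ŷ from ẑ × x̂ guarantees an orthonormal rotation matrix.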

Next, a set of 50 joint-space positions is generated for the robot. These 50 views cover the portion of the robot's workspace that would be used during the collection of x-ray images. Using the MATLAB uniform random number generator, each joint's range is sampled evenly. The resulting task-space positions are shown in Figure 2-5.

Figure 2-5. A set of 50 unique positions

A trust-region-reflective nonlinear least-squares optimization was implemented in MATLAB (Mathworks, Natick, MA) to minimize the cost function:

    C = Σ_i ‖p_m,i − p_d,i‖² + Σ_k (Δ_k)²

In the function above, p_i is defined as the x, y, and z components of the i-th view. The subscript m refers to the robot fiducial position as measured by the FaroArm. The subscript d refers to the desired position of the robot end-effector. Both are measured in the FaroArm

coordinate system. Finally, Δ_k is the variation in each D-H parameter. These terms are included in the cost function in order to drive the variation out of the robot parameters and into the coordinate transform between the FaroArm and the UTR, where applicable.

Experimental Setup

The measurement device in the experimental setup is a FaroArm model IND-03. It is fixed rigidly to the frame of the UTR robot, and is equipped with a ball probe for measurement of 1/8" holes in the robot end-effector. The FaroArm was not designed to be used with LabVIEW, so its proprietary serial communication protocol was reverse engineered. Data points (X, Y, Z) can be collected at up to 50 per second; this is a limitation of the 57600-baud serial port.

The UTR robot consists of two linear stages in the X and Y directions, a rotary stage about the Z axis, and a linear stage along the Z axis. The fifth joint is part of the payload, and it controls tube elevation. The robot is equipped with stepper motors configured to 2000 steps/rev, resulting in a resolution of 0.005 mm/step on the linear stages and 0.0025 deg/step on the rotary stage. The fifth axis uses a direct-drive stepper motor, with resolution 0.18 deg/step. This could be improved further with the addition of a low-backlash gear reduction drive, but the orientation of the tube does not impact the reconstruction quality as long as the focal point of the x-ray tube is on the axis of rotation.

These steppers are run in open loop, though the system is equipped with encoders. Stepper motors are typically not used in closed-loop control. The stepper motors are outfitted with back EMF detection to identify missed steps due to unexpected dynamics or obstacles [22]. The robot is controlled using National Instruments LabVIEW, coupled with five NI 9512 1-axis motion controllers and five P70530 stepper motor drives.
The motors themselves are coupled to the joints with low-backlash spline-type couplers. The stages are equipped with home and limit switches; the former define the zero point for each joint.
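The quoted resolutions follow directly from the drive configuration. The quoted 0.005 mm/step and 0.0025 deg/step are consistent with a 10 mm screw lead and a 72:1 rotary reduction, though those two values are inferred here, not stated in the text. A small bookkeeping sketch:

```python
# Resolution bookkeeping for the UTR stepper axes, using the drive
# configuration quoted in the text (2000 steps/rev). The lead and gear
# ratio defaults below are inferred from the quoted resolutions, not
# taken from the design documents.
STEPS_PER_REV = 2000

def linear_resolution(lead_mm_per_rev):
    """Millimetres of travel per step for a linear stage."""
    return lead_mm_per_rev / STEPS_PER_REV

def rotary_resolution(gear_ratio=1.0):
    """Degrees per step for a (possibly geared) rotary axis."""
    return 360.0 / (STEPS_PER_REV * gear_ratio)
```

With these assumptions, `linear_resolution(10.0)` gives the quoted 0.005 mm/step, `rotary_resolution(72.0)` gives 0.0025 deg/step, and the direct-drive fifth axis (`rotary_resolution()`) gives 0.18 deg/step.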

Figure 2-6. Experimental setup showing PA10 and UTR

The experimental setup, as shown in Figure 2-6, was used to collect seven datasets to quantify UTR positioning error. These datasets are described in Table 2-1, and will be referred to in the results. The positions of the first four datasets were visited in order of increasing position of the specific joint exercised (X, Y, Z-rotation, and Z-translation). The next three datasets were visited in order of increasing joint 3 position. The volume covered in the 50-position datasets is shown in Figure 2-5.

Table 2-1. Datasets collected with the experimental system

    Name              Number of Positions
    Robot_Xaxis        7
    Robot_Yaxis        7
    Robot_Zrot        13
    Robot_Ztrans      15
    Robot unloaded    50
    Robot 5kg load    50
    Robot 28kg load   50

Before analyzing data, it is important to examine the instrument being used to measure the data, and check for any impact on results. Ideally, the measurement device has zero impact on the test system, or the impact is at least measurable. For example, a typical voltmeter has a resistance in the mega-ohm range, large enough to have little effect upon most circuits. The FaroArm works by placing the ball probe in a feature, guiding the probe by hand. The feature, a hemisphere equal in size to the ball probe, was positioned such that the probe force to engage the hemisphere is downward, with gravity. This is because the manipulator is stiffest in this direction, resulting in the least impact on measurements as a result of the force imparted by the FaroArm probe.

Using a randomly selected 40 of the 50 positions measured, the nonlinear least-squares optimization described earlier was performed. The remaining 10 positions are used to validate the calibration. A total of 22 variables are used to represent the deviations from nominal values. Six of these deviations are used in finding T_BASE,UTR^FAROARM, the transformation between the coordinate systems of the FaroArm and UTR.

The RMS and peak errors were calculated by the following formulas:

    error(i) = √[(x_d,i − x_m,i)² + (y_d,i − y_m,i)² + (z_d,i − z_m,i)²]    (2.10)

    RMS = √[(1/50) · Σ_i error(i)²]    (2.11)

    peak = max_i error(i)    (2.12)

where i denotes the position number, the subscript d denotes the current desired position, and the subscript m denotes the current measured position. All 50 positions were used in the error metric calculations.
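In code, the error metrics of Equations 2.10 through 2.12 reduce to a few lines. A minimal Python sketch:

```python
import numpy as np

def error_metrics(desired, measured):
    """RMS and peak positioning error per Equations 2.10-2.12.

    `desired` and `measured` are (N, 3) arrays of end-effector positions
    expressed in the same coordinate system. Returns (rms, peak).
    """
    diff = np.asarray(desired, dtype=float) - np.asarray(measured, dtype=float)
    err = np.linalg.norm(diff, axis=1)       # per-position Euclidean error
    return np.sqrt(np.mean(err ** 2)), err.max()
```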

Results

Figure 2-7 shows the results of one of many FEA studies performed on the UTR. In this case, the end effector is loaded with a force of 125 N, which is representative of a 2.5 m/s² acceleration. Additionally, it is loaded with the force due to gravity. This acceleration is the maximum available with the selected motors. The maximum deflection at the end effector in this loading configuration is 140 µm.

Figure 2-7. SolidWorks FEA model of typical UTR arm loading during acceleration

The worst-case FEA model shows deflection of 250 µm at maximum acceleration. Typical accelerations will be roughly 20% of maximum, to maintain reasonable joint velocities for the operating room environment.

The first 6 deviations are used to calibrate the transformation matrix between the FaroArm and the UTR base (Table 2-2). The remaining 16 deviations are used to calibrate the first 4 joints of the UTR (Figures 2-8 through 2-10). The deviations used to calibrate the

geometric model of the UTR had magnitudes less than 1 deg or 1 mm (Tables 2-3 through 2-6). Joints 1 through 5 are depicted in Figure 2-4.

Table 2-2. Changes to T_BASE,UTR^FAROARM as a result of calibration

              Rotation (deg)   Translation (mm)
    X Axis     0.0605           2.5296
    Y Axis     0.1290           0.4645
    Z Axis     0.4975           0.4234

Table 2-3. Original D-H parameters for the UTR

              theta (deg)   d (mm)        a (mm)        alpha (deg)   offset from
              joint angle   link offset   link length   link twist    home switch
    Joint 1    pi/2          0             0             pi/2          0
    Joint 2    pi/2          0             0             pi/2          0
    Joint 3    0             0             200.0000      0             0
    Joint 4    0             0             0             pi/2          500 mm
    Joint 5    0             75.000        0             pi/2          0

Table 2-4. Modifications to the D-H parameters for the UTR after calibration, unloaded

              theta (deg)   d (mm)        a (mm)        alpha (deg)   offset from
              joint angle   link offset   link length   link twist    home switch
    Joint 1    0.0000        n/a           0.0000        0.1248        0.0000 mm
    Joint 2    0.1051        n/a           0.0000        0.0261        0.0000 mm
    Joint 3    n/a           0.0000        0.0735        0.0054        0.0000 deg
    Joint 4    0.0001        n/a           0.0000        0.0000        0.0000 mm
    Joint 5    n/a           n/a           n/a           n/a           n/a

Table 2-5. Modifications to the D-H parameters for the UTR after calibration, 5 kg load

              theta (deg)   d (mm)        a (mm)        alpha (deg)   offset from
              joint angle   link offset   link length   link twist    home switch
    Joint 1    0.0000        n/a           0.0000        0.1046        0.0000 mm
    Joint 2    0.0907        n/a           0.0000        0.0430        0.0000 mm
    Joint 3    n/a           0.0000        0.0390        0.0198        0.0000 deg
    Joint 4    0.0001        n/a           0.0000        0.0000        0.0000 mm
    Joint 5    n/a           n/a           n/a           n/a           n/a

Table 2-6. Modifications to the D-H parameters for the UTR after calibration, 28 kg load

              theta (deg)   d (mm)        a (mm)        alpha (deg)   offset from
              joint angle   link offset   link length   link twist    home switch
    Joint 1    0.0000        n/a           0.0000        0.0829        0.0000 mm
    Joint 2    0.0951        n/a           0.0000        0.1234        0.0000 mm
    Joint 3    n/a           0.0000        0.2203        0.0468        0.0002 deg
    Joint 4    0.0006        n/a           0.0000        0.0000        0.0000 mm
    Joint 5    n/a           n/a           n/a           n/a           n/a

Note that the D-H parameters listed in Table 2-4 are for the unloaded case. Because the payload of the UTR is the fifth axis, the Joint 3 link length and Joint 5 link offset will change when the payload is installed. Joint 3 gets longer (a longer effective arm), and the offset along S5 changes as well. Therefore, it is not meaningful to compare Joint 5's link length change in the 28 kg configuration with the unloaded and 5 kg load configuration link length changes. Because the offset along S5 is just a property of the chosen probe point, it is not listed. With an actual x-ray tube, this distance would be the offset to the focal point of the tube.

Figure 2-8. Position error for unloaded case. Red points are not included in the optimization, and are present to verify the calibration. All points are included in reported error metrics.
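Each tabulated D-H row maps to a homogeneous transform in the usual way, and calibration then amounts to adding the tabulated deviations to the nominal rows of Table 2-3 before composing the joint transforms. The sketch below uses the classic (distal) D-H convention, which may differ from the exact variant used here.

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Homogeneous transform for one D-H row (classic distal convention).

    theta/alpha in radians, d/a in the length unit of choice. Applying a
    calibrated deviation means adding it to the nominal parameter before
    calling this function.
    """
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])
```

Composing one such matrix per row (Equation 2.8) yields the calibrated forward kinematics.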

Figure 2-9. Position error for 5 kg load case. Red points are not included in the optimization, and are present to verify the calibration.

Figure 2-10. Position error for 28 kg full payload case. Red points are not included in the optimization, and are present to verify the calibration.

Table 2-7. Error of calibrated model

    Dataset                     RMS Error (mm)   Peak Error (mm)
    UTR 50 point unloaded        0.160            0.303
    UTR 50 point 5 kg load       0.174            0.316
    UTR 50 point 28 kg load      0.248            0.507
    PA10 unloaded [11]           0.32 (mean)      0.66

Table 2-7 shows the RMS and peak errors for the calibrated geometric models produced by each dataset. The 28 kg load reflects the load on the end-effector. An equal counterbalance is necessary, bringing the total load increase on joints 1-3 to 56 kg.

Discussion

The UTR was demonstrated to have the necessary positioning capabilities to accurately position the x-ray source. Specifically, the UTR with full payload was shown to have total positioning errors of 0.248 mm RMS and 0.507 mm peak over the sampled workspace. Furthermore, the UTR was shown to have the spatial capabilities necessary to perform the imaging tasks while not interfering with fixed operating room equipment. The results of Chapter 5 demonstrate that this level of positioning error has relatively insignificant effects on image reconstruction.

Relative to current iso-C-arm and O-arm systems, the proposed system also provides enhanced usability. By moving the imaging equipment to separate robotic manipulators, equipment can be moved out of the way to gain access for a particular procedure. Additionally, it is possible to capture almost any single view if, for example, the surgeon desires confirmation of a particular screw placement. Both of these tasks can be controlled through closed-loop force feedback, resulting in an easy-to-use platform for surgical imaging and navigation. Additionally, the use of 5- and 6-DOF robotic manipulators permits non-circular imaging trajectories for advanced Cone Beam Computed Tomography (CBCT) reconstructions.

By removing the circular imaging constraint, improvements in patient dose and/or reconstruction quality can be realized by imaging only the most salient views [8], [23], [24]. With a priori knowledge of the area to be imaged, the most informative views can be identified. Frequently, these views lie outside of a circular trajectory, and therefore must be disregarded due to current limitations in imaging hardware. Moving the imaging hardware to two separate robots increases the available workspace significantly, permitting the desired radiographic views to be captured.

With positioning errors less than 1 mm, it is possible to perform good-quality reconstructions using the standard Feldkamp-Davis-Kress (FDK) framework. With more advanced algorithms, such as iterative and algebraic methods, off-axis views can be utilized to further improve image quality for a given radiological dose, or to reduce radiological dose for a given image quality [8]. Future steps include a study of the surgeon-robot interaction (Chapter 3), a study of optimal trajectories to gather a set of views (Chapter 4), and finally a study of the impact of positioning error on image reconstruction (Chapter 5).

Conclusion

Through the development of this manipulator, a new surgical imaging concept has been introduced. By moving the x-ray source and sensor to separate robotic manipulators, improvements in surgical ergonomics and workflow, radiological dose, and image reconstruction quality can be realized. The early prototype shows travel speeds and positioning accuracies that demonstrate the feasibility of the concept.

CHAPTER 3
MANIPULATOR CONTROL AND COORDINATION

Since a dynamic radiographic imaging system requires precise alignment between the source and sensor for quality image reconstruction, accurate control of the imaging robots is important. The dynamics and control laws for the Mitsubishi PA10-6CE have been well explored by researchers here at the University of Florida [11], [12], [25] and elsewhere. This work will be relied upon, but not recreated, in this project. Instead, this project will focus on new additions to existing tools within the Dynamic Robotic Imaging Control Software (DRICS) platform [26] and the integration of the UTR.

The Goal

The goal of this section is to develop and evaluate the controllers for both the pre-planned trajectory mode and the surgeon-interactive (haptic) mode. This includes controllers for both the PA10 and UTR robots.

Introduction

In order to perform robotic surgeon-interactive manipulator-based imaging, two separate control tasks are necessary: autonomous control and haptic control. Autonomous control is needed for the collection of images, the positions of which are predefined by an offline path-planning algorithm. Dozens of control schemes have been created and published, but they can be generally classified as linear or nonlinear in approach. The well-known PID (proportional-integral-derivative) controller [27] is probably the most popular of the linear control schemes, and the RISE (robust integral of the sign of the error) control scheme [28] is a recently developed nonlinear control scheme with promising results for robotic manipulator control.

Additionally, there exists a need for haptic control, that is, control of the robotic manipulator by direct interaction. This serves two purposes: guiding the manipulator into

position, and feedback to the surgeon of joint limits and other boundaries. Haptic controllers can be generally classified as impedance-controlled devices, admittance-controlled devices, or direct force feedback devices (explicit force control) [29], [30].

A device under impedance control behaves as follows: the operator moves the device, and the device reacts with a force if a virtual object is met. The operator feels the mass and friction of the device, so the manipulator must be low inertia and low friction (i.e., backdrivable) in order for effective control to take place. This control scheme is suitable for lightweight, cable-driven manipulators such as the RIO (MAKO Surgical, Davie, Florida).

On the other hand, admittance control is the inverse of impedance control. A device under admittance control behaves as follows: the operator applies a force to the haptic device, and the device reacts with the proper displacement. Advantages of admittance control include considerable freedom in the mechanical design of the device, e.g., having high inertia and friction. As a result, the device can be very robust, with high stiffness and the ability to exert high forces and torques.

In order to be a good candidate for impedance control, the robot must have relatively low joint friction. The Mitsubishi PA10 used in this study has Coulomb friction torques equal to 9.08, 9.28, 5.81, 1.48, 1.84, and 0.87 N·m for joints 1-6, respectively [11]. With friction values that high, applied end-effector forces would need to be in excess of 15 N over most of the desired workspace.
Therefore, impedance control (especially without force feedback) is not suitable, nor is its position control aspect needed for the surgeon-interactive imaging application. Direct admittance control is not suitable either, as there are typically multiple robot configurations that achieve any desired end-effector pose, and singular configurations would require additional control logic. For instance, if the arm is fully extended and then force is applied to move the end effector inward, joint 3 of the arm could move in either direction. Admittance control relies on the

Jacobian inverse, which becomes ill-conditioned at physical singularities. Although there are strategies for dealing with ill-conditioned Jacobian matrices, such as the damped least squares method and the null space method [31], admittance control is not a good fit for this application. Therefore, gravity-compensated explicit force control will be evaluated. In explicit force control, forces and torques are applied to the load cell mounted on the end-effector, and these are then resolved through the 6 manipulator joints. Torques are applied at each of these six joints with direction and magnitude proportional to the applied forces and torques, in order to attempt to minimize the applied force [32].

Methods

In conventional (position-based) control of a serially linked, servo-powered robotic manipulator, the block diagram is as shown in Figure 3-1.

Figure 3-1. Block diagram of PA10 control

The controller provides desired torques or velocities to the robot's servo box. The PA10 accepts either command type, though torque commands bypass one of the internal servo box control loops and allow for a greater degree of control (we cannot modify the velocity control loop parameters within the servo box).

Pre-planned Control

Although haptic control is necessary for surgeons to be able to define specific views and to move the robots out of the way, pre-planned autonomous trajectory control is necessary to gather the fluoroscopic views that form the basis for 3D reconstruction. The trajectory is

discretized at a finite step, and both robots are given position targets along a path. These targets are continually updated, forming a trajectory. In autonomous control mode, we use Crane's analytic method for inverse kinematics of the PA10 to convert from task space to joint space [33]. There are numeric methods, but as is the case with most numeric methods, they do not always converge to the best solution and are often more computationally complex. For the PA10, there are up to 8 possible configurations/solutions. Numeric approaches typically return only one of these: the one closest to the initial guess in the direction the optimizer travels first. Therefore, the analytic approach is preferred.

Inverse kinematics solvers fail when a "singularity" is approached. This happens when two joint axes line up, which can happen with joints 1, 4, and 6 on the PA10. In this case, the number of unique solutions goes from 8 to infinity. Although singularities typically will not be encountered in the PA10 workspace for imaging tasks, the condition number of the manipulator Jacobian can be used to monitor closeness to singular configurations:

    κ(J) = σ_max(J) / σ_min(J)    (3.1)

As the condition number increases, the manipulator approaches a singularity. We can simply drive through the singularity by holding the previous command until the condition number improves.

Although the PA10 robot uses servo motors equipped with harmonic drives and the UTR uses stepper motors, both can be controlled with velocity commands. The PA10 servo box converts these joint velocities to motor current, using an internal closed-loop PI controller. The performance of this controller is excellent at the low velocities encountered in the surgical imaging application. The UTR uses stepper motors, which are driven by a variable-frequency

square wave. The stepper motor velocity is linearly proportional to the square wave frequency, as long as the available motor pull-in torque is greater than the motor load. Thus, velocity control is possible in open loop. Encoders are used for real-time confirmation of desired position and velocity, and trip a fault if velocity is not within tolerance (if the apparatus experiences a failure or an unexpected object in its path). Additionally, a stall detection circuit utilizing back EMF measurement is in place, similar to the method described by Bui et al. [22]. Thus, open-loop control of the UTR is possible, as long as commanded velocities and accelerations are within the dynamic limits governed by the available motor torque [34].

Because stepper motors are capable of open-loop position and velocity control, the control task is relatively simple. Conversely, the servo motors of the PA10 require closed-loop control. Two methods of control are evaluated: linear (dual-loop digital P/PI [27]) and nonlinear (RISE [28]). The dual-loop P/PI controller consists of an outer position loop running on a real-time target, the CompactRIO-9024 (National Instruments, Austin, TX). Typical loop times are 1000 µs for the outer loop. The inner velocity loop runs on the servo drive provided by the manipulator manufacturer; loop time for the velocity loop is 665 µs. Finally, there is an analog control loop for joint current, also on the manufacturer's servo drive. This analog control loop can accept torque commands directly, bypassing the aforementioned digitally implemented control loops.

The RISE control law calculates torque commands with the following equations:

    τ(t) = (k_s + 1)·e2(t) − (k_s + 1)·e2(t0) + ∫_t0^t [(k_s + 1)·α2·e2(σ) + β·sgn(e2(σ))] dσ    (3.2)

    e2 = ė1 + α1·e1    (3.3)

    e1 = q_d − q    (3.4)

where α1, α2, β, and k_s are positive constant control gains. Both of these controllers will be tested for their tracking performance in a typical move between imaging views. An instantaneous move of 150 mm in task space is executed, and the joint positions and torques are recorded.

Haptic Control

During manual control of the system, the PA10 assumes the master role, and the UTR attempts to follow. Operation is governed by the following explicit force control law:

    τ = f(F, q)    (3.5)

Commanded joint torques are a function of the force input on the handle and, optionally, the current position of the under-table robot and any possible collisions with the environment. This function is nonlinear, and includes a deadband to eliminate end-effector drift and a nonlinear hyperbolic tangent function with tunable parameters to handle joint limits and collision avoidance.

Consider the following, based on virtual work [27]:

    τᵀ·δq = Fᵀ·δx + Gᵀ·δq    (3.6)

where:
F is the vector of Cartesian forces/torques (measured by the load cell at the end effector)
δx is an infinitesimal (vector) displacement in Cartesian coordinates
τ is the set of 6 joint torques
δq is an infinitesimal rotation (vector) of the 6 robot joints
G is the vector of joint torques due to gravity

The Jacobian is the matrix of partial derivatives between two vectors:

    J = ∂x/∂q    (3.1)

Here ẋ = [v; ω] is a column vector of the linear and angular velocity of the end effector, and q̇ = [q̇1 ... q̇6]ᵀ is a column vector of joint velocities. The Jacobian is a function of the geometry of the robot and can be used for force (stiffness) control. The current framework calculates the Jacobian using the method outlined by Paul and Shimano [35]. We see that the Jacobian relates Cartesian space to joint space. Therefore:

    ẋ = J·q̇, and likewise δx = J·δq    (3.2)

Knowing that, we can substitute into the virtual work expression as follows:

    τᵀ·δq = Fᵀ·J·δq + Gᵀ·δq    (3.3)

We factor and rearrange:

    τᵀ·δq = (Fᵀ·J + Gᵀ)·δq    (3.4)

which holds for all δq, so we cancel δq:

    τᵀ = Fᵀ·J + Gᵀ    (3.5)

which can be solved for τ:

    τ = Jᵀ·F + G    (3.6)

This relies on the Jacobian matrix transpose, not the inverse of the Jacobian matrix, so there is no problem even at a singularity, as we do not need to calculate the inverse [27]. Admittance control relies on the inverse of the Jacobian, while explicit force control relies only on the transpose. Thus, the problem of a singularity is an issue in admittance control, but not in direct force control.

We measure F, can calculate J, and can calculate G, so we can find τ, which again represents the desired joint torques. Then, we multiply the control gains by Jᵀ·F. This can be thought of as "increasing the leverage," as this is the degree to which the motors assist. If the gains were zero, the friction at the joints and inertia of the manipulator

would have to be overcome by the force applied at the end effector. If the gains were 100,000, then applying even the slightest force would result in extreme motion. Without any user interaction, there is roughly 1 N of noise measured at the load cell. We use gains of 8-15. In this case, and for typical poses, a force of 10 N at the load cell results in a torque command of roughly 25 N·m, which is about 15% of the maximum available torque.

During manual control of the robot, it may be desirable to reduce the control effort based upon a possible collision or joint limit. The control law then becomes:

    τ = K ∘ s ∘ (Jᵀ·F) + G    (3.7)

where s is a column vector comprised of the indices s_i described in Equation 3.9, K is the vector of control gains, and ∘ is the Hadamard product. Care must be taken to match the coordinate system of s with the coordinate system of the robot end effector. This has the effect of the surgeon feeling increasing resistance at a prescribed limit.

The force F is measured by an ATI Mini45 F/T load cell, with 6 degrees of freedom and a calibrated range of ±290 N, ±10 N·m. This signal is low-pass filtered by a first-order EWMA (Exponentially Weighted Moving Average) filter, set at 2 Hz. This is justifiable since the majority of the frequency content of human skeletal muscle activity lies under 2 Hz [36]. The filter is implemented digitally, as follows:

    y(n) = α·x(n) + (1 − α)·y(n−1)    (3.8)

where x(n) is the current measurement, y(n−1) is the previous filtered value, and y(n) is the current filtered value. This filter topology was chosen because it is computationally very simple, consisting of just two multiplications, an addition, and a subtraction, and so will not tax the real-time processor. Using this filter provides gain margin in the control loop by attenuating high-frequency signals which would otherwise feed back and resonate at high control gains, resulting in uncontrolled oscillation.
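The pieces above, the EWMA low-pass of Equation 3.8, the gain-scaled Jacobian-transpose law of Equation 3.7, and the tanh safety scaling of Equation 3.9, can be sketched together. The filter coefficient, gain values, and the choice to apply the safety index per Cartesian axis of the wrench are all assumptions for illustration, not the system's exact tuning.

```python
import numpy as np

class HapticAssist:
    """Sketch of the gain-scaled force-assist pipeline described in the text.

    Combines the EWMA low-pass (Eq. 3.8), the safety index s = tanh(d/d0)
    (Eq. 3.9), and the control law tau = K o s o (J^T F) + G (Eq. 3.7).
    `alpha`, `gains`, and `d0` are assumed tuning values.
    """

    def __init__(self, gains, alpha=0.05, d0=0.1):
        self.gains = np.asarray(gains, dtype=float)
        self.alpha = alpha
        self.d0 = d0
        self.f_filt = None

    def filter_wrench(self, wrench):
        # y(n) = a*x(n) + (1 - a)*y(n-1), seeded with the first sample.
        w = np.asarray(wrench, dtype=float)
        if self.f_filt is None:
            self.f_filt = w
        else:
            self.f_filt = self.alpha * w + (1.0 - self.alpha) * self.f_filt
        return self.f_filt

    def torques(self, jacobian, wrench, gravity, distances):
        """Joint torque command; `distances` are per-axis obstacle distances."""
        f = self.filter_wrench(wrench)
        s = np.tanh(np.asarray(distances, dtype=float) / self.d0)
        # Assist fades near obstacles; the gravity term is left untouched.
        return self.gains * (np.asarray(jacobian).T @ (s * f)) + gravity
```

Because the safety scaling multiplies only the assist term, gravity compensation remains active even when the end effector is driven right up to a boundary.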

Haptic Study

In order to test the effectiveness of the haptic control laws, an apparatus was produced, composed of a spine model and 8 circular photoresistors (active diameter 2.54 mm). One photoresistor is placed at each target shown in Figure 3-2, and one sensor is mounted on each side of the cube shown at T7.

Figure 3-2. Spine with light sensor positions noted. The 3D-printed cube holds four sensors total, and measures 35 mm on each side.

A standard red laser (spot diameter 2.54 mm) is mounted to the control handle, which is attached to the load cell mounted on the end effector of the manipulator. The photoresistors are monitored with a USB-6009 (National Instruments, Austin, TX) set to a 250 Hz sampling rate. The photoresistors not mounted on the cube are shielded using four 3D-printed cones, angled at 45° from vertical. The cones over the sensors located at the C1 and L4 vertebrae face the head and feet of the virtual patient, respectively. The cones over the sensors at T1 and T12 face the user. The purpose of these cones is to enforce a variety of task-space positions and orientations. The actual apparatus is shown in Figure 3-3. The surgical handle and laser diode are visible at the top of the figure.

Figure 3-3. Actual apparatus with light sensors.

A group of 5 subjects (4 male) volunteered for the study. The task was to position the robotic manipulator so that the laser diode illuminated the photosensor. A threshold of 1.5 V was set, indicating an approximately 50% overlap between the photosensor and the laser spot. This threshold had to be held for 1 s, at which point an audible beep was played, indicating to the subject to proceed to the next position. The subject was instructed to start with the sensor at C1 and proceed down the spine, visiting the sensors mounted on the cube in a clockwise fashion. After triggering the sensor at L5, the subject was instructed to return to the sensor at C1, stopping the trial. The trial was repeated 5 times per subject, and joint-space positions, load cell forces/torques, and photosensor voltages were recorded through time. This dataset was collected for both the low controller gain and high controller gain settings. Time to complete the task, RMS force magnitude, and peak force magnitude are reported for each subject and each controller.
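The trigger condition, threshold held for a full second at the 250 Hz sampling rate, amounts to counting 250 consecutive above-threshold samples. A minimal sketch of that detection logic (function names are illustrative):

```python
def hold_detector(samples, threshold=1.5, hold_samples=250):
    """Index of the sample that completes a 1 s hold above `threshold`.

    At the 250 Hz sampling rate used in the study, 1 s corresponds to 250
    consecutive samples (`hold_samples`). Returns None if the photosensor
    voltage never stays above the threshold long enough.
    """
    run = 0
    for i, v in enumerate(samples):
        run = run + 1 if v >= threshold else 0  # reset on any dropout
        if run >= hold_samples:
            return i
    return None
```

Resetting the run counter on any below-threshold sample enforces that brief sweeps of the laser across the sensor do not count as a completed target.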


Protection of Human Subjects

During haptic evaluation, the protection of human subjects is paramount; therefore, there is a need for constant real-time evaluation of safe operation. The primary danger exists when the robot must operate near the patient or the mounting hardware. The former presents obvious danger to the patient, and the latter presents danger to the surgeon in the form of a pinch injury. However, the patient can be represented by a bounding cylinder and the geometry of the mounting hardware is known, so countermeasures can be taken. It is proposed that a so-called danger index be used to quantify the risk of collision [37]. Logically, this index should consider the distance between the robot and the object to be avoided. As the safety index decreases, the maximum allowable task space velocity and control gains are reduced. This safety index is defined in Equation 3.9, where d is the distance between the end effector and the object to be avoided, and d0 is the distance at which control gains should begin to decrease. As the end effector approaches the object to be avoided, d becomes less than d0, resulting in a safety index less than 1. By multiplying the desired control gains by the safety index before applying them in the control law of Equation 3.7, the torque provided by the joint motors for motion decreases toward zero as the object is approached, while the gravity compensation torque remains unaffected.

s = tanh(d / d0)    (3.9)

During manual control of the robot, a joint limit (of either robot) may be encountered. Simply put, this is the limit approached as the end effector is driven towards a position not achievable by one of the robots. For example, as the robot is pulled in one direction, it will eventually run out of travel. As this limit is approached, a limit index similar to the safety index can be created and applied in a similar fashion. As a limit is encountered, increasing resistance will be


felt by the surgeon, but he/she can continue to drive the robot through the position. Optionally, the hyperbolic tangent function can be bounded, limiting or eliminating the manipulator's ability to impart a counter force on the user. This is more natural, more tunable, and safer than virtual spring based impedances, as the counter force is a function of the applied force, not solely the end effector position in space. Furthermore, none of the proposed positions put the operator's hand in any pinch point positions. Additionally, the operator has a deadman switch linked to the robot emergency stop. This is similar to past protocols for human protection [12].

Results

The results of the autonomous (pre-planned) control are shown in Figure 3-4. A move of 150 mm in task space was commanded. Table 3-1 shows the settling times for a typical move between imaging views.

Figure 3-4. Controller performance for a typical 150 mm move. A.) PD/PI control. B.) RISE control.

Table 3-1. Settling times for a typical 150 mm move
                         PD/PI Control   RISE Control
Settling time, e=1mm     567 ms          629 ms
Settling time, e=0mm     667 ms          791 ms
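The collision response examined next is governed by the tanh-based danger index of Equation 3.9. A minimal sketch, assuming the index is simply tanh(d/d0) with no additional shaping constants (the dissertation does not give the exact scaling):

```python
import math

def danger_index(d, d0):
    """Tanh-based danger index (Equation 3.9): approaches 1 far from the
    obstacle and falls smoothly toward 0 as the end effector nears it."""
    return math.tanh(d / d0)

def scaled_gains(gains, d, d0):
    """Scale motion-control gains by the danger index. Per the text, the
    gravity-compensation torque would be left untouched."""
    s = danger_index(d, d0)
    return [k * s for k in gains]
```

As the distance d shrinks below d0, the motion gains (and hence the available joint torque for motion) fade smoothly to zero rather than switching off discontinuously.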


Figure 3-5 shows a typical collision avoidance scenario. In this scenario, the end effector of the robot is pushed towards a fixed object whose position is known. As the distance to the collision boundary approaches zero, the operator continues to push towards the boundary, noted by the increase in applied force at t=7 seconds. The robot responds, increasing joint torques to compensate for the operator's applied force in the direction of the collision. This continues to 35 N, at which point the operator relents. The robot does not move until the operator's applied force swings negative, pushing the end effector away from the collision boundary (t=8 seconds). Table 3-2 shows the results of the nine-position study. Time to completion, RMS, and peak force values are averaged over the 5 trials to produce a single metric.

Figure 3-5. Controller performance for a near collision. A.) Distance to collision bound. B.) Force applied by operator.

Table 3-2. Time and force data for the nine-position study
            Low Gains (mean values)        High Gains (mean values)
            Time (s)  RMS (N)  Peak (N)    Time (s)  RMS (N)  Peak (N)
Subject 1   47.358    15.2     58.468      47.11     11.064   39.09
Subject 2   46.332    9.978    39.534      40.882    8.318    30.314
Subject 3   41.95     9.164    37.542      43.196    9.726    40.06
Subject 4   55.366    8.696    40.544      51.466    7.57     36.076
Subject 5   48.04     8.188    36.74       55.128    7.692    32.404


Discussion

For the pre-planned control task, the instantaneous response is primarily limited by the available control effort, as expected. It became apparent during the implementation of these controllers that the RISE controller, while stable, is difficult to tune. One set of gains may yield good (critically damped) performance, but that same set of gains in a different portion of the workspace may yield overshoot. Not only is the performance of the PD/PI control smoother, but it also reaches 1 mm tracking error and 0 mm tracking error faster. In fact, it is able to reach steady state in just under two-thirds of a second.

The collision avoidance strategy proved robust, even for fast approach velocities (0.5 m/s) and small boundary thicknesses (2 cm). The manipulator is able to present high stiffness and high counter force (>35 N) at the collision boundary, while only requiring 5-7 N of applied force for most control tasks in the workspace. It is able to stop the collision only 28 milliseconds after the collision boundary has been crossed, primarily as a result of the hyperbolic tangent function increasing smoothly as the boundary is crossed, instead of in a discontinuous fashion.

A framework was created to analyze the control system's performance in high-precision positioning tasks using closed-loop force feedback. The study required positioning of the center of the imaging plane within a window 2.5 mm in diameter, while maintaining prescribed approach angles within ±15 degrees. In both the high- and low-gain cases, it took approximately 5 seconds on average to move the manipulator to the desired position and orientation, indicating smoothness and sufficient damping in the system to avoid the hunting and overshoot often experienced in haptic control of heavy, high-friction manipulators.

Conclusion

Effective control schemes for the Mitsubishi PA10-6CE manipulator as part of a surgical imaging platform have been demonstrated. In order to meet the requirements of the


platform, the manipulator must be able to both complete pre-planned trajectories and function as a haptic device. A traditional PD/PI scheme was found sufficient for the former, and explicit force control with a hyperbolic tangent based collision avoidance scheme was found suitable for the latter.

A potential improvement on the scheme described is direct compensation for the manipulator dynamics, such as the inertia and joint frictions. Such parameters have been experimentally measured for the PA10 and other manipulators [11] and could be implemented on faster real-time processors, such as the National Instruments cRIO-9081 RT. The collision avoidance routine would also benefit from a faster real-time processor. Though lightweight algorithms exist, such as the Lin-Canny closest features algorithm [38], collision detection was limited to direct calculation of the end effector distance to a plane in the workspace, in order to keep control loops above 500 Hz on the cRIO-9024 processor.
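The end-effector-to-plane distance check mentioned above is inexpensive, which is what keeps the control loop fast. A sketch is given below; the point-and-normal plane representation is an assumption, since the dissertation does not give the implementation.

```python
import math

def plane_distance(p, plane_point, normal):
    """Signed distance from point p to a plane defined by a point on the
    plane and its (not necessarily unit-length) normal vector."""
    n = math.sqrt(sum(c * c for c in normal))
    return sum((pi - qi) * ni
               for pi, qi, ni in zip(p, plane_point, normal)) / n
```

The sign indicates which side of the plane the end effector is on, so the same call can drive both the danger index and a hard stop.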


CHAPTER 4
TRAJECTORY PLANNING

In order to perform 3D reconstruction, it is necessary to collect a series of 2D images, or views, of the subject. These views may be snapshots as the camera travels along an arc path, or may be a more random, disorganized set of views. In the past, it was necessary to collect all of these views along a single semi-circular arc. However, recent advancements in reconstruction algorithms have allowed for the contribution of off-axis views [8].

The Goal

The goal of this section is to develop and test novel path-planning algorithms for the collection of 2D images, for the case of stationary imaging (zero velocity while exposing).

Description of Reconstruction Algorithm

Dose reduction is always a goal in medical imaging, and one of the ways to reduce dose is to reduce the number of images needed for the reconstruction. Recent advances in reconstruction techniques have allowed for reduced radiological dose given the ability to perform noncircular imaging trajectories [8]. Given a rough 3D shape, there are algorithms to identify the salient views, reducing the number of views necessary. For example, if the object to be imaged were a cube, it would be desirable to collect 3 or 6 images, each with a viewing axis perpendicular to a face of the cube. By doing this, all of the object's geometry is collected with a minimum of images taken. In addition to dose concerns, a greater number of images results in a longer imaging time, increasing wait and lengthening medical procedures.

A generalized model for the object to be scanned exists given some information about the patient, and once the patient/object combination has been scanned once, subsequent scans will benefit further from the more precise model. Then, via gradient filtering, the model's edge points can be detected. Running a Hough transform [39]


on these edge points will create clusters of the radial distance and angular orientation of the more prominent edges of the object, which are most important. These clusters can be used to derive advantageous viewing angles (image plane rotation angles), which then become desired robot end effector positions and orientations. These positions and orientations are encoded as transformation matrices.

Problem Description

Unfortunately, these desirable transformation matrices are unlikely to be specified in an order advantageous to robot imaging, with respect to imaging time and mechanical velocity/acceleration constraints. Assuming reasonable velocities and accelerations, it may take up to 5 seconds to move between imaging positions. With ~30 images necessary to reconstruct a typical spine, total imaging time would be in excess of 2 minutes. This is not ideal; some procedures require multiple sets of reconstructions, and waiting 2-3 minutes for each would not be clinically desirable. However, the reconstruction algorithm does not depend upon the order of image collection, nor does any other part of the procedure. Thus, motivation exists to sort these sets of end effector positions in order to minimize imaging time given some set of dynamic constraints. These constraints are primarily mechanical in nature, and they could be changed with the use of more powerful joint motors.

The problem of arranging a set of positions/orientations is a well-known problem, classified generally as the traveling salesman problem (TSP) [40]. The TSP is an NP-hard, NP-complete computational problem [41], [42], specifically a combinatorial optimization problem. It has been studied since at least the 1930s [43], and less formally prior to that. Salesmen must travel to a set of cities, but rarely care about the order of visitation. The path length can be minimized, but is subject to the following number of possible permutations, as long as symmetry is assumed:


Figure 4-1. An example trajectory. Pictured are the PA10 (above), patient, and UTR (below). The image plane is depicted in red, and the desired views with red arrows. The shortest trajectory is shown with green lines.

P = (n - 1)! / 2    (4.1)

Symmetry is defined as indifference in cost with regard to path direction (point A to B vs. B to A). This means that the number of possible paths increases dramatically as the number of points, or image views in this case, increases. For 20 views, there are 6.08x10^16 possible paths. For 30 views, 4.42x10^30 paths exist. In addition, the Mitsubishi PA10-6CE robot has a maximum of 8 possible joint configurations for some end-effector position-orientations, and the under-table robot (UTR) has 2 possible joint configurations for some end-effector position-orientations.
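The figures quoted above follow directly from Equation 4.1; a quick check using exact integer arithmetic (the function name is illustrative):

```python
from math import factorial

def symmetric_tours(n):
    """Number of distinct tours for a symmetric TSP over n views (Eq. 4.1)."""
    return factorial(n - 1) // 2

print(symmetric_tours(20))  # 60822550204416000, i.e. ~6.08e16
print(symmetric_tours(30))  # ~4.42e30
```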


Thus, the number of joint space paths considering all robot inverse kinematic solutions is as follows:

P = ((n - 1)! / 2) * 8^n * 2^n    (4.2)

After considering the robots proposed, for 20 views, 5.6x10^37 paths exist. The possibility for multiple kinematic solutions from each robot only worsens the numerical problem that lies ahead. It is not computationally possible to evaluate all 5.6x10^37 paths that exist in this case within a reasonable period of time on current workstation hardware. However, it is possible to compute all paths and analyze them for the shortest path for cases of approximately N<10. Such an approach, commonly known as the brute force method [44], will be referred to as the Global Optimal Solution, or GOS. The GOS is one algorithm that guarantees finding the best path. Results for this are shown in Table 4-1.

Table 4-1. Time and path length for brute force
# of Views   # of Paths        Computational Time
5            120               0.049 seconds
9            362880            0.855 seconds
10           3628800           8.35 seconds
11           39916800          90.958 seconds
12           479001600         18.191 minutes
13           6227020800        3.94 hours
14           87178291200       2.29 days
15           1307674368000     34.48 days

There are dozens of alternative algorithms that attempt to find the best path; these will likely yield a path that is marginally longer, but one that can be found in a reasonable amount of computational time. For example, the nearest neighbor algorithm (NN) can be used, but this typically results in a path 25% longer than optimal [45], and can result in the worst possible tour for some cost matrices. Another alternative, Christofides' algorithm, guarantees a solution


with path cost less than 150% of optimal [46]. Genetic algorithms have also been used with success in applications with multiple robot arms [47].

Branch and Bound Algorithm

One of the inefficiencies associated with the brute force method is that solution trees are fully explored, even when it is known that their cost exceeds that of a previously fully populated path. An example is shown in Figure 4-2. The investigator has written an adaptation of the well-known branch and bound algorithm [48], [49].

Figure 4-2. An example tree.

Consider the example shown in Figure 4-2. By evaluating the solution tree down to the 1-2-3-4-5-1 solution, we can generate an upper bound of 2.8 seconds. Thus, trees containing subpaths from views 1-4 or 1-5 cannot contain optimal solutions, as the time cost to travel from point 1-4 or 1-5 already exceeds 2.8 seconds. This example is somewhat nonsensical, as the time cost from 1-4 or 1-5 cannot exceed the time cost from 1-2-3-4-5; a nonsensical example was used due to space limitations. The general principle holds that some trees will quickly have a cost greater than an established upper bound. Thus, the rest of the tree should be pruned and left unexplored. In this case, half of the tree would be pruned, resulting in significant speed increases.

Using the same CPU and programming language as before, the branch and bound algorithm can find the optimal path for 90 views in roughly the same time as the brute force


method took to find the optimal path for 9 views. Pseudocode for the branch and bound scheme used is shown in Figure 4-3.

Reduce cost matrix C (row and column reduction)
While C is larger than 2x2
    Create matrix T
    For each C(i,j)=0, let T(i,j) = max(min(row Ci), min(col Cj))
    Choose (k,l) to maximize T(k,l)
    // the path without (k,l)
    Make a branch from the current pair to pair ~(k,l); let its cost be cost + T(k,l)
    // the path with (k,l)
    Make a branch from the current pair to pair (k,l)
        Delete row k and column l in C
        Check for existing stored paths; find start and end cities m,n
        Set C(m,n) to infinity
        Reduce C; let s be the sum of the reducing constants
        Let the cost for the path with (k,l) be cost + s
End

Figure 4-3. Branch and bound pseudocode.

The total computational time of the branch and bound routine for 6 sets of views ranging from n=30 to n=200 is shown in Table 4-2. For both the pseudocode in Figure 4-3 and the actual code, the branch and bound approach is modified with the to-the-right variation, which exits when the last node to the right is evaluated, not when the guaranteed optimal solution is proven. When branching, the branch with the pair most likely to be included in the globally optimal path is selected; this is the path with the pair (k,l), as shown in Figure 4-3. While it is likely that this pair is in the globally optimal solution, that is not guaranteed to be the case. To guarantee optimality, all branches with cost greater than the to-the-right cost must be explored until their cost exceeds the bound set in the to-the-right case. Branch and bound can even be modified further to become branch and cut, which was utilized in 2005 to solve a problem visiting 33,810 points [50], and it was proven that no shorter tour exists. The computation took approximately 15.7 CPU years to complete.
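The matrix-reduction bookkeeping in Figure 4-3 is involved, but the pruning principle it relies on can be illustrated with a simpler depth-first branch and bound that bounds each subtree by the best complete path found so far. This is an illustrative sketch for an open path starting at view 0, not the author's reduced-matrix implementation:

```python
def branch_and_bound_tsp(cost):
    """Depth-first branch and bound over open paths starting at view 0.
    Prunes any partial path whose cost already meets or exceeds the best
    complete path found so far."""
    n = len(cost)
    best = {"cost": float("inf"), "path": None}

    def extend(path, visited, c):
        if c >= best["cost"]:          # bound: prune this subtree
            return
        if len(path) == n:             # leaf: a complete open path
            best["cost"], best["path"] = c, path[:]
            return
        for j in range(n):
            if j not in visited:
                visited.add(j)
                path.append(j)
                extend(path, visited, c + cost[path[-2]][j])
                path.pop()
                visited.remove(j)

    extend([0], {0}, 0.0)
    return best["path"], best["cost"]
```

Closing the tour (returning to the first view) would be a one-line change at the leaf; the open-path form matches the imaging use case, where the system need not return to its starting view.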


Table 4-2. Time and number of unique paths for branch and bound
n Views   # of Paths       Computational Time
30        2.65253E+32      0.006 seconds
50        3.04141E+64      0.037 seconds
80        7.1569E+118      0.405 seconds
100       9.3326E+157      1.663 seconds
150       5.7134E+262      13.351 seconds
200       7.8865E+374      68.479 seconds

The computational time savings realized with the branch and bound algorithm increase further as the number of desired views increases; the branch and bound routine can calculate the best path for 200 views in the same time that the brute force method takes to calculate the best path for 11 views. For example, let us sort 20 random views, the generation of which is described in the methods section of Chapter 2. We have identified that the dynamics of the UTR are the primary limitation, as it is moving a payload 10 times greater in mass than the PA10. Therefore, we calculate the desired end effector position of the UTR for each view, apply inverse kinematics, and use the joint space positions of the UTR for the following example. Only 20 positions are used to keep Figure 4-3 readable. We feed these views into the branch and bound program, with joint dynamics set to the parameters in Table 4-3. These dynamics are selected such that 50% of the available motor torque is never exceeded, to ensure that the stepper motors do not miss steps.

Table 4-3. The UTR's dynamic constraints for path planning purposes
Joint   Velocity Limit   Acceleration Limit
1       150              500
2       150              500
3       45               135
4       100              100
5       30               100
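With the limits of Table 4-3, the cost of a move between two views can be taken as the travel time of the slowest joint under a standard trapezoidal velocity profile. The dissertation does not state its exact cost model, so the following is an assumption-laden sketch (units per Table 4-3, presumably deg/s and deg/s^2):

```python
import math

def joint_move_time(delta, v_max, a_max):
    """Time for one joint to move `delta` under a trapezoidal velocity
    profile with limits v_max, a_max (triangular if v_max is never reached)."""
    d = abs(delta)
    if d == 0:
        return 0.0
    t_ramp = v_max / a_max                 # time to reach v_max
    d_ramp = v_max ** 2 / a_max            # distance covered accel + decel
    if d <= d_ramp:                        # triangular profile
        return 2.0 * math.sqrt(d / a_max)
    return 2.0 * t_ramp + (d - d_ramp) / v_max   # trapezoidal profile

def move_cost(q_from, q_to, v_lims, a_lims):
    """Cost to move between two joint-space views: the slowest joint."""
    return max(joint_move_time(b - a, v, acc)
               for a, b, v, acc in zip(q_from, q_to, v_lims, a_lims))
```

For Joint 1 (limits 150 and 500), a 90-unit move reaches the velocity limit and takes 0.9 s; a 20-unit move stays triangular and takes 0.4 s.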


With the dynamic constraints set, a cost matrix C is calculated. The cells of C represent the cost to move from view i to view j. When i=j, the value is set to 99999 to denote a prohibited move, since we do not desire to move from the current view to the current view. The first 10 rows and columns of C are shown in Figure 4-3, in units of seconds. After the cost matrix has been calculated, the branch and bound routine can begin.

C = [ 99999  1.941  1.725  1.288  2.173  1.983  1.124  1.508  1.930  2.813
      1.941  99999  2.484  1.774  0.963  1.068  1.865  1.773  2.225  2.041
      1.725  2.484  99999  2.668  2.201  3.219  1.055  1.044  1.185  4.192
      1.288  1.774  2.668  99999  2.018  1.819  2.049  1.957  2.409  1.858
      2.173  0.963  2.201  2.018  99999  1.352  1.866  1.556  1.942  2.325
      1.983  1.068  3.219  1.819  1.352  99999  2.601  2.508  2.960  2.036
      1.124  1.865  1.055  2.049  1.866  2.601  99999  1.031  1.586  3.574
      1.508  1.773  1.044  1.957  1.556  2.508  1.031  99999  1.206  3.481
      1.930  2.225  1.185  2.409  1.942  2.960  1.586  1.206  99999  3.933
      2.813  2.041  4.192  1.858  2.325  2.036  3.574  3.481  3.933  99999 ]

Figure 4-3. First 10 rows and columns of the cost matrix C.

The branch and bound routine starts by splitting all possible solutions into two groups. In this case, those groups are the paths containing the view pair (10,12) and the paths not containing the view pair (10,12). The minimum cost of each group is calculated and assigned to that node on the tree. The results of the first few branches are shown in Figure 4-4. Note that the second branching results in a path cost greater than the minimum path cost for the first branch's non-selected node. This means the optimal solution could lie in that branch (i.e., the path not containing pair (10,12)), but this is unlikely. Therefore, the branch-to-the-right approach does not guarantee optimality, but it does result in consistent computational time for a given N views.
The lowest cost associated with an unevaluated branch (in this case, 22.28) serves as the lower bound for the global optimal solution. Since the to-the-right algorithm results in a path cost


of 25.11 (Table 4-4), we accept that there is little to be gained by exploring other branches (in this case, less than 3 seconds of imaging time).

Figure 4-4. The first few nodes of the solution tree.

After completion of the branch and bound algorithm, we evaluate the total time it would take the UTR to traverse the path, visiting all of the views, in both the randomly sorted case and the optimally sorted case. The results are shown in Table 4-4.

Table 4-4. Time to complete both the random and optimally sorted paths
Path        Time (sec)
Random      56.18
Optimized   25.11

Clearly, the optimal solution saves imaging time, in this case a reduction of more than 50% relative to the unsorted path. The reasons become clear when we observe the joint positions in the sorted and unsorted cases. In Figure 4-4, Joint 3's trajectory over the 20 imaging views is shown.


Figure 4-4. Unsorted and optimally sorted trajectories for Joint 3.

It has been demonstrated that solving the optimal path planning problem is important even for small view sets (n=18). However, the problem becomes even more important to solve for more typical sets of desired images (n=30-50). Imaging time savings on the order of 3-5 minutes can be realized simply by sorting the views optimally.

Conclusion

Although this research focuses primarily on typical imaging trajectories performed during spine surgery, there is interest in expanding the development to include other parts of the body. This path planning research is just as applicable to imaging other parts of the body, or any object in general. The results demonstrated present a compelling case for the use of a path planning optimization algorithm. For sets with roughly 60 views or less, the branch and bound algorithm can find globally optimal solutions in a few seconds of computational time. For larger


view sets, the branch and bound algorithm can be modified with the so-called to-the-right variation, which does not guarantee optimality but reaches a near-optimal solution in a minute or less of computational time for sets of 200 views or less.


CHAPTER 5
CT RECONSTRUCTION SENSITIVITY ANALYSIS

It is necessary for a positioner error sensitivity analysis to be performed for the reconstruction algorithm. Currently, no such analysis has been performed, so it is unknown whether the inherent positional errors of each robotic manipulator will be of consequence.

The Goal

A sensitivity study will be performed to examine the impact of manipulator inaccuracy on 3D reconstruction quality.

Introduction

The FDK algorithm [51], developed by research staff at Ford Motor Company in the early 1980s, was truly groundbreaking for the field of computed tomography. This algorithm allowed for direct three-dimensional reconstruction from two-dimensional projections, including projections produced from cone beam computed tomography (CBCT). Because this algorithm is so prevalent, it is used as the basis of the sensitivity study. Many advancements have been made in the area of CBCT reconstruction, including iterative methods such as algebraic reconstruction methods, among them the Simultaneous Algebraic Reconstruction Technique (SART) [52]. One of the advantages of SART is its robustness to low numbers of projections [53]. For years, the advantages of algebraic reconstruction techniques remained underutilized due to long reconstruction times. In the last 15 years, progress has been made in accelerating these reconstructions through commonly available consumer Graphics Processing Units (GPUs) [54].

As the majority of portable CBCT machines to date use hardware fixed to a track, most error sensitivity research has been performed on the effects of known, repeatable positioning errors, typically due to mechanical sag [13]. Studies have shown that positioning errors with magnitude less than 2 mm can result in measurable differences in quality of reconstruction, both subjectively


and objectively [14]. This study will examine the effects of unknown, non-measurable positioning error with magnitudes on the order of those reported for the Mitsubishi PA10-6CE [11] and the UTR, the magnitudes of which are 1 mm or less. The FDK algorithm will be tested for sensitivity to positional error. This is error in actual position: the position is measured to be identical to the desired position, but the manipulator was not actually in that position. Such error is caused by encoder resolution limits and link/joint stiffness. By necessity, the backprojection algorithm will be supplied with images that were generated using position data that is not the same as the position data supplied to the forward projection algorithm. Therefore, it is necessary to study the effects of these errors in order to quantify their impact on reconstruction quality.

Methods

A framework for the forward and back projection of existing voxelized volumes has been created in partnership with Klaus Mueller and Eric Papenhausen from Stony Brook University. This framework performs forward projection and backprojection using the CUDA (Compute Unified Device Architecture) parallel computing platform, which runs on NVIDIA GPUs. The use of a GPU allows for parallel processing, which yields improvements in image reconstruction time of an order of magnitude or more [54]. The CUDA platform is available in C, C++, and Fortran, with third-party wrappers available for other languages. The projection code was written in C++ for best performance, and a MATLAB script was written to feed data to the projection routines and analyze the resulting data. A 32-bit 512^3 volume consumes 524 MB, so a card with at least 768 MB of onboard memory is required. For this analysis, a GeForce GTX 660 Ti with 3 GB of onboard memory was


used. This is a present-day intermediate-level consumer GPU, retailing for USD $199 in 2013. On a GTX 660 Ti, a reconstruction of a 512^3 volume can be performed in 5 seconds or less.

The gold standard dataset is a 512^3 phantom created from a cranial CT scan. Forward cone beam projection is performed using a virtual detector with element size 0.388 x 0.776 mm and a detector count of 1014 x 374 in x and y, respectively. In order to preserve fidelity, the forward projection has 2048 steps. A total of 360 projections are taken, equally spaced about a circular trajectory. The projections are taken with the actual translation matrices defining the position of the x-ray source and detector. The actual translation matrices are defined as the desired translation matrix plus the robot positioner error. Therefore, it is possible to inject 6-degree-of-freedom error into the projection positions and orientations of both the source and the sensor. Six cases are evaluated, as described in Table 5-1.

Table 5-1. Error types for cases evaluated
Case number   Description
1             no error
2             500 um, 0 deg
3             1000 um, 0 deg
4             500 um, 0.1 deg
5             1000 um, 0.1 deg
6             1800 um, 1.2 deg

In the five cases for which error is added, the error is of a uniform distribution, added to the x, y, and z position and orientation coordinates, with the indicated magnitude. The error injected varies from projection to projection, bounded by the magnitude listed, with uniform distribution. Cases 2 and 4 represent typical position and orientation errors for a calibrated Mitsubishi PA10 robot [11], cases 3 and 5 represent twice the typical positioning errors of a calibrated PA10, and case 6 represents the typical errors of an uncalibrated PA10. The projections are then input to the Feldkamp-Davis-Kress (FDK) filtered backprojection algorithm. During the backprojection, a ramp filter is used to reduce the blurring


inherent in the backprojection process. The output from the backprojection, a 3D reconstruction, will be analyzed mathematically using several standard metrics. Image error was assessed by calculating the correlation coefficient, RMS volume error, and relative image error [53], [55]. The 2D correlation coefficient c = corr(f, f0) was calculated using Equation 5.2:

c = sum_ij (f_ij - fbar)(f0_ij - f0bar) / sqrt( sum_ij (f_ij - fbar)^2 * sum_ij (f0_ij - f0bar)^2 )    (5.2)

The center slice of the volume was selected for the 2D correlation coefficient calculation. The RMS volume error was calculated using the following formula:

RMSe = sqrt( (1/N) * sum (f - f0)^2 )    (5.3)

Finally, the relative error is calculated as follows:

e = ||f - f0|| / ||f0||    (5.4)

In Equation 5.4, f is the image being evaluated, and f0 is the gold standard image. Both are of size 512 x 512 pixels.

Spatial resolution was assessed by calculating the spatial frequency response (SFR), or modulation transfer function (MTF) [56]. This is a sort of Bode plot for images, tracking image intensity as a function of spatial frequency. To perform the assessment, a sine wave phantom was created in MATLAB. One drawback to the sine wave approach to calculating the MTF is the impact of image noise on the measurement. Unfortunately, since the source volume is discretized, the use of the commonly known slanted-edge approach is not appropriate. Therefore, the sine wave virtual phantom was created with spatial frequency varying from 0.13 lp/mm to 1


lp/mm. After completing the reconstruction process, the MTF is calculated with Equation 5.1, using the center slice of the volume:

MTF(f) = [ (Imax(f) - Imin(f)) / (Imax(f) + Imin(f)) ] / [ (Imax(f0) - Imin(f0)) / (Imax(f0) + Imin(f0)) ]    (5.1)

In Equation 5.1, I(f) is the intensity of a single row of pixels through the sine wave phantom area, and I(f0) is the intensity in the low-frequency (DC) region, that is, the intensity of a pure black or pure white region. Note that since the imaging evaluation is done virtually, the virtual phantom has been discretized. The voxel size for the MTF evaluation is decreased to 0.25 mm per side, resulting in 4 available voxels per line pair at the highest spatial resolution tested, 1 lp/mm. This discretization means that at high spatial frequencies there are not enough samples, even in the original volume, to fully sample the sine wave and capture its magnitude. As such, the MTF found by Equation 5.1 is divided by the MTF calculated from the original volume.

In addition to the clinical head CT phantom reconstruction, a reconstruction of the modified Shepp-Logan phantom is performed [57], [58]. We employ a 3D version of size 512^3. The modification, described by Toft [58], is done to give better contrast. The reconstruction is performed with 360 equally spaced views and injected errors as described in Table 5-1.

Results

Table 5-2 presents the RMS error and correlation coefficient results for cases 1-6. The RMS and relative errors are quite small, owing primarily to the fact that the errors injected are unbiased. The results of the MTF evaluation are shown in Figure 5-1. Due to the aforementioned discretization issue, the results of the last point on the plot are subject to high error.
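The three image-error metrics defined in the Methods can be written directly in pure Python; the nested-list image representation and function names below are illustrative, not the actual MATLAB analysis script.

```python
import math

def _flat(img):
    """Flatten a nested-list image into one list of pixel values."""
    return [x for row in img for x in row]

def correlation(f, f0):
    """2D correlation coefficient of Equation 5.2 over flattened images."""
    a, b = _flat(f), _flat(f0)
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) *
                    sum((y - mb) ** 2 for y in b))
    return num / den

def rms_error(f, f0):
    """RMS volume error of Equation 5.3."""
    a, b = _flat(f), _flat(f0)
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

def relative_error(f, f0):
    """Relative image error: ||f - f0|| / ||f0||."""
    a, b = _flat(f), _flat(f0)
    diff = math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return diff / math.sqrt(sum(y ** 2 for y in b))
```

A perfect reconstruction gives correlation 1 and zero RMS and relative error; a uniform intensity offset leaves the correlation coefficient at 1 while the RMS error reports the offset, which is why both metrics are reported.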


Table 5-2. RMS error and correlation coefficient for cases 1-6
                           RMSe      e (%)   r
Case 1  no error           0         0       1
Case 2  500 um, 0 deg      3.6997    0.392   0.999485
Case 3  1000 um, 0 deg     6.48125   0.798   0.998217
Case 4  500 um, 0.1 deg    4.82475   0.551   0.999077
Case 5  1000 um, 0.1 deg   6.95376   0.985   0.998001
Case 6  1800 um, 1.2 deg   18.7668   5.412   0.980662

Figure 5-1. MTF as a function of spatial frequency for cases 1-6.

Figure 5-2. Modified Shepp-Logan phantom with white horizontal line indicating location of 1D intensity plot.


The intensity plots of a slice through the modified Shepp-Logan phantom are shown in Figure 5-3. The location of this 1D slice is shown in Figure 5-2. The results of the head phantom are shown in Figure 5-4. The images are from the 256th slice of the volume, cropped and scaled to fit the figure. The arrows on case A (no error) and case E (twice the calibrated robot error) point out visible differences in image sharpness.

Figure 5-3. Intensity plots for a cross section of the modified Shepp-Logan phantom reconstruction. A.) Case 1 B.) Case 2 C.) Case 3 D.) Case 4 E.) Case 5 F.) Case 6


Figure 5-4. Reconstructions of cranial CT scan. A.) Case 1 B.) Case 2 C.) Case 3 D.) Case 4 E.) Case 5 F.) Case 6


Discussion

In order to generate design goals for manipulator-based imaging platforms, a study of the impact of positioning and orientation errors on reconstruction quality was necessary. The results of this study are promising: only very small image quality reductions are present in reconstructions performed with typical calibrated manipulator errors, as compared to reconstructions without any error. One drawback of the methodology used here is that the phantom is virtual, discretized with voxels 250 µm per side. Therefore, the maximum spatial resolution that can be evaluated with a sine wave pattern virtual phantom is 1 lp/mm.

A review of other studies confirms that reconstruction quality degrades with positioning error, but these studies typically address the types of inaccuracies seen in clinical imaging platforms such as the C-arm. These inaccuracies differ in magnitude and uniformity from the inaccuracies encountered with calibrated robotic imaging platforms. For instance, Jaffray et al. reported in 2002 that the typical mechanical sag of a CBCT gantry is on the order of 2 mm, and that a simple calibration process improved the imaging [14]. However, no metrics (image error, MTF, intensity plots) were provided for the pre-/post-calibration comparison, only slices from the reconstruction and image subtraction. In 2012, Wicklein et al. reported on a method to perform misalignment correction through a markerless online calibration procedure [59]. This procedure is helpful for correcting misalignments, but the error magnitudes they utilized (0 to 3 BPM, split between translations and rotations) are not the type of errors seen with calibrated robotic manipulators. As demonstrated in this study, positioning error accounts for the majority of the reconstruction degradation.
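The error-injection scheme used in this study (unbiased, uniformly distributed perturbations of each view's position and orientation) can be sketched as follows. This is an illustrative assumption of the parameterization, not the dissertation's actual code: a six-component pose per view and independent per-axis uniform errors whose half-widths match the case magnitudes.

```python
import numpy as np

def perturb_views(poses, trans_mag_mm, rot_mag_deg, rng=None):
    """Inject unbiased, uniformly distributed pose errors into each view.

    poses: (N, 6) array of [x, y, z, rx, ry, rz] per projection
    (translations in mm, rotations in degrees). The magnitudes are the
    half-widths of the uniform distributions, as in cases 2-6."""
    rng = np.random.default_rng() if rng is None else rng
    noisy = np.asarray(poses, dtype=float).copy()
    n = noisy.shape[0]
    noisy[:, :3] += rng.uniform(-trans_mag_mm, trans_mag_mm, size=(n, 3))
    noisy[:, 3:] += rng.uniform(-rot_mag_deg, rot_mag_deg, size=(n, 3))
    return noisy

# Case 2: 500 µm (0.5 mm) translation error, no orientation error,
# applied to 360 equally spaced views.
poses = np.zeros((360, 6))
noisy = perturb_views(poses, trans_mag_mm=0.5, rot_mag_deg=0.0)
```

Because the errors are drawn symmetrically about zero, they are unbiased, which is why the RMS and relative errors in Table 5-2 remain small.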
As seen in Table 5-2, the RMS and relative errors between the no-error case (case 1) and the cases with error (cases 2-6) are quite small, particularly for the calibrated robot error cases (cases 2-5). The correlation coefficients for the four calibrated robot error cases are very close to 1. As we would expect, the metrics point to decreasing image quality as error increases. Of note is the fact that moving from 500 µm errors to 1000 µm errors results in a greater degradation in image quality than simply adding 0.1 deg of error to the 500 µm case.

The results from the MTF analysis are also encouraging, especially since we expect this metric, one of image sharpness, to be affected most by the uniform random error. Compared to the no-error case, there is only a marginal reduction in image sharpness in cases 2-5. Through most of the spatial resolution bandwidth evaluated, the decrease in MTF was approximately 0.1, or 10% of the DC value.

Examination of the cross-sectional intensity plots yields a few observations. First, the background noise increases marginally with increasing injected error. The difference in noise floor is particularly noticeable in cases 3 and 5, the cases with the greatest position (not orientation) error. The width and intensity of the 3 ellipsoids is relatively stable through all cases.

Finally, an analysis of the reconstructed head phantom shown in Figure 5-4 provides a means of subjectively evaluating the resulting image quality. As shown, the images are nearly indistinguishable except near areas of detail and contrast, as indicated by the red arrows. There is little to no additional streaking or noise present in cases 2-5 as compared to the control. Case 6 was included in the analysis to demonstrate clearly the necessity of calibrating robotic manipulators for CBCT imaging tasks. Using nominal geometric parameters provided by the manufacturer, and failing to account for joint flexibility, will result in inaccurate robot positioning and significantly degraded reconstruction results.
Conclusion

Based on the quantitative and qualitative image metrics evaluated, the reconstruction quality is degraded very little for error magnitudes of 500 µm/0.1 deg or less. For larger errors, the impact to reconstruction quality is primarily in the area of image sharpness, but is still difficult to notice visually. Some of this image blurring can be removed through the choice of filter sharpness during the filtered backprojection stage. At higher injected error levels there are slightly increased streak artifacts as well, which may obscure the visibility of low-contrast objects. One thing to remember is that no imaging system is completely free of positioner error, including existing C-arm, O-arm, and other CBCT techniques. This means that the gold-standard no-error case discussed is not achievable in practice. Therefore, given that robotic manipulators of suitable reach and payload can be calibrated to within 500 µm [11], it has been demonstrated that CBCT imaging can be performed with separate robotic manipulators and no external motion capture hardware with almost negligible loss of image quality.


CHAPTER 6
CONCLUSION

The surgeon-interactive imaging system described here is the first of its kind. This system can provide reduced patient dose, improved image reconstruction, and better usability in surgery by adding degrees of freedom to the image plane, opening viewing angles previously unavailable. The key contributions are outlined as follows:

- Design, assembly, and accuracy validation of a novel 5-DoF robotic manipulator for positioning of the X-ray source
- Design and evaluation of a haptic control law for the surgeon-interactive portion of the imaging task
- A framework for optimally forming trajectories through the desired views
- A study of the sensitivity of the FDK reconstruction algorithm to positioning errors associated with the robotic manipulators selected for use

The new platform shows feasibility not only for improved spine surgery imaging, but also expansion to perform gait and stair-climbing studies, simply by swapping out one of the prismatic linear stages for a longer version. These stages are commonly available in variants over 10 ft, allowing capture of an entire stride, both swing and stance phase. Overall, this work has the potential to make significant contributions to the research area by providing a flexible platform for imaging studies, and eventually to provide a practical replacement for existing O-arm and C-arm based solutions.
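As a small illustration of the trajectory-forming contribution summarized above, the classic nearest-neighbor heuristic orders a set of desired views in traveling-salesman fashion. This is only a baseline sketch under assumed inputs (plain Euclidean distance between view positions); the dissertation's planner additionally handles collision and safety constraints, which are not modeled here.

```python
import numpy as np

def order_views(views, start=0):
    """Greedy nearest-neighbor ordering of desired view positions.

    views: (N, 3) array of view positions. Returns a visiting order
    (list of indices) beginning at `start`."""
    views = np.asarray(views, dtype=float)
    remaining = set(range(len(views)))
    order = [start]
    remaining.remove(start)
    while remaining:
        last = views[order[-1]]
        # Pick the unvisited view closest to the current one.
        nxt = min(remaining, key=lambda i: np.linalg.norm(views[i] - last))
        order.append(nxt)
        remaining.remove(nxt)
    return order

views = np.array([[0, 0, 0], [5, 0, 0], [1, 0, 0], [6, 0, 0]])
print(order_views(views))  # → [0, 2, 1, 3]
```

Greedy ordering is fast but suboptimal in general; branch-and-bound or local-search refinements (2-opt and relatives) of the kind cited in the references trade computation for shorter tours.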


APPENDIX
BACKPROJECTION CONFIGURATION

README FOR BACK PROJECTOR (beta)
------------------------------------------------------
CONFIG FILE INSTRUCTIONS:

"input file volume:" -- File containing the projections to be backprojected (FDK)
"input file angles:" -- File containing a list of angles (a sample has been provided, see raw_AngleN.txt)
"output file:" -- File location where the backprojected volume will be written
"number of projections:" -- Number of projections to be backprojected
"voxel size (isotropic):" -- Isotropic voxel size
"isocenter:" -- Isocenter
"detector element size x (mm):" -- Horizontal size of the detector element in millimeters
"detector element size y (mm):" -- Vertical size of the detector element in millimeters (cone beam only)
"number of detectors x:" -- Number of detector elements in the trans-axial direction
"number of detectors y:" -- Number of detector elements in the axial direction
"source to isocenter:" -- Distance from the source to isocenter
"isocenter to detector:" -- Distance from the isocenter to detector
"volume size:" -- Number of cubic voxels

CONFIG FILE EXAMPLE
------------------------------------------------------
input file volume: projections.raw
input file angles: raw_AngleN.txt
output file: result.vol
number of projections: 364
voxel size (isotropic): 0.5
isocenter: 127.75
detector element size x (mm): 0.388
detector element size y (mm): 0.776
number of detectors x: 1014
number of detectors y: 374
source to isocenter: 647.7
isocenter to detector: 520.7
volume size: 512
------------------------------------------------------
This will perform a cone beam backprojection for 364 projections. The projections contain 1014 detector cells; each detector cell is 0.388 mm long. The resulting volume is written to "result.vol" and can be viewed using the ImageJ viewing tool.

HELPFUL LINKS


ImageJ Website: http://rsbweb.nih.gov/ij/

INSTRUCTIONS FOR IMAGEJ VIEWING
Go to: File > Import > Raw (select volume file)
  Image type: 32-bit Real
  Width: volume size (e.g. 512)
  Height: volume size
  Offset to first image: 0
  Number of images: volume size
  Gap between images: 0
  Check "Little-endian byte order"
  Select OK
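The ImageJ import settings above (32-bit Real, little-endian, width = height = number of images = volume size, zero offset and gap) amount to reading the output file as headerless little-endian float32. A minimal NumPy equivalent is sketched below; the file name and size come from the example config, and the (slice, row, col) axis order is an assumption.

```python
import numpy as np

def read_volume(path, size=512):
    # Raw 32-bit little-endian floats, size^3 voxels, no header --
    # exactly what the ImageJ raw-import settings above describe.
    vol = np.fromfile(path, dtype="<f4")
    if vol.size != size ** 3:
        raise ValueError(f"expected {size ** 3} voxels, got {vol.size}")
    return vol.reshape((size, size, size))  # assumed (slice, row, col)

# Example (paths from the sample config; uncomment to use):
# vol = read_volume("result.vol", size=512)
# center_slice = vol[256]  # e.g. the slice shown in the reconstructions
```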


LIST OF REFERENCES

[1] B. I. Martin, R. A. Deyo, S. K. Mirza, J. A. Turner, B. A. Comstock, W. Hollingworth, and S. D. Sullivan, "Expenditures and health status among adults with back and neck problems," JAMA, vol. 299, no. 6, pp. 656-664, Feb. 2008.

[2] Y. Rampersaud, D. Simon, and K. Foley, "Accuracy requirements for image-guided spinal pedicle screw placement," Spine (Phila. Pa. 1976), vol. 26, no. 4, pp. 352-359, Feb. 2001.

[3] U. Gille, "Vertebral Column Colored," public domain. [Online]. Available: http://en.wikipedia.org/wiki/File:Gray_111_-_Vertebral_column-coloured.png.

[4] Hatchmpi. [Online]. Available: http://www.hatchmpi.com/FLUOROSCOPIC_C-ARMS_main.htm.

[5] "Medical Imaging Equipment." [Online]. Available: http://www.medimagingsales.com.

[6] R. Betz and P. Noegel, "Radiography device," US Pat. 6,435,715, 2002.

[7] Mitsubishi Heavy Industries, Ltd., Instruction Manual for Installation, Maintenance, and Safety. Tokyo, Japan, 2003.

[8] Z. Zheng and K. Mueller, "Identifying sets of favorable projections for few-view low-dose cone-beam CT scanning," 11th Int. Meet. Fully Three-Dimensional Image Reconstr. Radiol. Nucl. Med., pp. 314-317, 2011.

[9] E. A. Pearson, S. Cho, and C. A. Pelizzari, "Non-circular cone beam CT," p. 3175, 2010.

[10] F. Xu and K. Mueller, "Accelerating popular tomographic reconstruction algorithms on commodity PC graphics hardware," IEEE Trans. Nucl. Sci., vol. 52, no. 3, pp. 654-663, Jun. 2005.

[11] C. A. Lightcap, "Measurement and control issues in a novel dynamic radiographic imaging system," pp. 1-132, 2008.

[12] J. D. Yamokoski, "Safe and Robust Sensor Motion Planning and Control," pp. 1-120, 2009.


[13] J. H. Siewerdsen, D. J. Moseley, S. Burch, S. K. Bisland, A. Bogaards, B. C. Wilson, and D. A. Jaffray, "Volume CT with a flat-panel detector on a mobile, isocentric C-arm: Pre-clinical investigation in guidance of minimally invasive surgery," Med. Phys., vol. 32, no. 1, p. 241, 2005.

[14] D. A. Jaffray, J. H. Siewerdsen, J. W. Wong, and A. A. Martinez, "Flat-panel cone-beam computed tomography for image-guided radiation therapy," Int. J. Radiat. Oncol. Biol. Phys., vol. 53, no. 5, pp. 1337-1349, Aug. 2002.

[15] Parker Automation, 402/403/404XE Series Product Manual. Cleveland, Ohio, 2012.

[16] C. Mavroidis, S. Dubowsky, P. Drouet, J. Hintersteiner, and J. Flanz, "A systematic error analysis of robotic manipulators: application to a high performance medical robot," Proc. Int. Conf. Robot. Autom., vol. 2, pp. 980-985, 1997.

[17] J. Santolaria, J. A. Yagüe, R. Jiménez, and J. J. Aguilar, "Calibration-based thermal error model for articulated arm coordinate measuring machines," Precis. Eng., vol. 33, no. 4, pp. 476-485, Oct. 2009.

[18] Lipták, Radnor, Pa.: Chilton Book Co., 1982.

[19] National Instruments, "Stepper Motors and Encoders." [Online]. Available: http://sine.ni.com/ds/app/doc/p/id/ds-311/lang/en.

[20] P. I. Corke, Robotics, Vision & Control: Fundamental Algorithms in MATLAB. Springer, 2011.

[21] B. Mooring, Z. S. Roth, and M. R. Driels, Fundamentals of Manipulator Calibration. J. Wiley, 1991.

[22] D. T. Bui and Tanh M. Chow, "Stall Detection Circuit and Method," US Patent 6,900,657, 2005.

[23] J. Zambelli, B. E. Nett, S. Leng, C. Riddell, B. Belanger, and G.-H. Chen, "Novel C-arm based cone-beam CT using a source trajectory of two concentric arcs," pp. 65101Q-65101Q-10, Mar. 2007.

[24] D. Soimu and N. Pallikarakis, "RECONSTRUCTIONS USING FDK ALGORITHM," pp. 3-6.

[25] I. J. Hill, A Novel Testing Platform for Characterizing Cervical Spine Biomechanics. Gainesville, Fla.: University of Florida, 2013.

[26] "DRICS - Dynamic Radiographic Imaging Control Software," 2013. [Online]. Available: http://code.google.com/p/drics/.


[27] W. E. Snyder, Industrial Robots: Computer Interfacing and Control. Prentice-Hall, 1985.

[28] P. M. Patre, W. MacKunis, K. Kaiser, and W. E. Dixon, "Asymptotic Tracking for Uncertain Dynamic Systems Via a Multilayer Neural Network Feedforward and RISE Feedback Control Structure," IEEE Trans. Autom. Control, vol. 53, no. 9, pp. 2180-2185, 2008.

[29] R. Van der Linde and P. Lammertse, "The HapticMaster, a new high-performance haptic interface," Proc., pp. 1-5, 2002.

[30] G. Zeng and A. Hemami, "An overview of robot force control," vol. 15, no. 05, pp. 473-482, 2013.

[31] J. Wahrburg, "Improving robot arm control for safe and robust haptic cooperation in orthopaedic procedures," pp. 316-322, Oct. 2007.

[32] D. E. Whitney, "Historical Perspective and State of the Art in Robot Force Control," Int. J. Rob. Res., vol. 6, no. 1, pp. 3-14, Mar. 1987.

[33] C. D. Crane and J. Duffy, Kinematic Analysis of Robot Manipulators. Cambridge University Press, 2008.

[34] Danaher, "Pacific Scientific T Series NEMA 23 High Torque Motors," Washington, D.C.

[35] R. Paul and B. Shimano, "Kinematic control equations for simple manipulators," Decis. Control Incl. 17th, 1978.

[36] S. C. Cannon and G. I. Zahalak, "The mechanical behavior of active human skeletal muscle in small oscillations," J. Biomech., vol. 15, no. 2, pp. 111-121, 1982.

[37] D. Kulić and E. A. Croft, "Real-time safety for human-robot interaction," Rob. Auton. Syst., vol. 54, no. 1, pp. 1-12, 2006.

[38] M. Lin and J. Canny, "A fast algorithm for incremental distance calculation," Robot. Autom., 1991.

[39] R. O. Duda and P. E. Hart, "Use of the Hough transformation to detect lines and curves in pictures," Commun. ACM, vol. 15, no. 1, pp. 11-15, Jan. 1972.

[40] M. M. Flood, "The Traveling Salesman Problem," Oper. Res., vol. 4, no. 1, pp. 61-75, 1956.

[41] D. Johnson and L. McGeoch, "The traveling salesman problem: A case study in local optimization," Local Search Comb. Optim., pp. 1-103, 1997.


[42] J. Clausen, "Branch and bound algorithms - principles and examples," Dep. Comput. Sci., Univ., pp. 1-30, 1999.

[43] K. Menger, "Bericht über ein mathematisches Kolloquium," 1929.

[44] J. Bentley, Unix Review, 1997.

[45] G. Gutin, A. Yeo, and A. Zverovich, "Traveling salesman should not be greedy: domination analysis of greedy-type heuristics for the TSP," Discret. Appl. Math., Jan. 2002.

[46] N. Christofides, Worst-Case Analysis of a New Heuristic for the Travelling Salesman Problem. Carnegie-Mellon Univ., Pittsburgh, PA, Management Sciences Research Group / Defense Technical Information Center, 1976.

[47] E. Xidias, "Time-optimal task scheduling for two robotic manipulators operating in a three-dimensional environment," Proc., vol. 224, no. 7, pp. 845-855, Nov. 2010.

[48] J. D. C. Little, K. G. Murty, D. W. Sweeney, and C. Karel, "An Algorithm for the Traveling Salesman Problem," Oper. Res., vol. 11, no. 6, pp. 972-989, 1963.

[49] S. Dubowsky and T. D. Blubaugh, "Planning time-optimal robotic manipulator motions and work places for point-to-point tasks," IEEE Trans. Robot. Autom., vol. 5, no. 3, pp. 377-381, 1989.

[50] D. L. Applegate, The Traveling Salesman Problem: A Computational Study. Princeton University Press, 2006.

[51] L. Feldkamp, L. Davis, and J. Kress, "Practical cone-beam algorithm," JOSA A, vol. 1, no. 6, pp. 612-619, 1984.

[52] A. H. Andersen and A. C. Kak, "Simultaneous Algebraic Reconstruction Technique (SART): A superior implementation of the ART algorithm," Ultrason. Imaging, vol. 6, no. 1, pp. 81-94, 1984.

[53] W. Chlewicki, C. Badea, and N. Pallikarakis, "FOR LIMITED NUMBER OF PROJECTIONS," pp. 3-5.

[54] K. Mueller and R. Yagel, "On the use of graphics hardware to accelerate algebraic reconstruction methods," in SPIE Medical Imaging Conference, 1999, vol. 3659, no. 2, pp. 615-625.

[55] X. Jia, Y. Lou, J. Lewis, R. Li, X. Gu, C. Men, W. Y. Song, and S. B. Jiang, "GPU-based fast low-dose cone beam CT reconstruction via total variation," J. X-ray Sci. Technol., vol. 19, no. 2, pp. 139-154, Jan. 2011.


[56] P. Burns, "Slanted-edge MTF for digital camera and scanner analysis," IS&T PICS Conf., 2000.

[57] L. A. Shepp and B. F. Logan, "The Fourier reconstruction of a head section," IEEE Trans. Nucl. Sci., vol. 21, no. 3, pp. 21-43, 1974.

[58] P. Toft, "The Radon Transform: Theory and Implementation," unpublished dissertation.

[59] J. Wicklein, H. Kunze, W. A. Kalender, and Y. Kyriakou, "Image features for misalignment correction in medical flat-detector CT," Med. Phys., vol. 39, no. 8, pp. 4918-4931, Aug. 2012.


BIOGRAPHICAL SKETCH

Timothy Zane Elmore was born in Fort Lauderdale, Florida. He graduated from the University of Florida in 2008 with a B.S. in Mechanical Engineering and a B.S. in Aerospace Engineering. He joined the Orthopaedic and Biomechanics Laboratory at the University of Florida in 2008 to begin research in robotics and medical engineering. He became a Ph.D. candidate in the field of Mechanical Engineering in 2012, and graduated with a Ph.D. in Mechanical Engineering from the University of Florida in 2013.