
Strategies for Adaptive Radiation Therapy

Permanent Link: http://ufdc.ufl.edu/UFE0024305/00001

Material Information

Title: Strategies for Adaptive Radiation Therapy: Robust Deformable Image Registration Using High Performance Computing and Its Clinical Applications
Physical Description: 1 online resource (166 p.)
Language: english
Creator: Xia, Junyi
Publisher: University of Florida
Place of Publication: Gainesville, Fla.
Publication Date: 2009

Subjects

Subjects / Keywords: dir, gpu, imaging, motion, recontouring, registration, therapy, validation
Nuclear and Radiological Engineering -- Dissertations, Academic -- UF
Genre: Nuclear Engineering Sciences thesis, Ph.D.
bibliography   ( marcgt )
theses   ( marcgt )
government publication (state, provincial, territorial, dependent)   ( marcgt )
born-digital   ( sobekcm )
Electronic Thesis or Dissertation

Notes

Abstract: Image guided radiation therapy (IGRT) requires developing advanced methods for target localization. Once target motion is identified, the patient specific treatment margin can be incorporated into the treatment planning, accurately delivering the radiation dose to the target and minimizing the dose to the normal tissues. Deformable image registration (DIR) has become an indispensable tool to analyze target motion and measure physiological change by temporal imaging or time series volumetric imaging, such as four-dimensional computed tomography (4DCT). Current DIR algorithms suffer from inverse inconsistency, where the deformation mapping is not unique after switching the order of the images. Moreover, the long computation time of current DIR implementations limits their clinical application to offline analysis. This dissertation makes several major contributions. First, an inverse consistent constraint (ICC) is proposed to constrain the uniqueness of the correspondence between image pairs. The proposed ICC has the advantages of 1) improving registration accuracy and robustness, 2) not requiring explicit computation of the inverse of the deformation field, and 3) reducing the inverse consistency error (ICE). Moreover, a variational registration model, based on maximum likelihood estimation, is proposed to accelerate the algorithm convergence and allow for inexact image pixel matching within an optimized variation for noisy image pairs. The algorithm evaluation was carried out using a simulated phantom, a four-dimensional single photon emission computed tomography myocardium phantom, and clinical 4DCT images. After applying the ICC, the ICE was reduced by up to 99% and the phantom error was reduced by up to 32%. For noisy image pairs, the likelihood based inverse consistent DIR algorithm outperformed the other algorithms; it achieved fast convergence and attained a phantom error reduction of 56% compared to the classic fast diffusion registration algorithm. Second, an auto re-contouring framework is developed to automatically propagate the planning contours from the planning image dataset to a new image dataset. It consists of DIR, surface reconstruction, and post processing. Visual estimation was applied to evaluate the re-contouring performance. The auto re-contouring framework is also applied to automatically generating the internal target volume (ITV) and its probability density function by combining the auto-contoured gross tumor volumes of the phase CTs. Third, a proof of concept study was carried out to accelerate the DIR computation using a high performance graphics processing unit (GPU). It was demonstrated that the GPU implementation of DIR was able to speed up the computation by up to 60 times, achieving near real-time DIR performance for clinical images (i.e., 1.8 seconds for image pairs of size 256 x 256 x 30 for 100 iterations) and improving the feasibility of applying deformable image registration in routine clinical procedures.
General Note: In the series University of Florida Digital Collections.
General Note: Includes vita.
Bibliography: Includes bibliographical references.
Source of Description: Description based on online resource; title from PDF title page.
Source of Description: This bibliographic record is available under the Creative Commons CC0 public domain dedication. The University of Florida Libraries, as creator of this bibliographic record, has waived all rights to it worldwide under copyright law, including all related and neighboring rights, to the extent allowed by law.
Statement of Responsibility: by Junyi Xia.
Thesis: Thesis (Ph.D.)--University of Florida, 2009.
Local: Adviser: Samant, Sanjiv Singh.
Local: Co-adviser: Chen, Yunmei.
Electronic Access: RESTRICTED TO UF STUDENTS, STAFF, FACULTY, AND ON-CAMPUS USE UNTIL 2011-08-31

Record Information

Source Institution: UFRGP
Rights Management: Applicable rights reserved.
Classification: lcc - LD1780 2009
System ID: UFE0024305:00001

Full Text

STRATEGIES FOR ADAPTIVE RADIATION THERAPY: ROBUST DEFORMABLE IMAGE REGISTRATION USING HIGH PERFORMANCE COMPUTING AND ITS CLINICAL APPLICATIONS

By

JUNYI XIA

A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY

UNIVERSITY OF FLORIDA

2009

2009 Junyi Xia

To my loving wife, Yuming, and to my parents for their unconditional love, support, and encouragement

ACKNOWLEDGMENTS

First and foremost, I would like to express my deepest appreciation and gratitude to my advisor, Dr. Sanjiv Samant, for his guidance throughout my years at UF. He provided me with a friendly and welcoming environment, gave me the freedom to select my research directions, and spent countless hours mentoring me. His support, patience, understanding, and kindness motivated me to do my best work. I would also like to thank my co-chair, Dr. Yunmei Chen, for her invaluable ideas and suggestions. Her mathematical insight kept me on track and focused. It has truly been an honor working with her. I would also like to thank my other committee members, Dr. David Gilland, Dr. Wesley Bolch, and Dr. Murali Rao, for their useful discussions and suggestions. I was very fortunate to work with my colleagues Arun Gopal, Heeteak Chung, and Bart Lynch. We spent quality time together, discussing research problems as well as life experiences. Special thanks go to Dr. Chihray Liu and Jean Peng for their clinical insight. Most importantly, I would like to thank my wife, Yuming, and my parents for their unconditional support and encouragement.

TABLE OF CONTENTS

ACKNOWLEDGMENTS

LIST OF TABLES

LIST OF FIGURES

LIST OF ABBREVIATIONS

ABSTRACT

CHAPTER

1 ROLE OF IMAGE REGISTRATION IN RADIATION THERAPY
    1.1 Target Motion
    1.2 Auto Re-contouring and Labeling

2 LITERATURE REVIEW OF DEFORMABLE IMAGE REGISTRATION
    2.1 Review of Similarity Measures
        2.1.1 Feature Based Measures
        2.1.2 Intensity Based Measures
        2.1.3 Hybrid Measures
    2.2 Review of Transformation Models
        2.2.1 Affine and Polynomial Models
        2.2.2 Smooth Basis Function Models
        2.2.3 Biomechanical Models
        2.2.4 Optical Flow Models
        2.2.5 Regularization on Dense Field Models
    2.3 Review of Optimization Strategies
    2.4 Review of Inverse Consistent Registration

3 VALIDATIONS OF DEFORMABLE IMAGE REGISTRATION
    3.1 Accuracy
        3.1.1 Physical Phantoms
        3.1.2 Digital Phantoms
        3.1.3 Validation Metrics
        3.1.4 Visual Verification
    3.2 Robustness
    3.3 Consistency

4 A GENERALIZED INVERSE CONSISTENT CONSTRAINT FOR DEFORMABLE IMAGE REGISTRATION
    4.1 The Inverse Consistent Constraint
    4.2 The Inverse Consistent Demons (I-Demons) Algorithm
        4.2.1 Demons Algorithm
        4.2.2 Bijectivity Demons (B-Demons) Algorithm
        4.2.3 I-Demons Algorithm
    4.3 The Inverse Consistent Fast Diffusion Registration (I-FDR) Algorithm
    4.4 A Robust Variational Model for Inverse Consistency
    4.5 Algorithm Evaluations
        4.5.1 Image Data for Evaluation
        4.5.2 Evaluation Criteria
    4.6 Results
        4.6.1 Simulated Phantom Images
        4.6.2 Physical Myocardium Phantom Images
        4.6.3 Clinical Images
        4.6.4 Comparison of FDR, I-FDR, and Christensen's Inverse Consistent Registration (CICR)
    4.7 Discussion and Conclusion

5 CLINICAL APPLICATION OF DEFORMABLE IMAGE REGISTRATION
    5.1 Auto Re-contouring
        5.1.1 Surface Reconstruction
        5.1.2 Post Processing
    5.2 Auto Internal Target Volume Generation

6 NEAR REAL TIME DEFORMABLE IMAGE REGISTRATION USING GRAPHICS PROCESSING UNITS
    6.1 Review of High Performance Computing in Medical Image Registration
        6.1.1 Review of Image Registration Using FPGA
        6.1.2 Review of Image Registration Using Cell Broadband Engine (CBE)
        6.1.3 Review of Image Registration Using Supercomputers or Clusters
        6.1.4 Review of Image Registration Using Graphics Processing Unit (GPU)
    6.2 Materials and Methods
    6.3 Results
    6.4 Discussion and Conclusion

APPENDIX: CODE SAMPLES OF RE-CONTOURING

LIST OF REFERENCES

BIOGRAPHICAL SKETCH

LIST OF TABLES

Table 4-1. Change in step size of the FDR-group and Demons-group algorithms for the robustness measure.
Table 4-2. The mean and standard deviation (SD) of APE, ICE, and SSE of seven DIR algorithms using the myocardium physical phantom; the image pixel dimension is 1 mm x 1 mm x 1 mm in the X, Y, and Z directions.
Table 6-1. Maximum, mean, and standard deviation of the difference in correlation coefficients between the CPU implementations and the GPU implementation over 100 iterations.
Table 6-2. Performance comparison between the CPU and GPU based implementations of DIR for clinical imaging data.
Table 6-3. Performance comparison between the CPU (2.8 GHz Intel Dual Core with 1.5 GB RAM) and GPU (NVIDIA 8800 GTX) implementations of the Demons DIR algorithm for clinical imaging data in Sharp et al.'s GPU implementation.

LIST OF FIGURES

Figure 1-1. Representative slices demonstrating 4DCT respiratory motion. Due to the diaphragm motion, the location of the lung tissue changes within the breathing cycle.
Figure 2-1. Deformable image registration algorithms consist of three components: similarity measures, transformation models, and optimization strategies.
Figure 2-2. 1D parameter space illustrating the local minima (point B and point C on the blue curve) and the global minimum (point A).
Figure 2-3. The forward transformation h(x) matches the point A in the source image S(x) to the point B in the target image T(x); the backward transformation g(x) matches the point B in the target image T(x) to the point A in the source image S(x).
Figure 2-4. Computation of the inverse deformation field. The forward transformation h(x) at point A determines the motion vector of point A, i.e., from point A to point B.
Figure 3-1. SPECT images of the dynamic physical phantom.
Figure 4-1. The simulated phantom.
Figure 4-2. The evaluation of the FDR, I-FDR, FDR-Sigma, and I-FDR-Sigma algorithms using the simulated phantom.
Figure 4-3. Change in energy with the number of iterations at a step size corresponding to the minimum APE.
Figure 4-4. The standard deviations (SD) of the FDR, I-FDR, FDR-Sigma, and I-FDR-Sigma algorithms with the change in step size using the simulated phantom.
Figure 4-5. The evaluation of the Demons, I-Demons, and B-Demons algorithms using the simulated phantom.
Figure 4-6. The standard deviations (SD) of the Demons, I-Demons, and B-Demons algorithms with the change in step size using the simulated phantom.
Figure 4-7. The evaluation of the FDR, I-FDR, FDR-Sigma, and I-FDR-Sigma algorithms using the physical myocardium phantom.
Figure 4-8. The standard deviations (SD) of the FDR, I-FDR, FDR-Sigma, and I-FDR-Sigma algorithms with the change in step size using the physical myocardium phantom.
Figure 4-9. The evaluation of the Demons, I-Demons, and B-Demons algorithms using the physical myocardium phantom.
Figure 4-10. The standard deviations (SD) of the Demons, I-Demons, and B-Demons algorithms with the change in step size using the simulated phantom.
Figure 4-11. The evaluation of the FDR, I-FDR, FDR-Sigma, and I-FDR-Sigma algorithms using the clinical 4DCT lung images.
Figure 4-12. The evaluation of the Demons, I-Demons, and B-Demons algorithms using the clinical 4DCT lung images.
Figure 4-13. The evaluation of the Demons-group and FDR-group algorithms using 4DCT clinical lung images for 200 iterations.
Figure 4-14. Visual comparison of DIRs using the 4DCT dataset of a lung cancer patient.
Figure 4-15. Distribution of the inverse consistency error (ICE) in gray scale. Higher intensity regions indicate larger ICE.
Figure 4-16. Auto re-contouring of the gross tumor volume (GTV).
Figure 4-17. Comparison of the FDR, I-FDR, and CICR algorithms using the physical myocardium phantom.
Figure 4-18. Comparison of the FDR, I-FDR, and CICR algorithms using 4DCT images from a lung cancer patient.
Figure 4-19. Comparison of two DIR algorithms using SSE and visual estimation.
Figure 4-20. Quiver plot of the deformation field computed in Figure 4-19 after 600 iterations.
Figure 4-21. The evaluation of the FDR, I-FDR, FDR-Sigma, and I-FDR-Sigma algorithms using the high noise myocardium phantom.
Figure 5-1. Flowchart of the auto re-contouring. The 3D surface mesh is reconstructed from the 2D layers of the manually contoured structures.
Figure 5-2. Mesh deformation. The white cubes represent the nodes (A, B, and C) of a triangle from the planning surface mesh.
Figure 5-3. Surface reconstruction and surface mesh deformation.
Figure 5-4. A 1D example of interpolation based re-contouring and surface reconstruction based re-contouring. The contour in the middle plane (plane 2) needs to be generated based on the known contours in plane 1 and plane 3.
Figure 5-5. Surface reconstruction from 2D planar contours.
Figure 5-6. Auto re-contouring for a lung image using DIR. Physician-drawn contours (drawn on a separate source CT) and computer generated contours, using auto re-contouring, have been overlaid on a target CT.
Figure 5-7. ITV generation. The color map represents the probability that a GTV voxel occupies a specific pixel location for all phases in the respiratory cycle. The ITV is the union of all nonzero probability voxels.
Figure 6-1. Memory architectures of traditional parallel computers.
Figure 6-2. Deformable image registration using lung imaging. The 2D image slices presented here are based on volumetric deformable image registration using the demons method.
Figure 6-3. Performance comparisons between the CPU (single threading and multithreading) and the GPU based implementations of the Demons DIR algorithm as a function of image size.
Figure 6-4. Computational efficiency of the GPU programming for DIR.

LIST OF ABBREVIATIONS

AP      Anterior/Posterior
API     Application Programming Interface
ART     Adaptive Radiation Therapy
CBCT    Cone Beam Computed Tomography
CBE     Cell Broadband Engine
CICR    Christensen's Inverse Consistent Registration
CT      Computed Tomography
CTV     Clinical Target Volume
DIR     Deformable Image Registration
EPID    Electronic Portal Imaging Device
FDR     Fast Diffusion Registration
FEM     Finite Element Method
FPGA    Field Programmable Gate Array
GPU     Graphics Processing Unit
GTV     Gross Tumor Volume
HPC     High Performance Computing
ICC     Inverse Consistent Constraint
ICDIR   Inverse Consistent Deformable Image Registration
ICP     Iterative Closest Point
ICRU    International Commission on Radiation Units and Measurements
I-FDR   Inverse Consistent Fast Diffusion Registration
IGRT    Image Guided Radiation Therapy
IMRT    Intensity Modulated Radiation Therapy
ITV     Internal Target Volume
MRI     Magnetic Resonance Imaging
MRSI    Magnetic Resonance Spectroscopic Imaging
PCA     Principal Component Analysis
PDF     Probability Density Function
PPE     Power Processor Element
PSNR    Peak Signal-to-Noise Ratio
PTV     Planning Target Volume
RL      Right/Left
SI      Superior/Inferior
SPE     Synergistic Processor Element
SPECT   Single Photon Emission Computed Tomography
SD      Standard Deviation
SSD     Sum of Squared Distance
SSE     Sum of Squared Error
TPI     Time per iteration
TMPI    Time per megapixel per iteration

Abstract of Dissertation Presented to the Graduate School of the University of Florida in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy

STRATEGIES FOR ADAPTIVE RADIATION THERAPY: ROBUST DEFORMABLE IMAGE REGISTRATION USING HIGH PERFORMANCE COMPUTING AND ITS CLINICAL APPLICATIONS

By

Junyi Xia

August 2009

Chair: Sanjiv Samant
Cochair: Yunmei Chen
Major: Nuclear Engineering Sciences

Image guided radiation therapy (IGRT) requires developing advanced methods for target localization. Once target motion is identified, the patient specific treatment margin can be incorporated into the treatment planning, accurately delivering the radiation dose to the target and minimizing the dose to the normal tissues. Deformable image registration (DIR) has become an indispensable tool to analyze target motion and measure physiological change by temporal imaging or time series volumetric imaging, such as four-dimensional computed tomography (4DCT). Current DIR algorithms suffer from inverse inconsistency, where the deformation mapping is not unique after switching the order of the images. Moreover, the long computation time of current DIR implementations limits their clinical application to offline analysis.

This dissertation makes several major contributions. First, an inverse consistent constraint (ICC) is proposed to constrain the uniqueness of the correspondence between image pairs. The proposed ICC has the advantages of 1) improving registration accuracy and robustness, 2) not requiring explicit computation of the inverse of the deformation field, and 3) reducing the inverse consistency error (ICE). Moreover, a variational registration model, based on maximum likelihood estimation, is proposed to accelerate the algorithm convergence and allow for inexact image pixel matching within an optimized variation for noisy image pairs. The algorithm evaluation was carried out using a simulated phantom, a four-dimensional single photon emission computed tomography myocardium phantom, and clinical 4DCT images. After applying the ICC, the ICE was reduced by up to 99% and the phantom error was reduced by up to 32%. For noisy image pairs, the likelihood based inverse consistent DIR algorithm outperformed the other algorithms; it achieved fast convergence and attained a phantom error reduction of 56% compared to the classic fast diffusion registration algorithm.

Second, an auto re-contouring framework is developed to automatically propagate the planning contours from the planning image dataset to a new image dataset. It consists of DIR, surface reconstruction, and post processing. Visual estimation was applied to evaluate the re-contouring performance. The auto re-contouring framework is also applied to automatically generating the internal target volume (ITV) and its probability density function by combining the auto-contoured gross tumor volumes of the phase CTs.

Third, a proof of concept study was carried out to accelerate the DIR computation using a high performance graphics processing unit (GPU). It was demonstrated that the GPU implementation of DIR was able to speed up the computation by up to 60 times, achieving near real-time DIR performance for clinical images (i.e., 1.8 seconds for image pairs of size 256 x 256 x 30 for 100 iterations) and improving the feasibility of applying deformable image registration in routine clinical procedures.

CHAPTER 1
ROLE OF IMAGE REGISTRATION IN RADIATION THERAPY

Cancer is a group of diseases characterized by the growth and spread of tumors. In the United States and Canada, more than 1,200,000 people are diagnosed with cancer annually. More than one in three people will develop cancer during their lifetime, and about one in four will die of cancer. Among the individuals who develop cancer, about one half will be treated with radiation therapy, either alone or in combination with other cancer treatments. Radiation therapy is a clinical process that uses ionizing radiation for the treatment of malignant tumors (cancer). The goal of radiation therapy is to deliver the radiation dose accurately to a well defined target volume while sparing the surrounding normal tissues from radiation damage. Radiation therapy can be classified by the location of the radiation source: 1) external radiation therapy, in which the radiation is delivered from sources outside the body, such as a linear accelerator; and 2) brachytherapy, in which the radiation is delivered from radioactive sources inside the body.

Radiation therapy is a complex procedure and involves many processes 1, including 1) diagnosis and clinical evaluation; 2) therapeutic decision; 3) imaging for treatment planning; 4) target volume localization; 5) fabrication of treatment aids; 6) treatment simulation; 7) treatment planning; 8) treatment delivery; 9) patient evaluation; and 10) patient follow-up. Not all patients will go through each step; however, it is very important to execute each step with the greatest accuracy possible. Errors or uncertainties occurring in one step will affect the accuracy of the subsequent steps and finally impact the patient outcome. Such closely related processes can also be interpreted as a chain. Among them, one crucial process is to determine the location and the extent of the disease relative to the adjacent critical normal tissues (target volume localization). The target volume usually includes the gross tumor volume (GTV), the clinical target volume (CTV), and the planning target volume (PTV). In ICRU Reports 52 and 60, the GTV is defined as the gross demonstrable extent and location of the tumor. Delineation of the GTV is possible if the tumor is visible, palpable, or demonstrable through imaging. The clinical target volume consists of the GTV and the microscopic regions that cannot be detected by imaging techniques. Delineation of the CTV assumes that there are no tumor cells outside of the volume. The internal target volume extends the CTV with compensation for internal physiological movements and variation in the size, shape, and position of the CTV. The planning target volume (PTV) extends the CTV and takes into account patient motion and patient positioning errors.

The future of radiation therapy will be focused on image guided radiation therapy (IGRT) and adaptive radiation therapy (ART). IGRT requires the development of advanced hardware and software to identify organ or tumor distortion and motion. Once the tumor deformation is identified, the patient specific treatment margin can be incorporated into the treatment planning, in place of the currently applied general margin. ART requires the inclusion of imaging in the optimization of treatment planning on a dynamic basis. Target volume localization is, therefore, a very important step for the success of IGRT or ART. Target volume localization is a dynamic process because the volume might change during the course of radiation therapy; for example, a maximum displacement of 7.5 mm was measured for the prostate during treatment 2, and average motion of 9.2 mm was observed in lesions located near the heart 3. Langen and Jones 4 classified target motion into three categories: 1) patient position-related organ motion; 2) inter-fractional target motion; and 3) intra-fractional target motion. Patient position-related target motion is due to changes in patient positioning; for example, when the patient position changes from supine to lateral decubitus, the relative location between the internal organs and the bony anatomy might change. Inter-fractional target motion occurs between treatment fractions on a day-to-day basis, and it may be due to patient weight gain or loss.

Intra-fractional target motion occurs within a treatment, while the radiation is being delivered to the patient. Respiratory and cardiac motion are the main sources of intra-fractional target motion. Heinzerling et al. 5 reported a mean overall tumor motion of 13.6 mm for lower lobe lung and liver tumors. Furthermore, the target motion pattern can change during the course of radiation therapy; for example, a lung cancer patient might be more nervous during the first few fractions than the later ones, and the change in breathing pattern can affect the magnitude of the target motion. The widely applied intensity-modulated radiation therapy (IMRT) techniques can deliver radiation to target volumes of complex shape while minimizing the dose to the nearby normal tissues. It is therefore especially important to understand target motion for IMRT, because a high dose gradient is located near the target area, and even a small variation of the target volume or the target location might significantly change the planned dose distribution. Over the past few years, target motion has been intensively studied. In the following, we give a brief review of target localization techniques using electronic portal imaging devices (EPIDs), optical devices, an electromagnetic treatment positioning device, and four-dimensional computed tomography (4DCT).

EPIDs were introduced to monitor target motion by locating implanted radio-opaque markers. EPIDs have been under development by researchers and linear accelerator manufacturers for almost 20 years, from the initial television camera-based systems to the current flat panel systems. EPIDs are widely used in patient positioning. Herman et al. 6 investigated the patient setup accuracy of prostate cancer patients by aligning orthogonal portal images with simulated digitally reconstructed radiographs; the mean misalignment of the markers was 5.6 mm after initial patient setup for 20 patients. Similar studies were carried out by other researchers 7-10. However, implanting markers is an invasive surgical procedure and is not always feasible for all cancer patients; for example, marker implantation is complex for certain lung cancer patients.

Moreover, portal imaging provides only two-dimensional (2D) geometric information, while target volume change happens in three-dimensional (3D) space. It is difficult to predict 3D tumor motion from 2D projections. Finally, the number of markers used in the previous studies was limited, usually 3 to 5 gold markers, which is not enough to represent the full details of the target motion.

Optical tracking is another technique to measure target motion 11. In optical tracking, infrared markers are first attached to the patient. The infrared markers are either active or passive. Active markers are infrared light-emitting diodes. Passive markers are generally spheres or disks coated with a reflective material to reflect infrared light. Infrared detectors, such as charge-coupled device (CCD) cameras, detect the light reflected from the markers and determine the markers' 3D locations from the collected 2D images. An optical tracking system is a noninvasive technique; it has the advantages of high spatial resolution and real-time patient tracking. However, current optical systems are limited to tracking external markers. In order to predict the internal target motion, the correlation between internal motion and external marker motion has to be modeled. Gierga et al. 12 investigated the motion correlation between external markers and internal markers and concluded that relatively large discrepancies occurred between the external marker motion and the internal marker motion, even though they are generally correlated in the respiratory motion case. Furthermore, it is difficult for an optical tracking system to reveal the tumor shape evolution during the course of radiation therapy.

An electromagnetic treatment target positioning device, called the Calypso 4D localization system (Calypso Medical Technologies, Seattle, WA), was undergoing clinical evaluation in several institutions 13. The Calypso system consists of beacon transponders, an electromagnetic array, tracking stations, and an optical system.

A beacon transponder contains a miniature electronic circuit and has a size of 8.7 mm in length and 1.85 mm in diameter. The transponder has no internal energy source. When excited by the electromagnetic field generated by the electromagnetic array, the transponder resonates and emits a signal at a unique frequency, which is acquired by the array and processed by the software to determine the location of the transponder. The optical system is used to continuously track the position of the array and determine the transponder location with respect to the linear accelerator's isocenter. Even though the Calypso system provides real-time 3D target information, it has the following disadvantages 14: 1) it is an invasive technique, and the beacon transponders need to be surgically implanted; and 2) the transponders migrate within the tissue, and their stability is a concern. Studies found that the mean distance variation between transponders was 1.3 mm at 4 days after implant and 0.8 mm at 14 days after implant.

Recently, 4DCT has been introduced into clinical applications. 4DCT images are reconstructed from time-resolved 3D images and are capable of presenting both spatial and temporal patient anatomical information. For example, in 4DCT lung imaging, all CT projection images are first acquired while the patient breathes freely on the CT table. Then the image projections corresponding to specific breathing phases are binned together using the information from breathing control devices (such as a spirometer or active breathing control (Varian Medical Systems, Palo Alto, California)). Figure 1-1 shows a 4DCT image set consisting of a sequence of ten equally divided phases of three-dimensional CT images over the patient's breathing cycle. Phase 50 corresponds to maximum expiration; phase 0 corresponds to maximum inspiration. The duration of expiration was acquired between phase 0 and phase 50, and the duration of inspiration between phase 50 and phase 0. A retrospective binning step of this kind is sketched below.
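
To make the binning step concrete, the following is a minimal sketch of retrospective phase binning. It assumes each CT projection carries an acquisition time stamp and that the times of maximum inspiration have already been detected in the breathing trace; the function and variable names are illustrative rather than taken from any scanner software.

    # Minimal sketch of retrospective 4DCT phase binning (illustrative only).
    import numpy as np

    def phase_bin(projection_times, peak_times, n_phases=10):
        """Assign each projection a phase bin (0, 1, ..., n_phases - 1).

        projection_times: acquisition time of each CT projection (s)
        peak_times: times of maximum inspiration from the breathing trace (s)
        """
        peak_times = np.asarray(peak_times)
        bins = []
        for t in projection_times:
            i = np.searchsorted(peak_times, t) - 1   # breathing cycle index
            if i < 0 or i + 1 >= len(peak_times):
                bins.append(None)                    # no complete cycle: discard
                continue
            start, end = peak_times[i], peak_times[i + 1]
            phase = (t - start) / (end - start)      # fractional phase in [0, 1)
            bins.append(int(phase * n_phases) % n_phases)
        return bins

    # Example with a 4 s breathing period; bin 0 is maximum inspiration and
    # bin 5 (the 50% phase) falls near maximum expiration, as in Figure 1-1.
    times = np.arange(0.0, 12.0, 0.5)
    peaks = [0.0, 4.0, 8.0, 12.0]
    print(phase_bin(times, peaks))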

With the acquired time-series 3D images, image registration can be used to study and measure the target motion and patient anatomical change.

Besides 4DCT, image registration can also be used to measure the target motion from cone beam CT (CBCT) images. CBCT makes it possible to acquire high resolution volumetric 3D images to determine patient setup errors prior to irradiation. CBCT utilizes large-area collimation of x-rays and involves scattered radiation, which results in a loss of image quality and increases the imaging dose to patients compared with the narrow beam collimation of conventional computed tomography (CT). CBCT is widely used to facilitate high precision target positioning 15-19.

By definition, image registration computes the transformation between two image volumes that makes them similar. It can be classified into rigid image registration and deformable image registration (DIR). In rigid image registration, the transformation is limited to translation, rotation, scaling, and shearing. Rigid image registration ignores the target deformation that occurs naturally in various organs, e.g., the lung and the prostate, and is only useful when the organ deformation is small. In contrast, DIR computes deformation fields that can be used to measure the target deformation. A detailed review of DIR is given in Chapter 2. In the following, we give a brief review of the role of DIR in the frame of radiation therapy.

1.1 Target Motion

Fraction-to-fraction variations of patient anatomy and patient setup lead to uncertainty in the dose delivery, such as under-dosing the tumor and over-dosing the healthy tissues. The dose uncertainties might have more significant effects in IMRT, where regions of high dose and/or high dose gradient exist. One of the clinical applications of DIR in target motion is to measure respiratory motion.

Coselmon et al. 20 proposed applying a DIR consisting of mutual information and thin-plate splines to measure the lung motion of 11 patients; 30 control points were manually selected near the interior of the lung and the border of the lung. The results indicated that landmark motion was 0.4 ± 2.7 mm in the RL direction, 8.1 ± 6.6 mm in the AP direction, and 3.2 ± 8.6 mm in the IS direction. Liu et al. 21 investigated the tumor motion of 152 lung cancer patients using the accelerated demons algorithm 22; tumor motion of 2.6 mm to 14.7 mm was found in the SI direction, 1.4 mm to 5.8 mm in the LR direction, and 1.9 mm to 2.7 mm in the AP direction. Christensen et al. 23 modeled the lung tissue as an elastic material and applied an inverse consistent registration algorithm to measure the correlation between the motion vectors and the readings of a spirometer; the results indicated a strong correlation between the average expansion/compression of the lung measured by image registration and the air flow rate measured by spirometry.

Measuring liver tumor motion is another application of DIR. Research shows that liver tumor motion can reach 19 mm in tidal breathing 24. A finite element model based multi-organ DIR method, MORFEUS, was developed by Brock et al. 25. It describes the surface interfaces between the organs, and material properties are assigned to each organ to allow accurate deformation of the internal structures. For five liver cancer patients 26, up to 1 cm of motion was found between the diaphragm and the tumor center of mass for registration of the exhale and inhale CTs, and up to 6.8 mm of motion was observed between the CT and the cone-beam CT. Using gated magnetic resonance (MR) imaging, Rohlfing et al. 27 analyzed liver motion using an intensity based approach including rigid registration for the global motion and B-spline free-form DIR for the local deformation. 3D MR images of four volunteers were acquired between end-of-inspiration and end-of-expiration. An average of 10 mm of deformation over the entire liver was observed in the four volunteers from the deformation field.

1.2 Auto Re-contouring and Labeling

DIR computes the correspondence between the pixels in two image volumes. This correspondence enables automatic re-contouring and labeling. There are generally two frameworks for re-contouring 28: 1) the registration based framework and 2) the segmentation based framework. The registration based framework can be grouped into four steps (a sketch of the propagation step follows this list):

1. Deformation field computation. The deformation is computed between the planning volume (the one that has contoured structures) and the target volume (the one that needs to be contoured).
2. 3D surface mesh generation from 2D contours. Typically the outlined structures are in 2D, and it is desirable to convert the series of 2D geometries into a 3D surface mesh in order to apply the 3D deformation field.
3. Structure propagation. Each point on the surface mesh is moved to its new location by the deformation field.
4. Post processing. The deformed 3D structures are re-sliced into new 2D structures for visualization.
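
A minimal sketch of step 3 follows, assuming the deformation field from step 1 is available on the voxel grid; mesh construction (step 2) and re-slicing (step 4) are omitted, and the function name is illustrative.

    # Move surface-mesh vertices along a dense deformation field (step 3).
    import numpy as np
    from scipy.ndimage import map_coordinates

    def propagate_vertices(vertices, deformation):
        """vertices: (M, 3) array of (z, y, x) positions in voxel units.
        deformation: (3, Z, Y, X) displacement field in voxel units."""
        coords = vertices.T.astype(float)            # shape (3, M)
        moved = vertices.astype(float).copy()
        for axis in range(3):
            # Trilinear interpolation (order=1) of the displacement at each
            # vertex, since vertices rarely sit exactly on grid points.
            moved[:, axis] += map_coordinates(deformation[axis], coords, order=1)
        return moved

    # Example with a synthetic field that shifts everything 2 voxels in x:
    field = np.zeros((3, 20, 64, 64))
    field[2] = 2.0
    verts = np.array([[10.0, 32.0, 32.0], [11.0, 30.0, 33.5]])
    print(propagate_vertices(verts, field))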

The segmentation based framework applies iterative algorithms that seek distinct features in the images, such as high gradient areas, as the boundary. Compared with the registration based framework, the segmentation based framework is not affected by the accuracy of the physician-drawn contours; however, it cannot be used to re-contour structures whose boundaries are not obvious, such as the CTV or PTV. In contrast, the registration based framework is able to propagate all the physician-drawn contours. It has to be noted that the quality of the propagated contours using DIR based approaches relies not only on the accuracy of the DIR algorithm but also on the accuracy of the original contours. Collins et al. 29 presented a brain auto re-contouring method based on a brain atlas and a nonlinear registration procedure using MRI. Lu et al. 28 proposed a method for automatic contour propagation using 4DCT images. This method combines DIR and surface reconstruction: surface meshes are constructed from the manually contoured structures and, after being deformed by the deformation field, the deformed surface meshes are re-sliced into 2D contours. The details of auto re-contouring are discussed in Chapter 5.

As DIR gains focus in radiation therapy, it faces challenges. The challenges of DIR are:

1. Long computation time. Because a large number of variables must be solved for, DIR is computationally intensive and requires long computation times. Typical CPU times reported in the published literature for DIR of CT data sets include: 5 minutes for a 128 x 128 x 128 volume using demons 30; 3 minutes for a 256 x 256 x 61 volume on a Pentium III 933 MHz PC using calculus of variations 31; 37 seconds for a 1024 x 1024 (2D) image on an SGI OCTANE 175 MHz using calculus of variations 32; and 6 minutes for a 256 x 256 x 61 volume on a Pentium 2.8 GHz PC using accelerated demons 22. The long computation time limits its wide application in the clinic. To the best of our knowledge, no current commercial treatment planning system is able to perform deformable registration in near real time.
2. Validation. Online applications of DIR in radiotherapy demand high accuracy. Due to the complexity of DIR, a gold standard for DIR is difficult to obtain. Studies have been carried out using a simple mathematical deformation for algorithm validation 31; however, a simple mathematical deformation is not sufficient to model the tumor motion or deformation that actually occurs in the patient.
3. Lack of physiological constraints. DIR assumes a one-to-one mapping, which indicates that the pixel correspondence between two image volumes should be invariant to the registration order; for example, the correspondence between the pixels in image A and image B should be independent of whether image A is deformed to image B or image B to image A. This relation is also called inverse consistency. The DIR algorithms currently used in radiation therapy do not always preserve inverse consistency, because they lack constraints in the matching criteria to uniquely describe the one-to-one correspondence between the images 33 (see the sketch after this list).
4. Robustness. The robustness of a registration algorithm requires that the registration results do not change significantly when parameters vary; in other words, the registration result should be insensitive to parameter selection.
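
The inverse consistency notion in challenge 3 can be quantified by composing the forward field (image A to image B) with the backward field (B to A) and measuring the residual displacement, which vanishes for a perfectly inverse consistent pair. The 2D sketch below is illustrative only; the ICE metric used in this work is defined formally in Chapter 4.

    # Per-pixel inverse consistency residual of a forward/backward field pair.
    import numpy as np
    from scipy.ndimage import map_coordinates

    def inverse_consistency_error(forward, backward):
        """forward, backward: (2, Y, X) displacement fields in pixels."""
        _, ny, nx = forward.shape
        yy, xx = np.mgrid[0:ny, 0:nx].astype(float)
        # Where each pixel lands under the forward map.
        coords = np.vstack([(yy + forward[0]).ravel(),
                            (xx + forward[1]).ravel()])
        # Backward displacement sampled at the mapped locations.
        by = map_coordinates(backward[0], coords, order=1,
                             mode='nearest').reshape(ny, nx)
        bx = map_coordinates(backward[1], coords, order=1,
                             mode='nearest').reshape(ny, nx)
        # Residual of the composition g(h(x)) relative to the identity map.
        return np.hypot(forward[0] + by, forward[1] + bx)

    # Example: a +3 pixel shift whose "inverse" is only -2 pixels leaves a
    # 1 pixel inconsistency everywhere.
    f = np.zeros((2, 32, 32)); f[1] = 3.0
    b = np.zeros((2, 32, 32)); b[1] = -2.0
    print(inverse_consistency_error(f, b).mean())   # 1.0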

The rest of this dissertation is organized as follows: Chapter 2 provides a literature review of DIR, including inverse consistent registration; Chapter 3 provides a survey of the existing validation techniques for DIR; Chapter 4 discusses a novel inverse consistency constraint for constraining DIR; Chapter 5 presents the clinical application of auto re-contouring using DIR and automatic internal target volume generation; and in Chapter 6, a proof-of-concept study is developed to achieve near real-time DIR using the emerging technology of graphics processing units.

Figure 1-1. Representative slices demonstrating 4DCT respiratory motion. Due to the diaphragm motion, the location of the lung tissue changes within the breathing cycle.

CHAPTER 2
LITERATURE REVIEW OF DEFORMABLE IMAGE REGISTRATION

Image registration finds the transformation that aligns two or more image volumes. It is a fundamental problem in medical imaging, and it can be applied within or between different imaging modalities, such as computed tomography (CT), magnetic resonance imaging (MRI), positron emission tomography (PET), and ultrasound (US). The goal of image registration in radiation therapy is to reduce the treatment margins, allow safe dose escalation, and improve the outcome of patient treatment 34.

Image registration methods have been extensively reviewed 35-43. They can be categorized by the type of transformation involved, i.e., rigid registration versus deformable registration (also called non-rigid). In rigid registration, only translation and rotation are considered; for instance, 3D rigid registration has a total of six parameters: three rotations and three translations. Automatic rigid inter-modality and intra-modality image registration has been developed for years and is available in some commercial software. Rigid registration can be applied to cases where the relative shape change of the target (i.e., tumor or organ) is small. However, in reality, most human organ motion cannot always be modeled by a rigid transformation 44. To describe more complex changes of the target motion, DIR is introduced to compute a transformation with more degrees of freedom. Reviews of DIR algorithms can be found in 40, 43, 45, 46.

Generally, DIR algorithms consist of three components (Figure 2-1): similarity measures, transformation models, and optimization strategies. Similarity measures compute the similarity between two images. Transformation models specify how to deform one image to match another. Optimization strategies seek the parameters of the transformation model that maximize or minimize the matching criterion.
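
As a toy illustration of how the three components interact, the sketch below uses SSD as the similarity measure, a dense 2D displacement field as the transformation model, and gradient descent with Gaussian smoothing of the field (a diffusion-like regularizer) as the optimization strategy. It is didactic only and is not an implementation of any specific algorithm reviewed in this chapter.

    # Toy 2D registration loop: similarity + transformation + optimization.
    import numpy as np
    from scipy.ndimage import gaussian_filter, map_coordinates

    def register(fixed, moving, iters=300, step=2.0, sigma=1.5):
        ny, nx = fixed.shape
        yy, xx = np.mgrid[0:ny, 0:nx].astype(float)
        u = np.zeros((2, ny, nx))                   # transformation model
        for _ in range(iters):
            coords = np.vstack([(yy + u[0]).ravel(), (xx + u[1]).ravel()])
            warped = map_coordinates(moving, coords, order=1,
                                     mode='nearest').reshape(ny, nx)
            diff = warped - fixed                   # SSD residual
            gy, gx = np.gradient(warped)
            u[0] -= step * diff * gy                # descent on the SSD energy
            u[1] -= step * diff * gx
            u[0] = gaussian_filter(u[0], sigma)     # regularize the field
            u[1] = gaussian_filter(u[1], sigma)
        return u

    # Example: recover the small shift between two smooth blobs.
    yy, xx = np.mgrid[0:64, 0:64]
    fixed = np.exp(-((yy - 32.0)**2 + (xx - 32.0)**2) / 50.0)
    moving = np.exp(-((yy - 34.0)**2 + (xx - 33.0)**2) / 50.0)
    u = register(fixed, moving)
    print(u[:, 32, 32])   # drifts toward (+2, +1) as the blobs align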

In the following, we first review each component of DIR, and then we review inverse consistent registration, which reduces the asymmetry of the common unidirectional DIR algorithms.

2.1 Review of Similarity Measures

In DIR, similarity measures can be categorized into three groups: 1) Feature (geometric) based measures, including points, curves, and surfaces. Incorporating those features ensures that the transformation is biologically valid and is consistent with the underlying anatomical and physiological constraints. 2) Intensity based measures. Image similarity is defined by mathematical or statistical measurements of the intensity distributions of the two images. Intensity based measures are usually fully automatic and need less human intervention. 3) Hybrid measures, which combine both the feature based measures and the intensity based measures.

2.1.1 Feature Based Measures

In feature based measures, points (or landmarks) 47-51, curves, and surfaces are used for registration. Both anatomical features (such as bifurcations) and fiducial markers can be used as points or landmarks. In general, there are two steps in point based registration: 1) detecting the landmarks and 2) establishing the correspondence of the landmarks. Landmarks can be either manually selected by experts or automatically detected using image processing algorithms 52. The measurement of the landmark localization error is essential for accurate registration 50. Many studies have been devoted to establishing robust correspondence between the landmarks; for example, Besl and McKay 53 introduced the iterative closest point (ICP) algorithm to efficiently locate the corresponding points between two images without a shape representation. The ICP algorithm iteratively searches for the closest point between two point sets and estimates the motion using the corresponding point pairs; a compact sketch is given below. ICP is very popular for the registration of two point sets; however, it is susceptible to gross outliers.
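
The sketch below alternates between closest-point correspondence and a least-squares rigid fit, here in 2D using the standard SVD (Procrustes) solution for the rotation. It is bare-bones by design; practical variants add the outlier handling discussed next.

    # Bare-bones 2D ICP: closest-point matching + least-squares rigid fit.
    import numpy as np
    from scipy.spatial import cKDTree

    def icp(source, target, iters=20):
        """source: (N, 2) points to align; target: (M, 2) reference points."""
        src = source.copy()
        tree = cKDTree(target)
        for _ in range(iters):
            _, idx = tree.query(src)                # 1) closest-point matches
            matched = target[idx]
            sc, tc = src.mean(axis=0), matched.mean(axis=0)
            H = (src - sc).T @ (matched - tc)       # 2) rigid fit via SVD
            U, _, Vt = np.linalg.svd(H)
            R = Vt.T @ U.T
            if np.linalg.det(R) < 0:                # guard against reflection
                Vt[-1] *= -1
                R = Vt.T @ U.T
            t = tc - R @ sc
            src = src @ R.T + t                     # apply the motion estimate
        return src

    # Example: undo a 20 degree rotation plus a translation.
    rng = np.random.default_rng(0)
    target = rng.uniform(0.0, 10.0, (200, 2))
    a = np.deg2rad(20.0)
    R = np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
    source = (target - 5.0) @ R.T + 5.0 + np.array([1.0, -0.5])
    print(np.abs(icp(source, target) - target).mean())  # shrinks as sets align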

Many extensions of the ICP method have been proposed to improve the robustness of the outlier rejection, such as random sampling and the least median of squares estimator 54, a hybrid ICP algorithm 55, a Gaussian-weighted correspondence matrix 56, and the Hausdorff distance 57. Correspondence of the landmarks can also be determined by the mass and stiffness matrices based on Gaussians of the distances 58. Cross and Hancock 59 built a graph representation to constrain the search for correspondences using Delaunay triangulation and an expectation-maximization optimization algorithm.

Besides points or landmarks, curve features (such as ridge or crest lines) extracted from the images can be used for deformable registration. Crest lines are defined as the lines joining points of zero maximal principal curvature. In brain imaging, Subsol et al. 60 first represented the gyri and sulci of different volunteers with crest lines, then registered them to construct a brain atlas. Curve features have also been used in multi-modality registration, where the anatomies reveal similar geometric features in both imaging modalities. Maintz 61 registered CT and MRI images of the human head using ridge extraction. Other image registrations using ridge features 62-64 have also been proposed.

2.1.2 Intensity Based Measures

Compared with the feature based measures, intensity based measures do not require feature extraction; instead, they utilize mathematical or statistical criteria to match the images. Based on the types of image modalities, intensity based similarity measures can be grouped into two categories: 1) measures for mono-modality and 2) measures for multi-modality.

2.1.2.1 Measures for mono-modality

The assumption of mono-modality measures is that the pixel intensity of the images does not change during the deformation, or that the intensity change is linear. An example of the former is the sum of squared distances (SSD), also called the sum of squared errors (SSE); an example of the latter is the correlation coefficient (CC). SSD and CC are written in Equations (2-1) and (2-2), respectively.
$$\mathrm{SSD} = \frac{1}{N}\sum_{i=1}^{N}\left(I_1(x_i) - I_2(x_i)\right)^2 \qquad (2.1)$$

$$CC = \frac{\sum_{i=1}^{N}\left(I_1(x_i)-\bar{I}_1\right)\left(I_2(x_i)-\bar{I}_2\right)}{\sqrt{\sum_{i=1}^{N}\left(I_1(x_i)-\bar{I}_1\right)^2\,\sum_{i=1}^{N}\left(I_2(x_i)-\bar{I}_2\right)^2}} \qquad (2.2)$$

where $\bar{I}_1$ and $\bar{I}_2$ are the mean pixel intensities of images I1 and I2, and both images have N pixels. The first application of SSD was reported by Horn and Schunck [65] for solving an energy function incorporating optical flow and linear differentiable regularization. Lu et al. [31] and Fischer et al. [32] proposed using SSD as a similarity measure in their energy functions and applied different numerical schemes to obtain the deformation field. In Christensen and Johnson's work [66], SSD was utilized to measure the deformation in brain MRI images.

2.1.2.2 Measures for multi-modality

For multi-modality images, directly comparing pixel intensity is of limited use because the same object may have different pixel intensities in different modalities. For example, bony anatomy has high pixel intensities in CT images but low pixel intensities in MRI images. One approach to measuring similarity for multi-modal images is to find a function f such that, after applying it, the intensities of I1 and I2 are similar, that is, $I_1 \approx f(I_2)$. An example of such measures is the correlation ratio $\eta(I_1|I_2)$ [67]:

$$\eta(I_1|I_2) = 1 - \frac{\mathrm{Var}\left[I_1 - E(I_1|I_2)\right]}{\mathrm{Var}(I_1)} \qquad (2.3)$$

where $\mathrm{Var}(I_1)$ is the variance of I1, $E(I_1|I_2)$ is the conditional expectation, and $\mathrm{Var}[I_1 - E(I_1|I_2)]$ measures the degree to which I1 is functionally independent of I2. η lies in the range between 0 and 1: η = 0 indicates that I1 and I2 are functionally independent, and η = 1 indicates that they are functionally dependent. Two images are considered more functionally dependent as their similarity increases.
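As a concrete illustration of the mono-modality measures above, the following sketch computes SSD (Equation (2.1)) and CC (Equation (2.2)) with NumPy. The synthetic image pair is hypothetical and stands in for any same-modality image pair.

```python
import numpy as np

def ssd(i1: np.ndarray, i2: np.ndarray) -> float:
    """Sum of squared distance, Equation (2.1): mean squared intensity difference."""
    return float(np.mean((i1.astype(float) - i2.astype(float)) ** 2))

def cc(i1: np.ndarray, i2: np.ndarray) -> float:
    """Correlation coefficient, Equation (2.2): intensity covariance normalized
    by the product of the intensity standard deviations."""
    d1 = i1.astype(float) - i1.mean()
    d2 = i2.astype(float) - i2.mean()
    return float((d1 * d2).sum() / np.sqrt((d1 ** 2).sum() * (d2 ** 2).sum()))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    target = rng.random((64, 64))
    source = target + 0.05 * rng.standard_normal((64, 64))  # noisy copy of target
    print(f"SSD = {ssd(source, target):.5f}, CC = {cc(source, target):.3f}")
```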
Another approach to similarity measurement is to analyze the probabilistic relationship between images, such as mutual information (MI). Basically, MI indicates how well one image can represent another. MI is defined as

$$MI(I_1, I_2) = H(I_1) + H(I_2) - H(I_1, I_2) \qquad (2.4)$$

where $H(I_1)$ and $H(I_2)$ are the Shannon entropies of images I1 and I2, and $H(I_1, I_2)$ is their joint entropy. Applications of mutual information to multi-modality DIR have been reported in cardiology using 3D ultrasound [68], in the brain [69], and in the liver [70].
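A minimal sketch of Equation (2.4) follows, assuming the entropies are estimated from a joint intensity histogram (a common discrete approximation, not the only one); the bin count and the synthetic "CT/MRI" pair are illustrative choices.

```python
import numpy as np

def mutual_information(i1: np.ndarray, i2: np.ndarray, bins: int = 32) -> float:
    """Mutual information, Equation (2.4): MI = H(I1) + H(I2) - H(I1, I2),
    with entropies estimated from a joint intensity histogram."""
    joint, _, _ = np.histogram2d(i1.ravel(), i2.ravel(), bins=bins)
    p12 = joint / joint.sum()              # joint probability
    p1 = p12.sum(axis=1)                   # marginal distribution of I1
    p2 = p12.sum(axis=0)                   # marginal distribution of I2
    h = lambda p: -np.sum(p[p > 0] * np.log2(p[p > 0]))  # Shannon entropy (bits)
    return h(p1) + h(p2) - h(p12)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    ct = rng.random((128, 128))
    mri = 1.0 - ct + 0.1 * rng.standard_normal((128, 128))  # inverted intensities
    print(f"MI = {mutual_information(ct, mri):.3f} bits")
```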
2.1.3 Hybrid Measures

Intensity based similarity measures use intensity information to match images without segmenting image features; in contrast, feature based registration matches images based on features that are usually sparsely distributed in the image. To achieve more robust and accurate measures, hybrid measures combine intensity based and feature based measures. Christensen et al. [71] proposed a two-step method to recover large organ deformation due to intracavitary applicators: first, a landmark based fluid registration was applied to obtain an initial deformation, and then a volumetric viscous fluid registration used the initial deformation to compute the final deformation. The hybrid approach combines the robustness of landmark based algorithms with the flexibility of intensity based methods. Pluim et al. [72] reported improved registration robustness after combining mutual information with spatial gradient features. Hellier and Barillot [73] proposed a framework incorporating the intensity based SSD similarity measure with cortical constraints for deformable inter-subject brain registration; the cortical constraints were expressed as extracted cortical sulci segmentations. Cachier et al. [74] proposed an iconic feature based registration incorporating intensity based similarity measures with geometric distance.

2.2 Review of Transformation Models

Transformation models determine how one image is deformed to match another. They can be classified by different criteria, such as the effective range of the transformation [34]. Here we classify them by the type of transformation: 1) affine/polynomial models, 2) smooth basis function models, 3) biomechanical models, 4) optical flow models, and 5) regularization on dense field models. Affine/polynomial models and smooth basis function models require control points and act like interpolation functions, finding the displacement at non-grid locations given the displacements on the grid. For the other transformations, such as biomechanical models, optical flow models, and regularization on dense field models, control points are not required.

2.2.1 Affine and Polynomial Models

The affine transform deals with linear transformations, i.e., rotation, translation, shearing, and scaling. It preserves collinearity: parallel lines remain parallel after transformation. The affine transformation can be expressed as

$$T(x) = Ax + b \qquad (2.5)$$

where A is the transformation matrix, b is the translation vector, and x and T(x) are the input and output vectors, respectively. In 2D space, a rotation is represented by

$$A = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}$$

It is, however, difficult to represent a nonlinear deformation using only a single affine transformation. Collins et al. [75] proposed combining a group of local affine transformations to compute brain deformation. Transformations can also be represented by polynomial functions [76], which are defined as
$$T(x) = a_0 + a_1 x + a_2 x^2 + \dots + a_n x^n \qquad (2.6)$$

where $a_0, a_1, a_2, \dots, a_n$ are the polynomial coefficients and n is the degree of the polynomial. One drawback of polynomial models is that oscillations tend to appear for high-order polynomials, and local deformation is difficult to represent [40].

2.2.2 Smooth Basis Function Models

The summation of smooth basis functions, such as the thin-plate spline [77] and the B-spline, is another approach to modeling transformations. The name thin-plate spline refers to the physical analogy of bending a thin sheet of metal. The thin-plate spline is defined as

$$T(x,y) = A_1 + A_2 x + A_3 y + \sum_{i=1}^{N} F_i\, r_i^2 \ln r_i^2 \qquad (2.7)$$

where $r_i^2 = (x-x_i)^2 + (y-y_i)^2 + d^2$ and $(x_i, y_i)$ is a control point of the thin-plate spline. From a physics point of view, Equation (2.7) describes the deformation of a plate of infinite extent under loads centered at $(x_i, y_i)$, where the deflection is $T_i$. The distribution of each load is specified by $d^2$; the load becomes a point load as $d^2$ approaches zero. Equation (2.7) contains N+3 unknowns ($A_1$, $A_2$, $A_3$, and $F_i$, i = 1, ..., N). N equations are obtained by substituting the points $(x_i, y_i)$ and deflections $T_i$ into Equation (2.7); the remaining three equations are

$$\sum_{i=1}^{N} F_i = 0 \qquad (2.8)$$

$$\sum_{i=1}^{N} x_i F_i = 0 \qquad (2.9)$$

$$\sum_{i=1}^{N} y_i F_i = 0 \qquad (2.10)$$

Equation (2.8) ensures that the plate under the loads does not translate and remains stationary. Equations (2.9) and (2.10) require the moments along the x and y axes to be zero, that is, no rotation of the plate is allowed.
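The following sketch assembles and solves the linear system implied by Equations (2.7)-(2.10) for the N+3 thin-plate spline coefficients. The control points and deflections are randomly generated for illustration, and the small d² offset is a hypothetical value chosen to keep the kernel well defined at the control points.

```python
import numpy as np

def fit_tps(ctrl: np.ndarray, t: np.ndarray, d2: float = 1e-6) -> np.ndarray:
    """Solve for the thin-plate spline coefficients of Equations (2.7)-(2.10).

    ctrl : (N, 2) control-point coordinates (x_i, y_i)
    t    : (N,)  prescribed deflections T_i at the control points
    Returns the coefficient vector [F_1..F_N, A_1, A_2, A_3].
    """
    n = len(ctrl)
    dx = ctrl[:, 0:1] - ctrl[:, 0:1].T
    dy = ctrl[:, 1:2] - ctrl[:, 1:2].T
    r2 = dx ** 2 + dy ** 2 + d2
    k = r2 * np.log(r2)                        # basis values r^2 ln r^2
    p = np.column_stack([np.ones(n), ctrl])    # affine part [1, x, y]
    a = np.zeros((n + 3, n + 3))
    a[:n, :n], a[:n, n:], a[n:, :n] = k, p, p.T   # P^T rows encode (2.8)-(2.10)
    rhs = np.concatenate([t, np.zeros(3)])
    return np.linalg.solve(a, rhs)

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    pts = rng.random((10, 2)) * 100.0          # 10 hypothetical control points
    defl = rng.standard_normal(10)             # deflections at those points
    coef = fit_tps(pts, defl)
    print("F:", np.round(coef[:10], 3), "A:", np.round(coef[10:], 3))
```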
The thin-plate spline was first applied to image registration of remote sensing images by Goshtasby [78]; it has also been used for MRI/MRSI mapping [79], inverse consistent registration [80], and the analysis of shape variation in schizophrenia [81]. Similar to the thin-plate spline, the multiquadric function is another type of basis function; it is defined as

$$T(x,y) = F_1 + F_2 x + F_3 y + \sum_{i=4}^{N} F_i R_i(x,y) \qquad (2.11)$$

where $R_i(x,y) = \sqrt{(x-x_i)^2 + (y-y_i)^2 + d^2}$, $(x_i, y_i)$ is a control point, and d has the same meaning as in Equation (2.7). Equation (2.11) contains N unknowns, which are determined by substituting the points $(x_i, y_i)$ and deflections $T_i$ into Equation (2.11). Both thin-plate spline and multiquadric models were found to be suitable when the number of control points is not large and the distance variation between control points is small [82]. The main disadvantage of these models is that each control point has a global influence on the transformation: any change of one control point affects the entire transformation, and these global effects make it difficult to model complex, local deformation.

Rather than thin-plate spline models, some researchers use B-spline models, in which the deformation is driven by adjusting the locations of control points. Using B-spline models [83], the deformation field is described as

$$T(x,y,z) = \sum_{l=0}^{3}\sum_{m=0}^{3}\sum_{n=0}^{3} B_l(u)\,B_m(v)\,B_n(w)\,\phi_{i+l,\,j+m,\,k+n} \qquad (2.12)$$

where $\phi_{i,j,k}$ is a control point in a control-point mesh of size $n_x \times n_y \times n_z$ with uniform spacing, $i = \lfloor x/n_x \rfloor - 1$, $j = \lfloor y/n_y \rfloor - 1$, $k = \lfloor z/n_z \rfloor - 1$, $u = x/n_x - \lfloor x/n_x \rfloor$, $v = y/n_y - \lfloor y/n_y \rfloor$, $w = z/n_z - \lfloor z/n_z \rfloor$, and $B_l$ represents the l-th B-spline basis function:
$$\begin{aligned}
B_0(u) &= (1-u)^3/6 \\
B_1(u) &= (3u^3 - 6u^2 + 4)/6 \\
B_2(u) &= (-3u^3 + 3u^2 + 3u + 1)/6 \\
B_3(u) &= u^3/6
\end{aligned} \qquad (2.13)$$

The advantage of B-spline models is that they are locally controlled: a change of a control point location only affects the transformation in the neighborhood of that control point, which makes B-splines computationally efficient [83]. The disadvantages of traditional B-spline registration are that 1) certain measures have to be taken to prevent folding of the deformation field [43], and 2) determining the optimal number of control points is challenging [84]: too few control points lead to a coarse match, while a large number of control points may lead to local oscillations. Applications of B-spline models include measuring liver motion [27], PET-CT image fusion [85], cardiac modeling [86, 87], and breast deformation [83].
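A direct transcription of Equation (2.13) is sketched below; the demo checks the partition-of-unity property (the four weights always sum to one), which underlies the local support noted above.

```python
import numpy as np

def bspline_basis(u: float) -> np.ndarray:
    """Cubic B-spline basis functions B_0..B_3 of Equation (2.13), u in [0, 1)."""
    return np.array([
        (1 - u) ** 3 / 6.0,
        (3 * u ** 3 - 6 * u ** 2 + 4) / 6.0,
        (-3 * u ** 3 + 3 * u ** 2 + 3 * u + 1) / 6.0,
        u ** 3 / 6.0,
    ])

if __name__ == "__main__":
    for u in (0.0, 0.25, 0.5, 0.75):
        b = bspline_basis(u)
        # partition of unity: the four weights always sum to 1
        print(f"u={u:.2f}  B={np.round(b, 4)}  sum={b.sum():.4f}")
```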
2.2.3 Biomechanical Models

Biomechanical models use the physical properties of organs and their boundary conditions to compute deformations and motions, such as brain shift [88, 89], brain tumor deformation [90], breast deformation [91, 92], heart motion [93], and liver motion [26, 94]. They are based on the finite element method (FEM). The FEM is a classical engineering analysis technique that produces solutions to the partial differential equations (PDEs) describing complex, spatially distributed systems and processes, and it has been widely used in structural and continuum mechanics. In essence, the FEM divides the domain of interest (e.g., the liver) into an interconnected set of sub-regions or elements, such as a tetrahedral mesh. Discrete approximations to the PDEs that govern the physical processes to be simulated (e.g., consolidation theory) are developed on each element, which can possess its own local properties, allowing complex geometries and tissue heterogeneities to be represented through a simple building-block structure. Given sufficient resolution of the geometry of interest, FEM methods produce highly accurate solutions to complex equations under realistic conditions.

In order to simulate organ deformation, several FEM parameters must be considered: 1) Boundary conditions. The accuracy of a finite element model depends on the accuracy of its boundary conditions. Zhang et al. [95] reported applying contact-impact analysis on the boundary of the lung surface between the inhale and exhale breathing phases to improve accuracy. 2) Material properties. The physical properties of each organ, such as Young's modulus and Poisson's ratio, must be specified; however, these properties usually vary between patients. Chi et al. [96] investigated registration accuracy with respect to changes in the physical properties of the rectal wall, bladder wall, and prostate.

2.2.4 Optical Flow Models

Optical flow was introduced by Horn and Schunck [65] to estimate the motion between frames in an image sequence. The fundamental assumption is that the brightness of the images is preserved, that is, the pixel intensity of the same object does not change between two image frames. Given two images $I(x, y, z, t)$ and $I(x+\delta x,\, y+\delta y,\, z+\delta z,\, t+\delta t)$, the brightness constancy for the optical flow velocity u can be written as

$$I(x, y, z, t) = I(x+\delta x,\, y+\delta y,\, z+\delta z,\, t+\delta t) \qquad (2.14)$$

Equation (2.14) can be rewritten as

$$\frac{\partial I}{\partial x}\frac{dx}{dt} + \frac{\partial I}{\partial y}\frac{dy}{dt} + \frac{\partial I}{\partial z}\frac{dz}{dt} + \frac{\partial I}{\partial t} = 0 \qquad (2.15)$$

and simplified as

$$\nabla I \cdot u = -\frac{\partial I}{\partial t} \qquad (2.16)$$
where $u = [dx/dt,\, dy/dt,\, dz/dt]$. Equation (2.16) is under-constrained, and many smoothing constraints have been proposed to address this problem. Thirion [30] proposed an iterative method to constrain the flow velocity u: at each iteration, the flow velocity is first computed without constraints, and then a Gaussian filter is convolved with u to enforce smoothness. Carefully choosing the smoothing parameters, such as the standard deviation and the width of the Gaussian filter, is important for obtaining good registration results. Horn and Schunck [65] proposed a constraint that minimizes the squared magnitude of the gradient of the optical flow velocity. Hellier et al. [97] used robust estimators as a constraint to preserve the natural discontinuities of the deformation field, which usually appear at structure boundaries.

2.2.5 Regularization on Dense Field Models

DIR has also been formulated as an optimization problem. The cost function (also called the energy function) E usually contains a similarity measure ($E_{sim}$) and a regularization term ($E_{reg}$). The similarity measure describes how similar the source image S(x) and the target image T(x) are; the regularization constrains the deformation field. The goal is to find the optimal deformation field $\hat{u}$ that minimizes the energy function E(u):

$$E(u) = E_{sim}(u) + \lambda E_{reg}(u), \qquad \hat{u} = \arg\min_u E(u) \qquad (2.17)$$

where λ is a weighting factor balancing the similarity and regularization terms. Here a smaller value of the similarity term indicates better matching. For similarity measures where a larger value indicates better matching, such as cross correlation, the similarity term in Equation (2.17) can be represented by the negative of the similarity measure. Equation (2.17) can be explained in a Bayesian framework [46, 98]: the similarity term acts like a likelihood expressing the probability of a match between the source and target images, while the regularization term acts like a prior expressing prior knowledge about the deformation field.
Another way to understand Equation (2.17) is to consider the similarity term as a driving force that maximizes the similarity between the two images, and the regularization term as a penalty function that constrains the transformation. Both feature based and intensity based similarity measures can serve as the similarity term; the reader is referred to Chapter 2.1 for a detailed discussion of their possible forms. For the regularization term, a common choice is the linear differential regularization, where large variations in the deformation field are heavily penalized:

$$E_{reg}(u) = \|\nabla u\|^2 \qquad (2.18)$$

Another group of regularizations is based on physical models. Linear elastic regularization [66, 99] assumes that the deformation is governed by the Navier equation of elasticity theory and that the object deforms as an isotropic, homogeneous elastic body; it can be written as

$$E_{reg}(u) = \mu \nabla^2 u + (\lambda + \mu)\nabla(\nabla \cdot u) \qquad (2.19)$$

where $\nabla^2$ is the Laplacian operator, $\nabla$ is the nabla operator, $\nabla \cdot u$ is the divergence of u, and λ and μ are the Lamé constants describing the elastic material properties. Elastic regularization assumes small deformations. To handle large deformations, viscous fluid regularization [100-102] was introduced. Viscous fluid regularization constrains the deformation to behave as the flow of a viscous fluid; it can be expressed as

$$\mu \nabla^2 v + (\lambda + \mu)\nabla(\nabla \cdot v), \qquad v = \frac{\partial u}{\partial t} + v \cdot \nabla u \qquad (2.20)$$

where v is the velocity of the deformation. Because viscous fluid regularization allows large deformations, the deformation field needs to be checked to avoid folding. Other regularizations include the biharmonic [103] and membrane [104] models.
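To make Equation (2.17) concrete, the sketch below evaluates the SSD similarity term together with the linear differential regularizer of Equation (2.18) for a given 2D displacement field. Nearest-neighbour warping and the shifted test images are simplifications chosen for brevity, not part of any specific algorithm in this review.

```python
import numpy as np

def registration_energy(source, target, u, alpha):
    """Evaluate Equation (2.17) with an SSD similarity term and the linear
    differential regularizer of Equation (2.18).

    u : (2, H, W) displacement field (row and column components).
    """
    h, w = target.shape
    rows, cols = np.mgrid[0:h, 0:w]
    r = np.clip(np.round(rows + u[0]).astype(int), 0, h - 1)
    c = np.clip(np.round(cols + u[1]).astype(int), 0, w - 1)
    e_sim = 0.5 * np.sum((source[r, c] - target) ** 2)     # SSD of S(x+u) vs T
    grads = np.gradient(u[0]) + np.gradient(u[1])          # all partials of u
    e_reg = 0.5 * sum(np.sum(g ** 2) for g in grads)       # ||grad u||^2 term
    return e_sim + alpha * e_reg

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    tgt = rng.random((32, 32))
    src = np.roll(tgt, 2, axis=0)                # source shifted by 2 pixels
    u_zero = np.zeros((2, 32, 32))
    u_true = np.zeros((2, 32, 32)); u_true[0] = 2.0   # field undoing the shift
    print(registration_energy(src, tgt, u_zero, 0.1),
          registration_energy(src, tgt, u_true, 0.1))    # second is much lower
```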
2.3 Review of Optimization Strategies

After constructing a cost function, the next step is to obtain the transformation parameters through optimization. This problem can be stated as: given a cost function f and unknown parameters, find the optimal set of parameters that maximizes or minimizes the cost function. Whether the cost function is maximized or minimized is generally a trivial distinction, because maximizing f is equivalent to minimizing -f, and vice versa. Many registration algorithms can use existing optimization strategies, such as Powell's method, the steepest gradient descent method, the conjugate gradient method, and the downhill simplex method. A review of optimization algorithms can be found in the classic book by Press et al. [105].

Consider the case of the steepest descent optimization algorithm. Let u be the deformation field to be optimized, f(u) the energy function, and $\nabla f(u)$ its first derivative; an optimized u can be computed using the steepest descent update

$$\Delta u = -t\,\nabla f(u) \qquad (2.21)$$

where Δu is the increment of the deformation field and t is the step size. Equation (2.21) shows that the search direction is always the direction of the local downhill gradient, $-\nabla f(u)$.
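A toy illustration of the update in Equation (2.21) follows; the convex quadratic cost is hypothetical, chosen only so that the known minimizer can be checked.

```python
import numpy as np

def steepest_descent(grad, u0, step, iters):
    """Steepest-descent iteration of Equation (2.21): u <- u - t * grad f(u)."""
    u = np.asarray(u0, dtype=float)
    for _ in range(iters):
        u = u - step * grad(u)
    return u

if __name__ == "__main__":
    # toy quadratic cost f(u) = ||u - u*||^2 with known minimizer u*
    u_star = np.array([1.0, -2.0])
    grad = lambda u: 2.0 * (u - u_star)
    print(steepest_descent(grad, [0.0, 0.0], step=0.1, iters=100))  # ~[1, -2]
```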
One of the difficulties with optimization algorithms in DIR is that the optimized transformation parameters that produce well-matched images may not be physically meaningful [43]. A common example is a change of topology, such as tearing or folding of the deformation field. In reality, neither tearing nor folding occurs in the evolution of anatomical structures. However, tearing may yield transformation parameters that make two images more similar to each other, i.e., give a lower cost function value, even though the resulting transformation is not physically valid. Therefore, relying on the cost function itself might not be sufficient, and care must be taken to ensure that the resulting transformation parameters are physically feasible. A common method for checking the preservation of topology is to compute the determinant of the Jacobian matrix of the deformation field: a positive determinant indicates that topology is preserved (see the sketch after this section).

Another challenge facing optimizers is local minima, where the optimizer converges to a local minimum and the optimized parameters do not reflect the global minimum. For example, in Figure 2-2, consider optimizing a parameter with the cost function shown in blue, where the x axis is the parameter space. Point A is the ideal global minimum the optimization algorithm should find; however, it is very common for optimization algorithms to converge to a local minimum, i.e., point B or point C in Figure 2-2. One approach to avoiding local minima is to construct an alternative convex cost function, such as the red curve in Figure 2-2. Unfortunately, the parameter space of the cost function cannot always be made convex, especially for deformable image registrations that contain possibly thousands of parameters. Multi-resolution is another widely used approach to reduce the chance of being trapped in a local minimum; however, its convergence to the global minimum is not guaranteed.
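The topology check described above can be sketched as follows for a 2D field; central finite differences via numpy.gradient are one possible discretization of the Jacobian.

```python
import numpy as np

def jacobian_determinant(u):
    """Determinant of the Jacobian of the 2D transformation h(x) = x + u(x).

    u : (2, H, W) displacement field. Returns an (H, W) map; non-positive
    values flag folding, i.e. a physically invalid transformation.
    """
    du0_dr, du0_dc = np.gradient(u[0])
    du1_dr, du1_dc = np.gradient(u[1])
    # J = I + grad(u); closed-form determinant for the 2x2 case
    return (1.0 + du0_dr) * (1.0 + du1_dc) - du0_dc * du1_dr

if __name__ == "__main__":
    u = np.zeros((2, 16, 16))
    u[0, 8, 8] = 5.0               # a spike large enough to fold the grid
    det = jacobian_determinant(u)
    print("min det =", det.min(), "-> folding" if det.min() <= 0 else "-> valid")
```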
2.4 Review of Inverse Consistent Registration

The goal of image registration is to find the correspondence between two images, called the source image S and the target image T; the source image S is deformed to match the target image T. The correspondence between the two images is often assumed to be one-to-one, meaning each point in the target image T is mapped to exactly one point in the source image S. Inverse consistency means that the correspondence between two images is invariant when the order of the target and source images is switched. For example, as shown in Figure 2-3, point A in image S is mapped to point B in image T when deforming S to T; for an inverse consistent registration, after switching the order of T and S, point B in image T should still map back to point A in image S, i.e., points A and A′ are at the same location. Define the forward transformation h(x) as the transformation from source image S to target image T, and the backward transformation g(x) as the transformation from target image T to source image S. Mathematically, an inverse consistent transformation can be expressed as

$$h(g(x)) = x \quad \text{or} \quad g(h(x)) = x \qquad (2.22)$$

where h(x) = x + u(x) and g(x) = x + v(x), with u(x) and v(x) the forward and backward deformation fields, respectively. Nielsen and Markussen [106] argued that inverse consistency is one of the most important properties of deformation fields. It should be noted that the concept of inverse consistency is different from the concept of the inverse of a deformation field: inverse consistency involves both the forward and backward transformations, whereas the inverse of a deformation field involves only one transformation, either forward or backward.

Because some inverse consistency methods require computing the inverse of the deformation field, we first give a brief review of how to compute the inverse. Figure 2-4 illustrates the problem. A common misunderstanding is that the inverse is obtained by simply adding a negative sign to the forward transformation, i.e., that DA is the inverse of AB in Figure 2-4; this may yield the wrong inverse, because the true inverse vector is CA, not DA. Methods for computing the inverse of a deformation field have been reported [66, 107-109]. Christensen and Johnson [66] proposed an iterative scheme based on grid search to calculate the inverse deformation field; a good initial guess of the inverse is desired for fast convergence. Cachier and Rey [108] computed the inverse using a Newton scheme. Ashburner [109] reported a method based on tetrahedral meshes and affine transformation inversion, where the deformation field is decomposed into local affine transforms and the inverse deformation field is computed by combining the inversions of the local affine transformations.
Chen et al. [107] reported an iterative inverse computation approach based on fixed-point theory that achieves fast convergence. The Insight Segmentation and Registration Toolkit [110] includes an inverse deformation field computation based on look-up table scattered data interpolation; however, unlike iterative approaches, look-up table based approaches require large memory and long computation times.

Unidirectional DIR algorithms have difficulty maintaining inverse consistency because they lack matching criteria that uniquely define the one-to-one correspondence between the images [66]. Cachier et al. [108] suggested that the asymmetry (inverse inconsistency) of unidirectional registration is due to 1) non-preservation of the ordering of the energies: for two forward transformations h1 and h2 with energies E(h1) > E(h2), their inverses g1 and g2 might not satisfy E(g1) > E(g2); 2) a non-stable transformation space; and 3) local minima, since the optimization might reach different local minima of the energy when the target and source images are switched.
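As an illustration of the fixed-point inverse iteration of Chen et al. [107] mentioned above, the sketch below iterates v_{k+1}(x) = -u(x + v_k(x)) from v_0 = 0. Linear interpolation via SciPy and the smooth test field are illustrative choices, and convergence is only expected for sufficiently small, smooth deformations.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def invert_displacement(u, iters=30):
    """Fixed-point iteration for the inverse displacement field:
    v_{k+1}(x) = -u(x + v_k(x)), starting from v_0 = 0."""
    h, w = u.shape[1:]
    grid = np.mgrid[0:h, 0:w].astype(float)
    v = np.zeros_like(u)
    for _ in range(iters):
        coords = grid + v                                  # x + v_k(x)
        v = -np.stack([map_coordinates(u[i], coords, order=1, mode="nearest")
                       for i in range(2)])                 # -u(x + v_k(x))
    return v

if __name__ == "__main__":
    h = w = 32
    grid = np.mgrid[0:h, 0:w].astype(float)
    u = np.stack([2.0 * np.sin(np.pi * grid[0] / h), np.zeros((h, w))])
    v = invert_displacement(u)
    # check h(g(x)) ~ x, i.e. u(x + v(x)) + v(x) ~ 0
    residual = np.stack([map_coordinates(u[i], grid + v, order=1, mode="nearest")
                         for i in range(2)]) + v
    print("max inverse-consistency residual:", float(np.abs(residual).max()))
```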
A number of works have been devoted to developing inverse consistent DIR. One category of inverse consistent methods minimizes coupled symmetric energy functions involving the forward and backward transformations, each consisting of a similarity term, a regularization term, and an inverse consistency term. Christensen and Johnson [66] were among the first to introduce inverse consistent image registration by jointly estimating the forward and backward transformations using a symmetric energy function:

$$E(h,g) = E_{sim} + E_{reg} + E_{icc}$$

$$E_{sim} = \int \left(\left|S(h(x)) - T(x)\right|^2 + \left|T(g(x)) - S(x)\right|^2\right) dx$$

$$E_{reg} = \int \left(\left\|L u(x)\right\|^2 + \left\|L w(x)\right\|^2\right) dx$$

$$E_{icc} = \int \left(\left\|u(x) - \tilde{w}(x)\right\|^2 + \left\|w(x) - \tilde{u}(x)\right\|^2\right) dx \qquad (2.23)$$

$$h(x) = x + u(x), \qquad g(x) = x + w(x)$$

where T and S are the target and source images, h and g are the forward and backward transformations, u and w are the forward and backward displacement fields, $\tilde{u}(x)$ and $\tilde{w}(x)$ are the displacement fields of the inverses of the forward and backward transformations, and L is the linear elastic regularization operator $L u(x) = -\alpha \nabla^2 u(x) - \beta \nabla(\nabla \cdot u(x)) + \gamma u(x)$, with $\nabla = \left[\partial/\partial x_1,\, \partial/\partial x_2,\, \partial/\partial x_3\right]$ and $\nabla^2 = \partial^2/\partial x_1^2 + \partial^2/\partial x_2^2 + \partial^2/\partial x_3^2$. To solve Equation (2.23), a 3D Fourier series representation was used to parameterize the forward and backward displacement fields. The displacement fields have the form

$$u(x) = \sum_k \mu[k]\, e^{j\langle x,\, \omega[k]\rangle}, \qquad w(x) = \sum_k \eta[k]\, e^{j\langle x,\, \omega[k]\rangle} \qquad (2.24)$$

where $\langle x, \omega[k]\rangle = \frac{2\pi k_1 x_1}{N_1} + \frac{2\pi k_2 x_2}{N_2} + \frac{2\pi k_3 x_3}{N_3}$, and $\mu[k]$ and $\eta[k]$ are 3 × 1 complex-valued vectors. After Fourier parameterization, computing u(x) and w(x) is equivalent to finding the corresponding $\mu[k]$ and $\eta[k]$. The advantage of Fourier parameterization is that the number of parameters can be reduced. This model gave considerably good results with carefully chosen parameters. Its limitation is that the inverse deformation fields must be explicitly computed at each iteration, which may introduce cumulative numerical error and result in long computation times. Similar inverse consistent regularization has also been applied to consistent B-spline registration [111], consistent landmark and intensity based registration using thin-plate splines [80, 112], and local large deformations [113].
To improve computational efficiency, Christensen et al. [114] reported consistent registration on an object of interest, incorporating information from both the boundary and the intensity inside the object rather than the entire image. Yeung and Shi [115, 116] proposed an inverse consistent approach that accounts for stochastic uncertainty:

$$(h + \varepsilon_h)\circ(g + \varepsilon_g) = I + r \qquad (2.25)$$

where $\varepsilon_h$ and $\varepsilon_g$ are the stochastic errors of the forward transformation h and the backward transformation g, and r is the residual error due to the imperfectness of the inverse consistency. A generalized total least squares approach was adopted to solve the stochastic problem. Zhang et al. [69] presented a consistent multi-modality deformable registration estimating the forward transform h and the backward transform g in a variational framework. The consistent energy function was expressed as

$$E = E_f + E_b$$

$$E_f = -\mathrm{Sim}(S(h), T) + \lambda_1 \int \varphi\left(D(h(x))\right) dx + \lambda_2 \int \left\|g(h(x)) - x\right\|^2 dx$$

$$E_b = -\mathrm{Sim}(T(g), S) + \lambda_1 \int \varphi\left(D(g(x))\right) dx + \lambda_2 \int \left\|h(g(x)) - x\right\|^2 dx \qquad (2.26)$$

where $E_f$ and $E_b$ are the forward and backward registration energies, Sim(S(h), T) and Sim(T(g), S) are mutual information, $\varphi(D(h(x)))$ and $\varphi(D(g(x)))$ are the regularization terms, and the last integrals are the inverse consistency residues, which are minimized when the forward transformation h and the backward transformation g are inverses of each other. Alvarez et al. [117] proposed a symmetric energy function to estimate optical flow with occlusion detection.
Their model replaced the similarity and regularization terms in Equation (2.26) with

$$E_{sim}(S(h), T) = \int \frac{\left(S(h(x)) - T(x)\right)^2}{\max\left(\|\nabla T(x)\|^2,\, \epsilon\right)}\,dx, \qquad E_{reg} = \int \mathrm{trace}\left(\nabla h^{T}\, D(\nabla T(x))\, \nabla h\right) dx \qquad (2.27)$$

where $\nabla h$ is the Jacobian matrix of h(x), $D(\nabla T(x))$ is a projection matrix in the direction perpendicular to $\nabla T(x)$, and $\epsilon$ is a small constant preventing division by zero. The idea behind Equation (2.27) is to prevent the displacement field from being smoothed across object boundaries.

Leow et al. [118] proposed a consistent unidirectional approach that implicitly includes the inverse consistency constraint to directly solve the following energy:

$$E = E_1 + E_2 = \int\left(S(h(x)) - T(x)\right)^2 dx + R(h(x)) + \int\left(T(g(x)) - S(x)\right)^2 dx + R(g(x)) \qquad (2.28)$$

The forward and backward transformations are numerically updated as

$$h_{n+1} = h_n + \epsilon(\eta_1 + \eta_2), \qquad g_{n+1} = g_n + \epsilon(\psi_1 + \psi_2) \qquad (2.29)$$

where $\epsilon$ is a small number, $\eta_1$ and $\psi_1$ are vector fields representing the gradient descent direction of $E_1$ in the forward and backward directions, respectively, and $\eta_2$ and $\psi_2$ are similarly defined for $E_2$. $\eta_1$ and $\psi_2$ can be computed by

$$\eta_1 = -\nabla_h E_1(h), \qquad \psi_2 = -\nabla_g E_2(g) \qquad (2.30)$$

$\eta_2$ and $\psi_1$ can be determined from the inverse consistency relationship between $h_{n+1}$ and $g_{n+1}$:

$$h_{n+1} \circ g_{n+1} = \left(h_n + \epsilon(\eta_1+\eta_2)\right)\circ\left(g_n + \epsilon(\psi_1+\psi_2)\right) = id \qquad (2.31)$$

Taking the Taylor expansion of Equation (2.31) and collecting terms up to first order, one gets
$$(\eta_1 + \eta_2)(x) = -D h(x)\cdot\left[(\psi_1 + \psi_2)(h(x))\right] \qquad (2.32)$$

where D is the Jacobian operator,

$$D f = \begin{pmatrix} \dfrac{\partial f_1}{\partial x_1} & \cdots & \dfrac{\partial f_1}{\partial x_n} \\ \vdots & \ddots & \vdots \\ \dfrac{\partial f_n}{\partial x_1} & \cdots & \dfrac{\partial f_n}{\partial x_n} \end{pmatrix}$$

with $x_1, \dots, x_n$ the directions of the deformation vectors and $f_1, \dots, f_n$ the components of the deformation field in each direction. Using Equation (2.32), $\eta_2$ and $\psi_1$ can be determined by

$$\eta_2(x) = -D h(x)\cdot\psi_2(h(x)), \qquad \psi_1(x) = -D g(x)\cdot\eta_1(g(x)) \qquad (2.33)$$

Therefore, the update scheme for $h_{n+1}$ can be obtained from Equations (2.29), (2.30), and (2.33). More importantly, $h_{n+1}$ can be updated using only the previous forward transformation $h_n$, without computing the backward transformation g or the inverse field $g^{-1}$. One disadvantage of this model, however, is that the truncation error from the Taylor expansion used to derive Equation (2.32) may not be small and may accumulate over the iterations.

Cachier et al. [108] investigated the asymmetric results of unidirectional registrations and proposed an inversion-invariant energy to symmetrize the asymmetric energy function. In their method, the energy function (Equation (2.34)) consists of a symmetric similarity term and a symmetric regularization term. Equation (2.34) can be solved in two ways: 1) alternately solving for the forward and backward mappings using either a finite element or a finite difference implementation; or 2) using a finite element implementation to solve Equation (2.35), which involves computing only the forward mapping by approximating the backward transformation with the forward transformation.
$$E_f(T, S, u) = E_{sim} + E_{reg} = \int\left(T(x + u(x)) - S(x)\right)^2 dx + \lambda\int\left\|\nabla u(x)\right\|^2 dx$$

$$E_b(S, T, w) = E_{sim} + E_{reg} = \int\left(S(x + w(x)) - T(x)\right)^2 dx + \lambda\int\left\|\nabla w(x)\right\|^2 dx \qquad (2.34)$$

$$E_f(T, S, u) = \frac{1}{2}\int\left(1 + \left|\det\left(\mathrm{Id} + \nabla u\right)\right|\right)\left[\left(T(x + u(x)) - S(x)\right)^2 + \lambda\left\|\nabla u(x)\right\|^2\right] dx \qquad (2.35)$$

Inverse consistency can also be achieved in the context of large-deformation diffeomorphic flow based registration. By definition, a diffeomorphic transformation is a mapping that is one-to-one, onto, and continuously differentiable. Avants et al. [119] presented an inverse consistent approach by dividing the transformation into two parts, a forward transformation $h_{0,t}$ and a backward transformation $g_{1,1-t}$. Assume there exists an intermediate image I(x) on the geodesic path connecting the source image S(x) and the target image T(x); $h_{0,t}$ is the transformation from S(x) to I(x) at time t, and $g_{1,1-t}$ is the transformation from T(x) to I(x) at time t. The cost function is given by

$$E\left(v^h, v^g\right) = \int_0^{0.5}\left(\left\|v_t^h\right\|^2 + \left\|v_t^g\right\|^2\right)dt + \omega\int\left|S\left(h_{0,0.5}(x)\right) - T\left(g_{1,0.5}(x)\right)\right|^2 dx \qquad (2.36)$$

where $v_t^h$ and $v_t^g$ are flows satisfying $\frac{d h_{0,t}}{dt} = v_t^h$ and $\frac{d g_{1,1-t}}{dt} = v_t^g$. Equation (2.36) can be interpreted as deforming both S(x) and T(x) toward the mean image at time t = 0.5 and obtaining a solution by minimization with respect to $h_{0,t}$ and $g_{1,1-t}$. The complete forward transformation h and backward transformation g can be expressed as

$$h = g_{1,1-t}^{-1}\circ h_{0,t}, \qquad g = h_{0,t}^{-1}\circ g_{1,1-t} \qquad (2.37)$$
Equation (2.37) indicates that the transformation is inverse consistent, because $h\circ g = id$ and $g\circ h = id$. Similar approaches were also proposed by Beg and Khan [120], Joshi et al. [121], and Yang et al. [122].
Figure 2-1. Deformable image registration algorithms consist of three components: similarity measures, transformation models, and optimization strategies.

Figure 2-2. 1D parameter space illustrating local minima (points B and C on the blue curve) and the global minimum (point A). The x axis is the parameter space; the y axis is the value of the cost function, shown as the blue curve. The goal of the optimization algorithm is to find the parameters corresponding to the global minimum while avoiding the local minima. The dotted red curve illustrates an ideal convex parameter space, in which the optimization algorithm always converges to the global minimum.
Figure 2-3. The forward transformation h(x) matches point A in the source image S(x) to point B in the target image T(x); the backward transformation g(x) matches point B in the target image T(x) to point A′ in the source image S(x). Ideally, the choice of registration direction (i.e., forward or backward transformation) should not affect the correspondence, i.e., A and A′ are at the same point. However, this usually does not hold for unidirectional registration algorithms.
Figure 2-4. Computation of the inverse deformation field. The forward transformation h(x) at point A determines the motion vector of point A, i.e., from point A to point B. Assuming h(x) is known, computing the inverse transformation h⁻¹(x) at point A amounts to finding the point C that moves to point A by its motion vector. Point D is the point obtained by simply adding a negative sign to the forward motion vector AB. Points C and D are not always the same.
CHAPTER 3
VALIDATIONS OF DEFORMABLE IMAGE REGISTRATION

Validation of DIR refers to demonstrating that a registration algorithm, applied to representative data in a given application, consistently succeeds within the acceptable error range (e.g., maximum or mean error) for that application [43]. Validating DIR can be challenging because ground truth, also called a gold standard, is generally unavailable. The validation approaches used for rigid registration, such as landmark registration errors, are not sufficient for validating points far from the landmarks. Jannin et al. [123] suggested that registration validation may include the following criteria: 1) accuracy; 2) precision; 3) robustness; and 4) consistency, or closed loop. In this chapter, we discuss registration validation from the viewpoints of accuracy, robustness, and consistency. Unlike human studies, precision is not a major validation issue in image registration, because most algorithms contain hardly any random components and produce identical results once the image data and registration parameters are fixed. It should be noted that validation through precision, robustness, and consistency does not require the existence of a gold standard.

3.1 Accuracy

Accuracy is the degree to which a measurement is true or correct. Deformable registration requires high accuracy in representing the actual deformation for its clinical applications. The most straightforward accuracy measurement is to compute the difference between the measured values and the true values; however, as mentioned before, it is very difficult to obtain ground truth deformation in real-world applications. A trade-off is to simulate the deformation through either mathematical simulation or a physical phantom study. For patient images, validation metrics and their corresponding mathematical or statistical tools have been studied for validation purposes.
3.1.1 Physical Phantoms

Kashani et al. [124] investigated the feasibility of using a simple deformable phantom to validate deformable registration algorithms. The phantom consisted of a diagnostic thoracic imaging phantom with a deformable foam insert in which small plastic markers were distributed. The foam was deformed by a one-dimensional drive stage in the SI direction to simulate the exhale and inhale respiratory states. The markers served as ground truth and were digitally removed before registration. The maximum marker position errors were reported to be between 7.5 mm and 9 mm for three registration algorithms: thin-plate spline, B-spline, and affine DIR.

Another deformable lung phantom was proposed by Serban et al. [125]. It consisted of a piston driving mechanism and a Lucite cylinder filled with water, containing a latex balloon stuffed with dampened natural sponges; the sponges represented lung tissue, while nylon wires and Lucite beads represented vascular and bronchial bifurcations. Lu et al. [31] reported a gel-balloon deformable phantom, in which deformation was simulated by inserting or removing heavy oil to inflate or deflate the balloon.

Parker and Gilland [126] evaluated myocardial wall motion for gated cardiac emission tomography using a modified, commercially available, dynamic physical phantom (Figure 3-1). It consisted of a water-filled torso containing two lungs, a Teflon spine, and a beating heart. The heart assembly was composed of a 50 mL latex membrane and a 100 mL cylindrical latex membrane. Myocardial motion was monitored by measuring the radioactivity injected into the volume between the two membranes. A computer-controlled pump generated the beating motion while sending an ECG signal to the SPECT scanner for gating. Ten point-source markers were attached to the outer surface of the myocardium with approximately uniform spacing. A marker scan was first carried out to locate the marker positions: each marker was injected with approximately 3 mCi of 99mTc.
The phantom was then scanned with three consecutive 60-minute gated SPECT acquisitions using a low-energy, high-resolution collimator, after which the markers were allowed to decay to a negligible level. The marker locations were computed using the center of mass in the SPECT images. To simulate myocardial motion in the myocardium scan, high-activity 99mTc (73 mCi) was injected into the myocardium compartment with a long SPECT scan time. The phantom was not moved between the marker scan and the myocardium scan, ensuring physical correspondence between the markers and the myocardium images. This myocardium phantom has the advantage that markers are removed automatically, by performing the marker scan and the myocardium scan consecutively, without the need to manually remove the markers from the image volume, which might change the pixel intensity distribution in the images.

The advantage of physical phantoms is that the motion of the markers represents real physical motion in the phantom and the motion is physiologically consistent. The disadvantages of physical phantoms are 1) the difficulty of designing a phantom that covers all the variability encountered in clinical situations (all existing physical phantoms use simple materials, such as sponges, membranes, or gels, to represent the target organs, which can differ from reality); and 2) validation using physical phantoms is not on a voxel-by-voxel basis, and the overall accuracy is limited by the number of markers available in the phantom. Physical phantoms have difficulty estimating registration errors away from the marker locations.

3.1.2 Digital Phantoms

Digital phantoms utilize mathematical, statistical, or image processing methods to simulate organ deformation. Gholipour et al. [127] presented a digital brain phantom to validate the registration between low-resolution echo planar imaging (EPI) images and high-resolution proton density (PD) images. To simulate the artifacts and problems of EPI images, the digital phantom was constructed as follows.
First, a high-resolution PD image was down-sampled and the regions of interest were segmented; then the effect of signal-loss artifacts was added to those regions in the down-sampled PD image using a simplified signal fade model; next, both deterministic and random distortions were simulated using markers incorporating physically measured distortions and a random number generator; finally, a thin-plate spline interpolation was used to generate the deformation field. Because of the use of thin-plate spline interpolation, this digital brain phantom might be biased toward thin-plate spline based DIR.

Commonly used mathematical functions include harmonic functions [31, 107] and sinusoidal functions [30]. A harmonic function is defined as a twice continuously differentiable function f satisfying the Laplace equation, i.e., $\nabla^2 f = 0$, where $\nabla^2 = \frac{\partial^2}{\partial x_1^2} + \frac{\partial^2}{\partial x_2^2} + \dots + \frac{\partial^2}{\partial x_n^2}$. The following harmonic functions [31, 107] have been applied to simulate 2D motion:

$$u_x = b\,x\cos(m\theta), \qquad u_y = b\,y\cos(m\theta), \qquad \theta = \tan^{-1}(y/x) \qquad (3.1)$$

where $u_x$ and $u_y$ are the simulated displacement fields in the x and y directions, respectively. Compared with physical phantoms, deformations generated by digital phantoms allow the evaluation of deformation fields on a voxel-by-voxel basis. The disadvantages of mathematical functions, however, are that 1) the generated deformation may be too simple to reproduce the true anatomical deformations of patients; and 2) there may exist functional dependencies between the models used for data simulation and the models used for evaluation [123].
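A sketch generating a simulated 2D displacement field in the spirit of Equation (3.1) follows; placing the origin at the image centre and the particular values of b and m are hypothetical choices for illustration.

```python
import numpy as np

def harmonic_displacement(h, w, b=0.05, m=2):
    """Simulated 2D displacement field in the spirit of Equation (3.1):
    u_x = b*x*cos(m*theta), u_y = b*y*cos(m*theta), theta = atan2(y, x),
    with the origin placed at the image centre (an illustrative choice)."""
    y, x = np.mgrid[0:h, 0:w].astype(float)
    x -= w / 2.0
    y -= h / 2.0
    theta = np.arctan2(y, x)
    return b * x * np.cos(m * theta), b * y * np.cos(m * theta)

if __name__ == "__main__":
    ux, uy = harmonic_displacement(64, 64)
    print("max |u| =", float(np.hypot(ux, uy).max()))  # known ground-truth motion
```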
To generate clinically realistic deformation fields, Wang et al. [22] applied fiducial-marker based thin-plate spline registration to register a pair of CT images and considered the registration result as the gold standard for validating other registration methods. This approach is more clinically realistic than simple mathematical deformation; however, the validation result may be biased toward thin-plate spline registration. To reduce the bias of deformation fields generated from one particular deformation algorithm, Xue et al. [128] described a statistical approach to generate deformation fields from a number of training samples for inter-individual and intra-individual brain deformation. The idea was to estimate the probability density function (pdf) of the deformation fields from a limited number of training samples. In this approach, the global and local training deformations were statistically analyzed using the wavelet-packet transform, the Jacobian matrix, and Markov random fields. Simulated deformations can then be constructed by randomly sampling the statistical distribution in an unconstrained or landmark-constrained fashion. Biomechanical models have also been used to generate simulated deformation fields: Fahmi et al. [129] proposed simulating 2D kidney deformation and 3D brain deformation using the finite element method, with the generated deformation field considered as the gold standard for validation. Overall, the realism of simulated deformations is difficult to prove, and simulated deformations may not capture the true variability encountered in clinical situations.

3.1.3 Validation Metrics

Validation metrics are the mathematical or statistical methods used to measure the outcome of a registration. They can be based on pixel intensity distributions, volumetric criteria, or distance measures.

1. Pixel intensity distribution, such as the correlation coefficient (CC) [130], sum of squared error (SSE), and mutual information. CC is defined as

$$CC = \frac{\sum\left(T - \bar{T}\right)\left(S - \bar{S}\right)}{\sqrt{\sum\left(T - \bar{T}\right)^2 \sum\left(S - \bar{S}\right)^2}} \qquad (3.2)$$
where T and S are the target and source images, and $\bar{T}$ and $\bar{S}$ are their mean intensities, respectively. A larger CC value indicates better similarity between the two images; a perfect match gives a CC equal to 1. The CC measure is limited to same-modality images and is not sensitive to global intensity scale changes. SSE is defined as

$$SSE = \sum_{i=1}^{n}\left(T_i - S_i\right)^2 \qquad (3.3)$$

where $T_i$ and $S_i$ are the target and source image intensities, respectively. CC can be used in cases where the pixel intensities of the two images are linearly related; in contrast, SSE can only be used in cases where the ranges of the pixel intensities are the same.

Shannon introduced the idea of mutual information in 1948. Mutual information measures how much information one random variable tells about another. In the field of image registration, it is a measure of how well one image matches the other, because mutual information is larger for more similar images and smaller for less similar ones. The main advantage of MI is that the actual form of the intensity dependency between the two images does not have to be specified, which makes MI well suited as a criterion for multi-modality registration. The mutual information I(A, B) can be expressed as

$$I(A, B) = H(A) + H(B) - H(A, B) \qquad (3.4)$$

where H(A) and H(B) are the Shannon entropies of images A and B, and H(A, B) is their joint entropy. Moreover, the normalized mutual information, $NMI(A,B) = \frac{H(A) + H(B)}{H(A,B)}$, has been shown to be considerably more robust than mutual information.
2. Volumetric criteria, such as the Dice similarity index (DSI) [130, 131], average volume [132], the intensity variance metric [131], and the determinant of the Jacobian matrix [133]. The Dice similarity index is the overlap ratio between the volume of interest in the deformed image and the gold standard; it can be written as

$$DSI = \frac{2\left|V_c \cap V_g\right|}{\left|V_c\right| + \left|V_g\right|} \qquad (3.5)$$

where $V_c$ is the volume of interest in the deformed image and $V_g$ is the gold standard. For a perfect match, the DSI equals one, whereas two non-overlapping volumes give a DSI of zero.

For an n-dimensional deformation field $D_n(x_1, \dots, x_n)$, the Jacobian matrix J can be expressed as

$$J = \begin{pmatrix} \dfrac{\partial D_1}{\partial x_1} & \cdots & \dfrac{\partial D_1}{\partial x_n} \\ \vdots & \ddots & \vdots \\ \dfrac{\partial D_n}{\partial x_1} & \cdots & \dfrac{\partial D_n}{\partial x_n} \end{pmatrix}$$

where $x_1, \dots, x_n$ are the directions of the deformation vectors; for example, for 3D deformation fields, $x_1$, $x_2$, $x_3$ correspond to the X, Y, and Z directions of the Cartesian coordinate system. The absolute value of the Jacobian determinant of the deformation field at a given point quantifies the volumetric change at that point: if the determinant is less than one, the volume shrinks; if it equals one, the volume is unchanged; if it is greater than one, the volume expands. The Jacobian determinant can also measure orientation change in the deformation field: a positive determinant indicates preservation of orientation, while a negative determinant indicates a change of orientation, such as folding.
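Illustrating the first volumetric criterion above, the following sketch computes Equation (3.5) on binary masks; the two rectangular "contours" are synthetic stand-ins for a deformed structure and its gold standard.

```python
import numpy as np

def dice_similarity_index(vc: np.ndarray, vg: np.ndarray) -> float:
    """Dice similarity index, Equation (3.5): overlap between the deformed
    volume of interest vc and the gold standard vg (boolean masks)."""
    inter = np.logical_and(vc, vg).sum()
    return 2.0 * inter / (vc.sum() + vg.sum())

if __name__ == "__main__":
    a = np.zeros((32, 32), bool); a[8:24, 8:24] = True     # contour A
    b = np.zeros((32, 32), bool); b[10:26, 8:24] = True    # A shifted 2 voxels
    print(f"DSI = {dice_similarity_index(a, b):.3f}")      # 1.0 = perfect overlap
```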
Tissue folding is hardly ever observed during normal organ deformation; thus, the Jacobian determinant provides a way to check whether a deformation is physically feasible.

3. Distance measures [127, 132]. The most straightforward distance measure is to compute the error between the computed values and the gold standard. However, distance measures may not be sufficient to characterize the deformation; Hellier et al. [132] reported using principal component analysis to characterize shape similarity.

3.1.4 Visual Verification

Visual verification is the most common validation technique and should be used as a routine, ongoing validation approach at every opportunity [134]. Several tools can assist visual verification, such as the difference image (the difference between the deformed source image and the target image), the deformed mesh grid (applying the deformation field to a uniformly spaced grid to observe the deformation), the quiver plot, and streamlines (visualizing the deformation field as a stream of fluid). The reported limits of visual verification are 2 mm in translation, 2 degrees for in-plane rotation, and 4 degrees for cross-plane rotation.

3.2 Robustness

Robustness is defined as the change in performance of a method in the presence of disruptive factors, such as intrinsic data variability and pathological change [123]; in other words, small variations of the input should result in small variations of the output [36]. In DIR, robustness means the registration result should be insensitive to the noise level of the input images and to the input parameters. The common inputs of DIR are 1) the image datasets and 2) the registration parameters. Robustness can therefore be assessed by measuring the variation of the algorithm's result under 1) variation of image quality, such as the image noise level, and 2) variation of the registration parameters.
3.3 Consistency

Consistency studies the effect of transformation composition in DIR. For instance, let $T_{a,b}$ denote the transformation from image a to image b; consistency studies the composition of n transformations relative to the identity, that is, whether $T_{1,2}\circ T_{2,3}\circ T_{3,4}\circ\dots\circ T_{n,1}$ is close to the identity. If n equals 2, consistency validation reduces to $T_{1,2}\circ T_{2,1}$, which is the inverse consistency discussed in Chapter 2. More general group-wise consistency was studied by Geng et al. [135, 136]. Consistency measures do not require the existence of a gold standard; however, consistency cannot be the only approach in a validation study, because significantly different deformation fields may yield the same consistency.
Figure 3-1. SPECT images of the dynamic physical phantom. (a) Frame 3 of the gated SPECT images, with the location of one marker indicated by the green circle. (b) The location of the same marker in frame 8 of the gated SPECT images.
CHAPTER 4
A GENERALIZED INVERSE CONSISTENT CONSTRAINT FOR DEFORMABLE IMAGE REGISTRATION

In Chapter 2, we reviewed inverse consistent registrations, where most studies focused on constructing symmetric cost functions to reduce inverse inconsistency. Here, instead of explicitly composing a symmetric cost function, we derive an inverse consistent constraint (ICC) using fixed-point theory. Our new approach does not require the explicit computation of the inverse of the deformation field, which can introduce numerical errors and result in long computation times. Moreover, the ICC does not involve any approximation of the inverse mapping, and its robustness is not restricted by parameter selection. Our method extends the previous work of Chen et al. [107], in which an iterative approach based on fixed-point theory was used only to compute the inverse of the deformation field, to the domain of inverse consistent registration.

The rest of this chapter is organized as follows. The ICC is first introduced, and the ICC methodology is then applied to two popular registration algorithms in medical imaging: the Demons algorithm [30] and the fast diffusion registration (FDR) algorithm [32]. A variational model (FDR-Sigma) based on likelihood estimation is also introduced to improve registration accuracy, together with its inverse consistent counterpart (I-FDR-Sigma). After that, the performance of the resulting inverse consistent Demons (I-Demons), inverse consistent fast diffusion registration (I-FDR), and I-FDR-Sigma algorithms is evaluated against their unconstrained counterparts using a simulated phantom, a physical myocardium phantom, and clinical images. Finally, a comparison with Christensen's inverse consistent registration algorithm (CICR) [66] (discussed in Chapter 2.4) is presented. To the best of our knowledge, this comparison with CICR on the topic of inverse consistency has not been carried out by other authors.
4.1 The Inverse Consistent Constraint

In mathematics, a fixed point of a function f is a point that the function maps to itself, i.e., x is a fixed point of f if and only if f(x) = x. For example, x = 0 is a fixed point of the function f(x) = x² + x. For a DIR to be inverse consistent (ICDIR), the transformation consisting of the forward transformation h(x) followed by the backward transformation g(x) must be the identity transformation; similarly, the backward transformation g(x) followed by the forward transformation h(x) must also be the identity. This inverse consistent relation can be expressed as

$$g(h(x)) = x, \qquad h(g(y)) = y, \qquad h(x) = x + u(x), \qquad g(y) = y + v(y) \qquad (4.1)$$

where u(x) and v(y) are the displacement fields of the forward and backward transformations, respectively. Equation (4.1) states that the location of any point in the source image should not change after the forward and then the backward transformation; the same relationship applies to any point in the target image after the backward and then the forward transformation. Equation (4.1) can be further simplified as

$$x = g(h(x)) = h(x) + v(h(x)) = x + u(x) + v(x + u(x)) \;\Rightarrow\; u(x) = -v(x + u(x)) \qquad (4.2)$$

A similar formulation can be derived for v(y):

$$v(y) = -u(y + v(y)) \qquad (4.3)$$

We call Equations (4.2) and (4.3) the inverse consistent constraints (ICC), which must be satisfied when a DIR algorithm is inverse consistent. In order to incorporate the ICC into a DIR algorithm, the following numerical scheme is proposed:
1. Initialize the forward and backward displacement fields $u^0$ and $v^0$.
2. Compute the forward displacement field $u^{i+1}$:
$$u^{i+1} = f\left(T(x), S(x), u^i\right) \qquad (4.4)$$
3. Update the backward displacement field $v^i$ from the forward field $u^{i+1}$ using the ICC:
$$v^i = -u^{i+1}\left(x + v^i\right) \qquad (4.5)$$
4. Compute the backward displacement field $v^{i+1}$:
$$v^{i+1} = \tilde{f}\left(S(x), T(x), v^i\right) \qquad (4.6)$$
5. Update the forward displacement field $u^{i+1}$ from the backward field $v^{i+1}$ using the ICC:
$$u^{i+1} = -v^{i+1}\left(x + u^{i+1}\right) \qquad (4.7)$$
6. Repeat steps (2) through (5) until u and v converge.

Here $u^i$ and $v^i$ are the forward and backward displacement fields at iteration i, S(x) is the source image, T(x) is the target image, and f and $\tilde{f}$ are the iterative functions computing the forward and backward transformations. A structural sketch of this loop is given below. In the next sections, the ICC is applied to construct the I-Demons, I-FDR, and I-FDR-Sigma algorithms.
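Structurally, the scheme alternates a unidirectional update with the two ICC substitutions. The sketch below is a toy illustration rather than the dissertation's implementation: register_step is a pluggable stand-in for whichever unidirectional algorithm (demons, FDR, ...) is being constrained, and the demo step deliberately relaxes the field toward a known one-pixel shift just so the loop can be exercised end to end.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def compose(a, b):
    """Return the field a sampled at x + b(x), i.e. a(x + b(x))."""
    grid = np.mgrid[tuple(slice(0, s) for s in a.shape[1:])].astype(float)
    return np.stack([map_coordinates(a[i], grid + b, order=1, mode="nearest")
                     for i in range(a.shape[0])])

def icc_registration(target, source, register_step, iters=50):
    """Skeleton of the proposed scheme, Equations (4.4)-(4.7); register_step
    stands in for one unidirectional update (demons, FDR, ...)."""
    u = np.zeros((target.ndim,) + target.shape)
    v = np.zeros_like(u)
    for _ in range(iters):
        u = register_step(target, source, u)   # (4.4) forward update
        v = -compose(u, v)                     # (4.5) ICC: v = -u(x + v)
        v = register_step(source, target, v)   # (4.6) backward update
        u = -compose(v, u)                     # (4.7) ICC: u = -v(x + u)
    return u, v

if __name__ == "__main__":
    shape = (16, 16)
    t, s = np.zeros(shape), np.ones(shape)     # dummy images, unused by the toy step
    shift = np.stack([np.ones(shape), np.zeros(shape)])
    # toy step: relax the field toward a known +/-1-pixel row shift
    step = lambda tgt, src, f: f + 0.3 * ((shift if tgt is t else -shift) - f)
    u, v = icc_registration(t, s, step)
    print("u[0] mean ~ +1:", round(float(u[0].mean()), 3),
          " v[0] mean ~ -1:", round(float(v[0].mean()), 3))
```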
4.2 The Inverse Consistent Demons (I-Demons) Algorithm

The first ICDIR is the I-Demons algorithm, modeled by incorporating the ICC into the Demons algorithm. In the following, detailed descriptions of Demons, bijectivity Demons (B-Demons), and I-Demons are provided.

4.2.1 Demons Algorithm

The Demons algorithm was proposed by Thirion [30] based on an analogy with thermodynamic concepts. The idea is to position image entities called demons that move the image pixels according to the local characteristics of the images, in a way similar to what Maxwell did for solving thermodynamics equations. In the case of free-form deformation, where all pixels with non-zero gradient are selected as demons, the displacement field u(x) (the mapping between the target image T(x) and the source image S(x)) is estimated by

$$\frac{\partial u(x)}{\partial t} = -\frac{\left(S(x + u(x)) - T(x)\right)\nabla T(x)}{\left\|\nabla T(x)\right\|^2 + \left(S(x + u(x)) - T(x)\right)^2} \qquad (4.8)$$

where $S(x + u(x))$ is the deformed source image and $\nabla$ is the gradient operator. At each iteration, the computed displacement field u(x) is convolved with a Gaussian smoothing kernel to regularize the deformation field. The Demons algorithm was shown to outperform other methods in a careful evaluation of inter-subject brain registration [137] using global and local measures: the global measures included average volume, overlap of grey and white matter tissues, and consistency of the deformation field; the local measures included segmentation of cortical sulci, visualization of deformed sulci, and numerical evaluation.
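A sketch of one iteration of Equation (4.8) with Gaussian regularization follows. The small constant guarding the denominator, the smoothing width, and the synthetic shifted image pair are illustrative choices, not the dissertation's implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def demons_step(source, target, u, sigma=1.0):
    """One free-form demons iteration per Equation (4.8), followed by Gaussian
    smoothing of the accumulated field for regularization."""
    h, w = target.shape
    grid = np.mgrid[0:h, 0:w].astype(float)
    warped = map_coordinates(source, grid + u, order=1, mode="nearest")
    diff = warped - target                            # S(x + u(x)) - T(x)
    grad_t = np.array(np.gradient(target))            # grad T, shape (2, H, W)
    denom = (grad_t ** 2).sum(axis=0) + diff ** 2
    force = -diff * grad_t / (denom + 1e-9)           # 1e-9 guards division by zero
    return np.stack([gaussian_filter((u + force)[i], sigma) for i in range(2)])

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    target = gaussian_filter(rng.random((64, 64)), 2.0)
    source = np.roll(target, 2, axis=0)               # source = target shifted 2 rows
    u = np.zeros((2, 64, 64))
    for _ in range(50):
        u = demons_step(source, target, u)
    print("mean row displacement (expected near 2):", round(float(u[0].mean()), 2))
```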
4.2.2 Bijectivity Demons (B-Demons) Algorithm

To improve the inverse consistency of the deformation fields, Thirion [30] presented the bijectivity Demons (B-Demons): the forward and backward transformations are first computed using the Demons algorithm, the residual transformation is calculated from the forward and backward transformations, and the two transformations are then adjusted by subtracting half of the residual from the forward transformation and adding the other half to the backward transformation. On average, the inverse consistency residue could be reduced to less than one pixel. However, no detailed analysis, such as the change in similarity measure over the iterations or the computation time, was carried out.

4.2.3 I-Demons Algorithm

Following the numerical scheme presented in Chapter 4.1, the I-Demons algorithm is modeled using the ICC. The numerical scheme of I-Demons can be expressed as:

1. Initialize the forward and backward displacement fields $u^0$ and $v^0$.
2. Compute the forward displacement field $u^{i+1}$ by Equation (4.8).
3. Update the backward displacement field $v^i$ by Equation (4.5).
4. Compute the backward displacement field $v^{i+1}$ by switching T and S in Equation (4.8).
5. Update the forward displacement field $u^{i+1}$ by Equation (4.7).
6. Repeat steps (2) through (5) until u and v converge.

4.3 The Inverse Consistent Fast Diffusion Registration (I-FDR) Algorithm

The second ICDIR model is the I-FDR algorithm, constructed using the ICC and the FDR algorithm proposed by Fischer and Modersitzki [32]. In the FDR algorithm, image registration is modeled as the problem of finding the optimal deformation field u that minimizes the following energy E(u):

$$E(u(x)) = D(u(x)) + \alpha R(u(x)) \qquad (4.9)$$

$$D(u(x)) = \frac{1}{2}\int\left(S(x + u(x)) - T(x)\right)^2 dx \qquad (4.10)$$

$$R(u(x)) = \frac{1}{2}\sum_{j=1}^{d}\left\|\nabla u_j(x)\right\|^2 \qquad (4.11)$$

where D(u(x)) is the distance measure between the deformed image $S(x + u(x))$ and the target image T(x) using the sum of squared distance (SSD). The choice of D(u(x)) depends on the image modalities; for example, D(u(x)) can take the form of mutual information for multi-modality registration. d is the dimension of the displacement field u(x): d = 2 for 2D and d = 3 for 3D displacement fields. R(u(x)) is the regularization term that avoids discontinuous or suboptimal deformation fields; it is designed to penalize oscillations of the deformation and consequently leads to a smooth deformation field. α is the weighting factor of the regularization term. Using the calculus of variations [138], Equation (4.9) can be reduced to

$$\alpha\nabla^2 u(x) - \left[S(x + u(x)) - T(x)\right]\nabla S(x + u(x)) = 0 \qquad (4.12)$$
By introducing an artificial time variable and seeking the steady-state solution $\partial u(x,t)/\partial t = 0$, Equation (4.12) can be solved using the time marching method:

$$\frac{\partial u(x,t)}{\partial t} = \alpha\nabla^2 u(x,t) - \left[S(x + u(x,t)) - T(x)\right]\nabla S(x + u(x,t)) \qquad (4.13)$$

Equation (4.13) can be solved by two numerical schemes: the explicit scheme and the semi-implicit scheme. In the explicit scheme, Equation (4.13) is written as

$$\frac{u(x, t_{k+1}) - u(x, t_k)}{\tau} = \alpha\nabla^2 u(x, t_k) - \left[S(x + u(x, t_k)) - T(x)\right]\nabla S(x + u(x, t_k)) \qquad (4.14)$$

where $u(x, t_k)$ is the displacement field u at iteration k and τ is the time step. Solving Equation (4.14) is straightforward; the limitation of the explicit scheme is that it is only stable for very small step sizes, which leads to poor efficiency [139]. In the semi-implicit scheme, the limitation on the step size is relaxed and the convergence is improved; it can be written as

$$\frac{u(x, t_{k+1}) - u(x, t_k)}{\tau} = \alpha\nabla^2 u(x, t_{k+1}) - \left[S(x + u(x, t_k)) - T(x)\right]\nabla S(x + u(x, t_k)) \qquad (4.15)$$

Equation (4.15) is an inhomogeneous heat equation and can be solved effectively by the additive operator splitting (AOS) scheme [32], which achieves the accuracy of a conventional semi-implicit scheme while maintaining linear complexity with respect to the image size. The benefit of the AOS scheme is that it decouples the non-tridiagonal equations into a sum of tridiagonal equations, which can be solved efficiently by the Thomas algorithm [140]. In the following, we briefly describe how to use the AOS scheme to solve Equation (4.15).

4.3.1 AOS Scheme

The AOS scheme was first introduced by Weickert [141] to improve the stability of the explicit scheme in nonlinear diffusion filtering, which is only stable for small step sizes. Equation (4.15) can be written as


4.3.1 AOS Scheme

The AOS scheme was first introduced by Weickert 141 to improve the stability of the explicit scheme in nonlinear diffusion filtering, which is only stable for small step sizes. Equation (4-15) can be written as

\[ \frac{V^{k+1} - V^{k}}{\Delta t} = \sum_{l=1}^{d} A_{l}V^{k+1} + F^{k}, \qquad F^{k} = -\big[S(x+u(x,t_{k})) - T(x)\big]\nabla S(x+u(x,t_{k})) \tag{4-16} \]

where V_j^k = u(x_j, t_k) is the displacement vector at a fixed point j, and the sum over A_l V^{k+1} is the finite difference approximation of α∇²u(x, t_{k+1}). After rearranging Equation (4-16), we have

\[ V^{k+1} = \Big(I - \Delta t\sum_{l=1}^{d}A_{l}\Big)^{-1}\big(V^{k} + \Delta t\,F^{k}\big), \qquad k = 0, 1, \ldots \tag{4-17} \]

Equation (4-17) is a linear system with n unknowns. The matrix A_l is tridiagonal for each dimension; however, the sum of the A_l is not a tridiagonal matrix, so Equation (4-17) cannot be solved directly by the efficient tridiagonal solver, the Thomas algorithm 140. The idea of AOS is to replace the inverse of the sum of the matrices by the sum of the inverses. The AOS representation of Equation (4-17) can be expressed as

\[ V^{k+1} = \frac{1}{d}\sum_{l=1}^{d}\big(I - d\,\Delta t\,A_{l}\big)^{-1}\big(V^{k} + \Delta t\,F^{k}\big), \qquad k = 0, 1, \ldots \tag{4-18} \]

Each component (l = 1, 2, …, d) in Equation (4-18) can be solved efficiently by the Thomas algorithm. To construct the I-FDR, the update scheme described in Chapter 4.1 can be applied by replacing f and f⁻¹ with Equation (4-15).
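The sketch below illustrates one AOS update for a scalar component of the displacement field. The Thomas solver is the standard O(n) forward-elimination/back-substitution recursion; the Neumann boundary treatment and the explicit per-line loop are simplifications chosen here for clarity rather than speed.

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system with sub-, main-, and super-diagonals a, b, c."""
    n = len(d)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def aos_step(V, F, alpha, dt):
    """One AOS update (Eq. 4-18): average the d one-dimensional semi-implicit
    solves (I - d*dt*A_l)^(-1) (V + dt*F) over the axes of V."""
    d = V.ndim
    rhs = V + dt * F
    out = np.zeros_like(V)
    k = d * dt * alpha                    # diffusion weight of each 1D solve
    for axis in range(d):
        R = np.moveaxis(rhs, axis, 0)
        X = np.empty_like(R)
        n = R.shape[0]
        b = np.full(n, 1 + 2 * k)
        b[0] = b[-1] = 1 + k              # Neumann boundary rows
        a = np.full(n, -k)
        c = np.full(n, -k)
        for j in np.ndindex(R.shape[1:]):  # solve along every grid line
            X[(slice(None),) + j] = thomas(a, b, c, R[(slice(None),) + j])
        out += np.moveaxis(X, 0, axis) / d
    return out
```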


4.4 A Robust Variational Model for Inverse Consistency

To achieve the optimal registration, the weighting factor α of the energy function in Equation (4-9) should ideally adapt during the iterations: a smaller α may produce unrealistic or discontinuous deformation fields, while a larger α may over-smooth the deformation field and reduce the accuracy. Ideally, α should be large at the beginning of the registration, when the images are not yet well matched. As the iterations progress, the weighting factor needs to be gradually reduced as the images become aligned, because a large smoothness weighting factor may over-smooth the deformation field and make it inaccurate. Moreover, both the Demons and FDR algorithms assume that the pixel intensity is preserved for the same object across image volumes. This assumption might not hold in reality, where pixel intensities may vary in the presence of image noise. In order to make α adaptive during the registration and account for some pixel intensity variation, we proposed a novel registration model (FDR-Sigma) incorporating a likelihood function L(u, σ) to estimate the similarity. L(u, σ) is defined as

\[ L(u,\sigma) = P\big(\{S(x+u(x)) - T(x),\ x\in\Omega\}\,\big|\,\sigma\big) = \prod_{x\in\Omega}\frac{1}{\sqrt{2\pi}\,\sigma}\,e^{-\frac{(S(x+u(x))-T(x))^{2}}{2\sigma^{2}}} \tag{4-19} \]

where Ω ⊂ Rⁿ (n = 2 or 3). The likelihood function L(u, σ) models the pixel intensities of the difference image, D(x) = S(x+u(x)) − T(x), as independent and identically distributed random variables drawn from a Gaussian distribution with zero mean and standard deviation σ, where σ is a parameter to be optimized. The log-likelihood function L_log(u, σ) can be written as

\[ L_{\log}(u,\sigma) = \ln\big(L(u,\sigma)\big) = -\int_{\Omega}\left[\frac{\big(S(x+u(x))-T(x)\big)^{2}}{2\sigma^{2}} + \ln\big(\sqrt{2\pi}\,\sigma\big)\right]dx \tag{4-20} \]

Applying maximum likelihood estimation (MLE) to Equation (4-20) and incorporating the regularization term of Equation (4-11), the proposed model finds both the optimized u(x) and the standard deviation σ that minimize the following energy function:

\[ E(u,\sigma) = \int_{\Omega}\left[\frac{\big(S(x+u(x))-T(x)\big)^{2}}{2\sigma^{2}} + \ln\big(\sqrt{2\pi}\,\sigma\big)\right]dx + \frac{\alpha}{2}\int_{\Omega}\big\|\nabla u(x)\big\|^{2}\,dx \tag{4-21} \]


Equation (4-21) can be solved using calculus of variations and the time marching method; u(x) and σ can be computed iteratively using the following equations:

\[ \frac{\partial u(x)}{\partial t} = \alpha\nabla^{2}u(x) - \frac{\big(S(x+u(x))-T(x)\big)\,\nabla S(x+u(x))}{\sigma^{2}} \tag{4-22} \]

\[ \sigma^{2} = \frac{1}{|\Omega|}\int_{\Omega}\big(S(x+u(x)) - T(x)\big)^{2}\,dx \tag{4-23} \]

The numerical scheme can be expressed as follows: first, u(x) is initialized to zero and σ is computed using Equation (4-23); then, the updated σ is applied to update u(x) by Equation (4-22). Both u(x) and σ are solved iteratively by alternating Equations (4-22) and (4-23). In FDR-Sigma, σ is directly related to the difference image D(x): σ is small when the two images are well aligned and large when they have a large mismatch. This is a desirable characteristic, because a small σ reduces the effect of the smoothing weighting factor for well-aligned images, while a large σ increases the effect of the smoothing weighting factor for misaligned images. Because of this adaptive mechanism, the MLE-based approach can improve the algorithm convergence and reduce the sensitivity to the choice of the weighting factor α. The Gaussian assumption of the likelihood estimation makes it possible to allow some variation of pixel intensities when matching corresponding pixels. Applying the ICC, the inverse consistent FDR-Sigma (I-FDR-Sigma) can then be constructed using the methodology described in Chapter 4.1.
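One alternating update of the FDR-Sigma model can be sketched as follows. `diffusion_step` is a hypothetical helper standing for the regularizing smoothing of Equation (4-22) (for example, the AOS solver sketched in Chapter 4.3.1) and is not defined here; the small constant added to σ² merely guards against division by zero in a perfectly matched pair.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def fdr_sigma_update(S, T, u, alpha, dt, eps=1e-12):
    """One joint update of u and sigma (Eqs. 4-22 and 4-23) for 2D fields."""
    rows, cols = np.indices(T.shape, dtype=float)
    Sw = map_coordinates(S, [rows + u[0], cols + u[1]],
                         order=1, mode='nearest')     # deformed source image
    diff = Sw - T
    sigma2 = np.mean(diff ** 2) + eps                 # Eq. 4-23: MLE variance
    gy, gx = np.gradient(Sw)                          # gradient of deformed source
    u = u - dt * np.stack([diff * gy, diff * gx]) / sigma2  # force term of Eq. 4-22
    u = np.stack([diffusion_step(u[c], alpha, dt)     # smoothing term (hypothetical
                  for c in range(2)])                 # helper, e.g. the AOS solver)
    return u, np.sqrt(sigma2)
```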


4.5 Algorithm Evaluations

4.5.1 Image Data for Evaluation

Three groups of image datasets were used for the algorithm evaluation:

1. Simulated phantom images. The simulated phantom images were synthesized using the mathematically simulated deformation described in Equation (3.1). The parameters of the simulated deformation field were m = 4 and b = 0.15. The source image was a representative 2D CT image of a prostate cancer patient. The source image (Figure 4-1(a)) was cropped from the original CT image to exclude the patient couch and background; the size of the source image is 220 x 360 pixels. The target image (Figure 4-1(b)) was generated by directly applying the simulated deformation field to the source image. The simulated deformation field served as the gold standard for algorithm evaluation.

2. Myocardium phantom images. The images of the myocardium phantom discussed in Chapter 3.1.1 were used. There are ten image frames in the myocardium image dataset; each image frame has a size of 96 x 96 x 34 pixels with a pixel dimension of 1 x 1 x 1 mm. Frames 3 and 8 were used as the phantom images because of the large motion observed between these two frames.

3. Clinical 4DCT images. Ten 3D volumetric 4DCT images of a lung cancer patient were used in this study. 4DCT is capable of acquiring a sequence of ten phases of three-dimensional CT images over the patient's breathing cycle. Phase 0 corresponds to the maximum inspiration and phase 50 to the maximum expiration; the duration of expiration lies between phase 0 and phase 50, and the duration of inspiration between phase 50 and phase 0. All ten phases are equally divided within the breathing cycle. In this study, phase 0 was the target image and the remaining phase images (phase 10 through phase 90) were the source images. A region of interest (ROI) of 256 x 400 x 51 pixels was selected to exclude the background and the patient couch. The image pixel intensities were normalized between 0 and 1 before image registration.


4.5.2 Evaluation Criteria

The following criteria were selected to evaluate the effectiveness of the ICDIR (a code sketch of the first three measures follows the list):

1. Average phantom error (APE). The APE measures the average error between the computed marker motion and the true marker motion. It can be expressed as

\[ \mathrm{APE} = \frac{1}{M}\sum_{i=1}^{M}\big\|u_{c}(i) - u_{t}(i)\big\| \tag{4-24} \]

where u_c is the computed displacement, u_t is the true displacement, and M is the number of markers. For the physical phantom images, M = 10; for the simulated phantom images, M = 220 x 360 = 79200, which is the number of pixels in the simulated images.

2. Inverse consistency error (ICE). The ICE is computed by

\[ \mathrm{ICE} = \frac{1}{2N}\sum_{i=1}^{N}\Big(\big\|h(g(x_{i})) - x_{i}\big\| + \big\|g(h(x_{i})) - x_{i}\big\|\Big) \tag{4-25} \]

where h(x) and g(x) are the forward and backward transformations, respectively, and N is the total number of pixels within the volume. Theoretically, the ICE is expected to be zero if the registration is inverse consistent. In reality, however, the ICE is rarely exactly zero, and a small ICE is always preferred.

3. Sum of squared error (SSE). The SSE measures the similarity between the deformed source image S(x+u(x)) and the target image T(x):

\[ \mathrm{SSE} = \int_{\Omega}\big(S(x+u(x)) - T(x)\big)^{2}\,dx \tag{4-26} \]

For a perfect match, the SSE is zero. The SSE is a global measure and may not be sensitive to local deformations.


4. Robustness. Robustness is defined as the variation of the evaluation metrics, such as the ICE or the SSE, with respect to changes of the parameters, such as the step size. A robust registration algorithm should be less sensitive to parameter variation. Robustness and accuracy are two different concepts: a robust algorithm does not warrant its accuracy, and ideally a good algorithm is both accurate and robust. The standard deviation (SD) of the ICE, the APE, and the SSE with respect to the change of the step size will be evaluated.

The evaluated algorithms are divided into two groups: 1) FDR-group algorithms (FDR, I-FDR, FDR-Sigma, and I-FDR-Sigma) and 2) Demons-group algorithms (Demons, I-Demons, and B-Demons). The variation of the step size is listed in Table 4-1 for the FDR-group and Demons-group algorithms using the simulated phantom, the physical myocardium phantom, and the clinical images.
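Translated into NumPy, the first three criteria are only a few lines each. The sketch below assumes 2D displacement fields stored as arrays of shape (2, H, W) (forward transformation h = id + u, backward g = id + v) and approximates the compositions in Equation (4-25) by linear interpolation; it illustrates the definitions rather than reproducing the evaluation code used in this work.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def ape(u_computed, u_true):
    """Eq. 4-24: mean Euclidean distance between displacement fields (dim, ...)."""
    return float(np.mean(np.sqrt(((u_computed - u_true) ** 2).sum(axis=0))))

def sse(S_warped, T):
    """Eq. 4-26: sum of squared differences of the deformed source and target."""
    return float(((S_warped - T) ** 2).sum())

def ice(u, v):
    """Eq. 4-25: average norm of h(g(x)) - x and g(h(x)) - x for 2D fields."""
    rows, cols = np.indices(u.shape[1:], dtype=float)

    def residual(a, b):
        # a applied after b: (id + a) o (id + b) minus id equals a(x + b(x)) + b(x)
        coords = [rows + b[0], cols + b[1]]
        ab = [map_coordinates(a[c], coords, order=1, mode='nearest')
              for c in range(2)]
        return np.sqrt((ab[0] + b[0]) ** 2 + (ab[1] + b[1]) ** 2)

    return float(0.5 * (residual(u, v).mean() + residual(v, u).mean()))
```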


4.6 Results

4.6.1 Simulated Phantom Images

The comparison of the FDR-group algorithms is shown in Figure 4-2. Figure 4-2(a, c, e) shows the change in APE, ICE, and SSE at the 300th iteration with step size; Figure 4-2(b, d, f) shows the change in APE, ICE, and SSE with the number of iterations at the step size corresponding to the minimum APE. When the step size varied from 0.1 to 2.0, I-FDR-Sigma consistently outperformed the other three algorithms (FDR-Sigma, FDR, and I-FDR) by the measures of APE and SSE. The accuracy (measured by APE) of the ICDIR algorithms (I-FDR and I-FDR-Sigma) was improved compared to their counterparts (FDR and FDR-Sigma). The minimum APE for I-FDR-Sigma was 7.48 pixels, which is 13% less than the minimum APE of FDR-Sigma (8.64 pixels), 25% less than the minimum APE of I-FDR (10 pixels), and 27% less than the minimum APE of FDR (10.2 pixels). The minimum SSE for I-FDR-Sigma was 231, compared to 315 for FDR-Sigma, 3133 for I-FDR, and 3149 for FDR. The ICDIR algorithms also had reduced ICE compared to their counterparts: the minimum ICE for I-FDR-Sigma was 0.08 pixels, compared to 1.4 pixels for FDR-Sigma, 0.002 pixels for I-FDR, and 0.3 pixels for FDR. Figure 4-3 plots the energy of the FDR-group algorithms, described by Equations (4-9) and (4-21), using the optimal step size corresponding to the minimum APE; the energy of all four algorithms decreased with the iterations.

The robustness measurements of the FDR-group algorithms using the simulated phantom are shown in Figure 4-4. For APE, the maximum SD was 0.78 pixels (I-FDR-Sigma) and the minimum SD was 0.11 pixels (FDR). For SSE, the maximum SD was 915 (I-FDR-Sigma) and the minimum SD was 232 (FDR). For ICE, the maximum SD was 0.32 pixels (FDR-Sigma) and the minimum SD was 0.0006 pixels (I-FDR). The ICDIR algorithms (I-FDR and I-FDR-Sigma) significantly reduced the SD of the ICE compared to their counterparts (Figure 4-4(c)).

The comparison of the Demons-group algorithms using the simulated phantom is shown in Figure 4-5. After applying the ICC, I-Demons outperformed the Demons and B-Demons algorithms by APE, ICE, and SSE (Figure 4-5(a, c, e)). I-Demons obtained the minimum APE of 4.6 pixels, which was 30% less than the minimum APE of B-Demons (6.6 pixels) and 32% less than the minimum APE of Demons (6.8 pixels). Using the optimal step size (step size = 1.0) corresponding to the minimum APE, at the 300th iteration, I-Demons had an ICE of 0.4 pixels, compared to 1.4 pixels for B-Demons and 2.9 pixels for Demons. Similarly, at the 300th iteration, the SSE of I-Demons was 135, compared to 538 for Demons and 711 for B-Demons.

The robustness of the Demons-group algorithms using the simulated phantom is shown in Figure 4-6. I-Demons attained the minimum SDs of SSE (829) and ICE (0.17 pixels) among the three Demons-group algorithms.


However, the SD of APE for I-Demons (1.4 pixels) was larger than the SD of APE for Demons (1.0 pixels) and B-Demons (1.0 pixels).

4.6.2 Physical Myocardium Phantom Images

The comparison of the FDR-group algorithms using the physical myocardium phantom is shown in Figure 4-7. Based on the minimum APE, the optimal step size was 0.5 for I-FDR-Sigma, 0.3 for FDR-Sigma, and 2.0 for both the I-FDR and FDR algorithms. As shown in Figure 4-7(b), with the optimal step size, I-FDR-Sigma obtained the minimum APE of 0.38 mm at the 300th iteration among all four algorithms, compared to 0.49 mm for FDR-Sigma, 0.44 mm for I-FDR, and 0.50 mm for FDR. As shown in Figure 4-7(f), the SSE at the 300th iteration was 180 for I-FDR-Sigma and 181 for FDR-Sigma, both smaller than the SSE of FDR (209) and I-FDR (227). The most significant improvement from the ICC was observed in the ICE measurements: for both ICDIR algorithms (I-FDR and I-FDR-Sigma), the ICE was greatly reduced, from 0.017 mm for FDR-Sigma to 0.0008 mm for I-FDR-Sigma and from 0.015 mm for FDR to 0.0004 mm for I-FDR. Moreover, after 20 iterations, the ICE of I-FDR-Sigma and I-FDR converged to 0.0008 mm and 0.0004 mm, respectively. In contrast, the ICE of FDR-Sigma and FDR increased linearly after 50 iterations.

The robustness measurements of the FDR-group algorithms using the physical myocardium phantom are shown in Figure 4-8. I-FDR-Sigma achieved the minimum SD of APE (0.04 mm) among the four algorithms. Both FDR-Sigma and I-FDR-Sigma had a smaller SD of SSE (43 for FDR-Sigma and 46 for I-FDR-Sigma) than I-FDR (166) and FDR (157). The minimum SD of ICE was 0.0001 mm, achieved by the I-FDR algorithm.

The comparison of the Demons-group algorithms using the physical myocardium phantom is shown in Figure 4-9. The optimal step size corresponding to the minimum APE is 0.2 for all three Demons-group algorithms. At the 300th iteration, with the optimal step size, both Demons and I-Demons had a similar APE (0.4 mm), which was 11% less than the APE of B-Demons (0.45 mm).


For ICE, the I-Demons algorithm reduced the ICE by a factor of 68, from 0.70 mm for Demons to 0.01 mm for I-Demons. The minimum ICE was observed for B-Demons (0.005 mm). All three algorithms led to similar levels of SSE at the 300th iteration: 221 for Demons, 231 for I-Demons, and 213 for B-Demons.

The robustness measurements of the Demons-group algorithms are shown in Figure 4-10. In Figure 4-10(a, c), the I-Demons algorithm attained the minimum SD of APE (0.08 mm) and SD of ICE (0.02 mm) among all three algorithms. In Figure 4-10(b), all three algorithms led to similar levels of SD of SSE (43 for Demons, 47 for I-Demons, and 41 for B-Demons) when the step size varied from 0.1 to 1.0.

4.6.3 Clinical Images

Figure 4-11 shows the comparison of the FDR-group algorithms using the clinical 4DCT images (phase 0 and phase 50) of the lung cancer patient. With the optimal step size corresponding to the minimum SSE, at the 200th iteration, FDR-Sigma and I-FDR-Sigma attained SSEs of 2049 and 1996, which were 40% less than the SSEs of I-FDR (3416) and FDR (3533). For the ICE measurement, the I-FDR and I-FDR-Sigma algorithms attained the minimum ICEs of 0.0009 mm and 0.007 mm, respectively, compared to 0.16 mm for FDR-Sigma and 0.037 mm for FDR. The robustness measures shown in Figure 4-11(e, f) indicate that I-FDR and I-FDR-Sigma led to a smaller SD of ICE (0.002 mm for I-FDR-Sigma and 0.0002 mm for I-FDR), compared to FDR-Sigma (0.04 mm) and FDR (0.01 mm).

Figure 4-12 shows the comparison of the Demons-group algorithms using the clinical 4DCT images of the lung cancer patient. The changes in SSE and ICE with the number of iterations are shown in Figure 4-12(b, d) with the optimal step size corresponding to the minimum SSE. The I-Demons algorithm performed better than Demons and B-Demons.


It attained the minimum SSE of 1449, compared to 1808 for Demons and 1589 for B-Demons. It also attained the minimum ICE of 0.11 mm, compared to 0.25 mm for B-Demons and 1.59 mm for Demons. Figure 4-12(e, f) shows the robustness measurements. Compared to Demons (1.79 mm), the SD of ICE for I-Demons was reduced by 90%, reaching 0.17 mm at the 300th iteration. The SD of SSE for I-Demons was 336, which was larger than that of Demons (304) but smaller than that of B-Demons (340). I-Demons also converged faster than B-Demons and Demons in terms of SSE and ICE.

The results of registration from phase 0 to phase 10 through phase 90 of the 4DCT images are shown in Figure 4-13. The I-Demons, I-FDR, and I-FDR-Sigma algorithms attained lower SSE at the 200th iteration than the Demons, FDR, and FDR-Sigma algorithms, respectively, especially when the deformable motion was large, for example, in the registration of phase 0 to phase 50. Moreover, the I-FDR-Sigma and FDR-Sigma algorithms, based on the likelihood estimation, converged faster than the FDR and I-FDR algorithms (Figure 4-13(c)).

4.6.3.1 Visual estimation

The effect of the ICC is also demonstrated in Figure 4-14. Figure 4-14(a, b) shows one axial slice from the target and source images. The artifact indicated by point A with the red arrow in Figure 4-14(c) appeared in the deformed image produced by the Demons algorithm, but was not visible in the deformed image (Figure 4-14(d)) produced by the I-Demons algorithm with the same parameters. Plotting the profile of the displacement vector in the axial direction (Figure 4-14(e)) showed that the artifact was due to the large axial displacement vectors computed by the Demons algorithm for some voxels in the source images, causing these voxels to move out of the source images. Compared to Demons, I-Demons limited the motion magnitude (Figure 4-14(e)) and produced a more physiologically consistent deformation field.


The reduction of ICE using I-Demons is shown in Figure 4-15 by the gray scale images. At the same gray scale, the I-Demons algorithm demonstrated superior ICE reduction compared with the Demons algorithm.

4.6.3.2 Effects on re-contouring

In the clinical auto re-contouring application, the contours of the gross tumor volume (GTV) were automatically propagated from the planning CT to the phase CT by the transformations computed using Demons, I-Demons, and B-Demons, respectively. The details of the re-contouring are discussed in Chapter 5.1. The registration parameters were the following: step size = 0.5, Gaussian kernel size = 3 x 3 x 3 with a standard deviation of 0.65, and number of iterations = 100. Figure 4-16(a, b) shows an axial slice of the overlay of the contours generated from the deformation fields by I-Demons (aqua), Demons (green), and B-Demons (red) together with the planning contours transferred from the corresponding slice in the planning CT. Because of the respiratory motion, the planning contour (i.e., the brown contour) cannot represent the GTV in the phase CT. By visual assessment, the auto-generated contours from the three Demons-group algorithms were able to match the anatomical boundary. A maximum difference of 1 mm was observed by measuring the Euclidean distance between the contour points generated by Demons and B-Demons (Figure 4-16(c)); likewise, a maximum difference of 1 mm was observed between the contour points generated by Demons and I-Demons (Figure 4-16(d)).

4.6.4 Comparison of FDR, I-FDR, and Christensen's Inverse Consistent Registration (CICR)

As discussed in Chapter 2.4, Christensen et al. 66 proposed an inverse consistent registration (CICR) algorithm by embedding the inverse consistency into the energy function. Among the proposed I-FDR, I-FDR-Sigma, and I-Demons algorithms, the I-FDR algorithm is most similar to the CICR algorithm. The differences between the I-FDR and CICR algorithms are threefold:


1) the I-FDR algorithm does not incorporate the ICC into the energy function; instead, it applies the ICC to constrain the registration; 2) I-FDR uses the L2-norm regularizer of Equation (4-11) to regularize the displacement field, while CICR uses elastic regularization; and 3) I-FDR does not require explicitly computing the inverse of the deformation field.

The CICR algorithm was implemented and compared to the FDR and I-FDR algorithms; the evaluation was based on the physical myocardium phantom discussed in Chapter 4.5.1 and the clinical 4DCT images. To the best of our knowledge, no direct comparison between CICR and other inverse consistent DIR algorithms has been published. Because parameter selection can affect the registration results, the presented comparison is not intended to conclude that one algorithm is superior to the other, but to estimate their performance under the best available parameters for each algorithm. For CICR, the parameters were 10⁻⁴, 10⁻⁷, 506, 10, 10, and 0, with a step size of 5.0 × 10⁻³; for both FDR and I-FDR, the parameters were α = 0.08 and step size = 5.0. The image intensities were normalized to the range of 0 to 1 before DIR. All three algorithms were executed for 300 iterations.

The results for the physical myocardium phantom are shown in Figure 4-17 with the measures of SSE, APE, ICE, and computation time per iteration. Among them, the I-FDR algorithm achieved the best performance by the measurement of APE (Figure 4-17(a)): the APE for the I-FDR, FDR, and CICR algorithms was 0.39 mm, 0.52 mm, and 0.61 mm, respectively. The I-FDR algorithm achieved an APE reduction of 25% compared to the FDR algorithm and of 36% compared to the CICR algorithm. For ICE (Figure 4-17(b)), the CICR, I-FDR, and FDR algorithms attained ICEs of 6.9 × 10⁻⁸, 8.0 × 10⁻⁴, and 2.2 × 10⁻², respectively. In contrast to the FDR algorithm, whose ICE increased with the number of iterations, the ICEs of both CICR and I-FDR converged after 20 iterations.


Figure 4-17(c) illustrates the change in SSE with the number of iterations; all three algorithms achieved similar SSE values after 300 iterations: 153.2 for FDR, 182.9 for CICR, and 180.0 for I-FDR. The computation time per iteration (TPI) differed significantly among the three algorithms. The CICR algorithm took the longest TPI (10.2 seconds), which was 5.4 times longer than I-FDR (1.6 seconds) and 19.4 times longer than FDR (0.5 seconds). The long TPI of the CICR algorithm was due to the computation of the inverse deformation field and the Fourier series calculation. Even though I-FDR did not achieve the minimum ICE and SSE, it had the advantage of achieving the minimum APE, reducing the ICE by a factor of 27 compared with FDR, and attaining a reasonable computation time.

The comparison of the three algorithms using the 4DCT lung images is shown in Figure 4-18. Compared to the SSE of 1.23 × 10⁴ for CICR, the I-FDR and FDR algorithms achieved similar SSE values: 6.2 × 10³ for I-FDR and 6.5 × 10³ for FDR. Compared to FDR, I-FDR reduced the ICE by 97% (from 0.1275 pixels to 3 × 10⁻³ pixels), while CICR reduced the ICE by 99% (from 0.1275 pixels to 2 × 10⁻⁴ pixels). However, CICR took the longest computation time (TPI = 185.2 seconds), compared to 25.9 seconds for I-FDR and 8.5 seconds for FDR. With carefully chosen parameters, CICR attained considerably good results; however, it is very challenging to optimize the seven parameters of the CICR model. Moreover, its long computation time necessitates high performance computing 142 in order to reduce the computation time and meet the memory requirements. In contrast, I-FDR is more computationally efficient and achieved an ICE of no more than 0.02 mm in both the clinical image and physical phantom evaluations.


4.7 Discussion and Conclusion

The proposed ICC provides a novel approach for constraining an inverse inconsistent registration to be inverse consistent and physiologically realistic. Compared to the Demons, FDR, and FDR-Sigma algorithms, the I-Demons, I-FDR, and I-FDR-Sigma algorithms significantly reduced the ICE, by up to 98%, 99%, and 96%, respectively. Small ICE values indicate that the deformation is physiologically consistent; in contrast, the ICE of Demons and FDR increased linearly with increasing iterations. Besides the ICE reduction, the ICC also improved the ICE robustness: the SD of the ICE of the ICDIR algorithms was reduced by a factor of 250 compared with their counterparts, indicating that the proposed ICC succeeded in providing more physiologically realistic deformations while reducing the sensitivity to parameter selection. Even though the improvement in APE for the ICDIR algorithms varied with the image dataset, faster APE convergence of the ICDIR algorithms was observed for both the simulated phantom and the physical myocardium phantom images. The I-FDR-Sigma algorithm provided the best performance among the FDR-group algorithms; its likelihood-estimation-based similarity measure led to faster convergence. The fact that I-Demons outperformed B-Demons indicates that simple compensation of the inverse residue cannot achieve the optimal deformation field.

It was observed that most of the ICE occurred on the boundaries of objects, which might be because the smoothness of the deformation field is difficult to preserve along object boundaries, where the gradient changes abruptly. The ICC can greatly improve the uniqueness of the correspondence between two images and consequently reduce the ICE, which is consistent with the observation by Christensen et al. 66. The superior performance of the ICC might be due to the fact that it provides a re-initialization mechanism that avoids local minima and achieves optimal convergence, while traditional algorithms are more likely to be trapped in a local minimum. Provided that a transformation is invertible, the new inverse consistent approach could lead to more consistent and physiologically meaningful deformations in medical imaging. It should be mentioned that the accuracy of a registration algorithm cannot be measured by the ICE alone, as the ICE only measures the consistency of the deformation fields and does not guarantee the similarity between the two images.


For instance, a zero deformation field always has an ICE of zero; however, a zero deformation field does not indicate a good registration. Therefore, the ICE has to be combined with other similarity measures (such as the SSE) in order to provide an accurate evaluation of DIR algorithms. The SSE value, as a global similarity measure, has been widely applied in the published literature to measure the performance of registration algorithms. Practically, however, it is very difficult to choose a comparison criterion based only on the SSE value. For example, how does one choose the threshold of the SSE difference that determines the better registration algorithm? Is a threshold value of 10 or 100 enough? Does a smaller SSE always indicate a better registration? The SSE, as shown in Equation (4-26), is an indirect measure of the goodness of the deformation fields and may not be sensitive to local changes of the deformation fields. In Figure 4-19, we present a case comparing the registration results of two different DIR algorithms on 2D slices from clinical 3D lung images. Method A is the Demons algorithm and method B is the FDR algorithm. At the end of 600 iterations, method A achieved a lower SSE value (90) than method B (668). Visual evaluation also indicated that the deformed image by method A (Figure 4-19(c)) was more similar to the target image (Figure 4-19(b)) than the deformed image by method B (Figure 4-19(d)). However, the quiver plots of the deformation fields (Figure 4-20) indicated that method B produced a smoother deformation, while the deformation grid of method A looked rather irregular. Moreover, method A produced negative Jacobian determinants, indicating that the anatomical topology was not preserved at certain locations. Therefore, relying on the SSE measurement alone might not reveal the characteristics of the deformation field.
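A folding check of the kind mentioned above is easy to compute from the displacement field. The sketch below uses central differences for a 2D field and flags pixels where the Jacobian determinant of h(x) = x + u(x) is non-positive; it is an illustrative helper, not code from this study.

```python
import numpy as np

def jacobian_determinant_2d(u):
    """Determinant of the Jacobian of h(x) = x + u(x) for u of shape (2, H, W).
    Values <= 0 indicate folding, i.e. locations where topology is lost."""
    du0_dy, du0_dx = np.gradient(u[0])   # derivatives of the row displacement
    du1_dy, du1_dx = np.gradient(u[1])   # derivatives of the column displacement
    return (1.0 + du0_dy) * (1.0 + du1_dx) - du0_dx * du1_dy

# example: fraction of pixels at which a given deformation field folds
# folded_fraction = np.mean(jacobian_determinant_2d(u) <= 0)
```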


Even though method A had the lower SSE value, method B might be the better choice for applications where the smoothness of the deformation field and the preservation of topology are important.

The effects of the likelihood-based variational model were also demonstrated using high noise myocardium phantom images. In Figure 4-21(a, b), the FDR-Sigma and I-FDR-Sigma algorithms outperformed the FDR and I-FDR algorithms by the measurements of APE and SSE. A maximum APE reduction of 56% was attained between I-FDR-Sigma and FDR, and a maximum SSE reduction of 53% was observed between I-FDR-Sigma and I-FDR. The ICEs of all four algorithms were small: the maximum ICE among the four algorithms was 0.0144 mm for FDR-Sigma, compared to 0.0069 mm for FDR, 1.00 × 10⁻⁴ mm for I-FDR, and 4.78 × 10⁻⁴ mm for I-FDR-Sigma. This again demonstrated that the likelihood function in the variational model allowed variations in pixel matching for high noise images, achieved fast convergence, and attained lower phantom errors.

We presented a novel ICC for DIR. We demonstrated that it does not require explicitly computing the inverse of the deformation fields, thus reducing numeric errors. Compared to unidirectional DIR, the ICC-enabled DIR algorithms significantly reduced the inverse consistency error and were less sensitive to parameter selection. The ICC demonstrated a new method to achieve physically realistic deformable registration. Although the current study focused on the Demons, FDR, and I-FDR-Sigma algorithms, the new inverse consistency approach might be extended to other deformable registration algorithms as well.


Figure 4-1. The simulated phantom. (a) The source image, with an image size of 220 x 360 pixels, cropped from a representative 2D slice of the 3D CT images of a prostate cancer patient to exclude the patient couch and background. (b) The target image, generated by applying a mathematically generated deformation field to the source image in (a); it has the same image size as the source image. The mathematically generated deformation field is considered the gold standard for DIR evaluation.


Figure 4-2. The evaluation of the FDR, I-FDR, FDR-Sigma, and I-FDR-Sigma algorithms using the simulated phantom. (a) Change in APE at the 300th iteration with step size. (b) Change in APE with the number of iterations at the step size corresponding to the minimum APE. (c) Change in ICE at the 300th iteration with step size. (d) Change in ICE with the number of iterations at the step size corresponding to the minimum APE. (e) Change in SSE at the 300th iteration with step size. (f) Change in SSE with the number of iterations at the step size corresponding to the minimum APE.


Figure 4-3. Change in energy with the number of iterations at the step size corresponding to the minimum APE. (a) Change in energy for the I-FDR-Sigma and FDR-Sigma algorithms. (b) Change in energy for the I-FDR and FDR algorithms.


Figure 4-4. The standard deviations (SD) of the FDR, I-FDR, FDR-Sigma, and I-FDR-Sigma algorithms with the change in step size using the simulated phantom. (a) SD of APE. (b) SD of SSE. (c) SD of ICE.


Figure 4-5. The evaluation of the Demons, I-Demons, and B-Demons algorithms using the simulated phantom. (a) Change in APE at the 300th iteration with step size. (b) Change in APE with the number of iterations at the step size corresponding to the minimum APE. (c) Change in ICE at the 300th iteration with step size. (d) Change in ICE with the number of iterations at the step size corresponding to the minimum APE. (e) Change in SSE at the 300th iteration with step size. (f) Change in SSE with the number of iterations at the step size corresponding to the minimum APE.


Figure 4-6. The standard deviations (SD) of the Demons, I-Demons, and B-Demons algorithms with the change in step size using the simulated phantom. (a) SD of APE. (b) SD of SSE. (c) SD of ICE.


Figure 4-7. The evaluation of the FDR, I-FDR, FDR-Sigma, and I-FDR-Sigma algorithms using the physical myocardium phantom. (a) Change in APE at the 300th iteration with step size. (b) Change in APE with the number of iterations at the step size corresponding to the minimum APE. (c) Change in ICE at the 300th iteration with step size. (d) Change in ICE with the number of iterations at the step size corresponding to the minimum APE. (e) Change in SSE at the 300th iteration with step size. (f) Change in SSE with the number of iterations at the step size corresponding to the minimum APE.


Figure 4-8. The standard deviations (SD) of the FDR, I-FDR, FDR-Sigma, and I-FDR-Sigma algorithms with the change in step size using the physical myocardium phantom. (a) SD of APE. (b) SD of SSE. (c) SD of ICE.


Figure 4-9. The evaluation of the Demons, I-Demons, and B-Demons algorithms using the physical myocardium phantom. (a) Change in APE at the 300th iteration with step size. (b) Change in APE with the number of iterations at the step size corresponding to the minimum APE. (c) Change in ICE at the 300th iteration with step size. (d) Change in ICE with the number of iterations at the step size corresponding to the minimum APE. (e) Change in SSE at the 300th iteration with step size. (f) Change in SSE with the number of iterations at the step size corresponding to the minimum APE.


Figure 4-10. The standard deviations (SD) of the Demons, I-Demons, and B-Demons algorithms with the change in step size using the physical myocardium phantom. (a) SD of APE. (b) SD of SSE. (c) SD of ICE.


Figure 4-11. The evaluation of the FDR, I-FDR, FDR-Sigma, and I-FDR-Sigma algorithms using the clinical 4DCT lung images. (a) Change in SSE at the 200th iteration with step size. (b) Change in SSE with the number of iterations at the step size corresponding to the minimum SSE. (c) Change in ICE at the 200th iteration with step size. (d) Change in ICE with the number of iterations at the step size corresponding to the minimum SSE. (e) SD of SSE with the change in step size. (f) SD of ICE with the change in step size.


Figure 4-12. The evaluation of the Demons, I-Demons, and B-Demons algorithms using the clinical 4DCT lung images. (a) Change in SSE at the 200th iteration with step size. (b) Change in SSE with the number of iterations at the step size corresponding to the minimum SSE. (c) Change in ICE at the 200th iteration with step size. (d) Change in ICE with the number of iterations at the step size corresponding to the minimum SSE. (e) SD of SSE with the change in step size. (f) SD of ICE with the change in step size.


Figure 4-13. The evaluation of the Demons-group and FDR-group algorithms using the clinical 4DCT lung images for 200 iterations. The 4DCT images contain 10 phases (phase 0, 10, ..., 90) corresponding to the patient's breathing cycle. Deformable registration was carried out between phase 0 and each of the remaining phases (phase 10 through phase 90). (a) Change in SSE of the Demons-group DIR algorithms across the 4DCT phases at the 200th iteration. (b) Change in ICE of the Demons-group DIR algorithms across the 4DCT phases at the 200th iteration. (c) Change in SSE of the FDR-group DIR algorithms across the 4DCT phases at the 200th iteration. (d) Change in ICE of the FDR-group DIR algorithms across the 4DCT phases at the 200th iteration.


Figure 4-14. Visual comparison of DIRs using the 4DCT dataset of a lung cancer patient. (a) Source image corresponding to the end-of-exhale phase. (b) Target image corresponding to the end-of-inhale phase. (c) Deformed image using Demons; the red arrow (point A) shows the artifact. (d) Deformed image using I-Demons; the artifact is not visible for I-Demons. (e) The profile of the displacement field in the axial direction for the Demons and I-Demons algorithms. (f) Change in SSE with the number of iterations.


Figure 4-15. Distribution of the inverse consistency error (ICE) in gray scale. Higher intensity regions indicate larger ICE. (a) ICE for the Demons algorithm. (b) ICE for the I-Demons algorithm. The ICE was greatly reduced by I-Demons compared with Demons.


Figure 4-16. Auto re-contouring of the gross tumor volume (GTV). The registration parameters were the following: step size = 0.5, Gaussian kernel size = 3 x 3 x 3 with a standard deviation of 0.65, and number of iterations = 100. (a) The yellow contour represents the planning GTV; the green, aqua, and red contours represent the auto-generated GTV using Demons, I-Demons, and B-Demons, respectively. (b) A close view of the red rectangular region in (a). (c) Histogram of the distance between the contour points generated by Demons and B-Demons. (d) Histogram of the distance between the contour points generated by Demons and I-Demons. The pixel dimension is 1 mm by 1 mm in the X and Y directions.


Figure 4-17. Comparison of the FDR, I-FDR, and CICR algorithms using the physical myocardium phantom. (a) Change in APE with the number of iterations. (b) Change in ICE with the number of iterations. (c) Change in SSE with the number of iterations. (d) Computation time per iteration (TPI) for the three algorithms using an Intel 2.4 GHz PC with 4 GB memory running the Windows XP operating system. (e) Time per million pixels per iteration (TPMI) for the three algorithms.


Figure 4-18. Comparison of the FDR, I-FDR, and CICR algorithms using the 4DCT images from a lung cancer patient. (a) Change in SSE with the number of iterations. (b) Change in ICE with the number of iterations. (c) Computation time per iteration (TPI) for the three algorithms using an Intel 2.4 GHz PC with 4 GB memory running the Windows XP operating system. (d) Time per million pixels per iteration (TPMI) for the three algorithms.


Figure 4-19. Comparison of two DIR algorithms using SSE and visual estimation. (a) Source image. (b) Target image. (c) Deformed source image using method A. (d) Deformed source image using method B. (e) Change in SSE with the number of iterations for method A and method B.


Figure 4-20. Quiver plots of the deformation fields computed in Figure 4-19 after 600 iterations. (a) Quiver plot of the deformation field using method A. (b) Quiver plot of the deformation field using method B. The deformation field of method B is smoother than that of method A.


Figure 4-21. The evaluation of the FDR, I-FDR, FDR-Sigma, and I-FDR-Sigma algorithms using the high noise myocardium phantom. (a) Change in APE at the 300th iteration with step size. (b) Change in APE with the number of iterations at the step size corresponding to the minimum APE. (c) Change in ICE at the 300th iteration with step size. (d) Change in ICE with the number of iterations at the step size corresponding to the minimum APE. (e) Change in SSE at the 300th iteration with step size. (f) Change in SSE with the number of iterations at the step size corresponding to the minimum APE.


Table 4-1. Change in step size of the FDR-group and Demons-group algorithms for the robustness measures.

                 Simulated phantom   Physical phantom   Clinical image
FDR-group
  From                 0.1                 0.1               0.1
  To                   2.0                 2.0               1.0
  Increment            0.1                 0.1               0.1
Demons-group
  From                 0.1                 0.1               0.1
  To                   1.0                 1.0               0.9
  Increment            0.1                 0.1               0.2

Table 4-2. The mean and standard deviation (SD) of APE, ICE, and SSE of the seven DIR algorithms using the physical myocardium phantom; the pixel dimension is 1 mm x 1 mm x 1 mm in the X, Y, and Z directions.

                   APE (mm)           ICE (mm)             SSE
                 Mean     SD        Mean      SD        Mean      SD
Demons           0.58     0.12      1.35      0.69      188.73    45.69
I-Demons         0.52     0.078     0.033     0.019     195.18    47.80
B-Demons         0.60     0.098     0.029     0.025     181.82    42.83
FDR              0.67     0.17      0.0085    0.0020    342.18    160.91
I-FDR            0.64     0.20      0.00027   0.00010   369.20    168.87
FDR-Sigma        0.65     0.10      0.027     0.0093    157.22    44.88
I-FDR-Sigma      0.44     0.041     0.0042    0.0044    220.26    46.76


CHAPTER 5
CLINICAL APPLICATION OF DEFORMABLE IMAGE REGISTRATION

5.1 Auto Re-contouring

Auto re-contouring is an important clinical application of DIR. It makes it possible to automatically propagate the physician-drawn contours in the planning image dataset to a new image dataset. In 4DCT lung imaging, after auto re-contouring, the planning contours in the planning CT can be automatically transferred to the phase CTs corresponding to the patient's breathing cycle. The automatically generated contours can assist in monitoring target changes during the course of radiation therapy with minimal human interference.

There are generally two frameworks for re-contouring 28: 1) the snake-based framework and 2) the registration-based framework. The registration-based framework was discussed in Chapter 1.2: the DIR is first applied to compute the deformation field between two image volumes, and the planning contours are then propagated to the new images by the computed deformation field. The snake-based framework is in fact an image segmentation technique 143, where an iterative algorithm seeks to separate the target by evolving an initial contour using both the image gradient information and the contour length. Compared to the registration-based framework, the accuracy of the snake-based framework is not affected by the location of the original contours; however, it has difficulty re-contouring targets whose boundaries are not well defined visually, e.g., the CTV or PTV. In contrast, the registration-based framework is able to propagate all the planning contours to the new image dataset, although the accuracy of the propagated contours depends not only on the accuracy of the DIR algorithm itself but also on the fidelity of the planning contours. Because of its flexibility, a registration-based re-contouring was implemented. In the following, we discuss the details of the registration-based auto re-contouring technique.


Lu et al. 28 proposed an automatic re-contouring algorithm combining DIR techniques and triangulated surface construction techniques. While our re-contouring approach is similar to Lu's work, it differs in a number of respects. First, we used the I-FDR-Sigma deformable registration algorithm, which is more accurate and converges faster. Second, the open source Visualization Toolkit (VTK) 144 was used to assist with surface reconstruction and graphics tasks, such as volume cross-section generation. VTK is a software framework for 3D computer graphics, image processing, and visualization; it is under active development and is used by thousands of researchers and developers around the world.

The flowchart of the auto re-contouring is shown in Figure 5-1. The auto re-contouring consists of four components: 1) surface reconstruction, 2) deformation field computation, 3) surface propagation, and 4) post processing. Surface reconstruction creates a 3D triangular planning surface mesh that connects all the existing 2D planar contours outlined by the radiation oncologists in the planning dataset. After generating the planning surface mesh, a deformed surface mesh is created by deforming the planning surface mesh using the computed deformation field. As shown in Figure 5-2, the white cubes represent the nodes (A, B, and C) of a triangle from the planning surface mesh. After mesh deformation, the original nodes (A, B, and C) are moved by the deformation field to the new locations represented by the yellow cubes (A′, B′, and C′). The mesh deformation does not change the relative connectivity of neighboring mesh nodes, but the distance between connected nodes can vary. The post processing of the deformed mesh, such as the generation of 2D planar contours, can be used to assist contour visualization for the clinicians. To generate the 2D planar contours, a series of parallel planes is used to cut the deformed surface mesh, and the auto contours are computed from the intersections between the surface mesh and the parallel planes.
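The surface propagation step amounts to sampling the displacement field at each mesh node and moving the node, leaving the triangle connectivity untouched. A minimal sketch, assuming the vertices are stored in voxel-index (z, y, x) order matching the displacement field's grid:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def deform_mesh_vertices(vertices, u):
    """Move surface-mesh nodes by a 3D displacement field.
    vertices: (N, 3) array of node coordinates in (z, y, x) voxel indices;
    u: displacement field of shape (3, Z, Y, X) in the same units.
    Only the node positions change; the mesh connectivity is untouched."""
    idx = [vertices[:, k] for k in range(3)]             # sample locations
    disp = np.stack([map_coordinates(u[k], idx, order=1, mode='nearest')
                     for k in range(3)], axis=1)         # trilinear sampling
    return vertices + disp
```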


In the following, we explain the necessity of applying surface reconstruction for auto re-contouring. As shown in Figure 5-3, because the deformation exists in 3D space, the points of a planar planning contour will usually no longer lie on the same plane after deformation, forming scattered 3D points. There are two approaches to produce the 2D deformed contours from the scattered 3D contour points: 1) the 3D interpolation based approach and 2) the surface reconstruction based approach. Let us first consider the intensity range of the contours. Generally, the intensities of the contours are binary: the intensity of the points inside the contour is one and that of the points outside the contour is zero. The disadvantage of the interpolation based approach is that it introduces pixels with intensities between 0 and 1, e.g., 0.5; it is therefore difficult to determine the location of those pixels, i.e., whether they are inside or outside the contours. Figure 5-4 illustrates the problem in the 2D case. In Figure 5-4, plane 1 and plane 3 contain two contours, where pixels with an intensity of one are points inside or on the contours and pixels with an intensity of zero are points outside the contours. The goal is to compute the contour on plane 2, located midway between plane 1 and plane 3. The interpolation based approach is illustrated in Figure 5-4(a), where a linear interpolation scheme is applied to compute the pixels in plane 2:

\[ P_{2} = \frac{P_{1} + P_{3}}{2} \tag{5-1} \]

where P_2 is the pixel intensity in plane 2, and P_1 and P_3 are the pixels with the shortest distance to P_2 in plane 1 and plane 3, respectively. After interpolation, the pixels in plane 2 take two intensity values, 0.5 or 1. Intuitively, the pixels in blue between the dashed lines are considered the points within the contour. In the interpolation based approach, two pixels with an intensity of 0.5 are inside the contour, but four pixels with the same intensity of 0.5 are outside the contour; it is therefore almost impossible to distinguish the pixels by the intensity value alone.
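The ambiguity is easy to reproduce numerically with the two contour rows of Figure 5-4:

```python
import numpy as np

plane1 = np.array([0, 0, 0, 1, 0, 0, 0], dtype=float)   # narrow contour
plane3 = np.array([1, 1, 1, 1, 1, 1, 1], dtype=float)   # wide contour
plane2 = (plane1 + plane3) / 2                           # Eq. 5-1
print(plane2)   # [0.5 0.5 0.5 1.  0.5 0.5 0.5]
# Six pixels share the value 0.5, but only two of them belong inside the
# intermediate contour -- the intensity alone cannot tell them apart.
```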


In the surface reconstruction based approach, shown in Figure 5-4(b), the dashed lines represent the reconstructed surface, which can easily be used to determine whether a point is inside or outside the contour: the points within the surface are the points inside the contours (in blue), and the points outside the surface are the points outside the contours (in red). Moreover, no pixel has a value between 0 and 1 in this case. The surface reconstruction based approach is always preferred for auto re-contouring, especially in cases where the shapes of the top and bottom contours are dramatically different.

5.1.1 Surface Reconstruction

As illustrated in Figure 5-5, the physician-drawn planning contours (Figure 5-5(a)) are exported from the TPS, and surface reconstruction is used to connect the contour points with triangles (a process also called triangulation), where each edge of a triangle is shared by two neighboring triangles. Surface reconstruction from planar contours is a well developed technique in computer vision and computer graphics. Christiansen et al. 145 proposed a limited triangulation algorithm based on normalized contour points and the shortest diagonal principle; however, the mesh quality can be poor for non-convex contours. Here, a robust surface reconstruction algorithm (vtkVoxelContoursToSurfaceFilter) available in VTK 144 was used to reconstruct the surface mesh. Figure 5-5(b) shows the surface mesh generated from the planning contours in Figure 5-5(a); a close view of the mesh is shown in Figure 5-5(c).

5.1.2 Post Processing

Post processing generates layers of 2D deformed contours from the 3D deformed surface mesh. The 2D contours make it easy for the clinicians to compare the contours before and after deformation in the TPS. Two classes in VTK, vtkCutter and vtkPlane, were used to create the 2D contours.
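Putting the two VTK stages together, a minimal pipeline in Python looks like the sketch below. It assumes a vtkPolyData named `contour_polydata` holding the stacked planar contours and a plane position `slice_z`; the calls use the modern VTK Python API (SetInputData/Update), and error handling and the mesh deformation step are omitted.

```python
import vtk

# 1) reconstruct a closed surface mesh from the stacked planar contours
surface_filter = vtk.vtkVoxelContoursToSurfaceFilter()
surface_filter.SetInputData(contour_polydata)   # contour_polydata: vtkPolyData
surface_filter.Update()
mesh = surface_filter.GetOutput()               # triangulated planning surface

# ... deform the mesh points here with the DIR displacement field ...

# 2) cut the (deformed) surface with an axial plane to recover a 2D contour
plane = vtk.vtkPlane()
plane.SetOrigin(0.0, 0.0, slice_z)              # slice_z: axial position of the cut
plane.SetNormal(0.0, 0.0, 1.0)

cutter = vtk.vtkCutter()
cutter.SetCutFunction(plane)
cutter.SetInputData(mesh)
cutter.Update()
contour_2d = cutter.GetOutput()                 # polyline(s) lying on the plane
```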


The in-house software developed for auto re-contouring was based on VTK 144, Matlab (The MathWorks Inc., Natick, MA), and the Python programming language 146. The code snippets for surface reconstruction, deformation, and post processing are available in Appendix A. An example of auto re-contouring is shown in Figure 5-6, where the planning contours of the GTV (the green contour) and the right lung (the brown contour) in the planning CT were mapped onto the phase CT. The auto re-contoured right lung (the red contour) and GTV (the blue contour) matched the anatomical boundaries. Auto re-contouring is able to compensate for changes in the location and shape of the target.

5.2 Auto Internal Target Volume Generation

Another clinical application of DIR in 4D treatment planning is automatic internal target volume (ITV) generation. In our institution, the ITV is defined as the union of the GTVs from the different phases of the 4DCT images. Traditionally, ITV contouring is a time consuming process because the physicians have to manually contour the GTV in each phase CT. Auto ITV generation only requires the physician to contour one CT dataset, without the cumbersome work of contouring each phase CT. It consists of three steps: first, the physician contours the GTV in one phase CT; second, the contoured GTV is automatically propagated to the other phase CTs by auto re-contouring and DIR; third, the ITV is computed as the union of the GTVs of all phase CTs. Furthermore, a probability density function (pdf) can be computed by assigning a probability to each pixel in the GTV region. In Figure 5-7, an ITV is automatically computed from a 4DCT image dataset consisting of ten phase CTs. The color map represents the probability that a GTV voxel occupies a specific pixel location over all phase CTs in the respiratory cycle. The ITV is generated by the union of the GTVs of all phase CTs.


Auto ITV generation provides an effective treatment planning tool for 4DCT without increasing the overall physician workload associated with manually segmenting structures on each phase CT within the 4DCT.
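In mask form the two outputs reduce to a few NumPy operations. The sketch below takes one binary GTV mask per phase and returns both the ITV (the union) and the per-voxel occupancy probability shown in Figure 5-7; the helper name and the mask representation are assumptions of this illustration.

```python
import numpy as np

def itv_and_pdf(gtv_masks):
    """gtv_masks: list of equally shaped boolean arrays, one per phase CT.
    Returns the ITV (union of all phase GTVs) and the probability map,
    i.e. the fraction of breathing phases in which each voxel belongs
    to the GTV."""
    stack = np.stack([m.astype(float) for m in gtv_masks])
    pdf = stack.mean(axis=0)        # occupancy probability per voxel
    itv = pdf > 0                   # union of the phase GTVs
    return itv, pdf
```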


Figure 5-1. Flowchart of the auto re-contouring (Surface Reconstruction → Deformation Field → Surface Propagation → Post Processing). The 3D surface mesh is reconstructed from the 2D layers of the manually contoured structures. After applying the deformation field computed from the DIR, the 3D surface mesh is deformed to represent the anatomical change. The post processing, such as 2D planar contour generation, assists the visualization of the deformed structures.


Figure 5-2. Mesh deformation. The white cubes represent the nodes (A, B, and C) of a triangle from the planning surface mesh. After mesh deformation, the original nodes (A, B, and C) are moved by the deformation field to the new locations represented by the yellow cubes (A′, B′, and C′). The mesh deformation only changes the distances between the nodes, without affecting their connectivity.


Figure 5-3. Surface reconstruction and surface mesh deformation. (a) The planning GTV contours. (b) The surface mesh produced by the surface reconstruction algorithm. (c) The surface mesh after applying the deformation field.


Figure 5-4. A 1D example of interpolation based re-contouring and surface reconstruction based re-contouring. The contour in the middle plane (plane 2) must be generated from the known contours in plane 1 (0 0 0 1 0 0 0) and plane 3 (1 1 1 1 1 1 1). (a) Interpolation based re-contouring (plane 2: 0.5 0.5 0.5 1 0.5 0.5 0.5) has difficulty determining whether a pixel in plane 2 lies inside the contour, because points inside and outside the contour share the same pixel intensity. (b) Surface reconstruction based re-contouring can generate the contour in plane 2 (0 0 1 1 1 0 0) by determining whether each pixel lies inside the reconstructed surface.


Figure 5-5. Surface reconstruction from 2D planar contours. (a) Visualization of the layers of planning contours drawn by the physicians. (b) Reconstructed surface connecting the contour points using a triangular mesh. (c) Close view of the triangular mesh.

Figure 5-6. Auto re-contouring for lung images using DIR. Physician-drawn contours (drawn on a separate source CT) and computer-generated contours, using auto re-contouring, have been overlaid on a target CT. This zoomed-in view has a dimension of 358 x 355 pixels with a pixel dimension of 0.4 mm by 0.4 mm. LUNG-RT and GTV-fb (brown and green contours), outlining the right lung and the gross tumor respectively, are physician-drawn contours on the source CT. Auto-LUNG-RT and auto-GTV-fb (red and blue contours) are the corresponding computer-generated contours on the target CT based on DIR.


Figure 5-7. ITV generation. (a) The color map represents the probability density function, which indicates the probability that a GTV voxel occupies a specific pixel location over all phases in the respiratory cycle. The ITV is the union of all nonzero-probability voxels. The pixel dimension is 1 mm x 1 mm x 3 mm. (b) The histogram of the probability density function.


CHAPTER 6
NEAR REAL-TIME DEFORMABLE IMAGE REGISTRATION USING GRAPHICS PROCESSING UNITS

As discussed in the previous chapters, DIR plays an important role in estimating target motion and monitoring target evolution. However, existing DIR algorithms are computationally intensive. Typical reported computation times for CT images are: 5 minutes for image volumes of 128 x 128 x 128 pixels using demons 30; 3 minutes for image volumes of 256 x 256 x 61 pixels using free-form deformation 31 on a Pentium III 933 MHz personal computer (PC); and 6 minutes for image volumes of 256 x 256 x 61 pixels using accelerated demons 22 on a Pentium 2.8 GHz PC. Therefore, for a typical image size of 256 x 256 x 100 pixels, one can expect the registration time to be at least 5 minutes using the standard algorithms. One should expect even more computation time when considering the additional analysis needed for clinical applications in ART, such as auto re-contouring 147, 148, dose recalculation 149, 150, and treatment re-planning 150, 151. For the clinical usage of DIR, a desirable computation time is within two to three minutes. To meet the clinical demands, research has been carried out using high performance computing (HPC) techniques to accelerate the computation of DIR algorithms. In the following, we first review the current high performance computing techniques in image registration on different hardware architectures; then a proof of concept study is presented to demonstrate the feasibility of near real-time demons registration. The Demons algorithm, already discussed in Chapter 4, though not necessarily the fastest or most robust in all clinical cases, is widely available and serves as a standard for comparison because of its simplicity and efficiency. The HPC implementation of some essential image processing components, such as the image gradient and convolution, can assist future research in accelerating DIR using HPC.


6.1 Review of High Performance Computing in Medical Image Registration

To improve the performance of image registration, HPC techniques have been studied on different hardware architectures. The following HPC architectures are reviewed: 1) the field-programmable gate array (FPGA); 2) the Cell Broadband Engine (CBE) processor; 3) supercomputers or clusters; and 4) the graphics processing unit (GPU).

6.1.1 Review of Image Registration Using FPGAs

An FPGA is a programmable silicon chip utilizing prebuilt logic blocks and programmable routing resources. FPGAs can be configured to implement customized hardware functionality without soldering irons. To implement an algorithm on an FPGA, the algorithm is first developed in the design software, then compiled into a configuration file and downloaded to the FPGA. The availability of high level design tools, such as graphical block diagrams or interfaces to the C language, makes FPGA programming convenient.

FPGA techniques have been used in image registration to accelerate the computation of mutual information. Based on an Altera Stratix II EPSS180F1508C4 FPGA (Altera Corporation, San Jose, CA), Dandekar et al. 152-154 demonstrated an FPGA architecture for multi-modality registration of abdominal intra-procedural non-contrast CT with pre-procedural contrast-enhanced CT and PET images. The registration was based on hierarchical volume-subdivision DIR, where the image volume is subdivided into small sub-volumes, rigid registration is applied to each sub-volume, and the final deformation is computed by combining the locally rigid registrations of the sub-volumes with quaternion-based interpolation. In their study, the FPGA played the role of accelerating the mutual information computation in the registration. Compared with a C++ implementation on an Intel Xeon 3.6 GHz PC, the FPGA provided a speedup of 41 for an image size of 256 x 256 x 256 pixels and a speedup of 21 for an image size of 16 x 16 x 16 pixels.

PAGE 119

Castro-Pareja et al. [155,156] investigated the FPGA application of accelerating the mutual information calculation, proposing the fast automatic image registration (FAIR) architecture to improve the mutual information calculation by accelerating the histogram computation. The FAIR system was based on an Altera Stratix EP1S40 FPGA and achieved a speedup of 86 compared with the sequential code on a Pentium III 1 GHz PC. Jiang et al. [157] proposed an FPGA-based design supporting free-form deformation using a third-order B-spline model on 2D images. The approach included pre-calculating the B-spline basis functions, transforming a nested loop to avoid conditional calculation, and developing fully pipelined circuits and multiple pipeline designs. The FPGA implementation, based on a Xilinx XC2V6000 FPGA at 67 MHz, ran 3.2 times faster than an Intel Xeon 2.6 GHz PC. It was pointed out that the FPGA implementation could not execute the whole registration task independently; it had to communicate with the host machine and served as an additional computation unit. To the best of our knowledge, no complete FPGA system including all the necessary registration components has been reported.

6.1.2 Review of Image Registration Using the Cell Broadband Engine (CBE)

The CBE was developed through the collaboration of Sony, IBM, and Toshiba. The Cell objective was to achieve 100 times the performance of the PlayStation 2 and lead the way for the future [158]. The first generation of the CBE combined a 64-bit power processor element (PPE) with eight synergistic processor elements (SPEs). Each SPE had a SIMD (single-instruction, multiple-data) unit, which could perform a floating point or an integer operation on four data elements at each clock cycle. The PPE, on the other hand, was a more general processor and was capable of running a conventional operating system. It could also control the SPEs and could start, stop, interrupt, and schedule processes running on the SPEs. To achieve high performance for computationally intensive tasks and reduce memory latency, the EIB (Element Interconnect Bus) was introduced to provide high-bandwidth access to the main memory and external data storage. Readers can refer to [158] for the details of the CBE.
After the debut of the first generation CBE in 2005, studies were carried out utilizing the CBE in medical image registration. Ohara et al. [159] presented a scheme for implementing mutual-information-based multi-resolution 3D image registration on an IBM BladeCenter QS20, which contained two CBE processors at 3.2 GHz with 1 GB memory. To accelerate the image registration, parallelism was exploited in the following ways: 1) Task parallelization and partitioning. The program was partitioned into multiple tasks to fit into the local memory of each SPE; the system could be viewed as a distributed-memory multiprocessor with a small local memory attached to a large shared memory. 2) SIMD pipeline optimization. The SPE was highly pipelined and optimized for SIMD instructions; it was essential not only to perform multiple operations with SIMD instructions but also to optimize the code to exploit the architecture. 3) Optimizing memory bandwidth. Two data partition techniques, localized sampling and speculative packing, were applied to optimize the memory traffic and avoid inefficient pixel-wise memory access. Compared with the sequential version on an Intel Xeon 3.0 GHz processor with 4 GB memory, the CBE ran about 11 times faster and was able to reduce the memory traffic by 82%. As a result, it could register a pair of 256x256x30 pixel images in one second using a multi-resolution method. However, it was unclear in the paper whether the 3D image registration was rigid or deformable. Some registration details, such as the number of resolution levels, the number of bins in the mutual information calculation, and the number of iterations at each resolution, were not specified.

6.1.3 Review of Image Registration Using Supercomputers or Clusters

Parallel computers and supercomputers [160] are the traditional hardware platforms for parallel computing. Common parallel computers are either SMP (symmetric multiprocessor) servers, which have many symmetrical processors on a single motherboard, or computer clusters, which consist of many machines of a similar type tightly coupled by a dedicated network connection. Supercomputers, on the other hand, have several different architectures: 1) the MPP (Massively Parallel Processor), which consists of a large number of processors connected by a high performance network; and 2) the PVP (Parallel Vector Processor), which consists of many special-purpose vector processors and is more expensive than the MPP.
Memory architecture describes how each processor accesses physical memory. It can be categorized into two groups: shared memory (Figure 6-1(a)) and distributed memory (Figure 6-1(b)). In the shared memory architecture, each processor accesses the same global memory resource; any memory modification by one processor is visible to all other processors. The main advantages of the shared memory architecture are 1) global memory access offers an easy programming perspective, and 2) data sharing is fast due to the fast communication between the global memory and its adjunct processors. However, the shared memory architecture suffers from the lack of scalability between memory and processors, and from the potentially high cost of designing shared memory machines with increasing numbers of processors. In the distributed memory architecture, memory is associated with individual processors and each processor can only address its own memory. The main advantages of the distributed memory architecture are 1) no bus or switch interferes with data access, so each processor can utilize the full bandwidth to its own local memory, and 2) there is no inherent limit on the number of processors or the size of the system. The main drawbacks of distributed memory systems are 1) the difficulty of inter-processor communication and 2) the difficulty of mapping existing global-memory-based data structures into distributed-memory data structures. A toy illustration of the two models is sketched below.
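To make the contrast concrete, the following toy sketch (ours, for illustration only; it is not taken from any of the reviewed systems, and all names are hypothetical) mimics the two models with Python's multiprocessing module. In the shared-memory style, workers update a single memory location that every process can see; in the distributed style, a result becomes visible only when it is explicitly sent as a message.

# A toy contrast of the two memory models (an illustrative sketch only).
from multiprocessing import Process, Pipe, Value

def shared_worker(counter, n):
    # Shared memory style: every worker writes to the same memory location;
    # the lock plays the role of the bus/switch arbitration discussed above.
    with counter.get_lock():
        counter.value += n

def distributed_worker(conn, n):
    # Distributed memory style: the worker owns its data and must ship the
    # result to the other process as an explicit message.
    conn.send(n * n)
    conn.close()

if __name__ == '__main__':
    counter = Value('i', 0)
    workers = [Process(target=shared_worker, args=(counter, k)) for k in range(4)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    print(counter.value)       # 0 + 1 + 2 + 3 = 6, visible to all processes

    parent, child = Pipe()
    w = Process(target=distributed_worker, args=(child, 5))
    w.start()
    print(parent.recv())       # 25 arrives only via message passing
    w.join()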
Several studies have been carried out using parallel computing techniques to speed up image registration. Christensen et al. [142] implemented elastic and fluid deformation image registration algorithms on a MasPar 128x128 massively parallel SIMD computer. For registering 128x128x100 pixel volumes with 100 iterations, the parallelized fluid registration took 1.8 hours, about a 178 times speedup compared with an SGI MIPS 150 MHz R4400 computer. Rigid and deformable registration on shared memory multiprocessor systems has also been studied [161,162]. Rohlfing and Maurer [162] presented a parallel implementation of a B-spline-based free-form deformable registration algorithm on a 128-CPU shared memory system (SGI Origin 3800). Using multithreaded programming that partitioned data and tasks, the parallel implementation achieved a speedup factor of more than 50 and was able to complete intra-operative brain deformation in less than a minute using 64 CPUs. Wachowiak [161] proposed a parallel optimization algorithm based on the global optimization method DIRECT (Dividing Rectangles) and the local optimization method MDS (Multidirectional Search). The parallel registration was performed on an SGI Altix 3000 shared memory system with 20 Intel Itanium 2 CPUs at 1.3 GHz and programmed with OpenMP [163], an application programming interface (API) for shared memory multiprocessing programming. Computer clusters have also been reported to improve the performance of medical image registration [164-166]. Warfield [164] implemented a parallel feature-based rigid image registration on a cluster consisting of two Enterprise Server 5000S machines, each with eight 167 MHz UltraSPARC-I CPUs. Using POSIX and MPI (Message Passing Interface) parallel programming tools, the registration execution time was within 5-10 minutes. For rigid multi-modality image registration, Ourselin [165] proposed a scheme to parallelize a block-matching registration algorithm using a cluster of 10 dual-processor Pentium III 933 MHz PCs.
MPI was utilized to coordinate the different nodes of the cluster, and the parallelization within each node was programmed with OpenMP. For small and medium resolution images, the parallelized registration completed in 19 seconds and 45 seconds, respectively, on five bi-processors; it took 70 seconds on ten bi-processors. Ion [166] presented a performance study implementing a B-spline-based deformable registration on three different clusters. The implementation involved data-parallel processing and precomputation techniques, and it was demonstrated that the registration time was reduced from ten hours to ten minutes.

6.1.4 Review of Image Registration Using the Graphics Processing Unit (GPU)

A graphics processing unit (GPU) is a highly parallel computation unit with tremendous memory bandwidth and computational horsepower, but it lacks the programming flexibility of the CPU. GPU computing is now emerging as a viable and high performance alternative for general non-graphics computing tasks, such as DIR. The GPU's substantial arithmetic and memory bandwidth capabilities, coupled with its recent addition of user programmability, have allowed for general-purpose computation on graphics hardware (GPGPU). Many non-graphics, computationally expensive algorithms have been implemented on the GPU. Developers prefer GPUs over other alternative parallel processors for several reasons, including their low cost and wide availability. Owens et al. [167] presented a comprehensive survey of the latest research in GPU computing. Traditional GPU programming languages, including OpenGL [168], DirectX [169], Cg [170], and HLSL [171], have been used primarily for graphics applications. New GPU programming languages, such as CUDA [172], CTM [173], and Brook [174], have emerged to allow efficient programming for general purpose computing. GFLOPS (i.e., 10^9 floating point operations per second) is an accepted approximate overall representation of the computational capability of a processor.
Current high end processors include the Intel Dual Core Xeon 3.0 GHz CPU (4 MB L2 cache, 1333 MHz FSB), rated up to 38.3 GFLOPS [175], and the NVIDIA GeForce 8800 GTX GPU, rated up to 520 GFLOPS [176]. The challenge for fast DIR using the GPU is to implement algorithms in a form that can take advantage of the highly parallel nature of GPU computing. Since medical imaging applications intrinsically have data level parallelism with high computational requirements, they are very suitable for GPU implementation. Researchers have conducted several prior GPGPU studies in the field of image registration; however, only a limited number of previous studies concentrate on DIR. The current literature has not paid sufficient attention to GPGPU use in DIR. Instead, GPUs have mainly been used to speed up the generation of the digitally reconstructed radiograph (DRR) in 2D-3D image registration [177-179]. On the one hand, image registration is computationally expensive; on the other hand, it is highly parallel. Hence, GPUs with their high-performance parallel processing power provide great opportunities for speeding up image registration. The large 3D data sets used in DIR algorithms put an even heavier burden on computational resources. However, as indicated by Vetter et al. [180], there has not been much research into the use of the GPU in DIR. Likewise, Sharp et al. [181] do not consider prior work on GPU-based deformable registration to be well established. One of the early studies of deformable registration utilizing the GPU can be traced back to Soza et al. [182]. They used 3D Bezier functions for non-rigid alignment of respective volumes and utilized the tri-linear interpolation capabilities of the GPU to accelerate the deformation of the moving volume. In order to better understand the brain shift phenomenon, Hastreiter et al. [183] performed DIR based on piecewise linear transformations proposed in an earlier study [184] to approximate nonlinear deformations, and accelerated tri-linear interpolation by using the 3D texture mapping capabilities of graphics hardware.
Levin et al. [185] exploited graphics hardware to implement a high-performance thin-plate spline (TPS) volume warping algorithm that could be used in iterative image registration, and accelerated the application of the TPS nonlinear transformation by combining hardware-accelerated 3D textures, vertex shaders, and tri-linear interpolation. Strzodka et al. [186] implemented a non-rigid regularized gradient flow registration on the GPU to match 2D images. In contrast to earlier studies that used graphics hardware only for tri-linear interpolation (e.g., Soza et al.'s study [182]), Kohn et al. [187] implemented the entire 3D regularized gradient flow on a GPU. However, due to memory bottlenecks, they reported that their 3D non-rigid registration was not as fast as one would expect. Vetter et al. [180] implemented non-rigid registration on a GPU using mutual information and the Kullback-Leibler divergence between observed and learned joint intensity distributions. They proposed this method specifically for matching multi-modal data sets and, like Kohn et al. [187], they implemented the entire registration process on the GPU. To the best of our knowledge, only three studies have mapped the Demons registration algorithm to the GPU: the implementations of Kim et al. [188], Courty and Hellier [189], and Sharp et al. [181]. All three studies were conducted concurrently with and independently of our study, and they differ mainly in the method used to smooth the displacement field. These implementations also used different GPUs and programming environments. Kim et al. [188] implemented the Demons algorithm using a simple ramp smoothing filter and an NVIDIA GeForce 6800 GPU with the Cg language. They took the average of the closest six neighbors of each voxel to smooth the displacement field. Courty and Hellier [189] presented a GPU implementation of the Demons algorithm using a Gaussian recursive filter. The advantage of recursive filtering is that the number of operations is bounded and independent of the standard deviation of the Gaussian filter, but the error becomes significant for standard deviations larger than 10.
To implement the recursive filter, the Gaussian filter was approximated with 4th-order cosine-exponential functions. In this implementation, they chose an NVIDIA Quadro FX 1400 GPU, with fragment programs written in a Cg-like syntax. Finally, Sharp et al. [181] implemented the Demons algorithm using a separable Gaussian filter. They used the Brook programming environment and an NVIDIA GeForce 8800 GTS GPU. Using a Gaussian filter to smooth the displacement field provides the most accurate implementation; however, it is also the most expensive part of the Demons algorithm. Hence, to achieve faster runtimes, some researchers simplified or approximated the smoothing process. For instance, Kim et al. used simple ramp smoothing instead of Gaussian smoothing, and Courty and Hellier approximated the Gaussian filter as discussed above. Sharp et al. were the first to use a separable Gaussian filter. In order to provide an accurate implementation, we also used Gaussian smoothing and performed the convolution of the displacement field with a separable Gaussian filter, as Sharp et al. did (the separability argument is sketched at the end of this section). However, we improved on the speed of Sharp et al.'s implementation by using NVIDIA's CUDA environment instead of Brook; we report a 10% faster runtime on the same hardware. CUDA specifically targets newer NVIDIA cards and provides powerful features, such as shared memory access, that Brook does not offer and that we used extensively in our implementation. Hence, CUDA is optimized for these cards and provides better support and performance than Brook. In addition, Brook is a legacy academic project in, at best, maintenance-only mode, whereas CUDA is directly supported by NVIDIA, is under active development, and has a broader set of programming tools and libraries available. In addition to achieving a faster runtime, we obtained constant speedups across datasets with different image sizes. Kim et al. and Courty and Hellier reported a speedup for only one dataset. Sharp et al. reported speedups for two datasets; however, they reported a smaller speedup for the larger dataset. Hence, to the best of our knowledge, our implementation presents the fastest and most scalable GPU-based implementation of the Demons algorithm available to date.
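The separability that makes this smoothing strategy attractive is easy to state: an isotropic 3D Gaussian kernel factors into three 1D kernels, so a volume can be smoothed with three 1D passes costing roughly 3k multiplications per voxel instead of one 3D pass costing k^3, where k is the kernel width. The following numpy/scipy sketch is ours, written only to illustrate the idea; it is not code from any of the cited implementations, and the function names are hypothetical.

import numpy as np
from scipy.ndimage import convolve, convolve1d

def gaussian_kernel_1d(sigma, radius):
    """Normalized 1D Gaussian kernel of width 2*radius + 1."""
    x = np.arange(-radius, radius + 1, dtype=np.float64)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def smooth_separable(vol, sigma=0.5, radius=1):
    """Smooth a volume with three 1D passes (the separable formulation)."""
    k = gaussian_kernel_1d(sigma, radius)
    out = vol.astype(np.float64)
    for axis in range(vol.ndim):
        out = convolve1d(out, k, axis=axis, mode='constant')
    return out

if __name__ == '__main__':
    rng = np.random.RandomState(0)
    vol = rng.rand(32, 32, 16)
    # The full 3D kernel is the outer product of the three 1D kernels ...
    k1 = gaussian_kernel_1d(0.5, 1)
    k3 = k1[:, None, None] * k1[None, :, None] * k1[None, None, :]
    full = convolve(vol, k3, mode='constant')
    # ... so the three 1D passes give the same answer, up to round-off.
    print(np.abs(full - smooth_separable(vol)).max())  # ~machine precision

Because the smoothing is the most expensive part of the Demons algorithm, it is this step that benefits most from the separable formulation on the GPU.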
It has to be noted that current GPUs are limited to single precision floating point (i.e., 32 bits), while CPUs can use double precision floating point (i.e., 64 bits). Single precision is typically deemed sufficient for DIR in medical imaging and for other general computations, including diffusion equations, wave equations, the Poisson problem, and the Navier-Stokes equations [190]. However, there are cases, such as finite element modeling, where double precision is needed [154]. As well, significantly less dedicated processor memory is available for GPU programming than for CPU programming. GPUs currently typically have less than 1 GB of memory, whereas for CPUs up to 4 GB of RAM is available with the 32-bit architecture, and theoretically up to 1.7x10^10 GB of RAM with the 64-bit architecture. In the next section, we present an implementation of the demons algorithm for near real-time DIR based on a commercial GPU in a standard PC architecture and the CUDA programming language. We consider the central issue of GPU vs. CPU computing: the effect of the computational organization, including data structures and the structure of parallelism, on computing performance. A systematic comparison of GPU and CPU computing strategies for DIR, in terms of computational efficiency, time, and the cross-correlation (CC) between the deformed images and the target images, is presented. For this relatively nascent and highly promising technology, we consider a standard and widely disseminated DIR algorithm: Demons. The details of the Demons algorithm are discussed in Chapter 4.2. The Demons algorithm, though not necessarily the fastest or most robust in all clinical cases, is widely available and serves as a standard for comparison due to its simplicity. This initial choice will permit other researchers to standardize performance comparisons and build on our promising results. We evaluate this strategy over a range of clinical lung imaging volumes acquired with 4DCT. In contrast to previous studies, which presented only single-threading CPU implementation runtimes, we provide multithreading CPU implementation runtimes in order to present a fair comparison between CPU and GPU speeds.
128 strategy over a range of clinical lung imag i ng volumes acquired with 4DCT. In contrast to the previous studies, which only present ed single -threading CPU implementation runtimes, we will provide multi threading CPU implementation runtimes, in order to present a fair comparison between CPU and GPU speeds. 6.2 Materials and Methods Lung and prostate imaging are candidates for clinical applications of DIR, requiring fast near real time DIR if one is to use time series volumetric imaging data as part of ART. We evaluated GPU DIR using clinical CT lung imaging acquired with 4DCT from eight patients. CT images were acquired using a Phillips BRILLIANCE 16 slice CT Simulator (Phillips Medical Systems, Cleveland, Ohio). CT data was binned over the 10 phases of a respiratory cycle, the free breathing (FB) CT represented an average over the respiratory cycle sampled from the 10 phases. The FB CT served as the source CT, since at our institution, by convention and historical consistency, all segmentation is initially carried out on the FB CT. The target consist ed of one of the ten phases, typically with the largest deformation. The Demons algorithm was used to map the FB CT data set onto a phased CT. It was implemented on an Intel dual core 2.4GHz CPU using C language, and on a NVIDIA 8800 GTX GPU using CUDA. I n our implementations, the available dedicated RAM for the CPU was 3GB, while the video memory for the GPU was 1536MB (The NVIDIA 8800 GTX with 1536MB used here is equivale nt to the NVIDIA Quadro FX 5600). The following parameters were used for the demons algorithm: size of smoothing kernel is 3 x 3 x 3 pixels standard deviation of smoothing kernel is 0.5, and step size = 1.0 The dual core CPU implementations consisted of single threading (i.e. equivalent to single CPU processing) and multithreading (i.e. equivalent to dual CPU processing). Current GPUs are capable of single precision floating point operation (i.e. 32 bit processing), while CPUs allow for double precision floating point operation (i.e. 64 bit processing). The cross -correlation (CC)
The dual core CPU implementations consisted of single threading (i.e., equivalent to single CPU processing) and multithreading (i.e., equivalent to dual CPU processing). Current GPUs are capable of single precision floating point operation (i.e., 32-bit processing), while CPUs allow for double precision floating point operation (i.e., 64-bit processing). The cross-correlation (CC) metric was used to quantify the resulting difference image for DIR from the GPU and CPU implementations. In this study, we compared the performance of the CPU and GPU implementations using the following criteria: TPI (time per iteration) and TPMI (time per megapixel per iteration). TPI represents the computation time per iteration, while TPMI represents the computation time per million pixels per iteration. TPI is independent of the termination criterion and typically varies linearly with image size. TPMI is a normalized measure (with respect to data size) of TPI and is a better measure of computational efficiency. The variation of TPMI, expressed as a percentage of the mean (i.e., 100% x standard deviation (sd)/mean), provides a measurement of the sensitivity of computational efficiency to data size. Ideally, TPMI should be the same regardless of image size. This is typically the case for CPU computing, with possible exceptions for very large data sets that cannot be stored entirely in RAM but are frequently accessed by the CPU. Invariance of TPMI with respect to data size is desirable and can indicate programming efficiency. The maximum TPMI, the mean TPMI, and the standard deviation of the TPMI provide a test for this invariance over the imaging volumes encountered clinically.
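As a concrete illustration of these metrics, the following worked example (ours, using the values reported later in Table 6-2; it is not part of the benchmarking code) computes TPI and TPMI for one case:

import numpy as np

def tpi_tpmi(total_seconds, iterations, image_shape):
    """Time per iteration (TPI) and time per megapixel per iteration (TPMI)."""
    tpi = total_seconds / float(iterations)
    tpmi = tpi / (np.prod(image_shape) / 1.0e6)   # normalize by 10^6 pixels
    return tpi, tpmi

# Patient 5 of Table 6-2: a 256 x 256 x 64 volume (about 4.19e6 pixels),
# assuming the 3.67 s total GPU time over 100 iterations implied by the table.
tpi, tpmi = tpi_tpmi(3.67, 100, (256, 256, 64))
print(tpi)    # 0.0367 seconds per iteration
print(tpmi)   # ~8.75e-3 seconds per megapixel per iteration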
Since there was no absolute gold standard for the clinical imaging, the CC between the deformed and target images, together with the difference image, was used to evaluate the registration results of CPU DIR versus GPU DIR.

6.3 Results

It was observed that 100 iterations were sufficient to achieve good convergence for DIR in all 8 clinical cases. The termination condition was empirically set such that the fractional difference in the similarity measure (i.e., cross-correlation) between consecutive iterations was less than 10^-3. The source image, target image, deformed image, and difference images are shown in Figure 6-2. No difference between the GPU- and CPU-based deformations was visually observed in the difference images. The effect of single precision floating point computing (GPU) versus double precision floating point computing (CPU) on the computation of CC is quantified in Table 6-1. Even with the different programming architectures, the difference was minimal and not significant for the deformation mapping. For CC, a mean difference of 4 x 10^-6 and a maximum difference of 2 x 10^-5 were observed between GPU and CPU computations over 100 iterations for the 8 clinical cases. This was not statistically significant, as it was smaller than the termination criterion. In fact, different programming implementations of the Demons algorithm (i.e., MATLAB vs. C) also produce small but clinically insignificant differences as measured by the cross-correlation and the difference image.
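The order of magnitude of such precision-induced differences can be illustrated with a small numpy experiment. This is our sketch with synthetic data; it shows only that recomputing the same CC in 32-bit instead of 64-bit arithmetic perturbs the value at roughly this scale, and it does not reproduce the clinical numbers.

import numpy as np

def cross_correlation(a, b):
    """Pearson cross-correlation between two image volumes."""
    a = a.ravel() - a.mean()
    b = b.ravel() - b.mean()
    return float(np.dot(a, b) / np.sqrt(np.dot(a, a) * np.dot(b, b)))

rng = np.random.RandomState(0)
target = rng.rand(128, 128, 30)
deformed = target + 0.05 * rng.rand(128, 128, 30)   # a nearly matched pair

cc64 = cross_correlation(target.astype(np.float64), deformed.astype(np.float64))
cc32 = cross_correlation(target.astype(np.float32), deformed.astype(np.float32))
print(abs(cc64 - cc32))   # typically on the order of 1e-7 to 1e-6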
For the clinical CT images, using DIR based on 100 iterations of the demons algorithm, the GPU time was in the range of 1.8 to 13.5 seconds for data sizes of 2 x 10^6 to 1.4 x 10^7 pixels (representative of imaging volumes in radiotherapy), which we consider to be near real-time. In Table 6-2, the individual CPU and GPU times are reported for each clinical data set. TPI for the GPU ranged from 0.0181 to 0.135 seconds, compared with 1.03 to 7.45 seconds for the single threaded CPU and 0.671 to 4.66 seconds for the dual threaded CPU. Normalizing for image size, the TPMI for the GPU was 0.00916 seconds, while it was 0.527 seconds and 0.335 seconds for the single threaded and dual threaded CPU, respectively. Hence the GPU was 55 to 61 times faster (based on the ratio of TPIs or TPMIs) than the single threading CPU, and 34 to 39 times faster than the multithreading CPU. The multithreading implementation was efficiently optimized to take advantage of the dual core architecture, since the ratio of multithreading to single threading runtimes was approximately 0.63 for both the TPI and TPMI measurements; a value of 0.5 is considered ideal. It should further be noted that the current GPU implementation involved computing the CC value at each iteration; without the CC evaluation, the GPU time could be reduced by approximately another 10%. CC is not required for the demons algorithm, but we implemented it in order to study any CPU and GPU computational differences. The improvement in computational efficiency using the GPU, and the linear dependence of TPI (for both CPU and GPU) on image size, are shown in Figure 6-3. As expected, for both the single threading and multithreading CPU implementations, TPI is a function of image size but TPMI is almost invariant with image size. Similar behavior was observed for the GPU. The standard deviation in TPMI, expressed as a percent of the mean (i.e., 100% x sd/mean), was 1.3% for the single threaded CPU, 1.6% for the dual threaded CPU, and 3.7% for the GPU using CUDA. This indicates very good memory management on the part of CUDA. In Table 6-3, for the purpose of comparison, we list the GPU performance, measured by TPI and TPMI, of the implementation by Sharp et al. [181], who carried out a comparison between GPU-based DIR (using an NVIDIA 8800 GTS GPU) and CPU DIR (using a 2.8 GHz Intel dual core CPU with 1.5 GB RAM). Our implementation had a TPMI of 8.75 x 10^-3 seconds for an image size of 256 x 256 x 64 (= 4.19 x 10^6 pixels), while Sharp had a value of 10.4 x 10^-3 seconds for a similar image size (256 x 128 x 128), a 16% improvement.
For the larger image size of 256 x 256 x 175 (= 11.47 x 10^6 pixels), we obtained a TPMI of 8.72 x 10^-3 seconds, while Sharp's implementation had a TPMI of 12.16 x 10^-3 seconds for an image size of 424 x 180 x 150 (= 11.45 x 10^6 pixels), an improvement of 28%. These results are presented in Figure 6-4. The differences in the performance of the two implementations could be due to the different hardware (NVIDIA 8800 GTX vs. NVIDIA 8800 GTS), the programming language (CUDA vs. Brook), and possibly slightly different implementations of the demons algorithm. We did not carry out an explicit comparison of the CUDA and Brook programming languages on the same GPU hardware. However, an examination of the standard deviation of TPMI suggests that our CUDA implementation may be better at memory utilization than Brook. For our GPU implementation, the TPMI had a standard deviation of 3.7% of the mean, while that of Sharp et al. had a standard deviation of 10.9% of the mean. A lower standard deviation suggests less dependence of computational efficiency (normalized with respect to image size) on image size.

6.4 Discussion and Conclusion

GPU computing offers fast parallel computing within the PC architecture and has been shown to be highly promising for DIR in clinical applications. However, compared to CPU programming, GPU programming is challenged by a more difficult programming environment, single precision floating point, and reduced dedicated processor memory. A GPU implementation of DIR utilizing the demons algorithm and the CUDA language was presented for clinical CT lung imaging. Compared to the traditional CPU implementation, there was negligible difference in registration quality based on the CC value over 100 iterations. The GPU registration times based on 100 iterations, which was more than sufficient to achieve the desired convergence, were 1.8 to 13.5 seconds for the clinical data sets with image sizes from 2 x 10^6 to 1.4 x 10^7 pixels encountered in radiotherapy. The GPU was 55 to 61 times faster than the CPU running the same registration algorithm implemented in C as a single threaded implementation, and 34 to 39 times faster than the multithreaded CPU implementation on the dual core 2.4 GHz CPU. Thus, by using a relatively simple algorithm such as the Demons algorithm, GPU-based image registration yielded near real-time performance. The computational efficiency, normalized for image size, was characterized by TPMI (time per megapixel per iteration) and its standard deviation. The TPMI for the GPU was 0.00916 seconds, significantly less than that for the CPU (single threading 0.527 seconds, multithreading 0.335 seconds). The standard deviation of TPMI, listed here as a percent of the mean (i.e., 100% x sd/mean), was only 3.7%, indicating that the CUDA programming achieved highly parallelized computing for the indicated data sizes.
The GPU computational efficiency presented here is a conservative estimate: a further reduction of approximately 10% in GPU computation time, for both TPI and TPMI, can be achieved by omitting the CC evaluation, which is not necessary for the demons algorithm but was included to study the CPU and GPU computational differences. Although only the Demons algorithm was considered here, GPU computing potentially offers significant performance acceleration for other algorithms. For example, similar improvement can be expected for the accelerated Demons algorithm [22], which applies the active force concept to both the source and target images (conventional demons applies it to only one image volume) and involves no different types of function evaluation for GPU or CPU computing. Future work will consider DIR algorithms with even faster convergence and improved robustness compared with the Demons algorithm considered here. Furthermore, DIR algorithms are expected to take advantage of improved GPU hardware, which historically has seen faster growth in computational power than CPU hardware. The possibility of GPU-based real-time DIR opens up a host of clinical applications for ART and other areas in medical imaging.
Figure 6-1. Memory architectures of traditional parallel computers. (a) The shared memory architecture. (b) The distributed memory architecture.
Figure 6-2. Deformable image registration using lung imaging. The 2D image slices presented here are based on volumetric deformable image registration using the demons method. (a) Target image. (b) Source image. (c) Deformed image produced by demons deformable image registration toward image (a). (d) Difference image between the source image and the target image. (e) Difference image between the target image and the deformed image (i.e., following DIR).
Figure 6-3. Performance comparison between the CPU (single threading and multithreading) and GPU based implementations of the Demons DIR algorithm as a function of image size (TPI, in seconds per iteration, versus image size in pixels). The GPU implementation is approximately 58 times faster than the corresponding single threading CPU implementation, and approximately 37 times faster than the corresponding multithreading CPU implementation. All three implementations exhibit a linear dependence on image size.
Figure 6-4. Computational efficiency (TPMI versus image size in pixels) of GPU programming for DIR, exploring the effect of data size on GPU computational efficiency. Our GPU results (NVIDIA 8800 GTX with the CUDA programming language) are superimposed on those of Sharp et al., who used an NVIDIA 8800 GTS with the Brook programming language.
Table 6-1. Maximum, mean, and standard deviation of the difference in correlation coefficients between the CPU and GPU implementations over 100 iterations.

Patient   Max Difference   Mean Difference   Standard Deviation
1         2.2E-5           3.0E-6            4.0E-6
2         2.1E-5           4.0E-6            4.0E-6
3         3.0E-5           7.0E-6            8.0E-6
4         3.1E-5           7.0E-6            8.0E-6
5         3.0E-5           5.1E-6            5.0E-6
6         2.5E-5           3.7E-6            5.0E-6
7         2.6E-5           4.0E-6            4.2E-6
8         2.3E-5           3.3E-6            4.0E-6

Table 6-2. Performance comparison between the CPU and GPU based implementations of DIR for clinical imaging data. TPI and TPMI are in seconds.

Patient   Image Size (pixels)   Single threading CPU    Multithreading CPU     GPU
                                TPI      TPMI           TPI      TPMI          TPI      TPMI
1         512x512x51            7.174    0.537          4.380    0.328         0.175    1.308E-2
2         512x512x54            7.453    0.526          4.660    0.329         0.186    1.316E-2
3         256x256x119           4.012    0.514          2.574    0.330         0.0713   9.138E-3
4         256x256x30            1.029    0.523          0.671    0.341         0.0181   9.21E-3
5         256x256x64            2.200    0.524          1.420    0.338         0.0367   8.75E-3
6         256x256x90            3.097    0.525          1.989    0.337         0.0540   9.16E-3
7         256x256x150           5.231    0.535          3.337    0.339         0.0898   9.13E-3
8         256x256x175           6.107    0.532          3.869    0.337         0.100    8.72E-3
Table 6-3. Performance comparison between the CPU (2.8 GHz Intel Dual Core with 1.5 GB RAM) and the GPU (NVIDIA 8800 GTS) for Sharp et al.'s GPU implementation of the Demons DIR algorithm on clinical imaging data. TPI and TPMI are in seconds.

Data   Image Size    Single threading CPU    GPU
                     TPI      TPMI           TPI      TPMI
1      256x128x128   3.070    0.732          0.0437   0.0104
2      424x180x150   6.335    0.553          0.139    0.0122
APPENDIX
CODE SAMPLES FOR RE-CONTOURING

The following re-contouring code (propagateSingleTargetContoursByDIR.m) propagates 2D contours from the source image to the target image. The code is written in MATLAB and Python, and was tested on MATLAB 7.2 and VTK 5.2 with the Python 2.4 wrapper. To interact with MATLAB, two additional Python packages, PyMat and SciPy, also need to be installed. The VTK Python interface is used for surface reconstruction and post-processing.

% ////////// propagateSingleTargetContoursByDIR.m ////////////////////
% Deform a single target contour by DIR.
%
% propagateSingleTargetContoursByDIR(deformationFields, dicomStruct, structureName)
%
%   deformationFields - 3D deformation fields
%   dicomStruct       - the contour structure exported in DICOM format
%   structureName     - name of the structure to deform; a cell array,
%                       not case sensitive, e.g. {'heart'}
%
% Junyi Xia
%
% Last Update: 05/03/08
% Revision: 0.1a
% //////////////////////////////////////////////////////////////////////
function vtkFileName = propagateSingleTargetContoursByDIR(deformationFields, dicomStruct, structureName)

vtkFileName = 'D:\test\DeformedSingleContour.vtk';
PYTHON_BIN = 'C:\VTK\Binary\bin\release\vtkpython';
CONVERTEDCONTOURFILENAME = 'D:\test\TempVTK.mat';
SAVECONCONTOURFILENAME = 'D:\test\TempReconstructed.vtk';
DEFORMEDCONTOURFILENAME = 'D:\test\TempVTK_Deformed.mat';

%% Step 1: reformat the contour data for VTK usage
convertContour4Python2(dicomStruct, structureName, CONVERTEDCONTOURFILENAME);

%% Step 2: do the VTK surface reconstruction and save the result as a vtk file
unixcommand = [PYTHON_BIN, ' ', 'D:\Clinician version of DRT\Segmentation\1.1\VTK\pyVTKSurfaceReconstruction.pyc', ' ', CONVERTEDCONTOURFILENAME, ' ', SAVECONCONTOURFILENAME];
status = unix(unixcommand);
%% Step 3: deform the reconstructed surface points
% Read the VTK points
singleStructure = readVTKContourPoints(SAVECONCONTOURFILENAME);
% Deform the points
deformedStructure = mappingContourPoints3D(deformationFields, singleStructure);
% Save as a .mat file for Python
save(DEFORMEDCONTOURFILENAME, 'deformedStructure');
disp('Step 3 ... completed');

%% Step 4: contour regeneration using planar cutting planes
% 1) Generate the new mesh with the deformed contour points
% 2) Cut the new mesh with the parallel planes
unixcommand = [PYTHON_BIN, ' ', 'D:\Clinician version of DRT\Segmentation\1.1\VTK\pyVTKCutDeformedSurface.pyc', ' ', SAVECONCONTOURFILENAME, ' ', DEFORMEDCONTOURFILENAME, ' ', num2str(floor(min(deformedStructure(:,3)))), ' ', num2str(ceil(max(deformedStructure(:,3)))), ' ', vtkFileName];
status = unix(unixcommand);
% ///////////////////// convertContour4Python2.m ////////////////////////
% Convert the contours for the vtk python surface reconstruction.
%
% convertContour4Python2(rtStructures, structureNameList, saveFileName)
%
%   rtStructures      - exported structure contour points
%   structureNameList - all the structures of interest, such as 'heart',
%                       'cord'; a cell array
%   saveFileName      - name of the .mat file to save
%
% Junyi Xia
% $ Revision: 0.1 $
% $ Last update: 03/31/08 $
% /////////////////////////////////////////////////////////////////////////
function convertContour4Python2(rtStructures, structureNameList, saveFileName)

% Extract the points of the specified structures
% (iterate backwards because elements are deleted in the loop)
selectedStructures = rtStructures;
for i = length(selectedStructures):-1:1
    strName = selectedStructures(i).Name;
    if (~isTargetStructure(structureNameList, strName))
        selectedStructures(i) = [];
    end
end

% Filter out small contours
selectedStructures = drtGetFilteredStructures(selectedStructures);

savedStructName = [];
for j = 1:length(selectedStructures)
    contourpoints = [];
    for i = 1:length(selectedStructures(j).Contours)
        temp = selectedStructures(j).Contours(i).Points;
        z = selectedStructures(j).Contours(i).Idx;
        temp = temp';
        temp(:,3) = z;
        contourpoints{i} = temp;
    end
    structName = strrep(selectedStructures(j).Name, ' ', '_');
    savedStructName{j} = structName;
    eval(sprintf('%s = contourpoints;', structName));
end
save(saveFileName, savedStructName{:});
disp('Converting completed!')

function result = isTargetStructure(cellTargetStructure, input)
n = length(cellTargetStructure);
result = 0;
for i = 1:n
    if (strcmpi(input, cellTargetStructure{i}))
        result = 1;
        break;
    end
end
pyVTKSurfaceReconstruction.py: reconstruct the surface from 2D contours

#!/usr/bin/env python
import vtk
import sys
from scipy.io import loadmat

# Create the RenderWindow, Renderer and both Actors
ren1 = vtk.vtkRenderer()
renWin = vtk.vtkRenderWindow()
renWin.AddRenderer(ren1)
iren = vtk.vtkRenderWindowInteractor()
iren.SetRenderWindow(renWin)

points = []
polys = []

# Load the data
loadpoints = loadmat(sys.argv[1])
tempContours = loadpoints.keys()

# Get the contour names (skip MATLAB's internal '__' entries)
contoursNames = []
for i in range(0, len(tempContours)):
    if (tempContours[i].find('__') == -1):
        contoursNames.append(tempContours[i])

contours = []
contourMapper = []
contourActor = []
f = []
m = []
a = []
for k in range(0, len(contoursNames)):
    totalPoints = loadpoints[contoursNames[k]]
    numContours = len(totalPoints)
    idx = 0
    points.append(vtk.vtkPoints())
    polys.append(vtk.vtkCellArray())
    print contoursNames[k]
    for i in range(0, numContours):
        numPoints = len(totalPoints[i])
        print numPoints
        polys[k].InsertNextCell(numPoints)
        for j in range(0, numPoints):
            points[k].InsertPoint(idx, totalPoints[i][j,0], totalPoints[i][j,1], totalPoints[i][j,2])
            polys[k].InsertCellPoint(idx)
            idx += 1

    # Create a representation of the contours used as input
    contours.append(vtk.vtkPolyData())
    contours[k].SetPoints(points[k])
    contours[k].SetPolys(polys[k])
    contourMapper.append(vtk.vtkPolyDataMapper())
    contourMapper[k].SetInput(contours[k])
    contourActor.append(vtk.vtkActor())
    contourActor[k].SetMapper(contourMapper[k])
    contourActor[k].GetProperty().SetColor(1,0,0)
    contourActor[k].GetProperty().SetAmbient(1)
    contourActor[k].GetProperty().SetDiffuse(0)
    contourActor[k].GetProperty().SetRepresentationToWireframe()
    ren1.AddActor(contourActor[k])

    # Create the contour-to-surface filter
    f.append(vtk.vtkVoxelContoursToSurfaceFilter())
    f[k].SetInput(contours[k])
    f[k].SetMemoryLimitInBytes(1000000)
    f[k].SetSpacing(1,1,1)
    m.append(vtk.vtkPolyDataMapper())
    m[k].SetInputConnection(f[k].GetOutputPort())
    m[k].ScalarVisibilityOff()
    m[k].ImmediateModeRenderingOn()
    a.append(vtk.vtkActor())
    a[k].SetMapper(m[k])
    ren1.AddActor(a[k])
    contourActor[k].VisibilityOn()

    # Write the reconstructed surface to disk
    writer = vtk.vtkPolyDataWriter()
    writer.SetFileName(sys.argv[2])
    writer.SetInput(f[k].GetOutput())
    writer.Write()
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% readVTKContourPoints.m
% Read the contour points from a vtk file into a matlab matrix.
%
% Parameter:
%   vtkfile - name of the vtk file
% Return:
%   points  - n x 3 matrix of contour points
%
% Note: no spacing or origin information is returned.
%
% Junyi Xia
%
% Nov. 30, 2006
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
function points = readVTKContourPoints(vtkfile)

fid = fopen(vtkfile, 'r', 'b');
if (fid == -1)
    error('can not open file %s !', vtkfile);
end

numberOfPoints = 0;

% read the file line by line
while 1
    tline = fgetl(fid);
    % end of file?
    if (tline == -1)
        break;
    end
    % is a string?
    if (isempty(str2num(tline)))
        index = strfind(tline, 'POINTS');
        if (~isempty(index))
            numberOfPoints = str2num(sscanf(tline, '%*s %s %*s'));
            if (numberOfPoints ~= 0)
                temp = textscan(fid, '%f %f %f', numberOfPoints);
                points(:,1) = temp{1};
                points(:,2) = temp{2};
                points(:,3) = temp{3};
            end
            break;
        end
    end
end
fclose(fid);
% /////////////////////// mappingContourPoints3D.m ////////////////////
% Deform the contour points by the deformation fields.
%
% mappedContourPoints = mappingContourPoints3D(deformationfields, contourpoints)
%
%   deformationfields - 3D deformation fields
%   contourpoints     - contour points, an n x 3 matrix
%
% Junyi Xia
%
% Last Update: 04/01/08
% //////////////////////////////////////////////////////////////////////
function mappedContourPoints = mappingContourPoints3D(deformationfields, contourpoints)

% the points are stored as an n x 3 matrix
contour = contourpoints;
mappedPoints = [];

% trilinearly interpolate each displacement component at the contour points
mappedPoints(:,1) = interp3(deformationfields(:,:,:,1), contour(:,1), contour(:,2), contour(:,3), 'linear', 0);
mappedPoints(:,2) = interp3(deformationfields(:,:,:,2), contour(:,1), contour(:,2), contour(:,3), 'linear', 0);
mappedPoints(:,3) = interp3(deformationfields(:,:,:,3), contour(:,1), contour(:,2), contour(:,3), 'linear', 0);

mappedContourPoints = mappedPoints + contour;
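For readers working in the Python half of the pipeline, the same point-deformation step can be written with scipy. This is a sketch we provide for illustration; it assumes the deformation field is stored as an (nx, ny, nz, 3) array indexed in voxel coordinates, and it ignores MATLAB's row/column ordering conventions.

import numpy as np
from scipy.ndimage import map_coordinates

def mapping_contour_points_3d(deformation, points):
    """Trilinearly sample the displacement at each contour point and add it.

    deformation : (nx, ny, nz, 3) displacement field in voxel units
    points      : (n, 3) contour points in voxel coordinates
    """
    coords = points.T   # map_coordinates expects shape (3, n)
    disp = np.column_stack([
        map_coordinates(deformation[..., c], coords,
                        order=1, mode='constant', cval=0.0)
        for c in range(3)])
    return points + disp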
pyVTKCutDeformedSurface.py: generate 2D deformed contours by cutting the 3D deformed surface mesh

#!/usr/bin/env python
#
# argv[1] - reconstructed mesh
# argv[2] - deformed points
# argv[3] - minimum z of the cutting planes
# argv[4] - maximum z of the cutting planes
# argv[5] - output vtk file
import vtk
import sys
from scipy.io import loadmat

# Load the deformed points
loadpoints = loadmat(sys.argv[2])
contourPoints = loadpoints['deformedStructure']

# Load the original mesh
fileReader = vtk.vtkPolyDataReader()
fileReader.SetFileName(sys.argv[1])
fileReader.Update()
polyMesh = fileReader.GetOutput()

# Replace the mesh points with the deformed contour points
ptsDeformed = vtk.vtkPoints()
idx = 0
for i in range(0, len(contourPoints)):
    ptsDeformed.InsertPoint(idx, contourPoints[i][0], contourPoints[i][1], contourPoints[i][2])
    idx += 1
polyMesh.SetPoints(ptsDeformed)

# Get the max and min z of the cutting planes
minZ = int(sys.argv[3])
maxZ = int(sys.argv[4])

plane = []
planeCut = []
appendContours = vtk.vtkAppendPolyData()
for i in range(0, maxZ - minZ + 1):
    plane.append(vtk.vtkPlane())
    if i == 0:
        plane[i].SetOrigin(0, 0, minZ + 0.00001)
    elif i == maxZ - minZ:
        plane[i].SetOrigin(0, 0, maxZ - 0.00001)
    else:
        plane[i].SetOrigin(0, 0, minZ + i + 0.00001)
    plane[i].SetNormal(0, 0, 1)
    planeCut.append(vtk.vtkCutter())
    planeCut[i].SetCutFunction(plane[i])
    planeCut[i].SetInput(polyMesh)
    appendContours.AddInput(planeCut[i].GetOutput())

# Write the appended 2D contours to the output file
writer2 = vtk.vtkPolyDataWriter()
writer2.SetFileName(sys.argv[5])
writer2.SetInput(appendContours.GetOutput())
writer2.Update()
writer2.Write()
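For standalone testing outside MATLAB, the cutting script can also be invoked directly with Python's subprocess module, mirroring Step 4 of the MATLAB driver. The file names and z range below are hypothetical placeholders.

import subprocess
# Hypothetical standalone invocation mirroring Step 4 of the MATLAB driver:
# argv[1] mesh, argv[2] deformed points, argv[3-4] z range, argv[5] output.
subprocess.call(['C:\\VTK\\Binary\\bin\\release\\vtkpython',
                 'pyVTKCutDeformedSurface.py',
                 'TempReconstructed.vtk', 'TempVTK_Deformed.mat',
                 '10', '50', 'DeformedSingleContour.vtk'])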
LIST OF REFERENCES

1. J. V. Dyk, The Modern Technology of Radiation Oncology: A Compendium for Medical Physicists and Radiation Oncologists (Medical Physics Publishing Corporation, Madison, Wisconsin, 1999).
2. J. M. Balter, H. M. Sandler, K. Lam, R. L. Bree, A. S. Lichter and R. K. ten Haken, "Measurement of prostate movement over the course of routine radiotherapy using implanted markers," Int J Radiat Oncol Biol Phys 31, 113-118 (1995).
3. C. S. Ross, D. H. Hussey, E. C. Pennington, W. Stanford and J. F. Doornbos, "Analysis of movement of intrathoracic neoplasms using ultrafast computerized tomography," Int J Radiat Oncol Biol Phys 18, 671-677 (1990).
4. K. M. Langen and D. T. Jones, "Organ motion and its management," Int J Radiat Oncol Biol Phys 50, 265-278 (2001).
5. J. H. Heinzerling, J. F. Anderson, L. Papiez, T. Boike, S. Chien, G. Zhang, R. Abdulrahman and R. Timmerman, "Four-dimensional computed tomography scan analysis of tumor and organ motion at varying levels of abdominal compression during stereotactic treatment of lung and liver," Int J Radiat Oncol Biol Phys 70, 1571-1578 (2008).
6. M. G. Herman, T. M. Pisansky, J. J. Kruse, J. I. Prisciandaro, B. J. Davis and B. F. King, "Technical aspects of daily online positioning of the prostate for three-dimensional conformal radiotherapy using an electronic portal imaging device," Int J Radiat Oncol Biol Phys 57, 1131-1140 (2003).
7. S. Aubin, L. Beaulieu, S. Pouliot, J. Pouliot, R. Roy, L. M. Girouard, N. Martel-Brisson, E. Vigneault and J. Laverdiere, "Robustness and precision of an automatic marker detection algorithm for online prostate daily targeting using a standard V-EPID," Med Phys 30, 1825-1832 (2003).
8. E. Vigneault, J. Pouliot, J. Laverdiere, J. Roy and M. Dorion, "Electronic portal imaging device detection of radioopaque markers for the evaluation of prostate position during megavoltage irradiation: a clinical study," Int J Radiat Oncol Biol Phys 37, 205-212 (1997).
9. P. B. Greer, C. C. Jose and J. H. Matthews, "Set-up variation of patients treated with radiotherapy to the prostate measured with an electronic portal imaging device," Australas Radiol 42, 207-212 (1998).
10. J. A. Stryker, J. Shafer and R. E. Beatty, "Assessment of accuracy of daily set-ups in prostate radiotherapy using electronic imaging," Br J Radiol 72, 579-583 (1999).
11. S. L. Meeks, W. A. Tome, T. R. Willoughby, P. A. Kupelian, T. H. Wagner, J. M. Buatti and F. J. Bova, "Optically guided patient positioning techniques," Semin Radiat Oncol 15, 192-201 (2005).
12. D. P. Gierga, J. Brewer, G. C. Sharp, M. Betke, C. G. Willett and G. T. Chen, "The correlation between internal and external markers for abdominal tumors: implications for respiratory gating," Int J Radiat Oncol Biol Phys 61, 1551-1558 (2005).
13. P. Kupelian, T. Willoughby, A. Mahadevan, T. Djemil, G. Weinstein, S. Jani, C. Enke, T. Solberg, N. Flores, D. Liu, D. Beyer and L. Levine, "Multi-institutional clinical experience with the Calypso System in localization and continuous, real-time monitoring of the prostate gland during external radiotherapy," Int J Radiat Oncol Biol Phys 67, 1088-1098 (2007).
14. T. R. Willoughby, P. A. Kupelian, J. Pouliot, K. Shinohara, M. Aubin, M. Roach, 3rd, L. L. Skrumeda, J. M. Balter, D. W. Litzenberg, S. W. Hadley, J. T. Wei and H. M. Sandler, "Target localization and real-time tracking using the Calypso 4D localization system in patients with localized prostate cancer," Int J Radiat Oncol Biol Phys 65, 528-534 (2006).
15. D. Letourneau, J. W. Wong, M. Oldham, M. Gulam, L. Watt, D. A. Jaffray, J. H. Siewerdsen and A. A. Martinez, "Cone-beam CT guided radiation therapy: technical implementation," Radiother Oncol 75, 279-286 (2005).
16. G. G. Zeng, S. L. Breen, A. Bayley, E. White, H. Keller, L. Dawson and D. A. Jaffray, "A method to analyze the cord geometrical uncertainties during head and neck radiation therapy using cone beam CT," Radiother Oncol (2008).
17. F. Xu, J. Wang, S. Bai, Y. Li, Y. Shen, R. Zhong, X. Jiang and Q. Xu, "Detection of intrafractional tumour position error in radiotherapy utilizing cone beam computed tomography," Radiother Oncol (2008).
18. J. A. Tanyi and M. H. Fuss, "Volumetric image-guidance: does routine usage prompt adaptive re-planning? An institutional review," Acta Oncol 47, 1444-1453 (2008).
19. L. Masi, F. Casamassima, C. Polli, C. Menichelli, I. Bonucci and C. Cavedon, "Cone beam CT image guidance for intracranial stereotactic treatments: comparison with a frame guided set-up," Int J Radiat Oncol Biol Phys 71, 926-933 (2008).
20. M. M. Coselmon, J. M. Balter, D. L. McShan and M. L. Kessler, "Mutual information based CT registration of the lung at exhale and inhale breathing states using thin-plate splines," Med Phys 31, 2942-2948 (2004).
21. H. H. Liu, P. Balter, T. Tutt, B. Choi, J. Zhang, C. Wang, M. Chi, D. Luo, T. Pan, S. Hunjan, G. Starkschall, I. Rosen, K. Prado, Z. Liao, J. Chang, R. Komaki, J. D. Cox, R. Mohan and L. Dong, "Assessing respiration-induced tumor motion and internal target volume using four-dimensional computed tomography for radiotherapy of lung cancer," Int J Radiat Oncol Biol Phys 68, 531-540 (2007).
22. H. Wang, L. Dong, J. O'Daniel, R. Mohan, A. S. Garden, K. K. Ang, D. A. Kuban, M. Bonnen, J. Y. Chang and R. Cheung, "Validation of an accelerated 'demons' algorithm for deformable image registration in radiation therapy," Phys Med Biol 50, 2887-2905 (2005).
23. G. E. Christensen, S. Joo Hyun, L. Wei, I. E. Naqa and D. A. Low, "Tracking lung tissue motion and expansion/compression with inverse consistent image registration and spirometry," Medical Physics 34, 2155-2163 (2007).
24. K. Kitamura, H. Shirato, Y. Seppenwoolde, T. Shimizu, Y. Kodama, H. Endo, R. Onimaru, M. Oda, K. Fujita, S. Shimizu and K. Miyasaka, "Tumor location, cirrhosis, and surgical history contribute to tumor movement in the liver, as measured during stereotactic irradiation using a real-time tumor-tracking radiotherapy system," Int J Radiat Oncol Biol Phys 56, 221-228 (2003).
25. K. K. Brock, M. B. Sharpe, L. A. Dawson, S. M. Kim and D. A. Jaffray, "Accuracy of finite element model-based multi-organ deformable image registration," Med Phys 32, 1647-1659 (2005).
26. K. K. Brock, L. A. Dawson, M. B. Sharpe, D. J. Moseley and D. A. Jaffray, "Feasibility of a novel deformable image registration technique to facilitate classification, targeting, and monitoring of tumor and normal tissue," Int J Radiat Oncol Biol Phys (2006).
27. T. Rohlfing, C. R. Maurer, Jr., W. G. O'Dell and J. Zhong, "Modeling liver motion and deformation during the respiratory cycle using intensity-based nonrigid registration of gated MR images," Med Phys 31, 427-432 (2004).
28. W. Lu, G. H. Olivera, Q. Chen, M. L. Chen and K. J. Ruchala, "Automatic re-contouring in 4D radiotherapy," Physics in Medicine and Biology 51, 1077-1099 (2006).
29. D. L. Collins, A. C. Evans, C. Holmes and T. M. Peters, "Automatic 3D segmentation of neuroanatomical structures from MRI," in Information Processing in Medical Imaging: Proc. 14th International Conference, edited by Y. Bizais, C. Barillot and R. di Paola (Kluwer Academic Publishers, Dordrecht, 1995), pp. 139-152.
30. J.-P. Thirion, "Image matching as a diffusion process: an analogy with Maxwell's demons," Medical Image Analysis 2, 243-260 (1998).
31. W. Lu, M. L. Chen, G. H. Olivera, K. J. Ruchala and T. R. Mackie, "Fast free-form deformable registration via calculus of variations," Phys Med Biol 49, 3067-3087 (2004).
32. B. Fischer and J. Modersitzki, "Fast diffusion registration," AMS Contemporary Mathematics, Inverse Problems, Image Analysis, and Medical Imaging 313, 117-129 (2002).
33. G. E. Christensen, "Minimizing sources of errors in medical image registration," (IEEE, Washington, DC, USA, 2002), pp. 14-17.
34. D. Sarrut, "Deformable registration for image-guided radiation therapy," Z Med Phys 16, 285-297 (2006).
35. B. Zitova and J. Flusser, "Image registration methods: a survey," Image and Vision Computing 21, 977-1000 (2003).
36. J. B. Maintz and M. A. Viergever, "A survey of medical image registration," Med Image Anal 2, 1-36 (1998).
37. H. Lester and S. R. Arridge, "A survey of hierarchical non-linear medical image registration," Pattern Recognition 32, 129-149 (1999).
38. D. L. Hill, P. G. Batchelor, M. Holden and D. J. Hawkes, "Medical image registration," Phys Med Biol 46, R1-45 (2001).
39. A. Gholipour, N. Kehtarnavaz, R. Briggs, M. Devous and K. Gopinath, "Brain functional localization: a survey of image registration techniques," IEEE Trans Med Imaging 26, 427-451 (2007).
40. B. M. Dawant, "Non-rigid registration of medical images: purpose and methods, a short survey," (IEEE, Washington, DC, USA, 2002), pp. 465-468.
41. L. Brown, "A survey of image registration techniques," Computing Surveys 24, 325-376 (1992).
42. J. P. W. Pluim, J. B. A. Maintz and M. A. Viergever, "Mutual information-based registration of medical images: a survey," IEEE Transactions on Medical Imaging 22, 986-1004 (2003).
43. W. R. Crum, T. Hartkens and D. L. Hill, "Non-rigid image registration: theory and practice," Br J Radiol 77 Spec No 2, S140-153 (2004).
44. D. J. Hawkes, D. Barratt, J. M. Blackall, C. Chan, P. J. Edwards, K. Rhode, G. P. Penney, J. McClelland and D. L. Hill, "Tissue deformation and shape models in image-guided interventions: a discussion paper," Med Image Anal 9, 163-175 (2005).
45. T. McInerney and D. Terzopoulos, "Deformable models in medical image analysis: a survey," Medical Image Analysis 1, 91-108 (1996).
46. J. V. Hajnal, D. J. Hawkes and D. Hill, Medical Image Registration (CRC Press, Boca Raton, 2001).
47. C. Zhujiang, P. Shiyan, L. Rui, R. Balachandran, J. M. Fitzpatrick and W. C. Chapman, "Registration of medical images using an interpolated closest point transform: method and validation," Medical Image Analysis 8, 421-427 (2004).
48. J. B. West and J. M. Fitzpatrick, "Point-based rigid registration: clinical validation of theory," Vol. 3979 (SPIE, San Diego, CA, USA, 2000), pp. 353-359.
49. K. Rohr, H. S. Stiehl, R. Sprengel, W. Beil, T. M. Buzug, J. Weese and M. H. Kuhn, "Point-based elastic registration of medical image data using approximating thin-plate splines," (Springer-Verlag, Hamburg, Germany, 1996), pp. 297-306.
50. K. Rohr, Landmark-Based Image Analysis: Using Geometric and Intensity Models (Kluwer Academic Publishers, Dordrecht; Boston, 2001).
51. H. Chui and A. Rangarajan, "A new point matching algorithm for non-rigid registration," Computer Vision and Image Understanding 89, 114-141 (2003).
52. W. Beil, K. Rohr and H. S. Stiehl, "Investigation of approaches for the localization of anatomical landmarks in 3D medical images," (Elsevier, Berlin, Germany, 1998), pp. 265-270.
53. P. J. Besl and H. D. McKay, "A method for registration of 3-D shapes," IEEE Transactions on Pattern Analysis and Machine Intelligence 14, 239-256 (1992).
54. T. Masuda and N. Yokoya, "Robust method for registration and segmentation of multiple range images," Computer Vision and Image Understanding 61, 295-307 (1995).
55. T. Zinsser, J. Schmidt and H. Niemann, "A refined ICP algorithm for robust 3-D correspondence estimation," Vol. 2 (IEEE, Barcelona, Spain, 2003), pp. 695-698.
56. M. Rogers and J. Graham, "Robust and accurate registration of 2-D electrophoresis gels using point-matching," IEEE Transactions on Image Processing 16, 624-635 (2007).
57. M. H. Mahoor and M. Abdel-Mottaleb, "Face recognition based on 3D ridge images obtained from range data," Pattern Recognition 42, 445-451 (2009).
58. S. Sclaroff and A. P. Pentland, "Modal matching for correspondence and recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence 17, 545-561 (1995).
59. A. D. J. Cross and E. R. Hancock, "Graph matching with a dual-step EM algorithm," IEEE Transactions on Pattern Analysis and Machine Intelligence 20, 1236-1253 (1998).
60. G. Subsol, N. Roberts, M. Doran, J. P. Thirion and G. H. Whitehouse, "Automatic analysis of cerebral atrophy," Magnetic Resonance Imaging 15, 917-927 (1997).
61. J. B. A. Maintz, P. A. van den Elsen and M. A. Viergever, "Evaluation of ridge seeking operators for multimodality medical image matching," IEEE Transactions on Pattern Analysis and Machine Intelligence 18, 353-365 (1996).
62. P. A. Van den Elsen, J. B. A. Maintz, E. J. D. Pol and M. A. Viergever, "Automatic registration of CT and MR brain images using correlation of geometrical features," IEEE Transactions on Medical Imaging 14, 384-396 (1995).
63. O. Monga, S. Benayoun and O. D. Faugeras, "From partial derivatives of 3-D density images to ridge lines," (IEEE Comput. Soc. Press, Champaign, IL, USA, 1992), pp. 354-359.
64. A. Gueziec and N. Ayache, "Smoothing and matching of 3-D space curves," (Springer-Verlag, Santa Margherita Ligure, Italy, 1992), pp. 620-629.
65. B. K. P. Horn and B. G. Schunck, "Determining optical flow," Artificial Intelligence 17, 185-203 (1981).
66. G. E. Christensen and H. J. Johnson, "Consistent image registration," IEEE Trans Med Imaging 20, 568-582 (2001).
67. A. Roche, G. Malandain, N. Ayache and S. Prima, "Towards a better comprehension of similarity measures used in medical image registration," edited by C. C. A. Taylor (Springer-Verlag, Berlin, 1999), pp. 555-566.
68. R. Shekhar and V. Zagrodsky, "Mutual information-based rigid and nonrigid registration of ultrasound volumes," IEEE Trans Med Imaging 21, 9-22 (2002).
69. Z. Zhang, Y. Jiang and H. Tsui, "Consistent multi-modal non-rigid registration based on a variational approach," Pattern Recognition Letters 27, 715-725 (2006).
70. A. Andronache, M. von Siebenthal, G. Szekely and P. Cattin, "Non-rigid registration of multi-modal images using both mutual information and cross-correlation," Medical Image Analysis 12, 3-15 (2008).
71. G. E. Christensen, B. Carlson, K. S. Chao, P. Yin, P. W. Grigsby, K. Nguyen, J. F. Dempsey, F. A. Lerma, K. T. Bae, M. W. Vannier and J. F. Williamson, "Image-based dose planning of intracavitary brachytherapy: registration of serial imaging studies using deformable anatomic templates," Int J Radiat Oncol Biol Phys 51, 227-243 (2001).
72. J. P. Pluim, J. B. Maintz and M. A. Viergever, "Image registration by maximization of combined mutual information and gradient information," IEEE Trans Med Imaging 19, 809-814 (2000).
73. P. Hellier and C. Barillot, "Coupling dense and landmark-based approaches for nonrigid registration," IEEE Trans Med Imaging 22, 217-227 (2003).
74. P. Cachier, E. Bardinet, D. Dormont, X. Pennec and N. Ayache, "Iconic feature based nonrigid registration: the PASHA algorithm," Computer Vision and Image Understanding 89, 272-298 (2003).
75. D. L. Collins, P. Neelin, T. M. Peters and A. C. Evans, "Automatic 3D intersubject registration of MR volumetric data in standardized Talairach space," J Comput Assist Tomogr 18, 192-205 (1994).
156 76. X. Wu, S. J. Dibiase, R. Gullapalli and C. X. Yu, "Deformable image registration for the use of magnetic resonance spectroscopy in prostate treatment planning," Int J Radiat Oncol Biol Phys 58, 15771583 (2004). 77. R. L. Harder an d R. N. Desmarais, "Interpolation using surface splines," Journal of Aircraft 9 189191 (1972). 78. A. Goshtasby, Registration of image with geometric distortion," IEEETransactions ofGeoscience andRemoteSensing 26, 60 64 (1988). 79. J. Lian, L. Xing, S. Hunjan, C. Dumoulin, J. Levin, A. Lo, R. Watkins, K. Rohling, R. Giaquinto, D. Kim, D. Spielman and B. Daniel, "Mapping of the prostate in endorectal coil -based MRI/MRSI and CT: a deformable registration and validation study," Med Phys 31, 30873094 (2004). 80. H. J. Johnson and G. E. Christensen, "Consistent landmark and intensity-based image registration," Vol. 21, (IEEE, Davis, CA, USA, 2002), pp. 450461. 81. J. E. Downhill, Jr., M. S. Buchsbaum, T. Wei, J. Spiegel Cohen, E. A. Hazlett, M. M. Haznedar J. Silverman and L. J. Siever, "Shape and size of the corpus callosum in schizophrenia and schizotypal personality disorder," Schizophr Res 42, 193-208 (2000). 82. L. Zagorchev and A. Goshtasby, "A comparative study of transformation functions for nonrigid image registration," Image Processing, IEEE Transactions on 15, 529538 (2006). 83. D. Rueckert, L. I. Sonoda, C. Hayes, D. L. Hill, M. O. Leach and D. J. Hawkes, "Nonrigid registration using free -form deformations: application to breast MR images," IEEE Trans Med Imaging 18, 712721 (1999). 84. Z. Xie and G. E. Farin, "Image registration using hierarchical B -splines," IEEE Trans Vis Comput Graph 10, 85 94 (2004). 85. D. Mattes, D. R. Haynor, H. Vesselle, T. K. Lewellen and W. Eubank, "PET CT image regis tration in the chest using free -form deformations," IEEE Trans Med Imaging 22, 120128 (2003). 86. A. F. Frangi, D. Rueckert, J. A. Schnabel and W. J. Niessen, "Automatic construction of multiple -object three dimensional statistical shape models: applicati on to cardiac modeling," IEEE Trans Med Imaging 21, 11511166 (2002). 87. K. McLeish, D. L. Hill, D. Atkinson, J. M. Blackall and R. Razavi, "A study of the motion and deformation of the heart due to respiration," IEEE Trans Med Imaging 21, 11421150 (2002). 88. M. Ferrant, A. Nabavi, B. Macq, F. A. Jolesz, R. Kikinis and S. K. Warfield, "Registration of 3 D intraoperative MR images of the brain using a finite-element biomechanical model," IEEE Trans Med Imaging 20, 13841397 (2001).

PAGE 157

157 89. A. Hagemann, K. Roh r, H. S. Stiehl, U. Spetzger and J. M. Gilsbach, "Biomechanical Modeling of the Human Head for Physically Based, Nonrigid Image Registration," IEEE TRANSACTIONS ON MEDICAL IMAGING 18, 875884 (1999). 90. A. Mohamed, E. I. Zacharaki, D. Shen and C. Davatzik os, "Deformable registration of brain tumor images via a statistical model of tumor -induced deformation," Med Image Anal (2006). 91. A. Samani, J. Bishop, M. J. Yaffe and D. B. Plewes, "Biomechanical 3 D finite element modeling of the human breast using MR I data," IEEE Trans Med Imaging 20 271279 (2001). 92. J. A. Schnabel, C. Tanner, A. D. Castellano -Smith, A. Degenhard, M. O. Leach, D. R. Hose, D. L. G. Hill and D. J. Hawkes, "Validation of nonrigid image registration using finite -element methods: appli cation to breast MR images," IEEE Transactions on Medical Imaging 22, 238 247 (2003). 93. M. Sermesant, P. Moireau, O. Camara, J. Sainte Marie, R. Andriantsimiavona, R. Cimrman, D. L. Hill, D. Chapelle and R. Razavi, "Cardiac function estimation from MRI using a heart model and data assimilation: advances and difficulties," Med Image Anal 10, 642656 (2006). 94. K. K. Brock, D. L. McShan, R. K. Ten Haken, S. J. Hollister, L. A. Dawson and J. M. Balter, "Inclusion of organ deformation in dose calculations, Med Phys 30, 290295 (2003). 95. T. Zhang, N. P. Orton, T. R. Mackie and B. R. Paliwal, "Technical note: A novel boundary condition using contact elements for finite element based deformable image registration," Med Phys 31, 24122415 (2004). 96. Y. Chi, J. Liang and D. Yan, "A material sensitivity study on the accuracy of deformable organ registration using linear biomechanical models," Med Phys 33, 421433 (2006). 97. P. Hellier, C. Barillot, E. Memin and P. Perez, "Hierarchical estimation of a dense de formation field for 3 D robust registration," IEEE Trans Med Imaging 20, 388402 (2001). 98. J. C. Gee, M. Reivich and R. Bajcsy, "Elastically deforming 3D atlas to match anatomical brain images," J Comput Assist Tomogr 17, 225236 (1993). 99. R. Bajcsy an d S. Kovacic, "Multiresolution Elastic Matching," Computer Vision, Graphics, and Image Processing 46, 1 21 (1989). 100. M. Bro Nielsen and C. Gramkow, "Fast Fluid Registration of Medical Images," Visualization in Biomedical Computing, Lecture Notes in Comp uter Science 1131, 267276 (1996).

PAGE 158

158 101. E. D'Agostino, F. Maes, D. Vandermeulen and P. Suetens, "A viscous fluid model for multimodal non rigid image registration using mutual information," Med Image Anal 7 565575 (2003). 102. P. A. Freeborough and N. C. Fox, "Modeling brain deformations in Alzheimer disease by fluid registration of serial 3D MR images," Journal of Computer Assisted Tomography 22, 838843 (1998). 103. F. L. Bookstein, "Thin plate splines and the decomposition of deformations," Lect. Notes Comput. Sci. 511, 326342 (1991). 104. D. Terzopoulos, "Regularization of inverse visual problems involving discontinuities," IEEE Transactions on Pattern Analysis and Machine Intelligence PAMI -8 413424 (1986). 105. W. H. Press, Numerical recipes : the art of scientific computing (Cambridge University Press, Cambridge ; New York, 1989). 106. N. Paragios, Y. Chen and O. Faugeras, Handbook of mathematical models in computer vision (Springer, New York, 2006). 107. M. Chen, W. Lu, Q. Chen, K. J. Ruchala an d G. H. Olivera, "A simple fixed -point approach to invert a deformation field," Med Phys 35, 81 88 (2008). 108. P. Cachier and D. Rey, "Symmetrization of the non rigid registration problem using inversion invariant energies: application to multiple scleros is," (Springer -Verlag, Pittsburgh, PA, USA, 2000), pp. 472481. 109. J. Ashburner, J. L. Andersson and K. J. Friston, "Image registration using a symmetric prior -in three dimensions," Hum Brain Mapp 9 212225 (2000). 110. T. S. Yoo, Insight into Images: Principles and Practice for Segmentation, Registration, and Image Analysis (AK Peters, Ltd, 2004). 111. I. Arganda Carreras, C. O. S. Sorzano, R. Marabini, J. M. Carazo, C. Ortiz -de -Solorzano and J. Kybic, "Consistent and elastic registration of histolog ical sections using vector spline regularization," (Springer -Verlag, Graz, Austria, 2006), pp. 8595. 112. V. A. Magnotta, H. J. Bockholt, H. J. Johnson, G. E. Christensen and N. C. Andreasen, "Subcortical, cerebellar, and magnetic resonance based consist ent brain image registration," Neuroimage 19, 233245 (2003). 113. J. He and G. E. Christensen, "Large deformation inverse consistent elastic image registration," Inf Process Med Imaging 18, 438449 (2003). 114. G. E. Christensen, "Inverse consistent registration with object boundary constraints," Vol. Vol. 1, (IEEE, Arlington, VA, USA, 2004), pp. 591594.

PAGE 159

159 115. S. K. Yeung and P. Shi, "Stochastic inverse consistent registration," (IEEE, Shanghai, China, 2006), pp. 4 pp. 116. S. K. Yeung and P. C. Shi, "St ochastic inverse consistency in medical image registration," in Medical Image Computing and Computer -Assisted Intervention Miccai 2005, Pt 2, Vol. 3750, (Springer -Verlag Berlin, Berlin, 2005), pp. 188196. 117. L. Alvarez, R. Deriche, T. Papadopoulo and J. Sanchez, "Symmetrical dense optical flow estimation with occlusions detection," (Springer -Verlag, Copenhagen, Denmark, 2002), pp. 721735. 118. A. Leow, S. C. Huang, A. Geng, J. Becker, S. Davis, A. Toga and P. Thompson, "Inverse consistent mapping in 3D deformable image registration: Its construction and statistical properties," Vol. 3565, (Springer Verlag, Heidelberg, D 69121, Germany, Glenwood Springs, CO, United States, 2005), pp. 493503. 119. B. B. Avants, M. Grossman and J. C. Gee, "Symmetric d iffeomorphic image registration: Evaluating automated labeling of elderly and neurodegenerative cortex and frontal lobe," Vol. 4057 LNCS, (Springer Verlag, Heidelberg, D 69121, Germany, Utrecht, Netherlands, 2006), pp. 50 57. 120. M. F. Beg and A. Khan, Symmetric data attachment terms for large deformation image registration," IEEE TRANSACTIONS ON MEDICAL IMAGING 26, 1179 1189 (2007). 121. S. Joshi, B. Davis, M. Jomier and G. Gerig, "Unbiased diffeomorphic atlas construction for computational anatomy," Ne uroimage 23 Suppl 1 S151160 (2004). 122. G. Xiujuan, D. Kumar, G. E. Christensen and M. W. Vannier, "Inverse consistent image registration of MR brain scans: handedness in normal adult males," (Springer -Verlag, Philadelphia, PA, USA, 2003), pp. 71 80. 1 23. P. Jannin, J. M. Fitzpatrick, D. J. Hawkes, X. Pennec, R. Shahidi and M. W. Vannier, "Validation of medical image processing in image -guided therapy," IEEE Trans Med Imaging 21, 14451449 (2002). 124. R. Kashani, M. Hub, M. L. Kessler and J. M. Balter, "Technical note: a physical phantom for assessment of accuracy of deformable alignment algorithms," Med Phys 34, 27852788 (2007). 125. M. Serban, E. Heath, G. Stroian, D. L. Collins and J. Seuntjens, "A deformable phantom for 4D radiotherapy verification: design and image registration evaluation," Med Phys 35, 10941102 (2008). 126. J. G. Parker and D. R. Gilland, "Wall motion estimation for gated cardiac emission tomography: Physical phantom evaluation," IEEE Transactions on Nuclear Science 55, 531536 ( 2008).

PAGE 160

160 127. A. Gholipour, N. Kehtarnavaz, R. W. Briggs, K. S. Gopinath, W. Ringe, A. Whittemore, S. Cheshkov and K. Bakhadirov, "Validation of Non -Rigid Registration Between Functional and Anatomical Magnetic Resonance Brain Images," Biomedical Engineering, IEEE Transactions on 55, 563571 (2008). 128. Z. Xue, D. Shen, B. Karacali, J. Stern, D. Rottenberg and C. Davatzikos, "Simulating deformations of MR brain images for validation of atlas -based segmentation and registration algorithms," Neuroimage 33, 855866 (2006). 129. R. Fahmi, A. Aly, A. Elbaz and A. A. Farag, "New deformable registration technique using scale space and curve evolution theory and a finite element based validation framework," Conf Proc IEEE Eng Med Biol Soc 1 30413044 (2006). 130. P. Castadot, J. A. Lee, A. Parraga, X. Geets, B. Macq and V. Gregoire, "Comparison of 12 deformable registration strategies in adaptive radiation therapy for the treatment of head and neck tumors," Radiother Oncol (2008). 131. G. E. Christensen, X. Geng, J. G. Kuhl, J. Bruss, T. J. Grabowski, I. A. Pirwani, M. W. Vannier, J. S. Allen and H. Damasio, "Introduction to the non rigid image registration evaluation project (NIREP)," in Biomedical Image Registration, Proceedings Vol. 4057, (2006), pp. 128135. 132. P. Hellier, C. Barillot, I. Corouge, B. Gibaud, G. Le Goualher, D. L. Collins, A. Evans, G. Malandain, N. Ayache, G. E. Christensen and H. J. Johnson, "Retrospective evaluation of intersubject brain registration," Medical Imaging, IEEE Transactions on 22, 11201130 (2003). 133. T. Rohlfing, "Transformation model and constraints cause bias in statistics on deformation fields," Med Image Comput Comput Assist Interv Int Conf Med Image Comput Comput Assist Interv 9 207214 (2006). 134. P. W. Roger, "Validati on of registration accuracy," in Handbook of medical imaging, (Academic Press, Inc., 2000), pp. 491497. 135. X. Geng, D. Kumar and G. E. Christensen, "Transitive inverse -consistent manifold registration," (Springer -Verlag, Glenwood Springs, CO, USA, 2005), pp. 468479. 136. H. Jianchun, G. E. Christensen, J. T. Rubenstein and W. Ge, "Analysis of a new method for consistent large -deformation elastic image registration," Vol. 4684, (SPIE -Int. Soc. Opt. Eng, San Diego, CA, USA, 2002), pp. 945954. 137. P. Hellier, C. Barillot, I. Corouge, B. Gibaud, G. Le Goualher, D. L. Collins, A. Evans, G. Malandain, N. Ayache, G. E. Christensen and H. J. Johnson, "Retrospective evaluation of intersubject brain registration," IEEE Trans Med Imaging 22, 11201130 (2003). 138. C. Fox, An introduction to the calculus of variations (Dover Publications, New York, 1987).

PAGE 161

161 139. J. Weickert, K. J. Zuiderveld, B. M. ter Haar Romeny and W. J. Niessen, "Parallel implementations of AOS schemes: a fast way of nonlinear diffusion filte ring," in Image Processing, 1997. Proceedings., International Conference on, Vol. 3, (1997), pp. 396399 vol.393. 140. W. H. Press, "Numerical recipes : the art of scientific computing," xx, 818 p. (1986). 141. J. Weickert, "Recursive separable schemes fo r nonlinear diffusion filters," in Scale -Space Theory in Computer Vision, Vol. 1252, (Springer -Verlag Berlin, Berlin 33, 1997), pp. 260271. 142. G. E. Christensen, M. I. Miller, M. W. Vannier and U. Grenander, "Individualizing neuroanatomical atlases usi ng a massively parallel computer," Computer 29, 32-& (1996). 143. M. Kass, A. Witkin and D. Terzopoulos, London, Engl, 1987 (unpublished). 144. W. Schroeder, K. Martin and B. Lorensen, The visualization toolkit : an object -oriented approach to 3D graphics 4th ed. (Kitware, New York, 2006). 145. H. N. Christiansen and T. W. Sederberg, "CONVERSION OF COMPLEX CONTOUR LINE DEFINITIONS INTO POLYGONAL ELEMENT MOSAICS," 12, 187 192 (1978). 146. M. Lutz, Programming Python, 3rd ed. (O'Reilly, Sebastopol, CA, 2006). 147. W. Lu, G. H. Olivera, Q. Chen, M. L. Chen and K. J. Ruchala, "Automatic re -contouring in 4D radiotherapy," Phys Med Biol 51, 10771099 (2006). 148. K. S. Chao, S. Bhide, H. Chen, J. Asper, S. Bush, G. Franklin, V. Kavadi, V. Liengswangwong, W. Gordon, A. Raben, J. Strasser, C. Koprowski, S. Frank, G. Chronowski, A. Ahamad, R. Malyapa, L. Zhang and L. Dong, "Reduce in variation and improve efficiency of target volume delineation by a computer assisted system using a deformable image registration appr oach," Int J Radiat Oncol Biol Phys 68, 15121521 (2007). 149. J. Orban de Xivry, G. Janssens, G. Bosmans, M. De Craene, A. Dekker, J. Buijsen, A. van Baardwijk, D. De Ruysscher, B. Macq and P. Lambin, "Tumour delineation and cumulative dose computation in radiotherapy based on deformable registration of respiratory correlated CT images of lung cancer patients," Radiother Oncol (2007). 150. E. Rietzel, G. T. Chen, N. C. Choi and C. G. Willet, "Four -dimensional image -based treatment planning: Target volume s egmentation and dose calculation in the presence of respiratory motion," Int J Radiat Oncol Biol Phys 61, 15351550 (2005). 151. P. J. Keall, S. Joshi, S. S. Vedam, J. V. Siebers, V. R. Kini and R. Mohan, "Four dimensional radiotherapy planning for DMLC -ba sed respiratory motion tracking," Med Phys 32, 942951 (2005).

PAGE 162

162 152. O. Dandekar, C. Castro Pareja and R. Shekhar, "FPGA based real time 3D image preprocessing for image -guided medical interventions," Journal of Real Time Image Processing 1 285301 (2007). 153. O. Dandekar and R. Shekhar, "FPGA accelerated deformable image registration for improved target -delineation during CT -guided interventions," IEEE Transactions on Biomedical Circuits and Systems 1 116127 (2007). 154. O. Dandekar, V. Walimbe and R. S hekhar, "Hardware implementation of hierarchical volume subdivision-based elastic registration," (IEEE, New York, NY, USA, 2006), pp. 14251428. 155. C. R. Castro Pareja, J. M. Jagadeesh and R. Shekhar, "FAIR: a hardware architecture for real time 3 D ima ge registration," IEEE Trans Inf Technol Biomed 7 426434 (2003). 156. C. R. Castro Pareja, J. M. Jagadeesh and R. Shekhar, "FPGA -based acceleration of mutual information calculation for real -time 3D image registration," Vol. 5297, (International Society for Optical Engineering, Bellingham, WA 982270010, United States, San Jose, CA, United States, 2004), pp. 212219. 157. J. Jun, W. Luk and D. Rueckert, "FPGA based computation of free -form deformations in medical image registration," (IEEE, Tokyo, Japan 2003), pp. 234241. 158. J. A. Kahle, M. N. Day, H. P. Hofstee, C. R. Johns, T. R. Maeurer and D. Shippy, "Introduction to the cell multiprocessor," IBM Journal of Research and Development 49, 589604 (2005). 159. M. Ohara, Y. Hang u, F. Savino, G. Iyengar, G. Leiguang, H. Inoue, H. Komatsu, V. Sheinin, S. Daijavaa and B. Erickson, "Real time mutual information -based linear registration on the Cell Broadband Engine Processor," (IEEE, Arlington, VA, USA, 2007), pp. 33 36. 160. G. L. Chen, G. Z. Sun, Y. Q. Zhang and Z. Y. Mo, "Study on parallel computing," Journal of Computer Science and Technology 21, 665673 (2006). 161. M. P. Wachowiak and T. M. Peters, "High-performance medical image registration using new optimization techniques," IEEE Transactions on Information Technology in Biomedicine 10, 344353 (2006). 162. T. Rohlfing and C. R. Maurer Jr, "Nonrigid image registration in shared-memory multiprocessor environments with application to brains, breasts, and bees," IEEE Transaction s on Information Technology in Biomedicine 7 1625 (2003). 163. R. Chandra, Parallel programming in OpenMP (Morgan Kaufmann Publishers, San Francisco, CA, 2001). 164. S. K. Warfield, F. A. Jolesz and R. Kikinis, "High performance computing approach to th e registration of medical imaging data," Parallel Computing 24, 13451368 (1998).

PAGE 163

163 165. S. Ourselin, R. Stefanescu and X. Pennec, "Robust registration of multi -modal images: Towards real time clinical applications," in 5th Int. Conf. MICCAI Part II, (2002) pp. 140147. 166. F. Ino, Y. Tanaka, H. Kitaoka and K. Hagihara, "Performance study of nonrigid registration algorithm for investigating lung disease on clusters," Vol. 2005, (Institute of Electrical and Electronics Engineers Computer Society, Piscatawa y, NJ 088551331, United States, Dalian, China, 2005), pp. 820825. 167. J. Owens, D. Luebke, N. Govindaraju, M. Harris, J. Kruger, A. Lefohn and T. Purcell, "A Survey of General -Purpose Computation on Graphics Hardware," in Eurographics 2005, State of the Art Reports, (2005), pp. 21 51. 168. M. Woo and OpenGL Architecture Review Board., OpenGL programming guide : the official guide to learning OpenGL, version 1.2, 3rd ed. (Addison Wesley, Reading, MA, 1999). 169. J. Sanchez and M. P. Canton, DirectX 3D graphics programming bible (IDG Books Worldwide, Foster City, CA, 2000). 170. R. Fernando and M. J. Kilgard, The Cg tutorial : the definitive guide to programmable real -time graphics (AddisonWesley, Boston, Mass. ; London, 2003). 171. P. Walsh, Advanced v isual effects with direct 3d (Thomson Course Technology, Boston, MA, 2006). 172. NVIDIA Corp., http://developer.nvidia.com/object/cuda.html, (2007). 173. AMD Inc., http://ati.amd.com/companyinfo/researcher/documents/ATI_CTM_Guide.pdf, (2006). 174. I. Buck, T. Foley, D. Horn, J. Sugerman, K. Fatahalian, M. Houston and P. Hanrahan, "Brook for GPUs: Stream computing on graphics hardware," Vol. 23, (Association for Computing Machinery, Los Angeles, CA, United states, 2004), pp. 777786. 175. Intel Corp. http://www.intel.com/performance/server/xeon/hpcapp.htm. 176. NVIDIA Corp., "NVIDIA GeForce 8800 GPU Architecture Overview," Technical Brief (2006). 177. J. Spoerk, H. Bergmann, F. Wanschitz, S. Dong and W. Birkfellner, "Fast DRR splat rendering using common consumer graphics hardware," Medical Physics 34, 43024308 (2007). 178. A. Khamene, P. Bloch, W. Wein, M. Svatos and F. Sauer, "Automatic registration of portal images and volumetric CT for patient positioning in radiation therapy," Medical Image An alysis 10, 96 112 (2006).

PAGE 164

164 179. H. Hong, K. Kim and S. Park, "Fast 2D 3D point -based registration using GPU based preprocessing for image -guided surgery," in Progress in Pattern Recognition, Image Analysis and Applications, Proceedings Vol. 4225, (Springe r -Verlag Berlin, Berlin, 2006), pp. 218 226. 180. C. Vetter, C. Guetter, C. Xu and R. Westermann, "Non rigid multi -modal registration on the GPU," Vol. 6512, (SPIE, Bellingham WA, WA 982270010, United States, San Diego, CA, United States, 2007), pp. 651228. 181. G. C. Sharp, N. Kandasamy, H. Singh and M. Folkert, "GPU based streaming architectures for fast cone -beam CT image reconstruction and demons deformable registration," Phys Med Biol 52, 57715783 (2007). 182. S. Grzegorz, B. Michael, H. Peter, N. C hristopher, G and G. nther, "Nonrigid Registration with Use of Hardware -Based 3D Bezier Functions," in Proceedings of the 5th International Conference on Medical Image Computing and Computer Assisted Intervention -Part II, (Springer -Verlag, 2002). 183. P. Hastreiter, C. Rezk Salama, G. Soza, M. Bauer, G. Greiner, R. Fahlbusch, O. Ganslandt and C. Nimsky, "Strategies for brain shift evaluation," Med Image Anal 8 447464 (2004). 184. C. Rezk -Salama, M. Scheuering, G. Soza and G. Greiner, "Fast volumetric de formation on general purpose hardware," in Proceedings of the ACM SIGGRAPH/EUROGRAPHICS workshop on Graphics hardware, (ACM, Los Angeles, California, United States, 2001). 185. D. Levin, D. Dey and P. J. Slomka, "Acceleration of 3D, nonlinear warping usin g standard video graphics hardware: implementation and initial validation," Computerized Medical Imaging and Graphics 28, 471483 (2004). 186. R. Strzodka, M. Droske and M. Rumpf, "Image registration by a regularized gradient flow. A streaming implementati on in DX9 graphics hardware," Computing (Vienna/New York) 73, 373389 (2004). 187. A. Khn, J. Drexl, F. Ritter, M. Knig and H. Peitgen, "GPU Accelerated Image Registration in Two and Three Dimensions," in Bildverarbeitung fr die Medizin 2006, (2006), p p. 261265. 188. J. Kim, S. Li, Y. Zhao and B. Movsas, "Real time intensity -based deformable fusion on PC graphics hardware," In XVth International Conference on the Use of Computers in Radiation Therapy (2007). 189. N. Courty and P. Hellier, "Accelerating 3D Non Rigid Registration using Graphics Hardware," International Journal of Image and Graphics 8 1 18 (2008).

PAGE 165

165 190. J. D. Owens, D. Lubebke, N. Govindaraja, M. J. Harris, J. Kruger, A. E. Lefohn and T. J. Purcell, "A survey of general -purpose computation on graphics hardware," in Eurographics, State of the Art Reports, (2005), pp. 21 51.

PAGE 166

BIOGRAPHICAL SKETCH

Junyi Xia was born in 1974 in Dinghai, ZheJiang, People's Republic of China. He graduated from ZheJiang University with a Bachelor of Science degree in chemical machinery in 1996 and, after two years of graduate study, earned a Master of Science degree in the same field. Upon graduation, he worked as a software engineer at Supcon Group Ltd. in China. In 2000, Junyi enrolled in the graduate program in the Department of Biomedical Engineering at the University of Memphis, where he served as the chief mechanical designer under the supervision of Dr. Sanjiv Samant. In 2003, he graduated with a Master of Science degree in biomedical engineering. In 2005, Junyi joined the graduate program in the Department of Nuclear and Radiological Engineering at the University of Florida to pursue his Ph.D. with Dr. Sanjiv Samant in the field of image guided radiation therapy. During his Ph.D. studies, Junyi gave invited presentations at numerous national and international conferences and served as a reviewer for scientific journals and international imaging conferences. Junyi married Yuming Zhao on June 9, 2006.