Citation

## Material Information

Title:
Diffusion Tensor Field Restoration and Segmentation
Creator:
WANG, ZHIZHOU (Author, Primary)
2008

## Subjects

Subjects / Keywords:
Analytical estimating ( jstor )
Data smoothing ( jstor )
Estimate reliability ( jstor )
Estimators ( jstor )
Imaging ( jstor )
Lagrangian function ( jstor )
Magnetism ( jstor )
Rats ( jstor )
Statistical estimation ( jstor )
Tensors ( jstor )

## Record Information

Source Institution:
University of Florida
Holding Location:
University of Florida
Rights Management:
Copyright Zhizhou Wang. Permission granted to University of Florida to digitize and display this item for non-profit research and educational purposes. Any reuse of this item in excess of fair use or other copyright exemptions requires permission of the copyright holder.
Embargo Date:
8/7/2004
Resource Identifier:
56799480 ( OCLC )

Full Text

DIFFUSION TENSOR FIELD RESTORATION AND SEGMENTATION

By

ZHIZHOU WANG

A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL
OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT
OF THE REQUIREMENTS FOR THE DEGREE OF
DOCTOR OF PHILOSOPHY

UNIVERSITY OF FLORIDA

2004

by

Zhizhou Wang

I dedicate this work to my family.

ACKNOWLEDGMENTS

I would like to first thank my advisor, Dr. Baba C. Vemuri, for everything he

has done during my doctoral study. This dissertation would not be possible without

him. Dr. Vemuri introduced me to the field of medical imaging and always helps and

encourages me. His insight and experience have guided me through my research and

he has made numerous suggestions for this dissertation. I have been very lucky to work

with him. I would also like to thank my excellent committee, Dr. Yunmei Chen, Dr.

Anand Rangarajan, Dr. Murali Rao, Dr. Sartaj K. Sahni and Dr. Baba C. Vemuri,

for providing valuable advice. In addition, special thanks go to Dr. Arunava Banerjee

for attending my defense.

My doctoral research is a happy cooperation with many people. Dr. Vemuri has

been involved with the whole process, Dr. Chen has contributed a lot to the DTI

restoration part, Dr. Thomas H. Mareci and Evren Ozarslan have been my resource

of DT-MRI physics and have kindly provided the data for DTI segmentation, Dr.

Rachid Deriche has provided excellent comments on my work in DTI segmentation

and Tim McGraw has provided beautiful fiber visualizations to make my work more

appealing.

I would like to thank the editors in the Editorial office for their careful and

thorough corrections. I would also like to thank the staff and the faculty members

who have been very patient and helpful during my study here. In particular, Dr.

David Gu and Dr. Arunava Banerjee have attended several of my seminars and

showed me quite a few interesting points. I have also benefited a lot from the courses

taught by Dr. David Metzler and Dr. Anand Rangarajan.

Friends in my lab, my department and the University of Florida are much appre-

ciated for sharing my joys and sadnesses. Special thanks go to Xiaobin Wu, Fangwen

Chen, Fuwang Chen and Daying Zhang who have helped me to pull through the

busiest time in my life. I am also fortunate to enjoy the friendship of Jundong Liu,

Ju Wang, Haibin Lu, Fei Wang, Jie Zhou, Tim McGraw, Eric Spellman, Bing Jian,

Neeti Vohra, Andy Shiue Le-Jeng, Lingyun Gu, Hongyu Guo and their families.

Though it is a distant memory, I have always cherished the mentorship from

some of my advisors in the long past: Anan Sha, Yining Hu, Shengtao Qin, Yuan

Zhu and Yuqing Gu. They guided me through different stages of study and life; I

would not have been able to achieve what I have today without them.

Last, but not least, I would like to thank my parents, my sister and my wife

Jing for their understanding and love during the past few years. Their support and

encouragement are my source of strength. I am also thankful for my son who has

inspired me a lot with his little discoveries.

This research was supported in part by the grant NIH R01-NS42075. I also have

received travel grants from IEEE Computer Society Conference on Computer Vision

and Pattern Recognition 2003 and the Department of Computer and Information

Science and Engineering, College of Engineering, University of Florida.

Chapter 2 is copyrighted by IEEE and was published in Proceedings of the

2004 IEEE International Symposium on Biomedical Imaging: From Nano to Macro,

co-authored with Baba C. Vemuri, Evren Ozarslan, Yunmei Chen and Thomas H.

Mareci.

Chapter 3 is copyrighted by IEEE and will appear in IEEE Transactions on Medical

Imaging, co-authored with Baba C. Vemuri, Yunmei Chen and Thomas H. Mareci.

A preliminary version of chapter 4 is copyrighted by IEEE and will appear in

IEEE Computer Society Conference on Computer Vision and Pattern Recognition,

co-authored with Baba C. Vemuri.

                                                                      page

ACKNOWLEDGMENTS ..................................................... iv

LIST OF TABLES ...................................................... ix

LIST OF FIGURES ...................................................... x

ABSTRACT ........................................................... xii

CHAPTER

1 INTRODUCTION ....................................................... 1

  1.1 ABC of Magnetic Resonance Imaging ............................. 1
      1.1.1 Nature of a Proton Spin with a Magnetic Field ........... 1
      1.1.2 Net Magnetization and Its Detection ..................... 2
      1.1.3 Magnetic Resonance Imaging .............................. 3
  1.2 Background of DT-MRI .......................................... 5
      1.2.1 Diffusion Process ....................................... 5
      1.2.2 Diffusion Weighted Images ............................... 5
      1.2.3 Application of DT-MRI ................................... 7
  1.3 Literature Review ............................................. 9
      1.3.1 Optimal b Values ........................................ 9
      1.3.2 DTI Restoration ........................................ 10
      1.3.3 DTI Segmentation ....................................... 12
  1.4 Contributions ................................................ 14

2 STATISTICAL ANALYSIS OF A NONLINEAR ESTIMATOR FOR ADC
  AND ITS APPLICATION TO OPTIMIZING b VALUES ...................... 17

  2.1 Introduction ................................................. 17
  2.2 Theory ....................................................... 18
      2.2.1 A Nonlinear Estimator for ADC .......................... 18
      2.2.2 Variance of the Nonlinear Estimate for ADC ............. 20
      2.2.3 Weighted COV of the Nonlinear Estimator for ADC ........ 21
      2.2.4 Optimal Diffusion Weighting Factors .................... 22
  2.3 Results ...................................................... 22
      2.3.1 Simulation Results of Different Estimators ............. 22
      2.3.2 Variance of Different Estimates ........................ 23
      2.3.3 Weighted COV and Optimal Diffusion Weighting Factors ... 25

3 A CONSTRAINED VARIATIONAL PRINCIPLE FOR DIRECT ESTIMATION
  AND SMOOTHING OF THE DTI FROM COMPLEX DWIS ...................... 28

  3.1 Introduction ................................................. 28
  3.2 Constrained Variational Principle Formulation ................ 29
      3.2.1 The Complex Nonlinear Data Constraint .................. 31
      3.2.2 LP Smoothness Norm ..................................... 32
      3.2.3 The Positive Definite Constraint ....................... 33
      3.2.4 Existence of a Solution ................................ 33
  3.3 Numerical Methods ............................................ 39
      3.3.1 Discretized Constrained Variational Principle .......... 40
      3.3.2 Augmented Lagrangian Method ............................ 40
      3.3.3 Limited Memory Quasi-Newton Method ..................... 41
      3.3.4 Line Search ............................................ 44
      3.3.5 Implementation Issues .................................. 44
  3.4 Experimental Results ......................................... 45
      3.4.1 Complex Valued Synthetic Data .......................... 45
      3.4.2 Complex DWI of a Normal Rat Brain ...................... 48

4 DTI SEGMENTATION USING AN INFORMATION THEORETIC TENSOR
  DISSIMILARITY MEASURE ........................................... 58

  4.1 Introduction ................................................. 58
  4.2 A New Tensor Distance and Its Properties ..................... 59
      4.2.1 Definitions ............................................ 59
      4.2.2 Affine Invariant Property .............................. 61
      4.2.3 Tensor Field Mean Value ................................ 63
  4.3 Active Contour Models for Image Segmentation ................. 66
      4.3.1 Snake Energy Functional ................................ 66
      4.3.2 Curve Evolution ........................................ 69
      4.3.3 Level Set Formulation .................................. 70
  4.4 Piecewise Constant DTI Segmentation Model .................... 70
      4.4.1 Curve Evolution Equation ............................... 72
      4.4.2 Level Set Formulation .................................. 72
      4.4.3 Implementation and Numerical Methods ................... 73
  4.5 Piecewise Smooth DTI Segmentation Model ...................... 75
      4.5.1 Discontinuity Preserving Smoothing ..................... 75
      4.5.2 Curve Evolution Equation and Level Set Formulation ..... 77
      4.5.3 Implementation and Numerical Methods ................... 77
  4.6 Experimental Results ......................................... 79
      4.6.1 Synthetic Tensor Field Segmentation .................... 79
      4.6.2 Single Slice DTI Segmentation .......................... 80
      4.6.3 DTI Segmentation in 3D ................................. 80

5 CONCLUSIONS ...................................................... 92

  5.1 Optimal b Values ............................................. 92
  5.2 DTI Restoration .............................................. 92
  5.3 DTI Segmentation ............................................. 94

REFERENCES ......................................................... 96

BIOGRAPHICAL SKETCH ............................................... 102

LIST OF TABLES

Table                                                                 page

1.1 MRI properties of different tissues .............................. 4

2.1 Optimum b-value for typical ADC range of human brains with multiple
    measurements .................................................... 26

2.2 Optimum diffusion weighting factors with multiple measurements for
    normal rat brains ............................................... 26

3.1 Comparison of the accuracy of the estimated DEVs using different
    methods for different noise levels .............................. 48

LIST OF FIGURES

Figure                                                                page

1.1  Proton spin and its interaction ................................. 2

1.2  Equilibrium magnetization of proton spins ....................... 3

1.3  Detection of net magnetization .................................. 4

1.4  Ellipsoid visualization of diffusion tensors .................... 5

1.5  A slice of DWI taken from a normal rat brain .................... 7

1.6  A slice of DWI taken from a normal rat brain with various diffusion
     weighting factors .............................................. 8

1.7  Fiber tract maps computed from diffusion tensor imaging ......... 9

1.8  A general framework of DT-MRI analysis ......................... 10

2.1  Distribution of ADC values in the white matter and the gray matter
     of a normal rat brain ......................................... 22

2.2  Simulation results of different ADC estimates .................. 24

2.3  Variance of different ADC estimates when σ = 0.05 .............. 25

2.4  Weighted COV of the nonlinear ADC estimate versus the diffusion
     weighting factors ............................................. 27

3.1  A slice of several volumetric complex DWIs ..................... 50

3.2  A slice of the original (ground truth) and the estimated diffusion
     tensor fields ................................................. 51

3.3  A slice of the computed DEV field for the noisy synthetic data . 52

3.4  A slice of the normal rat brain DTI estimated using multivariate
     linear regression without smoothing, viewed channel by channel . 53

3.5  A slice of the normal rat brain DTI estimated using multivariate
     nonlinear regression without smoothing, viewed channel by channel 54

3.6  A slice of the normal rat brain DTI estimated using our proposed
     method, viewed channel by channel .............................. 55

3.7  A slice of the computed DEV field from a normal rat brain ...... 56

3.8  A slice of the normal rat brain DTIs in the region of the cerebellum,
     viewed channel by channel ...................................... 57

4.1  Transformation of tensor fields ................................ 62

4.2  Evolution of a curve and its corresponding higher dimensional
     function ....................................................... 71

4.3  Segmentation of a synthetic tensor field where two regions differ
     only in orientation ............................................ 82

4.4  Segmentation of a synthetic tensor field where two regions differ
     only in scale .................................................. 83

4.5  Segmentation of a synthetic tensor field with two roughly homogeneous
     regions that differ only in orientation ........................ 84

4.6  Segmentation of a synthetic tensor field with two roughly homogeneous
     regions that differ only in scale .............................. 85

4.7  A slice of the DTI of a normal rat spinal cord ................. 86

4.8  Segmentation of the slice of DTI shown in Fig. 4.7 ............. 87

4.9  A slice of the DTI of a normal rat brain ....................... 88

4.10 Segmentation of the corpus callosum from a slice of DTI shown in
     Fig. 4.9 ....................................................... 89

4.11 Segmentation of the corpus callosum from a slice of the DTI in
     Fig. 4.9 using the piecewise smooth model ...................... 90

4.12 3D segmentation of the DTI of a normal rat spinal cord ......... 90

4.13 3D segmentation of the corpus callosum from the DTI of a normal
     rat brain ...................................................... 91

Abstract of Dissertation Presented to the Graduate School
of the University of Florida in Partial Fulfillment of the
Requirements for the Degree of Doctor of Philosophy

DIFFUSION TENSOR FIELD RESTORATION AND SEGMENTATION

By Zhizhou Wang

August 2004

Chair: Baba C. Vemuri
Major Department: Computer and Information Science and Engineering

Diffusion Tensor Magnetic Resonance Imaging (DT-MRI) is a relatively new MRI

modality that can be used to study tissue microstructure, e.g., white matter con-

nectivity in the brain in vivo. It has attracted vast attention during the past few years.

In this dissertation, novel solutions to two important problems in DT-MRI analysis:

diffusion tensor field (aka diffusion tensor image, DTI) restoration and segmentation

are presented.

For DTI restoration, we first develop a model for estimating the accuracy of a

nonlinear estimator used in estimating the apparent diffusivity coefficient (ADC). We

also use this model to design optimal diffusion weighting factors by accounting for

the fact that the ground truth of the ADC is a distribution instead of a single value.

The proposed method may be extended to study the statistical properties of DTI

estimators and to design corresponding optimal acquisition parameters.

We then present a novel constrained variational principle for simultaneous smooth-

ing and estimation of the DTI from complex valued diffusion weighted images (DWI).

The constrained variational principle involves the minimization of a regularization

term of LP smoothness norms, subject to a nonlinear inequality constraint on the

data. The data term we employ is the original Stejskal-Tanner equation instead of

the linearized version usually employed in the literature. The complex valued nonlinear

form leads to a more accurate (when compared to the linearized version) estimate

of the DTI. The inequality constraint requires that the nonlinear least squares data

term be bounded from above by a known tolerance factor. Finally, in order to ac-

commodate the positive definite constraint on the diffusion tensor, it is expressed

in terms of Cholesky factors and estimated. The constrained variational principle

is solved using the augmented Lagrangian technique in conjunction with the limited

memory quasi-Newton method.

For DTI segmentation, we present a novel definition of tensor "distance" grounded

in concepts from information theory and incorporate it in the segmentation of DTI.

The diffusion tensor is a symmetric positive definite (SPD) tensor at each point of a

DTI and can be interpreted as the covariance matrix of a local Gaussian distribu-

tion. Thus, a natural measure of dissimilarity between diffusion tensors would be

the Kullback-Leibler (KL) divergence or its relatives. In particular, we define a tensor

"distance" as the square root of the J-divergence (symmetrized KL) between two

Gaussian distributions corresponding to the tensors being compared. Our definition

leads to novel closed form expressions for the "distance" itself and the mean value

of a DTI. We then incorporate this new tensor "distance" in a region based active

contour model for DTI segmentation, and the closed form expressions we derived greatly

help the computation.

CHAPTER 1
INTRODUCTION

In their seminal work [1], Basser et al. introduced diffusion tensor magnetic

resonance imaging (DT-MRI) as a new MRI modality from which anisotropic water

diffusion can be inferred quantitatively. Since then, DT-MRI has become a powerful

method to study the tissue microstructure, e.g., white matter connectivity in the

brain in vivo. DT-MRI analysis consists of a variety of interesting problems: diffusion

weighted image (DWI) acquisition parameter optimization, diffusion tensor field (aka

diffusion tensor image, DTI) restoration, DTI segmentation, DTI registration, DTI

atlas construction, fiber tracking and visualization. Since this research area is still

very young, most questions raised in the above problems have not been solved yet.

In this chapter, the basics of MRI and DT-MRI are reviewed, followed by a literature

review of the state-of-the-art research in DT-MRI analysis.

1.1 ABC of Magnetic Resonance Imaging

Magnetic resonance imaging (MRI) is a powerful noninvasive imaging modality

used mainly to "see" the soft tissues inside the human body in medical sciences. It

has its beginnings in the early 1970s and is based on the physics of nuclear magnetic

resonance that was thoroughly studied in the first half of the 20th century. Here only

the most fundamental concepts and ideas in MRI are briefly presented; readers are

referred to Haacke et al. [2] for an excellent and complete coverage of this topic.

1.1.1 Nature of a Proton Spin with a Magnetic Field

1H or proton MR is the most common technique for imaging biological systems

due to the high concentration of protons in the human body and their high MR
sensitivity. A classical

view of a proton is a tiny spinning positive charge. The high angular rotation of


Figure 1.1: Proton spin and its interaction. (a) Proton spin. (b) Interaction of a
proton spin with an external magnetic field.

a proton causes a circular current loop I, which in turn brings a magnetic moment

μ (see Fig. 1.1(a)). As shown in Fig. 1.1(b), a spinning proton immersed in an

external magnetic field B0 will have a precessional motion around the field direction

if μ is not aligned with B0. The frequency of this precession is called the Larmor

frequency and is given by

ω0 = γB0    (1.1)

where the constant γ is called the gyromagnetic ratio.
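Equation (1.1) can be checked numerically. The gyromagnetic ratio used below is the standard literature value for ¹H, assumed here rather than given in the text:

```python
import math

# Numerical check of equation (1.1): the Larmor frequency is proportional
# to the external field strength B0. GAMMA_H is the standard gyromagnetic
# ratio of 1H (an assumed constant, not quoted from the text).
GAMMA_H = 2.675e8  # rad / (s * T)

def larmor_frequency_hz(b0_tesla):
    """f0 = omega0 / (2*pi) = gamma * B0 / (2*pi), in Hz."""
    return GAMMA_H * b0_tesla / (2.0 * math.pi)

# At a clinical field strength of 1.5 T, protons precess at roughly 64 MHz.
print(larmor_frequency_hz(1.5) / 1e6)  # in MHz
```

Doubling B0 doubles the precession frequency, which is why higher-field magnets shift the resonance to higher radio frequencies.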

1.1.2 Net Magnetization and Its Detection

Consider an object that contains numerous protons and place it in a large static

magnetic field. Eventually the precessing protons will be aligned with the external

magnetic field; however, the alignments for a group of protons might be parallel or

antiparallel due to thermal agitation at the absolute temperature T. Still, the

excess of parallel alignments will bring a net equilibrium magnetization Mo that is

proportional to the spin density po everywhere (Fig. 1.2).

Even in a large body, the non-vanishing net magnetization is not significant

enough to be detected when it is aligned with the applied magnetic field. However,


Figure 1.2: Equilibrium magnetization of proton spins. The black colored ones are
parallel while the gray colored ones are antiparallel to the external magnetic field.

as shown in Fig. 1.3, the magnetization vector can be tipped away by a nearby radio-

frequency (rf) magnetic pulse; it then precesses around the static magnetic field at

the frequency given by equation (1.1). It is now possible to detect this precessing

magnetization using a closely placed coil in which an electronic signal is induced

and measured. There are three tissue properties that have important impact on

the strength of the signal. One of them is the spin density; the other two are the

longitudinal relaxation time T1 and the transversal relaxation time T2. Both T1 and

T2 are tissue-specific time constants for protons and measure the rate of change in net

magnetization. While T1 determines the time taken for spinning protons to realign with

the external magnetic field, T2 measures the decay of magnetization perpendicular to

the main magnetic field.
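The two relaxations can be sketched with the standard mono-exponential laws Mz(t) = M0(1 − e^(−t/T1)) and Mxy(t) = M0 e^(−t/T2); the T1 and T2 values below are illustrative placeholders, not measurements from the text:

```python
import math

def mz(t, m0=1.0, t1=0.8):
    """Longitudinal recovery toward equilibrium: Mz(t) = M0 * (1 - exp(-t/T1))."""
    return m0 * (1.0 - math.exp(-t / t1))

def mxy(t, m0=1.0, t2=0.08):
    """Transverse decay after a 90-degree tip: Mxy(t) = M0 * exp(-t/T2)."""
    return m0 * math.exp(-t / t2)

# For most tissues T2 << T1, so the transverse signal has largely decayed
# long before the longitudinal magnetization has fully recovered.
print(mz(0.8), mxy(0.8))
```

This difference in time constants is what the T1- and T2-weighted contrasts in Table 1.1 exploit.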

1.1.3 Magnetic Resonance Imaging

Imaging aims to relate the signal to the spatial localization of various factors.

This is achieved by using a spatially variant magnetic field that leads to a spatially

changing distribution of resonance frequencies ω(x) = γ(B0 + G·x), where G is

the gradient of the magnetic field. The measured signal then contains a spectrum of

frequencies, as given by equation (1.2).

Figure 1.3: Detection of net magnetization. (a) The net magnetization is tipped away
by an rf pulse. (b) The precessing net magnetization is detected by a nearby coil.

Table 1.1: MRI properties of different tissues

     white matter  grey matter  water   bone  air   tumor   infarct
T1   bright        grey         dark    dark  dark  dark    dark
T2   bright        interim.     bright  dark  dark  bright  bright

S(t) = ∫ M(x) e^{iγG·x t} dx    (1.2)

where M(x) is the magnetization that is a combination of the spin density, longitu-

dinal relaxation, transversal relaxation and other factors like diffusion. M(x) can be

computed by inverting the received signals using Fourier transformation:

M(x) = ∫ S(k) e^{−ik·x} dk    (1.3)

where k = γGt is called the reciprocal space vector, a notation that makes the Fourier transformation

obvious. In practice, a particular slice in a 3D volume is selected using a magnetic field

with changing magnitude along a certain direction. Different kinds of contrast images

can be derived from the effective spin density. Table 1.1 shows the imaging properties

of various normal and abnormal tissues in the brain [3], with the abbreviation interim.

referring to an intermediate brightness value between dark and bright.
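A one-dimensional sketch of the Fourier relationship in equations (1.2) and (1.3), using the discrete Fourier transform as a stand-in for the continuous one; the grid size and the box-shaped "object" are arbitrary illustrative choices:

```python
import numpy as np

# The scanner measures (samples of) the Fourier transform S(k) of the
# magnetization M(x); the image is recovered by an inverse transform.
n = 64
m_true = np.zeros(n)
m_true[24:40] = 1.0          # a simple 1D "object"

s_k = np.fft.fft(m_true)     # simulated k-space measurements, cf. eq. (1.2)
m_rec = np.fft.ifft(s_k)     # reconstruction, cf. eq. (1.3)

print(np.allclose(m_rec.real, m_true))
```

With noiseless, fully sampled k-space data the object is recovered exactly (up to floating-point error); real acquisitions sample k-space line by line as the gradients are stepped.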


Figure 1.4: Ellipsoid visualization of diffusion tensors. (a) Anisotropic diffusion ten-
sor. (b) Isotropic diffusion tensor.

1.2 Background of DT-MRI

1.2.1 Diffusion Process

Diffusion is a process of intermingling molecules as a result of random thermal

agitation, and in our context, refers specifically to the random translational motion

of water molecules in the part of the anatomy being imaged with MR. In three

dimensional space, diffusivity can be described by a 3 × 3 matrix D, called the diffusion

tensor that is intimately related to the geometry and organization of the microscopic

environment.

The diffusion tensor D can be easily visualized as ellipsoids, as shown in Fig. 1.4,

where the axes of each ellipsoid correspond to the eigenvector directions of the diffu-

sion tensor and are scaled by the eigenvalues. The color of the ellipsoid ranging from

blue to red shows the lowest to highest degree of anisotropy. For example, diffusion

tensors in free water regions are shown as large blue ellipsoids that are essentially

spheres.
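The glyph construction can be sketched as follows; the fractional anisotropy (FA) formula used here for the color mapping is a standard choice assumed for illustration, not specified in the text:

```python
import numpy as np

# Ellipsoid glyph of a diffusion tensor: axes along the eigenvectors,
# scaled by the eigenvalues. FA is one standard scalar for mapping
# anisotropy to a blue-to-red color scale (an assumed convention).
def ellipsoid_axes_and_fa(d):
    """Return (eigenvalues, eigenvectors, fractional anisotropy) of SPD d."""
    evals, evecs = np.linalg.eigh(d)
    md = evals.mean()
    fa = np.sqrt(1.5 * np.sum((evals - md) ** 2) / np.sum(evals ** 2))
    return evals, evecs, fa

aniso = np.diag([1.0, 0.2, 0.2])   # cigar-shaped ellipsoid
iso = np.eye(3)                    # sphere, as in free water

_, _, fa_a = ellipsoid_axes_and_fa(aniso)
_, _, fa_i = ellipsoid_axes_and_fa(iso)
print(fa_i, fa_a)  # isotropic FA is 0; anisotropic FA is larger
```

The isotropic tensor yields FA = 0 (a blue sphere in the glyph convention above), while the elongated tensor yields a clearly positive FA.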

1.2.2 Diffusion Weighted Images

A diffusion weighted image S_l and the diffusion tensor D are related through the

Stejskal-Tanner equation as given by [1]

S_l = S0 e^{−b_l : D} = S0 e^{−Σ_{i=1..3} Σ_{j=1..3} b_l,ij D_ij}    (1.4)

where b_l is the diffusion weighting matrix of the l-th magnetic gradient and ":" denotes

the generalized inner product for matrices. Equation (1.4) is a compact form suitable

for mathematical analysis; some authors prefer to use LeBihan's b-factor and the

diffusion sensitizing gradient direction g = [g_x, g_y, g_z]^T to reflect the imaging physics

as follows:

S_l = S0 e^{−b g^T D g}

It is not hard to verify that the two forms agree when b_l,ij = b g_i g_j, i.e.,

b_l,11 = b g_x^2      b_l,12 = b g_x g_y    b_l,13 = b g_x g_z

b_l,21 = b g_y g_x    b_l,22 = b g_y^2      b_l,23 = b g_y g_z

b_l,31 = b g_z g_x    b_l,32 = b g_z g_y    b_l,33 = b g_z^2
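The agreement of the two forms can be verified numerically: with b_l = b g g^T, the generalized inner product b_l : D equals b g^T D g. The tensor, b-value, and gradient direction below are arbitrary illustrative values:

```python
import numpy as np

s0, b = 100.0, 1000.0                       # b in s/mm^2 (illustrative)
g = np.array([1.0, 1.0, 0.0]) / np.sqrt(2)  # unit gradient direction
d = np.array([[1.0, 0.2, 0.0],
              [0.2, 0.5, 0.0],
              [0.0, 0.0, 0.3]]) * 1e-3      # a made-up SPD tensor, mm^2/s

b_mat = b * np.outer(g, g)                  # b_{l,ij} = b * g_i * g_j
s_compact = s0 * np.exp(-np.sum(b_mat * d)) # b_l : D, generalized inner product
s_scalar = s0 * np.exp(-b * g @ d @ g)      # LeBihan scalar form

print(s_compact, s_scalar)  # the two forms agree
```

The signal is always attenuated relative to S0, and the attenuation grows with the diffusivity seen along the gradient direction g.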

The popular phase-encoding method for acquiring DT-MRI images yields com-

plex measurements; thus, S_l and S0 will be complex variables and equation (1.4) still

holds in such cases. In the following text, we assume S_l = R_l + iI_l and S0 = R0 + iI0

are complex variables, where R_l = real(S_l), I_l = imag(S_l), R0 = real(S0) and

I0 = imag(S0). Taking the magnitude on both sides in equation (1.4) yields

||S_l|| = ||S0|| e^{−b_l : D} = ||S0|| e^{−Σ_{i=1..3} Σ_{j=1..3} b_l,ij D_ij}    (1.5)

Taking the log on both sides of equation (1.5) leads to the following linearized

Stejskal-Tanner equation:

log(||S_l||) = log(||S0||) − b_l : D = log(||S0||) − Σ_{i=1..3} Σ_{j=1..3} b_l,ij D_ij    (1.6)

Figure 1.5 shows a slice of the magnitude, real part and imaginary part of a DWI.

Note that in all of the published literature to date, what has been considered is only the

magnitude of the complex measurements and in particular the linearized equation

(1.6).

(a) Magnitude (b) Real part (c) Imaginary part

Figure 1.5: A slice of DWI taken from a normal rat brain. Negative values in the real
part and the imaginary part are shown in blue.

1.2.3 Application of DT-MRI

Given several non-collinear diffusion weighted intensity measurements, D can

be estimated via multivariate regression models from equation (1.4) or its variants

such as the linearized version in equation (1.6). To improve the accuracy, different

b values are also employed to acquire the raw DWIs. Figure 1.6 shows the complex

valued DWIs in different directions and with different b values; the changes in DWIs

are clearly evident with different b values. Though the difference in DWIs is not as

obvious with changing directions, it is still visible inside the corpus callosum where

the diffusion tensor is anisotropic, since the value of the DWI at a pixel changes with

direction only when the diffusion tensor at that location is anisotropic. A general

principle is that water diffuses preferentially along ordered tissues, e.g., the brain white

matter; thus, diffusion anisotropy can then be computed to show microstructural and

physiological features of tissues [4]. Especially in highly organized nerve tissue, like

white matter, the diffusion tensor provides a complete characterization of the restricted

motion of water through the tissue that can be used to infer fiber tracts. Another

possible application of the diffusion tensor field is segmentation of anisotropic regions.

Figure 1.6: A slice of DWI taken from a normal rat brain with various diffusion
weighting factors. Left to right: fixed direction, varying b values. Top to bottom:
fixed b value, changing directions.

DT-MRI has recently become a powerful method to study the tissue microstruc-

ture, e.g., white matter connectivity in the brain in vivo. To this end, Fig. 1.7 is an

example depicting the fiber tract map which is traced using the diffusion tensor image

inside the corpus callosum of a human brain and is visualized as colored streamtubes

[5]. A general framework of DT-MRI analysis is shown in Fig. 1.8. The first step

of DT-MRI analysis is the optimal acquisition of the diffusion weighted images and

is of particular interest to imaging physicists as well as analysts. The second step is

the restoration of DTI and may involve both estimation and smoothing. Given the

restored DTI, a large amount of published work in the literature has been focused

on fiber tracking and visualization. However, with the rich information contained

in DTI, it is also beneficial to segment some structures which might not be easy to

achieve from the standard MRI images. This segmentation can also assist in focusing

on a smaller region of interest (ROI) for fiber tracking, a useful task in many clinical

Figure 1.7: Fiber tract maps computed from diffusion tensor imaging visualized as
colored streamtubes.

applications. Finally, it is crucial to validate the tracked fiber pathways with higher

resolution images like fluoro images of histology [6].

1.3 Literature Review

1.3.1 Optimal b Values

In DT-MRI, the most important control parameters in data acquisition are the

diffusion weighting factors (aka b-values) as shown in equation (1.4). By choosing

these b-values carefully, one can obtain high quality diffusion weighted images without

additional cost. Da Xing et al. [7] proposed a scheme to find optimal b-values for

measurement of the apparent diffusion coefficient (ADC) based on the use of two

diffusion weighted images acquired with b1 and b2. Armitage and Bastin [8] extended

Xing's result to DT-MRI. However, in [7] and [8], the reported methods use the linear

equation (1.6). Though not aiming at optimizing b values, Pierpaoli and Basser [9],

Basser and Pajevic [10], Anderson [11] and others have studied the effects of noise on


Figure 1.8: A general framework of DT-MRI analysis: from raw data to fiber tract
map computation, visualization and segmentation.

DTI estimation. Earlier works [9, 10] are based on numerical simulations while the

latest work [11] aims to provide a theoretical analysis. Anderson [11] uses a first order

power series expansion of equation (1.5) to get a noise model for the linearized equation

and then evaluates the perturbation of the estimated eigenvalues and eigenvectors.

Just recently, Brihuega-Moreno et al. [12] aimed to optimize diffusion measurements

by minimizing the Cramer-Rao lower bound (CRLB), which is a function of the

diffusion weighting factors. Their approach is independent of the estimation method;

however, this poses a practical difficulty in choosing an estimator which can achieve

the Cramer-Rao lower bound.
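The two-measurement scheme underlying the work of Xing et al. [7] can be sketched in a few lines. Under the mono-exponential model S = S0 e^(−b·ADC), two magnitudes S1, S2 at b-values b1 < b2 determine the ADC in closed form; the numbers below are synthetic illustrative values:

```python
import math

def adc_two_point(s1, s2, b1, b2):
    """ADC from two measurements: ADC = ln(S1/S2) / (b2 - b1)."""
    return math.log(s1 / s2) / (b2 - b1)

adc_true = 0.7e-3            # mm^2/s, a typical order of magnitude for brain
s0, b1, b2 = 100.0, 0.0, 1000.0

s1 = s0 * math.exp(-b1 * adc_true)   # noiseless synthetic signals
s2 = s0 * math.exp(-b2 * adc_true)

print(adc_two_point(s1, s2, b1, b2))  # recovers 0.7e-3
```

With noise added to S1 and S2, the variance of this estimator depends strongly on the spacing b2 − b1, which is exactly the trade-off the optimal b-value analyses cited above study.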

1.3.2 DTI Restoration

Currently there are two popular approaches: one involves smoothing the magni-

tude of the raw data ||S_l|| while preserving relevant detail and then estimating the

diffusion tensor D from the smoothed raw data [13, 14]. The raw data in this con-

text consist of several diffusion weighted images acquired for varying magnetic field

strengths and directions. Note that at least seven values at each 3D grid point in

the data domain are required to estimate the six unknowns in the 3 × 3 symmetric

tensor D and one scale parameter ||S0||. The raw data smoothing or de-noising can

be formulated using variational principles which in turn require solution to PDEs or

at times directly using PDEs which are not necessarily arrived at from variational

principles [15, 16, 17, 18, 19, 20].

Another approach to restore the DTI is to smooth the dominant eigenvector

(also referred to as the principal diffusion direction) after the diffusion tensor has been

estimated from the raw noisy measurements ||S_l||. In Poupon et al. [21], an energy

function based on a Markovian model was used to regularize the noisy dominant

eigenvector field computed directly from the noisy estimates of D obtained from the

measurements ||S_l|| using the linearized Stejskal-Tanner equation (1.6). Coulon et al.

[22] proposed an iterative restoration scheme for principal diffusion direction based
on direction map restoration work reported in Tang et al. [23]. Other sophisticated

vector field restoration methods [24, 25, 26] can potentially be applied to the problem

of restoring the dominant eigenvector fields computed from the noisy estimates of D.

Weickert [27] used a nonlinear diffusion technique to restore a given tensor field. The

scheme was a generalization of the nonlinear anisotropic regularization of vector-

valued image to the matrix-valued image case. The regularization term there involved

the trace of a penalized measure applied to the sum of the component-wise structure

tensors of the given matrix. Recently, Chefd'Hotel et al. [28, 29] presented an elegant

geometric solution to the problem of smoothing a noisy D that was computed from

||S_l|| using the log-linearized model (1.6) described above. They assume that the given

(computed) tensor field D from ||S_l|| is positive definite and develop a clever approach
based on differential geometry of manifolds to achieve constrained smoothing where

the smoothed tensor field is constrained to be positive semi-definite. Interesting

results of mapped fibers are shown for human brain MRI.

The idea of simultaneous estimation and smoothing of the diffusion tensors from

DWI was pioneered by our earlier work [30] and improved upon in [31]. This improve-

ment involved methods to overcome the problem of manual choice of regularization

control parameters. In both these works [30, 31], the estimated smooth tensor field

was guaranteed to be positive semi-definite. Moreover, these works were a report

of the first use of the nonlinear Stejskal-Tanner equation in the estimation of the

diffusion tensors. Recently, in Tschumperlé and Deriche [32], a robust version of the

linearized Stejskal-Tanner equation is used as the data term along with a robust reg-

ularization term in a unified variational principle to estimate a smooth D from the

noisy signal measurements. Note that the data term uses a linearized version of the

Stejskal-Tanner equation as in earlier works [22, 28, 21, 14].

1.3.3 DTI Segmentation

One key factor in tensor field analysis is a proper choice of tensor distance that

measures the similarity or dissimilarity between tensors and is particularly important

in DTI segmentation and registration.

In general, any kind of matrix norm can be used to induce a tensor distance.

One such example is the tensor Euclidean distance obtained by using the Frobenius

norm. Due to its simplicity, tensor Euclidean distance has been used extensively in

tensor field restoration [28, 27]. Alexander et al. [33] considered a number of tensor

similarity measures for matching diffusion tensor images and concluded empirically

that the Euclidean difference measure yields the best results. Though not many

sophisticated tensor distances have been proposed in tensor field analysis, there are

quite a few in the machine learning literature. Miller and Chefd'hotel [34] proposed

an interesting measure on transformation groups to design an invariant kernel for

non-parametric density estimation. What is most closely related to our present work

was proposed by Tsuda et al. [35]. They introduced information geometry in the

space of positive definite matrices to derive a Kullback-Leibler divergence for two

matrices and then used it in an "em" algorithm (distinct from the well-known expectation-

maximization (EM) algorithm) to approximate an incomplete kernel.

In the context of DT-MRI segmentation, recently, Zhukov et al. [36] proposed a

level set segmentation method which is in fact a segmentation of a scalar anisotropic

measure of the diffusion tensor. The fact that a scalar field computed from the

diffusion tensor field is used implies that they have ignored the direction information

contained in the tensor field. Thus, this method will fail when two homogeneous regions

of the tensor field have the same anisotropy but entirely different orientations.

To the best of our knowledge, before the acceptance of our work in [37], the

only published work which aimed to segment matrix-valued images was reported in

Feddern et al. [38]. In their work, Feddern et al. extended the concept of a structure

tensor to a tensor field and then used it for extending the geodesic active contours

to tensor fields. The stopping criterion in this case is chosen as a decreasing function

of the trace of the sum of the structure tensors formed from individual elements of

the given tensor field. This amounts to taking the Frobenius norm of the tensor

field formed by the gradient magnitude of the individual channels of the given tensor

field. This scheme is a gradient based active contour (snake) whose performance is

lacking in the absence of significant gradients in the measurements. Moreover, the norm used

there is not invariant to affine transformations of the input coordinates on which the

original tensor field is defined. Affine invariance is a very desirable property for the

segmentation scheme when dealing with data sets obtained from different hardware

or from subjects exhibiting anatomical changes due to the development of pathology

in medical imaging applications.
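The lack of affine invariance noted here can be checked numerically. The sketch below uses illustrative tensors and an illustrative transform (not data from this dissertation): under an affine change of the input coordinates x → Ax, a diffusion tensor (a covariance of displacements) transforms as T → A T Aᵀ, and the Frobenius-norm distance between two tensors changes.

```python
import numpy as np

def frobenius_dist(T1, T2):
    """Tensor Euclidean distance induced by the Frobenius norm."""
    return np.linalg.norm(T1 - T2, ord="fro")

# Two SPD tensors (illustrative values).
T1 = np.diag([3.0, 1.0, 1.0])
T2 = np.diag([1.0, 3.0, 1.0])

# An affine change of coordinates; tensors transform as T -> A T A^T.
A = np.diag([2.0, 1.0, 1.0])
d_before = frobenius_dist(T1, T2)
d_after = frobenius_dist(A @ T1 @ A.T, A @ T2 @ A.T)
print(d_before, d_after)  # the two values differ
```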

In [37], we applied a region based model for tensor field segmentation by incorpo-

rating a tensor distance based on matrix Frobenius norm. Simultaneously, Rousson

et al. [39] extended the classical surface evolution segmentation model by incorporat-

ing region statistics defined on the tensor fields for DT-MRI segmentation. In both

works [37, 39], the diffusion tensor is treated as a simple matrix and every component

is treated equally. However, the diffusion tensor is actually the covariance matrix of

a local diffusion process and different components have different importance/weight.

In [40], we were the first to use this fact in tensor field segmentation. In particular,

we proposed a novel tensor "distance" based on information theory and incorporated

it in region based models for tensor field segmentation. Very recently, the concepts

presented therein have been extended by Lenglet et al. [41], to the case of regions

with piecewise constant non-unit-variances. They also use the Fisher information

metric on the manifold of local Gaussians to achieve segmentation of the diffusion

tensor field.

1.4 Contributions

* Statistical Analysis of a Nonlinear Estimator for ADC and Its Application to

Optimizing b Values

We develop a model for estimating the accuracy of a nonlinear estimator used

in computing the apparent diffusivity coefficient (ADC) which provides useful

information about the structure of tissue being imaged with diffusion weighted

MR. Further, we study the statistical properties of the nonlinear estimator and

use them to design optimal diffusion weighting factors. Specifically, we show

that a weighted linear estimator can well approximate the nonlinear estima-

tor and thus can be used to analyze the statistical properties of the nonlinear

estimator. Furthermore, to account for the fact that the ground truth of the

ADC is a distribution instead of a single value, a weighted coefficient of variation

(COV) is proposed as a criterion to be minimized for the determination of the

optimal diffusion weighting factors. Our model has the potential to be extended

to analyze the statistical properties of a nonlinear estimator for diffusion tensor

and thus obtain the optimal b values for DTI acquisition.

* A Constrained Variational Principle for DTI Restoration.

We present a novel constrained variational principle for simultaneous smooth-

ing and estimation of the diffusion tensor field from complex valued diffusion

weighted images (DWI). The constrained variational principle involves the min-

imization of a regularization term using LP smoothness norm, subject to a

nonlinear inequality constraint on the data. The data term we employ is the

original Stejskal-Tanner equation instead of the linearized version usually em-

ployed in the literature. We empirically show that the complex valued nonlinear

form leads to a more accurate (when compared to the linearized version) es-

timate of the DTI. The inequality constraint requires that the nonlinear least

squares data term be bounded from above by a known tolerance factor which

can be computed from the data. Finally, in order to accommodate the positive

definite constraint on the diffusion tensor, it is expressed in terms of Cholesky

factors and estimated. The constrained variational principle is solved efficiently

by using the augmented Lagrangian technique in conjunction with the limited

memory quasi-Newton method.

* DTI Segmentation Using an Information Theoretic Tensor Dissimilarity Mea-

sure. We aim to segment the DTI using all the information contained in the

tensors defining the field, not only the scalar anisotropic properties, but also

the orientation information contained therein. The key step will be a novel

definition of the matrix distance which can be used to measure heterogeneity of

the diffusion tensor field quantitatively. We present a novel definition of tensor

"distance" grounded in concepts from information theory and incorporate it in

the segmentation of tensor-valued images. In DTI, the symmetric positive def-

inite (SPD) diffusion tensor at each point can be interpreted as the covariance

matrix of a local Gaussian distribution. Thus, a natural measure of dissimilar-

ity between diffusion tensors would be the Kullback-Leibler (KL) divergence or

its relatives. In particular, we define a tensor "distance" as the square root of

the J-divergence (symmetrized KL) between two Gaussian distributions corre-

sponding to the tensors being compared. Our definition leads to novel closed

form expressions for the "distance" itself and the mean value of a DTI. Unlike

the traditional Frobenius norm-based tensor distance, our "distance" is affine

invariant, a desirable property in many applications. We then incorporate this

new tensor "distance" in a region-based active contour model for DTI segmen-

tation, and the closed-form expressions we derived greatly simplify the computation.

To our knowledge, this is the first use of a region-based active contour model

for diffusion tensor field segmentation.
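The closed form referenced above can be sketched concretely. For zero-mean Gaussians with covariances T1 and T2, the J-divergence reduces to (1/4)(tr(T1⁻¹T2 + T2⁻¹T1) − 2n), so the proposed "distance" is (1/2)·sqrt(tr(T1⁻¹T2 + T2⁻¹T1) − 2n); the example tensors and affine map below are illustrative, and the final check exercises the affine invariance claimed in the text.

```python
import numpy as np

def kl_tensor_dist(T1, T2):
    """Square root of the J-divergence (symmetrized KL) between the zero-mean
    Gaussians with covariances T1 and T2:
    d(T1, T2) = (1/2) * sqrt(tr(T1^{-1} T2 + T2^{-1} T1) - 2n)."""
    n = T1.shape[0]
    t = np.trace(np.linalg.solve(T1, T2) + np.linalg.solve(T2, T1))
    return 0.5 * np.sqrt(t - 2 * n)

T1 = np.array([[2.0, 0.3, 0.0],
               [0.3, 1.0, 0.1],
               [0.0, 0.1, 0.5]])
T2 = np.eye(3)
A = np.array([[1.0, 0.4, 0.0],
              [0.0, 2.0, 0.1],
              [0.0, 0.0, 0.7]])                  # an affine map of the coordinates

d0 = kl_tensor_dist(T1, T2)
d1 = kl_tensor_dist(A @ T1 @ A.T, A @ T2 @ A.T)  # tensors transform as T -> A T A^T
print(d0, d1)  # the two distances agree: the measure is affine invariant
```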

CHAPTER 2
STATISTICAL ANALYSIS OF A NONLINEAR ESTIMATOR FOR ADC AND
ITS APPLICATION TO OPTIMIZING b VALUES

2.1 Introduction

Prior to the invention of DT-MRI, the apparent diffusion coefficient (ADC) had been

used extensively as a contrast mechanism in clinical MRI. The principle underlying

both ADC and DT-MRI is the sensitivity of the magnetic resonance (MR) signal to

the random motion of water molecules. In the case of ADC, the diffusion weighted

image (DWI) is also given by the Stejskal-Tanner equation:

S = S_0 e^{-b D} (2.1)

where D is the ADC, and So is the non-DW image.

DWIs are often corrupted by noise; the noise model for complex valued data is [42]

S = S_0 e^{-b D} + n_r + i n_i (2.2)

where n_r and n_i are uncorrelated Gaussian noise with zero mean and standard deviation σ. Given a set of DWIs acquired with different b-values, the ADC can be estimated

via linear or nonlinear regression.
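The noise model (2.2) is straightforward to simulate; the sketch below (with illustrative parameter values) draws complex DWI measurements for a set of b-values and is the kind of generator used in the simulation experiments later in this chapter.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_dwi(S0, D, b_values, sigma, n_trials=1):
    """Draw complex DWI measurements S = S0*exp(-b*D) + n_r + i*n_i, with
    n_r, n_i i.i.d. zero-mean Gaussians of standard deviation sigma (eq. 2.2)."""
    b = np.asarray(b_values, dtype=float)
    clean = S0 * np.exp(-b * D)                  # noiseless signal per b-value
    noise = sigma * (rng.standard_normal((n_trials, b.size))
                     + 1j * rng.standard_normal((n_trials, b.size)))
    return clean + noise

S = simulate_dwi(10 + 10j, 1.0e-3, [100, 500, 1000], sigma=0.5, n_trials=4)
print(S.shape)  # (4, 3): one row of three b-values per trial
```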

Linear estimators of the ADC and the diffusion tensor have been well studied and

also used in designing optimal b values. However, analysis regarding the nonlinear

estimation has not been considered before. In this chapter, statistical analysis of a

nonlinear estimator for ADC and its application to optimizing b values are carried

out. The method used here also sheds some light on how to analyze the nonlinear

estimator for the diffusion tensor and design a corresponding optimal acquisition method.

2.2 Theory

2.2.1 A Nonlinear Estimator for ADC

Let S_0 = R_0 + i I_0 be the echo intensity without a magnetic gradient, D be the

ADC, and S_k = R_k + i I_k be the measurement for the kth magnetic gradient, k = 1, ..., K.

Since S_k = S_0 e^{-b_k D} + n_r + i n_i as in equation (2.2), the sum of squared errors (SSE)

that measures the goodness of estimation is

E(S_0, D) = Σ_k [(R_k − R_0 e^{-b_k D})^2 + (I_k − I_0 e^{-b_k D})^2] (2.3)

Then we have a nonlinear estimate (S_0*, D*) that is the solution of a nonlinear opti-

mization problem, i.e.,

(S_0*, D*) = argmin_{S_0, D} E(S_0, D) (2.4)

Though there is no analytical form for the nonlinear estimate (S_0*, D*), it can be

efficiently computed numerically. The Levenberg-Marquardt (LM) method [43] is ideal for

nonlinear least squares optimization problems like equation (2.4). Let

F(S_0, D) = [R_1 − R_0 e^{-b_1 D}, ..., R_K − R_0 e^{-b_K D}, I_1 − I_0 e^{-b_1 D}, ..., I_K − I_0 e^{-b_K D}]^T

Then

E(S_0, D) = F(S_0, D)^T F(S_0, D)

The Jacobian matrix J(S_0, D) of F(S_0, D) with respect to (R_0, I_0, D) has rows

[−e^{-b_k D}, 0, b_k R_0 e^{-b_k D}], k = 1, ..., K, for the real components, and

[0, −e^{-b_k D}, b_k I_0 e^{-b_k D}], k = 1, ..., K, for the imaginary components.

Then the gradient G(S_0, D) of E(S_0, D) is given by

G(S_0, D) = 2 J(S_0, D)^T F(S_0, D)

If we denote the Hessian matrix of the kth component of F(S_0, D) as H_k(S_0, D), then

the Hessian matrix H(S_0, D) of E(S_0, D) is

H(S_0, D) = 2 J(S_0, D)^T J(S_0, D) + 2 Σ_k F_k(S_0, D) H_k(S_0, D)

where the sum runs over the components of F.

The essence of the LM method is to compute a search direction at iteration n accord-

ing to

(J_n^T J_n + λ_n I) d_n = −J_n^T F_n (2.5)

where λ_n ≥ 0 is a control parameter. When λ_n is zero, the search direction is com-

puted as in the Gauss-Newton method. As λ_n goes to infinity, the search direction ap-

proaches the steepest descent direction. With proper choices of λ_n, the LM method

can perform like the gradient descent method when the current solution is far from

the final solution and it can approximate the Gauss-Newton method when the current

solution is close to the final solution.
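A minimal sketch of the nonlinear fit described above: here `scipy.optimize.least_squares` with `method="lm"` stands in for the LM routine of [43], the residual vector stacks the real and imaginary residuals of (2.3), and all parameter values are illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(1)
b = np.array([100.0, 500.0, 1000.0])                 # s/mm^2 (illustrative)
R0_true, I0_true, D_true = 10.0, 10.0, 1.0e-3
S = (R0_true + 1j * I0_true) * np.exp(-b * D_true) \
    + 0.5 * (rng.standard_normal(b.size) + 1j * rng.standard_normal(b.size))

def residuals(theta):
    """Stacked real/imaginary residuals, i.e. the vector F(S0, D) of (2.3)."""
    R0, I0, D = theta
    model = (R0 + 1j * I0) * np.exp(-b * D)
    return np.concatenate([S.real - model.real, S.imag - model.imag])

# Levenberg-Marquardt for the nonlinear least-squares problem (2.4).
fit = least_squares(residuals, x0=[5.0, 5.0, 5.0e-4], method="lm")
R0_est, I0_est, D_est = fit.x
print(D_est)  # close to the ground truth 1.0e-3
```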

2.2.2 Variance of the Nonlinear Estimate for ADC
The nonlinear estimate given in (2.4) can only be computed numerically, thus
it is impossible to analytically compute its statistical properties. We propose to
use a weighted linear estimator to approximate the nonlinear estimator. Since the
weighted linear estimator has a closed-form solution, we can analytically approximate
the statistical behavior of the nonlinear estimator.
The sum of weighted squared errors of the linearized form of the model in (2.1)
is

E_w(||S_0||, D) = Σ_k w_k^2 (ln(||S_k||) − ln(||S_0||) + b_k D)^2 (2.6)

where w_k = ||S_0|| e^{-b_k D_g}, k = 1, ..., K, are the weights and D_g is the ground truth. A
corresponding weighted linear estimate is given by

(I So I\*, D*) = min| s, DE\(lI So\, D) (2.7)

The solution (||So||*, D*) can be easily computed as

In(||So |*) ] A [ Z lln( SI I ) (2.8)
D In l (|I|S I I)

where

W 1

Yk i; Y: k ';k
a\
A^- [ ZVib Z 07 Q

It is pointed out in [42] that ||S_k|| ≈ ||S_0|| e^{-b_k D} + n when SNR > 3; then by using

a Taylor expansion on ln(||S_k||), we have

ln(||S_k||) = ln(||S_0|| e^{-b_k D} + n) ≈ ln(||S_0|| e^{-b_k D}) + n / (||S_0|| e^{-b_k D}) (2.9)

Thus the noise on ln(||S_k||) has standard deviation σ/w_k, and we have

Var(D*) = σ^2 Σ_k w_k^2 / (Σ_k w_k^2 · Σ_k w_k^2 b_k^2 − (Σ_k w_k^2 b_k)^2) (2.10)

It is easy to prove that equation (2.10) is the same as the CRLB given in [12] for

two b-values, and we found empirically that this also holds for more than two

b-values. However, whether equation (2.10) and the CRLB given in [12] are

actually the same remains to be explored. Note that this weighted linear estimator

is NOT a realistic estimator since the weights are related to the ground truth;

however, it serves well as a tool to analyze the nonlinear estimator, which is validated

by simulation experiments later.
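The closed-form estimator above can be sketched and checked by a small Monte Carlo run (illustrative settings); as the text notes, the weights depend on the ground truth D_g, so this is an analysis tool rather than a practical estimator.

```python
import numpy as np

rng = np.random.default_rng(2)
b = np.array([0.0, 1000.0, 2000.0])                  # s/mm^2 (illustrative)
S0_mag, D_g, sigma = abs(10 + 10j), 1.0e-3, 0.05

def weighted_linear_estimate(S_mag):
    """Closed-form weighted least-squares fit of (ln||S0||, D) to the model
    ln||S_k|| = ln||S0|| - b_k*D, with weights w_k^2 = (||S0|| e^{-b_k D_g})^2."""
    w2 = (S0_mag * np.exp(-b * D_g)) ** 2
    X = np.column_stack([np.ones_like(b), -b])       # regressors for [ln||S0||, D]
    y = np.log(S_mag)
    theta = np.linalg.solve(X.T @ (w2[:, None] * X), X.T @ (w2 * y))
    return theta                                     # [ln||S0||*, D*]

# Monte Carlo check that the estimate of D is nearly unbiased at high SNR.
est = [weighted_linear_estimate(np.abs(S0_mag * np.exp(-b * D_g)
                                       + sigma * rng.standard_normal(b.size)))[1]
       for _ in range(2000)]
print(np.mean(est))  # close to D_g = 1.0e-3
```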

2.2.3 Weighted COV of the Nonlinear Estimator for ADC

Equation (2.10) shows the variance of the weighted linear estimator when the

ground truth is a single ADC value. However, the ground truth Dg is usually a

distribution instead of a single value as shown in Fig. 2.1. Thus we propose to use

the following weighted COV of D*.

COV_w(D*) = ∫ COV(D*|D_g) p(D_g) dD_g (2.11)

where COV(D*|D_g) = sqrt(Var(D*|D_g)) / D_g and p(D_g) is the probability density function

of the ground truth.
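Equation (2.11) can be evaluated by replacing the integral with a discrete sum over sampled (or histogram-derived) ground-truth values. The sketch below uses the variance expression (2.10) as reconstructed from standard weighted least-squares theory, with illustrative values.

```python
import numpy as np

def var_D(b, D_g, S0_mag, sigma):
    """Variance of the weighted linear estimate D* (expression (2.10),
    as reconstructed from standard weighted least-squares theory)."""
    w2 = (S0_mag * np.exp(-b * D_g)) ** 2
    det = w2.sum() * (w2 * b ** 2).sum() - (w2 * b).sum() ** 2
    return sigma ** 2 * w2.sum() / det

def weighted_cov(b, D_support, density, S0_mag, sigma):
    """Weighted COV of D* (equation 2.11), with the integral replaced by a
    discrete sum over sampled ground-truth ADC values and their density."""
    cov = np.sqrt([var_D(b, Dg, S0_mag, sigma) for Dg in D_support]) / D_support
    return float(np.sum(cov * density))

# Uniform ADC distribution over a typical range (illustrative values).
D_support = np.linspace(0.3e-3, 3.0e-3, 100)
density = np.full_like(D_support, 1.0 / D_support.size)
b = np.array([0.0, 500.0, 1500.0])
print(weighted_cov(b, D_support, density, S0_mag=14.14, sigma=0.5))
```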

Figure 2.1: Distribution of ADC values in the white matter and the gray matter of a
normal rat brain.

2.2.4 Optimal Diffusion Weighting Factors

The weighted COV of D* in equation (2.11) depends on the ground truth distri-

bution, total number of measurements and the choice of diffusion weighting factors.

The ground truth distribution cannot be controlled. Though increasing the total number

of measurements will decrease the weighted COV, this will also increase the acquisi-

tion time, which is not desirable. When the total number of measurements is fixed,

what one can do is to choose the diffusion weighting factors carefully to get a minimum

weighted COV.
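This selection can be carried out numerically. In the sketch below (illustrative settings), a derivative-free local optimizer stands in for the SQP routine used later in the text; b_1 is kept fixed at 0 and (b_2, b_3) are optimized to minimize the weighted COV.

```python
import numpy as np
from scipy.optimize import minimize

S0_mag, sigma = 14.14, 0.5                               # illustrative settings
D_support = np.linspace(0.3e-3, 3.0e-3, 64)
density = np.full_like(D_support, 1.0 / D_support.size)  # uniform ground-truth density

def weighted_cov(b_free):
    """COV_w(D*) for the b-values (0, b2, b3); b1 = 0 is kept fixed."""
    b = np.array([0.0, *np.abs(b_free)])
    total = 0.0
    for Dg, pi in zip(D_support, density):
        w2 = (S0_mag * np.exp(-b * Dg)) ** 2
        det = w2.sum() * (w2 * b ** 2).sum() - (w2 * b).sum() ** 2
        total += np.sqrt(sigma ** 2 * w2.sum() / det) / Dg * pi
    return total

# A derivative-free local optimizer stands in for the SQP routine of the text.
res = minimize(weighted_cov, x0=[500.0, 1500.0], method="Nelder-Mead")
print(np.abs(res.x))   # optimized (b2, b3)
```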

2.3 Results

2.3.1 Simulation Results of Different Estimators

The simulation procedure that we developed consists of the following steps:

1. Choose a ground truth Dg and So.

2. Choose a set of diffusion weighting factors b_1, ..., b_N.

3. Generate n complex measurements using (2.2) with the selected set of b-values.

4. Estimate D* using the nonlinear estimator in (2.4), the weighted linear estima-

tor in (2.7) and the linear estimator in Xing et al. [7] respectively.

The last two steps are executed for several trials. The following settings were used

in our simulation here:

D_g = 0.001 mm^2/s, S_0 = 10 + 10i, b_1 = 100 s/mm^2, b_2 = 500 s/mm^2 and b_3 =

1000 s/mm^2, σ = 0.5 or σ = 1.0.

The results of the simulation are shown in Fig. 2.2. It is evident that the

nonlinear estimator yields the best results; the weighted linear least squares estimator

produces outcomes similar to those of the nonlinear estimator, while the linear least squares model

does not perform as well as the other two. This indicates that the nonlinear estimator

should be used for better accuracy.

2.3.2 Variance of Different Estimates

The variance of the nonlinear estimate can only be estimated by simulation. This

simulation is similar to the one described earlier. However, the total number of trials is

set to 2000 for a stable and accurate estimation of the variance. The variance

of the linear (LSQR) estimate is given in Bito et al. [44] and the weighted linear

estimate (weighted LSQR) can be computed from equation (2.10). The settings used

here are:

D_g = 0.001 mm^2/s, S_0 = 10 + 10i, b_1 = 0, b_3 = 2000 s/mm^2, b_2 will vary from 0

to 3000 s/mm^2 and σ = 0.05.

The results of the computation are shown in Fig. 2.3. It is clear that the variance

of the weighted linear estimate is a good approximation of the nonlinear estimate.

Also we see that in most cases, the nonlinear estimate yields a smaller variance than

the linear estimate. However, they happen to have the same variance when b2 = b1

or b2 = b3.

Figure 2.2: Simulation results of different ADC estimates. Top: simulation results
when σ = 0.5. Bottom: simulation results when σ = 1.0.

Figure 2.3: Variance of different ADC estimates when σ = 0.05.

2.3.3 Weighted COV and Optimal Diffusion Weighting Factors

It is easy to compute the weighted COV in equation (2.11) by numerical inte-

gration given the ground truth ADC distribution and the diffusion weighting factors, and it is also

possible to find optimal diffusion weighting factors numerically. Currently we use the

medium-scale sequential quadratic programming (SQP) algorithm in Matlab as the

numerical optimization procedure to achieve this.

For our first experiment, we use the typical ADC range found in the human brain

as given in Brihuega-Moreno et al. [12] and assume the distribution to be uniform.

In this case, we can use the dimensionless β-value, which is defined as β = min(D_g) · b.

The optimal β-values with multiple measurements up to five are summarized in Table

2.1. Notice that our result is quite different from [12] and these differences are caused

by the different precision in the numerical integration techniques used in the step

to compute the weighted COV. Our integration has higher accuracy than the one

26

Table 2.1: Optimum β-values for the typical ADC range of human brains with multiple
measurements. m is the total number of measurements; the number of measurements
at a particular β-value is shown in brackets in the event that it is more than one, and
the unit of the D values is mm^2/s.

D range [0.3 × 10^-3, 3 × 10^-3]      D range [0.1 × 10^-3, 3 × 10^-3]

m  β1  β2        β3                    m  β1  β2        β3
2  0   0.210                           2  0   0.080
3  0   0.223(2)                        3  0   0.055     0.170
4  0   0.190(2)  0.380                 4  0   0.064(2)  0.276
5  0   0.201(3)  0.519                 5  0   0.070(3)  0.328

Table 2.2: Optimum diffusion weighting factors with multiple measurements for nor-
mal rat brains. m is the total number of measurements; the number of measurements
at a particular b-value is shown in brackets in the event that it is more than one, and
the unit of the b-values is s/mm^2.

Grey matter              White matter
m  b1    b2              m  b1  b2
2  0     2370            2  0   2830
3  0     2530(2)         3  0   3010(2)
4  0     2650(3)         4  0   3150(3)
5  0(2)  2460(4)         5  0   3250(4)

used in [12] and hence yields more accurate results. For our second experiment,

we use real distributions of the ADC values estimated from a normal rat brain.

Histograms of the ADC values from different parts of a normal rat brain are used

as these distributions. These are not uniform, as is evident from Fig. 2.1. Results

of weighted COV versus b2 and b3 using (0, b2, b3) as diffusion weighting factors are

shown in Fig. 2.4. Furthermore, the optimal b-values with multiple measurements of

up to five are summarized in Table 2.2. Notice that we get two different b values as

the optimal diffusion weighting factors for these real distributions.

Figure 2.4: Weighted COV of the nonlinear ADC estimate versus the diffusion weighting
factors. COV_w(D*) of the gray matter (first row) and the white matter (second row)
versus b_2 and b_3 using (0, b_2, b_3) as diffusion weighting factors.


CHAPTER 3
A CONSTRAINED VARIATIONAL PRINCIPLE FOR DIRECT ESTIMATION
AND SMOOTHING OF THE DTI FROM COMPLEX DWIs

3.1 Introduction

We are the first to propose the idea of simultaneous estimation and smoothing of

the diffusion tensors from DWI [30] and gave a significant improvement later on in [31].

We further extend our model [31] to restoration of DTI from complex valued diffusion

weighted images. Specifically, we propose a novel formulation of the DTI estimation

and smoothing as a constrained optimization problem. The specific approach we

use is called the augmented Lagrangian technique which allows one to deal with

inequality constraints. The novelty of our formulation lies in the ability to directly,

in a single step process, estimate a smooth D from the noisy complex measurements

S_l with the preservation of its positiveness. The formulation does not require any

ad hoc methods of setting tuning parameters to achieve the solution. These are the

key features distinguishing our solution method from methods reported in the literature to

date.

In contrast to our solution (to be described subsequently in detail), most of the

earlier approaches used a two step method involving (i) computation of a D from

||S_l|| using a linear least-squares approach and then (ii) computing a smoothed D

via either smoothing of the eigenvalues and eigenvectors of D or using the matrix

flows approach in Chefd'hotel et al. [28]. The problem with the two step approach to

computing D is that the estimated D in the first step using the log-linearized model

need not be positive definite or even semi-definite. Moreover, it is hard to trust the

fidelity of the eigenvalues and vectors computed from such matrices even if they are

to be smoothed subsequently prior to mapping out the nerve fiber tracts.

Briefly, our model seeks to minimize a cost function involving the sum of an

LP norm based gradient of the Cholesky factor L, which ensures the positiveness of D

via the Cholesky factorization LL^T, and an LP norm based gradient of S_0, subject

to a nonlinear data constraint based on the complex (not linearized) Stejskal-Tanner

equation (1.4). The model is posed as a constrained variational principle which can be

minimized by either discretizing the variational principle itself or the associated Euler-

Lagrange equation. We choose the former and use the augmented Lagrangian method

together with the limited memory quasi-Newton method to achieve the solution.

This chapter is organized as follows: in section 3.2, the detailed variational for-

mulation is described along with the nonlinear data constraints, the positive definite

constraint and the augmented Lagrangian solution. Section 3.3 contains the de-

tailed description of the discretization as well as the algorithmic description of the

augmented Lagrangian framework. In section 3.4, we present experiments on appli-

cation of our model to synthetic as well as real data. Synthetic data experiments are

conducted to present comparison of tensor field restoration results with a recently

presented work of Coulon et al. [22]. Moreover, results of comparison between the

use of the linearized Stejskal-Tanner model and the nonlinear form of the same are

presented as well.

3.2 Constrained Variational Principle Formulation

Our solution to the recovery of a piecewise smooth diffusion tensor field from the

complex measurements S_l is posed as a constrained variational principle. We seek

to minimize a measure of lack of smoothness in So and the diffusion tensor D being

estimated using an LP norm of the gradient in So and an LP norm of the gradient in

the Cholesky factor L. This measure is then constrained by a nonlinear data fidelity

term related to the complex Stejskal-Tanner equation (1.4). The nonlinear data term

is constrained by an inequality which requires that it be bounded from above by a

possibly known tolerance factor. The positiveness constraint on the diffusion tensor

being estimated is achieved via the use of the Cholesky factorization theorem from

computational linear algebra. The constrained variational principle is discretized

and posed using the augmented Lagrangian technique [43]. Each subproblem in the

augmented Lagrangian framework is then solved using the limited memory Quasi-

Newton scheme [43]. The novelty of our formulation lies in the unified framework

for recovering and smoothing of the diffusion tensor field from the raw data S_l. In

addition, to our knowledge, this is the first formulation which allows for simultaneous

estimation and smoothing of D as well as one in which the regularization parameter

is not set in an adhoc way.

Let So(x) = Ro(x) + ilo(x) be the complex DWI when no diffusion-encoding

magnetic gradient is present, D(x) be the unknown symmetric positive definite dif-

fusion tensor, LLT(x) be the Cholesky factorization of the diffusion tensor with L

being a lower triangular matrix, and S_l(x) = R_l(x) + i I_l(x), l = 1, ..., N, be the complex

DWIs measured after application of a diffusion-encoding magnetic gradient of known

strength and direction, where N is the total number of measured DWIs. The constrained

variational principle is

min_{S_0, L} S(S_0, L) = ∫_Ω [|∇R_0|^{p_1} + |∇I_0|^{p_1} + |∇L|^{p_2}] dx

s.t. C(S_0, L) = α σ^2 − Σ_{l=1}^N ∫_Ω (F_{Rl}^2 + F_{Il}^2) dx ≥ 0 (3.1)

where Ω is the image domain, the first and the second terms in the variational

principle are LP smoothness norms on the real and imaginary parts of S_0, and the third

term is an LP smoothness norm on L, where p_1 > 6/5 for R_0 and I_0 and p_2 ≥ 1 for

L. Here |∇L|^{p_2} = Σ_{d∈D} |∇L_d|^{p_2}, where d ∈ D = {xx, yy, zz, xy, yz, xz} are indices to the

six nonzero components of L. The lower bounds on the values of p_1 and p_2 are chosen

so as to make the proof of existence of a solution for this minimization (see the theorem

in Section 3.2.4 and its proof) mathematically tractable. α is a constant scale factor and σ is

the noise standard deviation in the measurements S_l. F_{Rl} and F_{Il} are defined as

F_{Rl} = R_l − R_0 e^{−b_l : LL^T},  F_{Il} = I_l − I_0 e^{−b_l : LL^T}
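A discretized sketch of evaluating the data constraint in (3.1); the array shapes, grid size, and b-matrices are illustrative assumptions, and ":" denotes the double contraction b_l : D = Σ_{ij} (b_l)_{ij} D_{ij}.

```python
import numpy as np

def constraint_C(S0, L, S, b, alpha, sigma):
    """Discretized inequality constraint of (3.1):
    C = alpha*sigma^2 - sum_l sum_x (F_Rl^2 + F_Il^2), with
    F_l = S_l - S0 * exp(-b_l : L L^T).
    S0: complex (X, Y, Z) array; L: (X, Y, Z, 3, 3) lower-triangular Cholesky
    factors; S: (N, X, Y, Z) complex DWIs; b: (N, 3, 3) weighting matrices."""
    D = L @ np.swapaxes(L, -1, -2)                     # voxel-wise L L^T
    C = alpha * sigma ** 2
    for S_l, b_l in zip(S, b):
        model = S0 * np.exp(-np.einsum("ij,...ij->...", b_l, D))
        F = S_l - model                                # complex residual F_Rl + i F_Il
        C -= np.sum(F.real ** 2 + F.imag ** 2)
    return C

# Tiny noiseless check on a 2x2x2 grid (illustrative shapes and values).
shape = (2, 2, 2)
L = np.tile(np.linalg.cholesky(np.diag([1.5e-3, 1.0e-3, 0.5e-3])), shape + (1, 1))
S0 = np.full(shape, 10 + 10j)
b = np.stack([1000.0 * np.outer(e, e) for e in np.eye(3)])  # 3 axis-aligned b-matrices
D = L @ np.swapaxes(L, -1, -2)
S = np.stack([S0 * np.exp(-np.einsum("ij,...ij->...", b_l, D)) for b_l in b])
print(constraint_C(S0, L, S, b, alpha=1.0, sigma=1.0))  # noiseless data: C = alpha*sigma^2
```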

3.2.1 The Complex Nonlinear Data Constraint

The Stejskal-Tanner equation (1.4) shows the relation between the complex dif-

fusion weighted image S_l and the diffusion tensor D. However, multivariate linear

regression based on (1.6) has been used to estimate the diffusion tensor D [1]. It

was pointed out in [1] that these results agree with nonlinear regression based on

the magnitude Stejskal-Tanner equation (1.5). However, if the signal to noise ratio

(SNR) is low and the number of S_l is not very large (unlike in [1] where N = 315 or

N = 294), the result from multivariate linear regression will differ from the nonlin-

ear regression significantly. A robust estimator belonging to the M-estimator family

was used by Poupon et al. [21], however, its performance is not discussed in detail.

In Westin et al. [45], an analytical solution is derived from (1.6) by using a dual

tensor basis; however, it should be noted that this can only be used for computing

the diffusion tensor D when there is no noise in the measurements S_l or the SNR is

extremely high.

Our aim is to provide an accurate estimation of the diffusion tensor D for prac-

tical use, where the SNR may not be high and the total number of DWIs N is

restricted to a moderate number. The nonlinear data fidelity term based on the com-

plex Stejskal-Tanner equation (1.4) is fully justified for use in such situations. This

nonlinear data term is part of an inequality constraint that imposes an upper bound

on the closeness of the measurements S_l to the mathematical model S_0 e^{−b_l : LL^T}. The

bound α σ^2 may be estimated automatically [46, 47].

3.2.2 LP Smoothness Norm

There are numerous image smoothing techniques, however, not many of them are

suitable for keeping sharp discontinuities. Partial differential equation (PDE) based

methods are among the few that can capture edges, and in particular total variation (TV)

norm based restoration [18] introduced by Rudin et al. is an excellent PDE-based edge

preserving denoising method. TV restoration aims at minimizing the total variation

functional

TV(u) = ∫_Ω |∇u| dx (3.2)

subject to the noise constraint:

||Au − z||_{L^2(Ω)}^2 ≤ σ^2 (3.3)

where z is a noisy scalar image defined on the domain Ω, u is the smoothed image, A

is a bounded linear operator (the identity for image denoising), and σ quantifies

the amount of Gaussian noise added to the observed image. TV based restoration

can be extended to an LP smoothness norm based method [48] by replacing the total

variational functional with

Lp(u) = ∫_Ω |∇u|^p dx (3.4)

where p ≥ 1. When p = 1, equation (3.4) becomes the total variation norm; when p = 2,

equation (3.4) becomes the H1 norm. In Blomgren et al. [48], it was shown that the LP

smoothness norm based restoration doesn't admit discontinuous solutions as the TV-

norm does when p > 1. However, when p is chosen close to 1, its behavior is close to

the TV-norm for restoring edges. We adopt the LP smoothness norm in our constrained

model. In particular, we need p1 > 6/5 for regularizing So and p2 ≥ 1 for L to ensure

the existence of the solution described in Section 3.2.4. Note that what is of importance here

is the estimation of the diffusion tensor D and therefore, the edge-preserving property

in the estimation process is more relevant for the case of D than for So. In our

experiments, we choose p1 = 1.205 for So and p2 = 1.00 for L.

3.2.3 The Positive Definite Constraint

In general, a matrix A ∈ R^{n×n} is said to be positive definite if x^T A x > 0, for

all x ≠ 0 in R^n. The diffusion tensor D happens to be a symmetric positive definite matrix, but due to the noise in the data S_l, it is hard to recover a D that retains this property unless one includes it explicitly as a constraint. One way to impose this constraint is to use the Cholesky factorization theorem, which states that: if A is a symmetric positive definite matrix, then there exists a unique factorization A = LL^T, where L is a lower triangular matrix with positive diagonal elements. After performing the Cholesky factorization, we have transferred the positive definiteness constraint on the matrix D to an inequality constraint on the diagonal elements of L. This is however still hard to satisfy theoretically because the set on which the minimization takes place is an open set. However, in practice, with finite precision arithmetic, testing for positive definiteness is equivalent to testing for positive semi-definiteness. This is because for any symmetric positive definite matrix D, its machine representation is D' = D + E with ||E|| ≤ ε ||D||, where ε is a small multiple of the machine precision. When D is a small symmetric positive definite matrix, D' can become a semi-definite matrix; it follows that in finite precision arithmetic, testing for definiteness is equivalent to testing for semi-definiteness. Thus, we repose the positive definiteness constraint on the diffusion tensor matrix as x^T D x ≥ 0, which is satisfied when D = LL^T.

3.2.4 Existence of a Solution

Justification for using the augmented Lagrangian method for constrained problems is given in Nocedal and Wright [43]; thus we only need to prove that there is a solution for the following subproblem:

min_{(S_0, L) ∈ A} L_A(S_0, L; λ, μ) =
    S(S_0, L) − λ C(S_0, L) + (1/(2μ)) C^2(S_0, L)   if C(S_0, L) ≤ μλ
    S(S_0, L) − (μ/2) λ^2                            otherwise        (3.5)

where λ > 0 is an estimate of the Lagrange multiplier, μ > 0 is a penalty parameter and A = {(S_0, L) | L ∈ LP(Ω) where p ≥ 1, and R_0, I_0 ∈ W^{1,p}(Ω) where p > 6/5}.
Here Ω ⊂ R^3, LP(Ω) denotes the space of functions with bounded LP norms, L^2(Ω) is the space of square integrable functions on Ω and W^{1,p}(Ω) denotes the Sobolev space of order p on Ω [49]. Consider the augmented Lagrangian formulation (3.5), which serves as a subproblem of (3.1); the existence theorem will be stated and proved after the following two lemmas. Unless pointed out otherwise, the definitions and theorems employed in the proofs can be found in Evans [49].

Lemma 1. Let A_1 = {(S_0, L) | (S_0, L) ∈ A and C(S_0, L) ≤ μλ}, and suppose R_l, I_l ∈ L^2(Ω). Then the following minimization problem (3.6) has a solution (S_0*, L*) ∈ A_1 if A_1 ≠ ∅:

min_{(S_0, L) ∈ A_1} L_A(S_0, L; λ, μ) = S(S_0, L) − λ C(S_0, L) + (1/(2μ)) C^2(S_0, L)    (3.6)

Proof 1. We will verify the following three statements one by one and then prove this lemma:

- The first term S(S_0, L) in (3.6) is lower semi-continuous with respect to L in LP(Ω), p ≥ 1, and weakly lower semi-continuous with respect to S_0 in W^{1,p}(Ω), p > 6/5.
- The second term C(S_0, L) in (3.6) is continuous with respect to S_0 in W^{1,p}(Ω) when p > 6/5, and is continuous with respect to L in LP(Ω) when p ≥ 1.
- The third term C^2(S_0, L) in (3.6) has the same continuity property as the second term.

As the objective in (3.6) is bounded from below, there exists a minimizing sequence (S_0^n, L^n) for it, where L_d^n ∈ LP(Ω), d ∈ D, p ≥ 1, R_0^n and I_0^n ∈ W^{1,p}(Ω), p > 6/5, and C(S_0^n, L^n) ≤ μλ. Then {L_d^n}, d ∈ D, is bounded in LP(Ω), p ≥ 1, and {R_0^n} and {I_0^n} are bounded in W^{1,p}(Ω). Therefore there is a subsequence {(S_0^{n_k}, L^{n_k})} with limits L_d* ∈ LP(Ω), p ≥ 1, d ∈ D, and R_0*, I_0* ∈ W^{1,p}(Ω), p > 6/5, such that as n_k → ∞:

- {L_d^{n_k}} → L_d*, d ∈ D, in L^1(Ω) and a.e. on Ω (from the compactness property of LP(Ω), p ≥ 1);
- {R_0^{n_k}} ⇀ R_0* and {I_0^{n_k}} ⇀ I_0* in W^{1,p}(Ω) (from the weak compactness of W^{1,p});
- {R_0^{n_k}} → R_0* and {I_0^{n_k}} → I_0* in L^2(Ω) and a.e. on Ω (from the Rellich-Kondrachov compactness theorem when p > 6/5 and n = 3; this is why we need p > 6/5).

1) Now we are ready to demonstrate the lower semi-continuity of the first term S(S_0, L) in (3.6).
By the lower semi-continuity of the L^p norm in L^p(Ω), p > 1, we have

∫_Ω |∇L_d*|^{p2} dx ≤ liminf_{n_k→∞} ∫_Ω |∇L_d^{n_k}|^{p2} dx,  d ∈ D.

By the weak lower semi-continuity of the L^p norm in W^{1,p}(Ω), we have

∫_Ω |∇R0*|^{p1} dx ≤ liminf_{n_k→∞} ∫_Ω |∇R0^{n_k}|^{p1} dx,
∫_Ω |∇I0*|^{p1} dx ≤ liminf_{n_k→∞} ∫_Ω |∇I0^{n_k}|^{p1} dx.

Hence

S(S0*, L*) = ∫_Ω [|∇R0*(x)|^{p1} + |∇I0*(x)|^{p1} + Σ_{d∈D} |∇L_d*(x)|^{p2}] dx ≤ liminf_{n_k→∞} S(S0^{n_k}, L^{n_k}).   (3.7)

2) Next we claim

C(S0*, L*) = lim_{n_k→∞} C(S0^{n_k}, L^{n_k}).   (3.8)

Since C(S0, L) = Nσ²|Ω| − Σ_{l=1}^N ∫_Ω |S_l − S0 e^{−b_l:LL^T}|² dx, we only need to show

∫_Ω |S_l − S0* e^{−b_l:L*(L*)^T}|² dx = lim_{n_k→∞} ∫_Ω |S_l − S0^{n_k} e^{−b_l:L^{n_k}(L^{n_k})^T}|² dx.   (3.9)

This will be proved in several stages.

(a) Let F_{R_l}* = R_l − R0* e^{−b_l:L*(L*)^T} and F_{R_l}^{n_k} = R_l − R0^{n_k} e^{−b_l:L^{n_k}(L^{n_k})^T}. Then

∫_Ω [F_{R_l}* − F_{R_l}^{n_k}]² dx ≤ 2 ∫_Ω [R0^{n_k} (e^{−b_l:L^{n_k}(L^{n_k})^T} − e^{−b_l:L*(L*)^T})]² dx + 2 ∫_Ω [(R0^{n_k} − R0*) e^{−b_l:L*(L*)^T}]² dx = 2A + 2B.

Since ||R0^{n_k}||_{W^{1,p}(Ω)} is uniformly bounded in n_k, the Sobolev embedding theorem for Ω ⊂ R³ gives ||R0^{n_k}||_{L²(Ω)} ≤ c ||R0^{n_k}||_{W^{1,p}(Ω)} ≤ C. Noting that L^{n_k} converges strongly to L* in L¹(Ω) and a.e. on Ω, the Dominated Convergence Theorem [49] gives

lim_{n_k→∞} ||e^{−b_l:L^{n_k}(L^{n_k})^T} − e^{−b_l:L*(L*)^T}||_{L²(Ω)} = 0,

and hence A → 0. From the strong convergence of R0^{n_k} to R0* in L²(Ω) and the bound |e^{−b_l:L*(L*)^T}| ≤ 1, we have B ≤ ∫_Ω |R0^{n_k} − R0*|² dx → 0. Now as A → 0 and B → 0, we have

lim_{n_k→∞} ∫_Ω [F_{R_l}* − F_{R_l}^{n_k}]² dx = 0.   (3.10)

(b) We now prove

∫_Ω (F_{R_l}*)² dx = lim_{n_k→∞} ∫_Ω (F_{R_l}^{n_k})² dx.   (3.11)

First of all, if f ∈ L²(Ω) and g ∈ L²(Ω), then f·g ∈ L¹(Ω) and f + g ∈ L²(Ω). Further we have

|∫_Ω (f² − g²) dx| ≤ ∫_Ω |f² − g²| dx ≤ ∫_Ω |f − g| |f + g| dx ≤ ||f − g||_{L²(Ω)} ||f + g||_{L²(Ω)}.   (3.12)

It is easy to verify that F_{R_l}* ∈ L²(Ω) and F_{R_l}^{n_k} ∈ L²(Ω), the latter being uniformly bounded in L²(Ω). Applying the above inequality with f = F_{R_l}^{n_k} and g = F_{R_l}*, together with (3.10), yields (3.11).

(c) Similarly to the previous step, we can prove

∫_Ω |I_l − I0* e^{−b_l:L*(L*)^T}|² dx = lim_{n_k→∞} ∫_Ω |I_l − I0^{n_k} e^{−b_l:L^{n_k}(L^{n_k})^T}|² dx.   (3.13)

Combining this with (b), it is easy to verify equation (3.9).
3) Now we will show that

C²(S0*, L*) = lim_{n_k→∞} C²(S0^{n_k}, L^{n_k}).   (3.14)

This is easily verified since C(S0*, L*) and C(S0^{n_k}, L^{n_k}) are bounded and

C(S0*, L*) = lim_{n_k→∞} C(S0^{n_k}, L^{n_k}).   (3.15)

Finally, from 1), 2) and 3) we have

L(S0*, L*; λ, μ) ≤ liminf_{n_k→∞} L(S0^{n_k}, L^{n_k}; λ, μ).   (3.16)

Therefore, (S0*, L*) is a minimizer of L(S0, L; λ, μ) as defined in (3.6).

Lemma 2 Let A2 = {(S0, L) | (S0, L) ∈ A and C(S0, L) ≥ μλ}, and suppose R_l, I_l ∈ L²(Ω). Then the following minimization problem (3.17) has a solution (S0*, L*) ∈ A2 if A2 ≠ ∅:

min_{(S0,L)∈A2} L(S0, L; λ, μ) = S(S0, L) − μλ²/2   (3.17)

The proof is similar to that of Lemma 1.

Theorem 1 Suppose R_l, I_l ∈ L²(Ω). Then the augmented Lagrangian formulation (3.5) has a solution (S0*, L*) ∈ A.

Proof 2 It is easy to see that A ≠ ∅; as a matter of fact, constant functions are members of A. There are three cases: 1. A1 ≠ ∅ and A2 = ∅; 2. A2 ≠ ∅ and A1 = ∅; 3. A1 ≠ ∅ and A2 ≠ ∅. Here we provide a proof for case 3; cases 1 and 2 are trivial to prove. Let (S0¹, L¹) be the solution of (3.6) and (S0², L²) be the solution of (3.17). It is not hard to see that the solution of (3.5) is

(S0*, L*) = (S0¹, L¹) if L(S0¹, L¹; λ, μ) ≤ L(S0², L²; λ, μ), and (S0², L²) otherwise.

Finding a solution of the constrained variational principle (3.1) involves solving a sequence of problems (3.5) with fixed λ and μ at each stage. This is much more difficult than dealing with the problems of recovery and smoothing separately. However, there are benefits to posing the problem in this constrained, unified framework; namely, one does not accumulate the errors of a two-stage process. Moreover, this framework incorporates the nonlinear data term, which is more appropriate for the low SNR values prevalent when the magnitude of the diffusion-encoded magnetic gradient is high. Also, the noise model is correct for the nonlinear complex data model, unlike in the log-linearized case.
Lastly, in the constrained formulation it is now possible to pose mathematical questions of existence and uniqueness of the solution, which was not possible in earlier formulations reported in the literature.

3.3 Numerical Methods

The minimization problem given by (3.1) can only be solved numerically. Here, we discretize the constrained variational principle (3.1), transform it into a sequence of unconstrained problems by using the augmented Lagrangian method, and then apply the limited memory Quasi-Newton technique [43] to solve them. Note that this framework allows us to solve the minimization without resorting to ad hoc methods of choosing the "tuning" parameters. The limited memory Quasi-Newton method is the method of choice here due to the advantages it affords in the context of memory/storage savings. A proper line search technique is employed once the search direction has been found by the limited memory Quasi-Newton method.

3.3.1 Discretized Constrained Variational Principle

We use the standard finite difference method to discretize the problem. Let

F_{R_l,ijk} = R_{l,ijk} − R_{0,ijk} e^{−b_l : L_{ijk} L_{ijk}^T}
F_{I_l,ijk} = I_{l,ijk} − I_{0,ijk} e^{−b_l : L_{ijk} L_{ijk}^T}
F_{l,ijk} = F_{R_l,ijk} + i F_{I_l,ijk}
|∇R0|_{ijk} = [(Δ_x^+ R0)² + (Δ_y^+ R0)² + (Δ_z^+ R0)² + ε]_{ijk}^{1/2}
|∇I0|_{ijk} = [(Δ_x^+ I0)² + (Δ_y^+ I0)² + (Δ_z^+ I0)² + ε]_{ijk}^{1/2}
|∇L_d|_{ijk} = [(Δ_x^+ L_d)² + (Δ_y^+ L_d)² + (Δ_z^+ L_d)² + ε]_{ijk}^{1/2}
|∇L|_{ijk} = Σ_{d∈D} |∇L_d|_{ijk}

where Δ_x^+, Δ_y^+ and Δ_z^+ are forward difference operators and ε is a small positive number used to avoid singularities of the L^p norm when p < 2. Now the discretized constrained variational principle can be written as

min_{S0,L} S(S0, L) = Σ_{i,j,k} (|∇R0|_{ijk}^{p1} + |∇I0|_{ijk}^{p1} + |∇L|_{ijk}^{p2})
s.t. C(S0, L) = Σ_{i,j,k} Σ_{l=1}^N (σ² − |F_{l,ijk}|²) ≥ 0   (3.18)

3.3.2 Augmented Lagrangian Method

The above problem is now posed using the augmented Lagrangian method, where a sequence of related unconstrained subproblems is solved, and the limit of these solutions is the solution to (3.18).
Following the description in Nocedal and Wright [43], the k-th subproblem of (3.18) is given by

min L(S0, L, s; λ_k, μ_k) = S(S0, L) − λ_k [C(S0, L) − s] + (1/(2μ_k)) [C(S0, L) − s]²   (3.19)

where s ≥ 0 is a slack variable, and μ_k and λ_k are the penalty parameter and the Lagrange multiplier estimate for the k-th subproblem, respectively. One can explicitly compute the slack variable s at the minimum as s = max(C(S0, L) − μ_k λ_k, 0) and substitute it in (3.19) to get an equivalent subproblem in (S0, L) given by

min L(S0, L; λ_k, μ_k)   (3.20)

L(S0, L; λ_k, μ_k) = S(S0, L) − λ_k C(S0, L) + C²(S0, L)/(2μ_k)   if C(S0, L) ≤ μ_k λ_k,
L(S0, L; λ_k, μ_k) = S(S0, L) − μ_k λ_k²/2                         otherwise.

The following algorithm summarizes the procedure for finding the solution of (3.18):

Algorithm 1 Augmented Lagrangian Algorithm
1: Initialize S0(0), L(0) using the nonlinear regression; choose an initial μ0 and λ0.
2: for k = 1, 2, ...
3:   Find an approximate minimizer S0(k), L(k) of L(·, ·; λ_k, μ_k) as in (3.20), starting with S0(k−1), L(k−1);
4:   If the final convergence test is satisfied,
5:     STOP with an approximate solution S0(k), L(k);
6:   Update the Lagrange multiplier using λ_{k+1} = max(λ_k − C(S0, L)/μ_k, 0);
7:   Choose a new penalty parameter μ_{k+1} = μ_k/2;
8:   Set the starting point for the next iteration to S0(k), L(k);
9: end(for)

3.3.3 Limited Memory Quasi-Newton Method

Due to the large number of unknown variables in the minimization, we solve the subproblem using the limited memory Quasi-Newton technique, which approximates the Hessian matrix at each iteration of the optimization using only first derivative information. In the limited memory Broyden-Fletcher-Goldfarb-Shanno (BFGS) method, the search direction is computed without storing the approximated Hessian matrix, which can be a very large matrix in general (O(N⁶) entries for O(N³) unknowns). Let x = (S0, L) be the vector of variables, and let f(x) = L(S0, L; λ, μ) denote the augmented Lagrangian function (3.20) to be minimized. For simplicity, we write f(x) = L(S0, L), omitting the fixed parameters λ and μ in the following description.
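To make Algorithm 1 concrete, here is a minimal sketch of the augmented Lagrangian outer loop on a one-dimensional toy problem: minimize (x − 2)² subject to 1 − x ≥ 0, whose solution is x* = 1 with multiplier λ* = 2. The crude grid search stands in for the Quasi-Newton subproblem solver used in the dissertation; the starting values μ0 = 5.0, λ0 = 1.0 and the stopping rule μ < 0.01 follow the implementation notes later in this chapter:

```python
import numpy as np

def f(x):          # objective, unconstrained minimum at x = 2
    return (x - 2.0) ** 2

def c(x):          # inequality constraint c(x) >= 0, active at x* = 1
    return 1.0 - x

def aug_lagrangian(x, lam, mu):
    # Slack-eliminated subproblem, mirroring (3.20)
    cx = c(x)
    if cx <= mu * lam:
        return f(x) - lam * cx + cx * cx / (2.0 * mu)
    return f(x) - mu * lam * lam / 2.0

xs = np.linspace(-1.0, 3.0, 40001)
lam, mu = 1.0, 5.0
x = 0.0
while mu > 0.01:
    vals = np.array([aug_lagrangian(xi, lam, mu) for xi in xs])
    x = xs[int(np.argmin(vals))]        # stand-in for the Quasi-Newton solve
    lam = max(lam - c(x) / mu, 0.0)     # multiplier update (Algorithm 1, step 6)
    mu = mu / 2.0                       # penalty update (step 7)
# x approaches 1 and lam approaches the true multiplier 2
```

As μ shrinks, the subproblem minimizers approach the constrained solution while λ converges to the Lagrange multiplier, which is why the method is insensitive to the initial μ0 and λ0.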
At the k-th iteration, let s_k = x_{k+1} − x_k be the update of the variable vector x, y_k = ∇f_{k+1} − ∇f_k the update of the gradient, and H_k^{-1} the approximation of the inverse of the Hessian. The inverse of the approximate Hessian can be updated using the BFGS formula:

H_{k+1}^{-1} = V_k^T H_k^{-1} V_k + s_k s_k^T / (y_k^T s_k)   (3.21)

where V_k = I − y_k s_k^T / (y_k^T s_k). Then we can use the following L-BFGS two-loop recursion, which computes the search direction H_k^{-1} ∇f_k efficiently using only the last m pairs (s_i, y_i) [43].

Algorithm 2 Search Direction Update Algorithm
1: q ← ∇f_k;
2: for i = k−1, k−2, ..., k−m
3:   α_i ← ρ_i s_i^T q;
4:   q ← q − α_i y_i;
5: end(for)
6: r ← (H_k^0)^{-1} q;
7: for i = k−m, k−m+1, ..., k−1
8:   β ← ρ_i y_i^T r;
9:   r ← r + s_i (α_i − β);
10: end(for)
11: stop with result H_k^{-1} ∇f_k = r

where ρ_i = 1/(y_i^T s_i) and (H_k^0)^{-1} is the initial approximation of the inverse of the Hessian; we set (H_k^0)^{-1} = γ_k I with γ_k = s_{k−1}^T y_{k−1} / (y_{k−1}^T y_{k−1}).

The gradient of our energy function is

∇f = ( ∂L(S0,L)/∂R0, ∂L(S0,L)/∂I0, ∂L(S0,L)/∂L_xx, ∂L(S0,L)/∂L_yy, ∂L(S0,L)/∂L_zz, ∂L(S0,L)/∂L_xy, ∂L(S0,L)/∂L_yz, ∂L(S0,L)/∂L_xz )

where

∂L(S0,L)/∂R_{0,ijk} = Σ_{i'j'k'} ∂|∇R0|_{i'j'k'}^{p1} / ∂R_{0,ijk} + 2(λ − C(S0,L)/μ) Σ_{l=1}^N F_{R_l,ijk} ∂F_{R_l,ijk}/∂R_{0,ijk}   if C(S0,L) − μλ ≤ 0,

and ∂L(S0,L)/∂R_{0,ijk} = Σ_{i'j'k'} ∂|∇R0|_{i'j'k'}^{p1} / ∂R_{0,ijk} otherwise. The derivative ∂L(S0,L)/∂I_{0,ijk} is analogous, with I0 and F_{I_l} in place of R0 and F_{R_l}, and

∂L(S0,L)/∂L_{d,ijk} = Σ_{i'j'k'} ∂|∇L_d|_{i'j'k'}^{p2} / ∂L_{d,ijk} + 2(λ − C(S0,L)/μ) Σ_{l=1}^N (F_{R_l,ijk} ∂F_{R_l,ijk}/∂L_{d,ijk} + F_{I_l,ijk} ∂F_{I_l,ijk}/∂L_{d,ijk})   if C(S0,L) − μλ ≤ 0

(with only the first sum otherwise), for d = xx, yy, zz, xy, yz, xz.   (3.23)

Here the sum over i'j'k' is computed over the neighborhood of the voxel (i, j, k) in which the forward differences involve the variables R_{0,ijk}, I_{0,ijk} or L_{d,ijk}.
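The two-loop recursion of Algorithm 2 can be written compactly as follows (an illustrative NumPy sketch; variable names are ours). The check at the end verifies the recursion against the explicit BFGS update (3.21) for a single (s, y) pair:

```python
import numpy as np

def lbfgs_direction(grad, s_list, y_list, gamma=1.0):
    """Two-loop recursion: returns H_k^{-1} grad from the last m pairs
    (s_i, y_i), without ever forming the (huge) inverse Hessian."""
    q = grad.copy()
    rhos = [1.0 / np.dot(y, s) for s, y in zip(s_list, y_list)]
    alphas = []
    for s, y, rho in reversed(list(zip(s_list, y_list, rhos))):
        a = rho * np.dot(s, q)
        alphas.append(a)
        q = q - a * y
    r = gamma * q                     # (H_k^0)^{-1} = gamma * I
    for (s, y, rho), a in zip(zip(s_list, y_list, rhos), reversed(alphas)):
        beta = rho * np.dot(y, r)
        r = r + s * (a - beta)
    return r

# Verify against the explicit update (3.21) with one (s, y) pair:
s = np.array([1.0, 0.5]); y = np.array([0.8, 2.0]); g = np.array([0.3, -1.2])
rho = 1.0 / (y @ s)
V = np.eye(2) - rho * np.outer(y, s)
d_explicit = (V.T @ np.eye(2) @ V + rho * np.outer(s, s)) @ g
d_twoloop = lbfgs_direction(g, [s], [y])
assert np.allclose(d_explicit, d_twoloop)
```

The memory cost is only m vector pairs, which is what makes the method practical when the number of unknowns is O(N³).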
Each term in equation (3.23) can be computed analytically; for example,

∂F_{R_l,ijk}/∂L_{xx,ijk} = R_{0,ijk} e^{−b_l : L_{ijk} L_{ijk}^T} ∂(b_l : L_{ijk} L_{ijk}^T)/∂L_{xx,ijk}
∂F_{I_l,ijk}/∂L_{xx,ijk} = I_{0,ijk} e^{−b_l : L_{ijk} L_{ijk}^T} ∂(b_l : L_{ijk} L_{ijk}^T)/∂L_{xx,ijk}   (3.22)

with

∂(b_l : L_{ijk} L_{ijk}^T)/∂L_{xx,ijk} = 2 b_{l,xx} L_{xx,ijk} + 2 b_{l,xy} L_{xy,ijk} + 2 b_{l,xz} L_{xz,ijk}.

3.3.4 Line Search

The limited memory Quasi-Newton method computes a search direction in which the minimum is estimated to lie, yielding the one dimensional optimization problem

min_α f(x_k + α d)   (3.24)

where d is computed by Algorithm 2 described in the previous subsection. Equation (3.24) is solved by a search procedure, and the solution is found by the update iteration x_{k+1} = x_k + α_k d. The many existing line search techniques can be roughly categorized as line searches without derivatives, like the Fibonacci method, and line searches with derivatives, like polynomial interpolation methods. As we have an analytical form of the derivative, we use cubic interpolation [43], which is generally the most efficient for continuous functions like our energy function.

3.3.5 Implementation Issues

The constraint in (3.18) is directly related to the standard deviation σ of the noise, which can be computed as in Seber [47]. Since there are N complex measurements and 8 unknown parameters at each voxel (note: S0 is complex-valued, so it is treated as 2 unknowns), we have

σ² = Σ_{l=1}^N |S_l(x) − Ŝ0(x) e^{−b_l : L(x)L(x)^T}|² / (2N − 8)

Initialization is very crucial for nonlinear optimization. In our case, we use the following nonlinear regression with positive definiteness constraint as the initial guess:

min_{S0(x), L(x)} Σ_{l=1}^N ||S_l(x) − S0(x) e^{−b_l : L(x)L(x)^T}||²   (3.25)

The above minimization is a simple nonlinear least squares problem and can be efficiently solved by the Levenberg-Marquardt method [43], using the results of the corresponding linearized least squares problem as the initial guess.
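The per-voxel noise-variance estimate used to set the constraint bound can be sketched as follows (a hypothetical helper of ours; the 2N − 8 denominator counts 2N real observations against 8 real unknowns):

```python
import numpy as np

def noise_variance(measured, fitted):
    """Per-voxel estimate sigma^2 = sum_l |S_l - S0*exp(-b_l:LL^T)|^2 / (2N - 8).
    `measured` and `fitted` are length-N complex arrays at one voxel;
    each complex measurement contributes two real observations."""
    N = measured.shape[0]
    return float(np.sum(np.abs(measured - fitted) ** 2)) / (2 * N - 8)

# e.g. 21 measurements with unit-magnitude residuals -> 21 / (42 - 8)
m = np.ones(21, dtype=complex)
fhat = np.zeros(21, dtype=complex)
sigma2 = noise_variance(m, fhat)
```

In practice the residuals would come from the Levenberg-Marquardt fit of (3.25), so the same fit supplies both the initial guess and the σ used in the constraint.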
There are a few practical issues in implementing the augmented Lagrangian method and the Quasi-Newton method; these are settled by using the suggestions in Nocedal and Wright [43] or empirically. For example, in the augmented Lagrangian method (see Algorithm 1), we start with a penalty control parameter μ0 = 5.0 and decrease it by a factor of 2 in each step until it is less than 0.01. We also choose λ0 = 1.0. Note that the augmented Lagrangian method is quite robust with respect to the choice of μ0 and λ0, since μ_k eventually decreases to 0 and λ_k approaches the Lagrange multiplier. The final convergence test has two criteria: the subproblem converges, and μ < 0.01. As the subproblem is just a standard unconstrained minimization problem, checking whether it has converged is done using any of the standard criteria in iterative optimization schemes [43]; for the line search, we employ cubic interpolation and the Wolfe convergence criterion (see Nocedal and Wright [43] for more details). For the limited memory Quasi-Newton method, we use the last 5 update pairs to compute the search direction.

3.4 Experimental Results

In this section, we present three sets of experiments on the application of our direct tensor smoothing and estimation model: one on complex valued synthetic data sets and two on complex valued DWI data acquired from a normal rat brain. For the synthetic data example, we compare the results obtained from our estimation procedure with competing methods published in the literature.

3.4.1 Complex Valued Synthetic Data

We synthesized an anisotropic tensor field on a 3D lattice of size 32 × 32 × 8. The volume consists of two homogeneous regions with the following values for S0 and D:

Region 1: S0 = 10.0 e^{iθ}, D = 0.001 × [0.970 1.751 0.842 0.0 0.0 0.0],
Region 2: S0 = 8.0 e^{iθ}, D = 0.001 × [1.556 1.165 0.842 0.338 0.0 0.0],

where the components of the tensor D are listed in the order (d_xx, d_yy, d_zz, d_xy, d_xz, d_yz).
The dominant eigenvector (DEV) of the first region is along the y axis, while that of the second region lies in the xy plane, inclined at 60° to the y axis. θ is chosen to be 45° to yield an even distribution between the real and the imaginary parts. The complex diffusion weighted images S_l are generated using the Stejskal-Tanner equation at each voxel x:

S_l(x) = S0(x) e^{−b_l : D(x)} + n(x),   (3.26)

where n(x) ~ N(0, σ_N) + iN(0, σ_N) and N(0, σ_N) is zero mean Gaussian noise with standard deviation σ_N. As the signal measured before the Fourier transform in MRI is complex, it is reasonable to assume that the noise is additive complex Gaussian noise; the noise in the DWIs remains complex valued after the Fourier transform. Thus our simulated data reflects the physics of MRI imaging. Note that the noise in the magnitude of the complex DWIs has a Rician distribution and approximates a Gaussian distribution when the SNR is high [42]. We choose the 7 commonly used configurations x, y, z, xy, xz, yz, xyz for the directions of the diffusion-encoded magnetic gradient, as in Basser et al. [1], and use 3 different field strengths in each direction (100, 500 and 1000 s/mm²). Thus we have a total of 21 different b matrices. A slice of the generated data set is shown in Fig. 3.1; note that the DWIs differ when either the directions or the magnitudes of the diffusion-encoded magnetic gradient differ. To better illustrate the superior performance of our model, we compare performance with the following methods in our experiments: (i) Linear: linear regression on (1.6) as used in [1]; (ii) Nonlinear: nonlinear regression applied to (1.4); (iii) Linear + EVS (eigenvector smoothing): linear regression followed by the DEV smoothing method described in Coulon et al. [22]; (iv) Nonlinear + EVS: nonlinear regression plus the smoothing as in (iii); and (v) Ours: our method.
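The synthetic data generation just described can be sketched as follows (a NumPy sketch under the stated settings; function and variable names are ours):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_dwi(S0, D, bvals, gdirs, sigma):
    """Complex DWIs via the Stejskal-Tanner equation,
    S_l = S0 * exp(-b_l g_l^T D g_l) + n_l, with additive complex
    Gaussian noise n_l ~ N(0, sigma) + i N(0, sigma)."""
    out = []
    for b, g in zip(bvals, gdirs):
        g = np.asarray(g, dtype=float)
        g = g / np.linalg.norm(g)
        atten = np.exp(-b * (g @ D @ g))   # b_l : D for the b-matrix b*g g^T
        noise = rng.normal(0.0, sigma) + 1j * rng.normal(0.0, sigma)
        out.append(S0 * atten + noise)
    return np.array(out)

S0 = 10.0 * np.exp(1j * np.pi / 4)          # theta = 45 degrees
D = 0.001 * np.diag([0.970, 1.751, 0.842])  # region-1 tensor (DEV along y)
dirs = [(1, 0, 0), (0, 1, 0), (0, 0, 1),
        (1, 1, 0), (1, 0, 1), (0, 1, 1), (1, 1, 1)]
bvals = [b for b in (100.0, 500.0, 1000.0) for _ in dirs]
S = simulate_dwi(S0, D, bvals, dirs * 3, 0.5)  # 21 noisy complex measurements
```

With σ_N = 0 the magnitudes are pure Stejskal-Tanner attenuations of |S0|; adding the complex noise and then taking magnitudes is exactly what produces the Rician behavior mentioned above.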
Note that the EVS method [22] is a direction field restoration scheme that preserves discontinuities and is based on Chan and Shen's work [50]. Figure 3.2 shows an ellipsoid visualization of the restored diffusion tensor field for the synthetic data set with σ_N = 0.5. It is evident that our method restored the noisy tensor field quite well, in comparison to the nonlinear regression method, which did not perform well, and the linear regression technique, which performed worst. For further comparison, Fig. 3.3 shows the DEV computed from the original and the restored diffusion tensor fields using all five methods mentioned before. This figure clearly shows that our model yielded the best estimate of the original DEV field. The method in Coulon et al. [22], however, did not work well at voxels where the estimated DEVs are almost orthogonal to those in their neighborhoods. In such cases, Coulon et al.'s method treats them as discontinuities and does not smooth them. Though it is possible to treat these locations as outliers, it is difficult to set a reasonable criterion to achieve this. It is interesting to note that the Nonlinear+EVS method, which serves as an improvement over the existing Linear+EVS method, can diminish this problem. Additional quantitative measures, described below, confirm the visual comparison results.

To quantitatively assess the proposed model, we compare the accuracy of the DEV computed using the previously mentioned methods. Let θ be the angle (in degrees) between the estimated DEV and the original DEV. Table 3.1 shows the mean μ_θ and standard deviation σ_θ of θ for the different methods on the synthetic data at different levels of additive Gaussian noise.

Table 3.1: Comparison of the accuracy of the estimated DEVs using different methods for different noise levels.

σ_N = 0.5   Linear   Nonlinear   Linear+EVS   Nonlinear+EVS   Ours
μ_θ          9.57     7.44        1.63         1.19           0.80
σ_θ          6.93     5.08        1.57         0.84           0.96

σ_N = 1.0   Linear   Nonlinear   Linear+EVS   Nonlinear+EVS   Ours
μ_θ         22.28    16.94        6.67         3.78           1.99
σ_θ         17.46    13.86       13.06         8.58           2.74

σ_N = 1.5   Linear   Nonlinear   Linear+EVS   Nonlinear+EVS   Ours
μ_θ         33.14    26.40       14.55         9.09           4.08
σ_θ         22.60    20.19       22.39        17.46           4.70

A better method is one that yields smaller values. From this table, we can see that our model yields lower error values than all other methods under various noise levels. It is also clear from this table that the methods using the original nonlinear complex Stejskal-Tanner equation (1.4) are more accurate than those using the linearized one (1.6). The advantage of our method and of the nonlinear approaches is more apparent when the noise level is higher, which supports our discussion in section 3.2.1.

3.4.2 Complex DWI of a Normal Rat Brain

The normal rat brain data is imaged using a 17.6T (750MHz) Bruker Avance Imaging Spectrometer system with the following settings: TR = 3058 ms, TE = 28.8 ms, Δ = 17.8 ms, δ = 2.4 ms, diffusion time = 17.0 ms, bandwidth = 40 kHz. The field of view is 15 × 15 × 21 mm³ with a resolution of 117 × 117 × 270 μm³. The same set of 7 diffusion-encoded magnetic gradient directions as for the synthetic data is used, with two different magnitudes (100, 500 s/mm²). With the number of averages equal to 8 for each signal measurement S_l, the raw data is a set of 14 complex DWI volumes, each of size 128 × 128 × 78. We extract a 128 × 128 × 10 volume in the region of the corpus callosum for our first experiment. Figure 3.6 depicts the restored images of the six independent components of the estimated diffusion tensor. As a comparison, Figs.
3.4 and 3.5 show the same images computed from the raw data using linear regression applied to (1.6) and nonlinear regression applied to (1.4), respectively. For display purposes, we use the same brightness and contrast enhancement for the corresponding images in all three figures. We also present the computed DEV of the estimated diffusion tensor in Fig. 3.7. We did not compare with the EVS methods because the eigenvector sorting problem is very severe in free water and other isotropic regions; it would be necessary to exclude those regions to make effective use of the EVS methods, and this involves a non-trivial segmentation issue. We then extracted a 10 × 127 × 78 volume in the region of the cerebellum and show the sagittal view of the results in Fig. 3.8. The brightness and contrast enhancement of the figures are the same as in the previous experiment. In Figs. 3.6 and 3.8, the edge preserving smoothing is evident, especially in the off diagonal terms of the diffusion tensor D, which are essential in evaluating the structural anisotropy. We also notice that there are some differences in the region of free water between Figs. 3.4 and 3.5, visible in the off-diagonal terms, while there is no visible difference inside the corpus callosum between these two figures. However, Fig. 3.7 gives more insight into this via the depiction of the computed DEVs. Note that the smoothing effect on the DEV is evident in the shadowed box in Fig. 3.7. Our model for estimating a smooth diffusion tensor field from noisy DWIs may be used to achieve better accuracy in fiber tractography. Our future efforts will focus on applying the model described in this paper to achieve better fiber tract maps.

Figure 3.1: A slice of several volumetric complex DWIs generated with σ_N = 0.5. Left to right: real and imaginary pairs of complex DWIs with varying magnitude of the diffusion encoded magnetic gradient.
Top to bottom: complex DWIs for varying directions of the diffusion encoded magnetic gradient.

Figure 3.2: A slice of the original (ground truth) and the estimated diffusion tensor fields for the noisy synthetic data with σ_N = 0.5. (a) Original, (b) Linear, (c) Nonlinear, (d) Our model.

Figure 3.3: A slice of the computed DEV field for the noisy synthetic data with σ_N = 1.5. The top left image is the DEV field computed from the original tensor field; the other images, arranged from left to right and top to bottom, are the DEV fields computed from the tensor fields estimated using the following methods: linear, nonlinear, linear+EVS, nonlinear+EVS and our model.

Figure 3.4: A slice of the normal rat brain DTI estimated using multivariate linear regression without smoothing, viewed channel by channel. First row, left to right: D_xx, D_xy and D_xz. Second row, left to right: S0, D_yy and D_yz. Last row, left to right: FA, <D> and D_zz.

Figure 3.5: A slice of the normal rat brain DTI estimated using multivariate nonlinear regression without smoothing, viewed channel by channel. The arrangement of the figures is the same as in Fig. 3.4.

Figure 3.6: A slice of the normal rat brain DTI estimated using our proposed method, viewed channel by channel. The arrangement of the figures is the same as in Fig. 3.4.

Figure 3.7: A slice of the computed DEV field from a normal rat brain. Top to bottom: linear regression, nonlinear regression and our model.
Left column: region of interest for depicting the DEV, indicated by the white box superposed on one channel of the DTI. Right column: the computed DEV field inside the white rectangle on the left; the smoothing effect of our model is clearly visible in the shaded box.

Figure 3.8: A slice of the normal rat brain DTIs in the region of the cerebellum, viewed channel by channel. The DTIs are estimated using (a) multivariate nonlinear regression without smoothing and (b) our proposed method (bottom row). Both (a) and (b) are sagittal views and are arranged as in Fig. 3.4.

CHAPTER 4
DTI SEGMENTATION USING AN INFORMATION THEORETIC TENSOR DISSIMILARITY MEASURE

4.1 Introduction

We tackle the DTI segmentation problem using a region-based active contour model incorporating an information theoretic tensor dissimilarity measure. Geometric active contour models have long been used in scalar and vector image segmentation [51, 52, 53]. Our work is based on the active contour models derived from the Mumford-Shah functional [54], and can be viewed as a significant extension of the work on active contours without edges by Chan and Vese [55], and of the curve evolution implementation of the Mumford-Shah functional by Tsai et al. [53], to diffusion tensor images. Our key contributions are: (i) the definition of a new discriminant of tensors based on information theory; (ii) a theorem, with proof, that allows the mean value of an SPD tensor field required in the active contour model to be computed in closed form, facilitating efficient segmentation of the DTI; (iii) the extension of the popular region-based active contour model to handle matrix-valued images, e.g., DTI. The rest of this chapter is organized as follows: in section 4.2, the new definition of tensor distance is introduced and its properties are discussed in detail. In section 4.3, active contour models for image segmentation are reviewed.
Then, in section 4.4, the piecewise constant region-based active contour model for DTI segmentation is discussed. We describe the associated Euler-Lagrange equation, the curve evolution equation, the level set formulation and the implementation details. Similarly, in section 4.5, we discuss the piecewise smooth region-based active contour model for DTI segmentation. Finally, in section 4.6, we present experiments applying our model to synthetic tensor fields as well as real DTIs.

4.2 A New Tensor Distance and Its Properties

4.2.1 Definitions

In the context of DT-MRI, the diffusion of water molecules may be characterized by a 2-tensor D which is symmetric positive definite. This D is related to the displacement r of the water molecules at each lattice point in the volumetric data at time t via

p(r | t, D) = (1/√((2π)^n |2tD|)) exp(−r^T D^{-1} r / (4t))   (4.1)

Note that the mean of r is E(r) = 0 and the covariance matrix of r is Cov(r) = 2tD [47]. Thus, it is natural to use a distance measure between Gaussian distributions to induce a distance between these tensors. The most frequently used information theoretic dissimilarity measure is the Kullback-Leibler divergence, defined as

KL(p, q) = ∫ p(x) log(p(x)/q(x)) dx   (4.2)

for two given densities p(x) and q(x). The KL divergence is not symmetric, and a popular way to symmetrize it is

J(p, q) = (1/2) (KL(p, q) + KL(q, p))   (4.3)

which is called the J-divergence. We propose a novel definition of tensor dissimilarity as the square root of the J-divergence, i.e.

d(T1, T2) = √( J(p(r|t, T1), p(r|t, T2)) )   (4.4)

It is known that twice the KL divergence, and thus twice the J-divergence, is the squared distance between two infinitesimally nearby points on a Riemannian manifold of parameterized distributions [56]. Thus taking the square root in equation (4.4) is justified.
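As a quick numerical sketch (ours, not the dissertation's implementation), this dissimilarity can be evaluated directly from the closed form in terms of traces that is derived next; the second assertion checks it against the diagonal-case formula d(T1, T2) = 0.5·√(Σ (d_i − e_i)²/(d_i e_i)):

```python
import numpy as np

def tensor_distance(T1, T2):
    """d(T1, T2) = sqrt(J) = 0.5 * sqrt(tr(T1^{-1}T2 + T2^{-1}T1) - 2n)
    for SPD matrices T1, T2 of size n."""
    n = T1.shape[0]
    t = np.trace(np.linalg.solve(T1, T2) + np.linalg.solve(T2, T1))
    return 0.5 * np.sqrt(max(t - 2.0 * n, 0.0))

T1 = np.diag([1.0, 2.0])
T2 = np.diag([2.0, 1.0])
assert np.isclose(tensor_distance(T1, T1), 0.0)   # identical tensors
assert np.isclose(tensor_distance(T1, T2), 0.5)   # matches the diagonal formula
```

Using `linalg.solve` instead of explicitly inverting T1 and T2 computes the same traces with better numerical behavior for nearly singular tensors.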
Furthermore, equation (4.4) turns out to have a very simple form, given by

d(T1, T2) = (1/2) √( tr(T1^{-1}T2 + T2^{-1}T1) − 2n )   (4.5)

where tr(·) is the matrix trace operator and n is the size of the square matrices T1 and T2.

Derivation 1: For Gaussian distributions p(r|t, T1) and p(r|t, T2) given as in equation (4.1), we have

KL(p(r|t, T1), p(r|t, T2)) = ∫ p(r|t, T1) log( p(r|t, T1)/p(r|t, T2) ) dr
= (1/2) log(|T2|/|T1|) + ∫ p(r|t, T1) [ r^T T2^{-1} r/(4t) − r^T T1^{-1} r/(4t) ] dr
= (1/2) log(|T2|/|T1|) + (1/(4t)) tr[ T2^{-1} E(r r^T) ] − (1/(4t)) tr[ T1^{-1} E(r r^T) ]
= (1/2) log(|T2|/|T1|) + (1/(4t)) tr[ T2^{-1} (2tT1) ] − (1/(4t)) tr[ T1^{-1} (2tT1) ]
= (1/2) [ log(|T2|/|T1|) − n + tr(T2^{-1}T1) ]   (4.6)

Switching the roles of p(r|t, T1) and p(r|t, T2) in the above steps leads to

KL(p(r|t, T2), p(r|t, T1)) = (1/2) [ log(|T1|/|T2|) − n + tr(T1^{-1}T2) ]   (4.7)

Combined with the definition of the J-divergence, we have

J(p(r|t, T1), p(r|t, T2)) = (1/2) [ KL(p(r|t, T1), p(r|t, T2)) + KL(p(r|t, T2), p(r|t, T1)) ]
= (1/4) [ log(|T2|/|T1|) − n + tr(T2^{-1}T1) ] + (1/4) [ log(|T1|/|T2|) − n + tr(T1^{-1}T2) ]
= (1/4) [ tr(T1^{-1}T2 + T2^{-1}T1) − 2n ]   (4.8)

Thus we have equation (4.5). ∎

Now we give an example of the above dissimilarity. Suppose T1 and T2 can both be diagonalized by the same orthogonal matrix O, i.e. T1 = ODO^T and T2 = OEO^T, where D = diag{d_1, ..., d_n} and E = diag{e_1, ..., e_n}. Then

tr(T1^{-1}T2) = Σ_{i=1}^n e_i/d_i,   tr(T2^{-1}T1) = Σ_{i=1}^n d_i/e_i.

Thus

d(T1, T2) = (1/2) √( Σ_{i=1}^n (d_i/e_i + e_i/d_i) − 2n ) = (1/2) √( Σ_{i=1}^n (d_i − e_i)²/(d_i e_i) )   (4.9)

4.2.2 Affine Invariant Property

When a coordinate system undergoes an affine transformation, the DTI is also transformed. If the coordinate system undergoes an affine transform y = Ax + b, then the displacement of the water molecules transforms as r~ = Ar. Since r has a Gaussian distribution with covariance matrix 2tT, the transformed displacement r~ has covariance matrix 2tATA^T. Thus, the transformed DTI will be

T~(y) = A T(x) A^T,  y = Ax + b   (4.10)

Figure 4.1: Transformation of tensor fields. Left: original tensor field. Right: transformed tensor field.
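The invariance of the dissimilarity under the transformation rule (4.10) is easy to confirm numerically (a sketch of ours, using a randomly generated SPD pair and an invertible A; all names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def jdist(T1, T2):
    # d(T1, T2) = 0.5 * sqrt(tr(T1^{-1}T2 + T2^{-1}T1) - 2n)
    n = T1.shape[0]
    t = np.trace(np.linalg.solve(T1, T2) + np.linalg.solve(T2, T1))
    return 0.5 * np.sqrt(max(t - 2.0 * n, 0.0))

def rand_spd(n):
    M = rng.standard_normal((n, n))
    return M @ M.T + n * np.eye(n)    # SPD by construction

T1, T2 = rand_spd(3), rand_spd(3)
A = rng.standard_normal((3, 3)) + 3.0 * np.eye(3)   # an invertible A
d_before = jdist(T1, T2)
d_after = jdist(A @ T1 @ A.T, A @ T2 @ A.T)
assert np.isclose(d_before, d_after)
# by contrast, the Frobenius norm ||T1 - T2|| is generally not preserved
```

The same experiment with the Frobenius norm of the tensor difference gives different values before and after the transformation, which illustrates the remark made about the Frobenius-norm distance used in earlier work.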
Figure 4.1 shows an example of a 32 × 32 2D diffusion tensor field where the affine transformation is

A = [0.8 0.0; 0.0 0.8],   b = [2.0; 2.0]

Our definition of tensor dissimilarity is invariant to such transformations, i.e.

d(T1, T2) = d(AT1A^T, AT2A^T)   (4.11)

Though the transformation of the diffusion tensor is actually a congruent transformation, we still refer to the above invariance as affine invariance because, in the context of segmentation, the invariance property has been linked with the transformation of the coordinate system.

Derivation 2: First of all, we have

tr[ (AT1A^T)^{-1} (AT2A^T) ] = tr(A^{-T} T1^{-1} A^{-1} A T2 A^T) = tr(A^{-T} T1^{-1} T2 A^T) = tr(T1^{-1} T2 A^T A^{-T}) = tr(T1^{-1}T2)   (4.12)

Similarly we get tr[ (AT2A^T)^{-1} (AT1A^T) ] = tr(T2^{-1}T1); thus we have

d(AT1A^T, AT2A^T) = (1/2) √( tr[ (AT1A^T)^{-1}(AT2A^T) + (AT2A^T)^{-1}(AT1A^T) ] − 2n ) = (1/2) √( tr(T1^{-1}T2 + T2^{-1}T1) − 2n ) = d(T1, T2)   (4.13)

Thus our dissimilarity is invariant under affine coordinate transformations. ∎

It is easy to show that the Frobenius norm of the tensor difference, commonly used in published work [28, 27, 33], does not have this property.

4.2.3 Tensor Field Mean Value

We now develop a theorem that allows us to compute the mean value of a tensor field. This is essential for the region-based active contour model used in the segmentation algorithm.

Theorem 2 The mean value of a tensor field T(x) on a region R, defined as

M(T, R) = arg min_{M ∈ SPD(n)} ∫_R d²[M, T(x)] dx   (4.14)

is given by

M = B^{-1/2} (B^{1/2} A B^{1/2})^{1/2} B^{-1/2}   (4.15)

where A = ∫_R T(x) dx, B = ∫_R T^{-1}(x) dx, and SPD(n) denotes the set of symmetric positive definite matrices of size n.

Proof 3 Let E(M) = ∫_R d²[M, T(x)] dx. We have

E(M) = ∫_R d²[M, T(x)] dx
= (1/4) ∫_R tr( M^{-1}T(x) + T^{-1}(x)M ) dx − (n/2)|R|
= (1/4) tr[ M^{-1} (∫_R T(x) dx) + (∫_R T^{-1}(x) dx) M ] − (n/2)|R|
= (1/4) tr(M^{-1}A + BM) − (n/2)|R|

For a small perturbation in the neighborhood N(M) ⊂ SPD(n) represented by M + tV, where t is a small enough positive number and V is a symmetric matrix, we have

(M + tV)^{-1} = M^{-1} (I − tVM^{-1} + o(t))

Thus

E(M + tV) = (1/4) tr[ (M + tV)^{-1}A + B(M + tV) ] − (n/2)|R|
= (1/4) tr[ M^{-1}(I − tVM^{-1} + o(t))A + B(M + tV) ] − (n/2)|R|
= (1/4) tr(M^{-1}A + BM) − (n/2)|R| + (t/4) tr(BV − M^{-1}VM^{-1}A) + o(t)
= E(M) + (t/4) tr(BV − M^{-1}AM^{-1}V) + o(t)

Thus at the critical point we need

tr(BV − M^{-1}AM^{-1}V) = 0 for all symmetric V.   (4.16)

This is actually equivalent to MBM = A, which can be solved in closed form [57], yielding the desired result in equation (4.15). Since A and B are both SPD matrices, M is also an SPD matrix; thus it is indeed a solution of the minimization problem (4.14). ∎

In some special cases, the tensor field mean value can be computed easily. If the tensor field is constant, T(x) = T_c, then we have A = T_c|R| and B = T_c^{-1}|R|. Substituting into equation (4.15) we get M = T_c, so the mean value of a constant tensor field is the constant itself, which makes perfect sense. Now if all the tensors in the tensor field are diagonal, T(x) = D(x) = diag{d_1(x), ..., d_n(x)}, we have

A = ∫_R D(x) dx = diag{ ∫_R d_1(x) dx, ..., ∫_R d_n(x) dx }

and similarly

B = diag{ ∫_R (1/d_1(x)) dx, ..., ∫_R (1/d_n(x)) dx }

As both A and B are diagonal matrices, all their polynomial forms are diagonal and the products between these terms commute. Then we have

M = B^{-1/2} (B^{1/2} A B^{1/2})^{1/2} B^{-1/2} = A^{1/2} B^{-1/2} = diag{ √( ∫_R d_1(x) dx / ∫_R (1/d_1(x)) dx ), ..., √( ∫_R d_n(x) dx / ∫_R (1/d_n(x)) dx ) }

In general, A and B do not commute, and equation (4.15) cannot be simplified

further; one must then resort to matrix diagonalization to evaluate it.
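Theorem 2 translates directly into code: with A and B accumulated as sums over the region's voxels, the mean requires only two SPD square roots, computed here by eigendecomposition (an illustrative NumPy sketch; names are ours):

```python
import numpy as np

def spd_sqrt(M):
    """Square root of a symmetric positive definite matrix
    via its eigendecomposition."""
    w, V = np.linalg.eigh(M)
    return (V * np.sqrt(w)) @ V.T

def tensor_field_mean(tensors):
    """M = B^{-1/2} (B^{1/2} A B^{1/2})^{1/2} B^{-1/2}, where
    A = sum of T(x) and B = sum of T(x)^{-1} over the region
    (the integrals reduce to sums on a voxel grid)."""
    A = np.sum(tensors, axis=0)
    B = np.sum([np.linalg.inv(T) for T in tensors], axis=0)
    Bh = spd_sqrt(B)
    Bhi = np.linalg.inv(Bh)
    return Bhi @ spd_sqrt(Bh @ A @ Bh) @ Bhi

T = np.array([[2.0, 0.3], [0.3, 1.0]])
M = tensor_field_mean([T, T, T])
assert np.allclose(M, T)   # mean of a constant field is the constant itself
```

One can also verify directly that the returned M satisfies the critical-point condition MBM = A, which is the property the closed form is built on.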

4.3 Active Contour Models for Image Segmentation

Active contour models (aka snakes) have been successfully used in numerous ap-

plications including computer vision, medical illi.'-il12. computer graphics etc. Gen-

erally speaking, active contour models deal with hyper-surfaces in R' and they are

often referred as curve evolution models in 2D and surface propagation models in

3D. The basic idea of active contour models is simple, it is just the movement of a

hyper-surface in R'U (that represents the boundary of regions) in the direction of the

normal with an application driven velocity. Level set methods serve as a powerful

mathematical tool to analyze and implement the hyper-surface evolution equations in

active contour models. They involve expressing the geometric hyper-surface evolution

in an Eulerian form allowing the use of efficient and robust numerical methods. In

addition, topology change of the hyper-surface is transparent. The time dependent

level set formulation was proposed independently by Dervieux and Thomasset [58]

to study multifluid incompressible flows, and Osher and Sethian [59] to study front

propagation. In the context of computer vision, the history of active contour models

began with the pioneering work by Kass, Witkin and Terzopoulos [60]. A few years

later, Malladi et al. [61], and Caselles et al. [62] independently proposed the seminal

idea of modeling image segmentation with geometric snakes in a level set formulation.

There are also many other significant contributions in image segmentation using ac-

tive contour model and level set methods. A brief review can be found in Vemuri et

al. [63]. For a complete discussion on the general curve evolution and level set meth-

ods, the readers are referred to two outstanding books, one by Sethian [64] and the

other by Osher and Paragios [65]. In this section, only basic concepts and methods

of 2D active contour models are presented for the sake of self-containedness.

4.3.1 Snake Energy Functional

Most active contour models aim to minimize a snake energy functional. The

minimization of a functional is also called a variational principle and offers several

advantages over other methods in image segmentation. The most significant one is

that the incorporation of prior shape information into this framework is straightfor-

ward. The vast majority of active contour models can be categorized into two types:

edge-based snakes and region-based snakes.

The classical edge-based snake was first proposed by Kass et al. [60] and involved

minimizing the following energy functional:

E(C) = \int_0^1 [ E_{int}(C(p)) + E_{image}(C(p)) + E_{con}(C(p)) ] dp   (4.17)

where C is a parameterized curve that separates the different regions, E_{int} = \alpha |C_p|^2

+ \beta |C_{pp}|^2 is the internal energy that measures the length and the stiffness of the curve,

E_{image} represents the image force that attracts the curve to locations with large image

gradient (such as edges), E_{con} is given by the external constraint forces that usually refine

the curve C in the segmentation and is not used in non-interactive segmentation tasks,

and \alpha and \beta in the internal energy are control parameters that balance the geometric

behavior of the snake.

Though the classical snake solves the problem of linking edges to form a segmen-

tation, it has several disadvantages. The most severe one is that it cannot converge

when initialized far away from the true solution. The balloons model introduced by Cohen [66]

incorporates an additional constant inflating or shrinking external force as well as

stable numerical techniques to achieve a larger range of convergence. This constant

force was shown to be part of an area energy minimization term by K. Siddiqi et al.

[67]. The resulting energy functional is

E(C) = \int_0^1 [ E_{int}(C(p)) + E_{image}(C(p)) ] dp + \int_{R_i} dx   (4.18)

where R_i is the region inside the boundary C.

The geometric active contour models were introduced by Malladi et al. [61, 51]

and Caselles et al. [62] in the curve evolution context by adding an image stopping

term, and can also be derived as part of a weighted length energy functional [52]:

E(C) = \int g(|\nabla I|) |C'(p)| dp   (4.19)

where g(\cdot) is a monotonically decreasing function of the image gradient magnitude |\nabla I|.

The above three types of active contour models are all edge based since the

stopping criterion in all of them is a function of the image gradient. A more robust

type of active contour models is the region-based model, which is based on the Mumford-

Shah functional [54]:

E(f, C) = \gamma \int_\Omega (f - g)^2 dx + \alpha \int_{\Omega \setminus C} |\nabla f|^2 dx + \beta |C|   (4.20)

where \Omega is the image domain, g is the original noisy image data, f is a piecewise

smooth approximation of g with discontinuities only along C, |C| is the arclength

of the curve C, and \alpha, \beta and \gamma are control parameters. The first term models the data

fidelity and \gamma is inversely proportional to the variance of the noise process, the sec-

ond term measures the smoothness of the approximation f and can be viewed as a

prior model of f given the discontinuity at C, the third term aims to remove tiny

disconnected regions and keep a smooth object boundary. With all these three terms,

the Mumford-Shah functional provides an elegant framework for simultaneous image

segmentation and smoothing. However, it is a difficult problem to solve the original

Mumford-Shah functional and numerous methods were proposed to tackle its various

approximations. For example, Chan and Vese [55] and Tsai et al. [53] presented

a two-stage implementation of (4.20) where the first stage involves, for a fixed C,

constructing a constant or smooth approximation to the image function inside and

outside of C, and the second stage evolves C for a fixed f.

4.3.2 Curve Evolution

The exact form of curve evolution is

\partial C(x, t) / \partial t = \alpha(x) T(x) + \beta(x) N(x)   (4.21)

where \alpha and \beta are the speeds of the curve along the tangential and normal directions

respectively, T is the tangent and N is the unit outer normal to the curve. However

without loss of generality, equation (4.21) can be rewritten as a deformation along

the normal only in the form of the following PDE [68]:

\partial C(x, t) / \partial t = F(x) N(x)   (4.22)

where F(x) = \beta(x) is the speed of the curve at location x along the normal and might

depend on various factors like local curvature, global properties of the curve etc. In

the following text, we will not explicitly specify the domain of a function when it is

clear from the context. For example, we will use F instead of F(x), N instead of

N(x), etc. The particular form of F is the focus of much current research and is

usually derived from application-related variational principles. With proper design of

the speed function, curve evolution can be used in the context of image segmentation

where F is related to the image data and the curve smoothing term.

Though curve evolution can be used directly for various applications, a more

natural and elegant way is to derive it from the first principles as gradient flows that

minimize the snake energy functional. For example, the gradient flow that minimizes

(4.19) yields [52]

F = -(g \kappa + \nabla g \cdot N)

For computing the curve evolution equation from region based active contour energy

functionals, we refer the readers to Aubert et al. [69] for the shape gradient method.

4.3.3 Level Set Formulation

Though traditional techniques for solving (4.22) including the marker/string

methods and the volume-of-fluid techniques [64] are useful under certain circum-

stances, they have quite a few drawbacks. For example, the marker/string methods

provide numerical scheme based on parameterized curves and have instability and

topological limitations. The volume-of-fluid techniques are not accurate. In contrast,

level-set methods [59] offer several advantages at the same time as follows:

1. Topological changes as the evolving curves merge and split are transparent in

an implicit formulation.

2. Efficient and accurate numerical methods can yield stable solutions when shocks

form.

3. Extension from 2D curve evolution to 3D surface propagation is easy.

4. Can use fast solution techniques such as narrow banding and fast marching for

speedup.

The key idea of the level set methods is very simple. Given a higher dimensional

function \phi whose zero level set is the curve C (see Fig. 4.2), it is not hard to derive

a corresponding update equation for \phi as

\partial \phi / \partial t = F |\nabla \phi|   (4.23)

The higher dimensional function \phi is usually chosen to be the signed distance function

of the curve C. In such a case, |\nabla \phi| = 1, which ensures numerical stability.
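A minimal numerical sketch (Python with NumPy; the grid size and the sign convention that \phi is positive outside the curve are our own assumptions, not fixed by the text) illustrates both claims: a signed distance function has |\nabla \phi| \approx 1, and one explicit Euler step of equation (4.23) with constant speed F moves the zero level set a distance F \Delta t along the normal:

```python
import numpy as np

# Signed distance function of a circle of radius 1, positive outside the curve.
n = 201
xs = np.linspace(-2.0, 2.0, n)
h = xs[1] - xs[0]
X, Y = np.meshgrid(xs, xs, indexing="ij")
R = np.sqrt(X**2 + Y**2)
phi = R - 1.0

# |grad phi| via central differences: ~1 away from the center singularity,
# which is what keeps the update numerically well behaved.
gx, gy = np.gradient(phi, h)
grad_norm = np.sqrt(gx**2 + gy**2)
assert np.allclose(grad_norm[R > 0.3], 1.0, atol=1e-2)

# One explicit Euler step of phi_t = F |grad phi| with constant F.
F, dt = 0.1, 0.05
phi_new = phi + dt * F * grad_norm

# With phi positive outside, raising phi moves the zero set inward: the new
# zero crossing sits near radius 1 - F*dt (checked here on the y axis).
j = int(np.argmin(np.abs(xs - (1.0 - F * dt))))
assert abs(phi_new[n // 2, j]) < 2 * h
```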

4.4 Piecewise Constant DTI Segmentation Model

Our model for DTI segmentation in R^2 is posed as minimization of the following

variational principle based on the Mumford-Shah functional [54]:

E(T, C) = \int_\Omega dist^2(T(x), T_0(x)) dx + \alpha \int_{\Omega \setminus C} |\nabla_d T(x)|^2 dx + \beta |C|   (4.24)

Figure 4.2: Evolution of a curve and its corresponding higher dimensional function. (a)
An evolving curve with speed FN. (b) The higher dimensional function whose zero level
set is the evolving curve in (a).

where the curve C is the boundary of the desired unknown segmentation, \Omega \subset R^2

is the image domain, To is the original noisy tensor field, T is a piecewise smooth

approximation of To with discontinuities only along C, |C| is the arclength of the

curve C, \alpha and \beta are control parameters, dist(\cdot, \cdot) is a measure of the distance between

two tensors, and \nabla_d T(x) \in R^2 is defined by

\nabla_d T(x) \cdot v = \lim_{dt \to 0} dist(T(x), T(x + dt v)) / dt

where v \in R^2 is a unit vector. \nabla_d T can be called the gradient of T under the tensor

distance dist(\cdot, \cdot), and its magnitude is a measure of the smoothness of the tensor

field T using dist(\cdot, \cdot) as a tensor distance. The extension of the Mumford-Shah

functional to 3D is straightforward: the curve C is replaced with a surface S, and

the implementation in 3D is similar to that in 2D.

The above variational principle (4.24) will capture piecewise smooth regions while

maintaining a smooth boundary; the balance between the smoothness of the DTI in

each region and the boundaries is controlled by \alpha and \beta. When \alpha is extremely

large, equation (4.24) reduces to a simplified form which aims to capture piecewise

constant regions of two types:

E(C, T_1, T_2) = \int_R dist^2(T(x), T_1) dx + \int_{R^c} dist^2(T(x), T_2) dx + \beta |C|   (4.25)

where R is the region enclosed by C and R^c is the region outside C, and T_1 and T_2 are

the mean values of the diffusion tensor field in regions R and R^c respectively.

The above model can be viewed as a modification of the active contour model

without edges for scalar valued images by Chan and Vese [55]. It can segment tensor

fields with two constant regions (each region type however can have disconnected

parts) in a very efficient way. We incorporate our new tensor distance in this

active contour model to achieve DTI segmentation.

4.4.1 Curve Evolution Equation

The Euler-Lagrange equation for the variational principle (4.25) is

[\beta \kappa - dist^2(T, T_1) + dist^2(T, T_2)] N = 0,   T_1 = M(T, R),  T_2 = M(T, R^c)

where \kappa is the curvature of the curve C and N is the outward normal to the curve.

When T_1 and T_2 are fixed, we have the following curve evolution form for the above

equation:

\partial C / \partial t = [\beta \kappa - dist^2(T, T_1(t)) + dist^2(T, T_2(t))] N,
where T_1 = M(T, R), T_2 = M(T, R^c)   (4.26)

4.4.2 Level Set Formulation

The curve evolution equation (4.26) can be easily implemented in a level set

framework. As shown in section 4.3, we have

\partial \phi / \partial t = [\beta \nabla \cdot (\nabla \phi / |\nabla \phi|) - dist^2(T, T_1) + dist^2(T, T_2)] |\nabla \phi|   (4.27)

where \phi is the signed distance function of C.

4.4.3 Implementation and Numerical Methods

We follow the two stage implementation as described in Chan and Vese [55] and

our modified procedure is as follows:

Algorithm 3 Two Stage Piecewise Constant Segmentation of DTIs
1: Set initial curve C_0 and compute its signed distance function \phi_0.
2: Compute T_1 and T_2 according to equation (4.15).
3: Update the signed distance function \phi according to equation (4.27).
4: Reinitialize \phi using the updated zero level set.
5: Stop if the solution is achieved, else go to step 2.

The key to the computation of T_1 and T_2 is the computation of the square root of

an SPD matrix. We use matrix diagonalization to achieve this: a real symmetric

matrix A can be diagonalized as

A = O D O^T

where O is an orthogonal matrix and D = diag{d_1, d_2, ..., d_n} is a diagonal matrix.

Then the polynomial form of A is

A^c = O D^c O^T

where D^c = diag{d_1^c, d_2^c, ..., d_n^c}. Note that in equation (4.15),
\sqrt{\sqrt{B} A \sqrt{B}} \neq \sqrt[4]{B} \sqrt{A} \sqrt[4]{B} in general;

it has to be computed as follows:

Algorithm 4 Computing \sqrt{\sqrt{B} A \sqrt{B}}
1: Diagonalize B as O_B D_B O_B^T.
2: Compute P = \sqrt{B} as O_B \sqrt{D_B} O_B^T.
3: Compute Q as P A P.
4: Diagonalize Q as O_Q D_Q O_Q^T.
5: Compute \sqrt{Q} as O_Q \sqrt{D_Q} O_Q^T.
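Algorithm 4 is straightforward to express with an eigensolver. The sketch below (Python with NumPy; the function names are ours) follows its five steps and also confirms the point made above: the naive factorization with separate square roots does not equal \sqrt{\sqrt{B} A \sqrt{B}} when A and B do not commute:

```python
import numpy as np

def sqrtm_spd(A):
    """Matrix square root of an SPD matrix via diagonalization (steps 1-2, 4-5)."""
    w, O = np.linalg.eigh(A)
    return O @ np.diag(np.sqrt(w)) @ O.T

def sqrt_BAB(A, B):
    """Algorithm 4: sqrt( sqrt(B) A sqrt(B) ) for SPD matrices A and B."""
    P = sqrtm_spd(B)      # steps 1-2: P = sqrt(B)
    Q = P @ A @ P         # step 3
    return sqrtm_spd(Q)   # steps 4-5

# Two SPD matrices that do not commute.
A = np.array([[2.0, 1.0, 0.0], [1.0, 3.0, 0.0], [0.0, 0.0, 1.0]])
B = np.diag([4.0, 1.0, 2.0])
S = sqrt_BAB(A, B)
P = sqrtm_spd(B)
assert np.allclose(S @ S, P @ A @ P)   # S really squares to sqrt(B) A sqrt(B)

# The naive factorization differs because A and B do not commute.
wrong = sqrtm_spd(P) @ sqrtm_spd(A) @ sqrtm_spd(P)
assert not np.allclose(S, wrong)
```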

Equation (4.27) can be easily discretized using an explicit Euler scheme. We

can assume the spatial grid size to be 1; then the finite differences of the partial

derivatives are

\Delta_-^x \phi_{ij} = \phi_{ij} - \phi_{i-1,j},   \Delta_+^x \phi_{ij} = \phi_{i+1,j} - \phi_{ij},   \Delta^x \phi_{ij} = (\phi_{i+1,j} - \phi_{i-1,j}) / 2
\Delta_-^y \phi_{ij} = \phi_{ij} - \phi_{i,j-1},   \Delta_+^y \phi_{ij} = \phi_{i,j+1} - \phi_{ij},   \Delta^y \phi_{ij} = (\phi_{i,j+1} - \phi_{i,j-1}) / 2
\Delta^{xx} \phi_{ij} = \phi_{i+1,j} - 2\phi_{ij} + \phi_{i-1,j}
\Delta^{yy} \phi_{ij} = \phi_{i,j+1} - 2\phi_{ij} + \phi_{i,j-1}
\Delta^{xy} \phi_{ij} = (\phi_{i+1,j+1} - \phi_{i+1,j-1} - \phi_{i-1,j+1} + \phi_{i-1,j-1}) / 4

In this case, we have the following update equation:

\phi_{ij}^{n+1} = \phi_{ij}^n + \Delta t [\beta \kappa_{ij} - dist^2(T_{ij}, T_1^n) + dist^2(T_{ij}, T_2^n)]
                  \sqrt{(\Delta^x \phi_{ij})^2 + (\Delta^y \phi_{ij})^2}   (4.28)

where the curvature \kappa_{ij} of \phi_{ij} can be computed as

\kappa_{ij} = [\Delta^{xx}\phi_{ij} (\Delta^y\phi_{ij})^2 - 2 \Delta^x\phi_{ij} \Delta^y\phi_{ij} \Delta^{xy}\phi_{ij} + \Delta^{yy}\phi_{ij} (\Delta^x\phi_{ij})^2]
           / [(\Delta^x\phi_{ij})^2 + (\Delta^y\phi_{ij})^2]^{3/2}
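The curvature formula can be checked on a case with a known answer: for the signed distance function of a circle, every level set of radius r has curvature 1/r. The sketch below (Python with NumPy; the periodic `np.roll` shifts are our shortcut for the stencils, valid away from the grid boundary) evaluates \kappa on the zero level set of a unit circle:

```python
import numpy as np

def curvature(phi, h=1.0):
    """Central-difference curvature of the level sets of phi, as in (4.28):
    kappa = (p_xx p_y^2 - 2 p_x p_y p_xy + p_yy p_x^2) / (p_x^2 + p_y^2)^(3/2)."""
    px = (np.roll(phi, -1, 0) - np.roll(phi, 1, 0)) / (2 * h)
    py = (np.roll(phi, -1, 1) - np.roll(phi, 1, 1)) / (2 * h)
    pxx = (np.roll(phi, -1, 0) - 2 * phi + np.roll(phi, 1, 0)) / h**2
    pyy = (np.roll(phi, -1, 1) - 2 * phi + np.roll(phi, 1, 1)) / h**2
    pxy = (np.roll(np.roll(phi, -1, 0), -1, 1) - np.roll(np.roll(phi, -1, 0), 1, 1)
           - np.roll(np.roll(phi, 1, 0), -1, 1) + np.roll(np.roll(phi, 1, 0), 1, 1)) / (4 * h**2)
    denom = (px**2 + py**2) ** 1.5
    return (pxx * py**2 - 2 * px * py * pxy + pyy * px**2) / np.maximum(denom, 1e-12)

# Signed distance of a unit circle: curvature on the zero level set should be 1.
n, h = 201, 0.02
xs = (np.arange(n) - n // 2) * h
X, Y = np.meshgrid(xs, xs, indexing="ij")
R = np.sqrt(X**2 + Y**2)
k = curvature(R - 1.0, h)
ring = np.abs(R - 1.0) < 2 * h
assert np.allclose(k[ring], 1.0, atol=0.05)
```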

Update according to equation (4.28) on the whole domain \Omega has a complexity of

O(|\Omega|) and is going to be slow when the domain is huge. As we are most interested

in the evolving zero level set, updating only a narrow band around the zero level set

will be sufficient, and this can be achieved using the narrow band method described

in [70, 71]. In order to maintain \phi as a signed distance function of C, it is necessary

to reinitialize \phi, which can also be done only within a narrow band. There are also

several other efficient numerical schemes that one may employ, for example the multi-

grid scheme as was done in Tsai et al. [53]. At this time, our explicit Euler scheme

with the narrow band method yielded reasonably fast solutions (3-5 seconds for the 2D

synthetic data examples and several minutes for the 3D real DTI examples on a 1 GHz

Pentium-3 CPU).

4.5 Piecewise Smooth DTI Segmentation Model

In certain cases, the piecewise constant assumption will be violated and the

piecewise smooth model (4.24) has to be employed. Following the

curve evolution implementation of the Mumford-Shah functional by Tsai et al. [53],

we have the following two-stage scheme. In the smoothing stage, the curve is fixed

and the tensor field is smoothed inside and outside the curve while preserving the

discontinuity across the curve. In the curve evolution stage, the inside and outside of

the smoothed tensor field are fixed while the curve is allowed to move.

4.5.1 Discontinuity Preserving Smoothing

If the curve is fixed, we have

E_C(T) = \int_\Omega dist^2(T(x), T_0(x)) dx + \alpha \int_{\Omega \setminus C} |\nabla_d T(x)|^2 dx   (4.29)

For implementation, the above VP is discretized as follows:

E_C(T) = \sum_x dist^2(T(x), T_0(x)) + \alpha \sum_{(x,y) \in N_C} dist^2(T(x), T(y))   (4.30)

where Nc defines a collection of neighboring pixels. If a pair (x, y) cuts across the

boundary, it is excluded from Nc.

As we have an energy functional of a tensor field on a discrete grid, we can

compute its gradient with respect to this discrete tensor field. A straightforward way

to do this is to treat all the independent components of the tensors as the components

of a vector and compute the gradient of this energy function with respect to this

vector. However, the form of the gradient will not be compact. Instead, we use the

derivative of a matrix function f(A) with respect to its matrix variable as follows:

\partial f(A) / \partial A = [ \partial f(A) / \partial A(i,j) ]   (4.31)

It is not hard to see that

\partial f(A) / \partial A(i,j) = \lim_{dt \to 0} [ f(A + dt E_{ij}) - f(A) ] / dt   (4.32)

where E_{ij} is a matrix with 1 at location (i, j) and 0 elsewhere.
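Definition (4.32) can be verified numerically for a concrete f. Taking f(A) = tr(A^{-1}C), the kind of term that appears in the distance expansion below, the matrix derivative is -A^{-1} C A^{-1} for symmetric A and C. The sketch below (Python with NumPy; a central difference stands in for the one-sided limit in (4.32) purely for numerical accuracy) checks this entry by entry:

```python
import numpy as np

def matrix_derivative(f, A, eps=1e-6):
    """Numerical matrix derivative per (4.31)-(4.32): entry (i, j) approximates
    lim_{dt->0} [f(A + dt*E_ij) - f(A)] / dt via a central difference."""
    G = np.zeros_like(A)
    for i in range(A.shape[0]):
        for j in range(A.shape[1]):
            E = np.zeros_like(A)
            E[i, j] = 1.0
            G[i, j] = (f(A + eps * E) - f(A - eps * E)) / (2 * eps)
    return G

A = np.array([[2.0, 1.0, 0.0], [1.0, 3.0, 0.0], [0.0, 0.0, 1.0]])  # SPD
C = np.array([[1.0, 0.0, 1.0], [0.0, 2.0, 0.0], [1.0, 0.0, 3.0]])  # SPD
f = lambda M: np.trace(np.linalg.inv(M) @ C)
Ai = np.linalg.inv(A)
# Closed form: d tr(A^-1 C) / dA = -A^-1 C A^-1 for symmetric A and C.
assert np.allclose(matrix_derivative(f, A), -Ai @ C @ Ai, atol=1e-4)
```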

The directional derivative with respect to a perturbation tV on T(x) is given by

E_C(T(x) + tV) - E_C(T(x))
  = [ dist^2(T(x) + tV, T_0(x)) + \alpha \sum_{y \in N_C(x)} dist^2(T(x) + tV, T(y)) ]
    - [ dist^2(T(x), T_0(x)) + \alpha \sum_{y \in N_C(x)} dist^2(T(x), T(y)) ]
  = (1/4) tr[(T_0^{-1}(x) - T^{-1}(x) T_0(x) T^{-1}(x)) V] t
    + (\alpha/4) \sum_{y \in N_C(x)} tr[(T^{-1}(y) - T^{-1}(x) T(y) T^{-1}(x)) V] t + o(t)
  = (1/4) tr[(B - T^{-1}(x) A T^{-1}(x)) V] t + o(t)   (4.33)

where

A = T_0(x) + \alpha \sum_{y \in N_C(x)} T(y),   B = T_0^{-1}(x) + \alpha \sum_{y \in N_C(x)} T^{-1}(y)   (4.34)

We have the gradient from equation (4.32) and equation (4.33):

\partial E_C / \partial T(x) = (1/4) [B - T^{-1}(x) A T^{-1}(x)]

So the minimizer of the VP (4.30) satisfies

B = T^{-1}(x) A T^{-1}(x)   (4.35)

4.5.2 Curve Evolution Equation and Level Set Formulation

Once the boundary discontinuity preserving smoothed tensor field is fixed, we

have

E_T(C) = \int_R dist^2(T_R(x), T_0(x)) dx + \int_{R^c} dist^2(T_{R^c}(x), T_0(x)) dx

       + \alpha \int_R |\nabla_d T_R(x)|^2 dx + \alpha \int_{R^c} |\nabla_d T_{R^c}(x)|^2 dx + \beta |C|   (4.36)

Thus the curve evolution equation will be

\partial C / \partial t = { [dist^2(T_{R^c}, T_0) - dist^2(T_R, T_0)]
                          + \alpha (|\nabla_d T_{R^c}|^2 - |\nabla_d T_R|^2) + \beta \kappa } N   (4.37)

Again for implementation, we have

\partial C / \partial t = [dist^2(T_{R^c}, T_0) - dist^2(T_R, T_0)] N
  + \alpha [ \sum_{y \in N_{R^c}(x)} dist^2(T_{R^c}, T_{R^c}(y)) - \sum_{y \in N_R(x)} dist^2(T_R, T_R(y)) ] N
  + \beta \kappa N   (4.38)

The level set formulation for (4.38) is then given by

\partial \phi / \partial t = [\beta \nabla \cdot (\nabla \phi / |\nabla \phi|) + F] |\nabla \phi|   (4.39)

where \phi is the signed distance function of C and the data dependent speed F is given

by

F = [dist^2(T_{R^c}, T_0) - dist^2(T_R, T_0)]
  + \alpha [ \sum_{y \in N_{R^c}(x)} dist^2(T_{R^c}, T_{R^c}(y)) - \sum_{y \in N_R(x)} dist^2(T_R, T_R(y)) ]

4.5.3 Implementation and Numerical Methods

Similarly to Algorithm 3, and as in Tsai et al. [53], we have the following

two-stage implementation. The major difference here lies in the computation of TR


Algorithm 5 Two Stage Piecewise Smooth Segmentation for DTI
1: Set initial curve C_0 and compute its signed distance function \phi_0.
2: Compute T_R and T_{R^c}.
3: Update the level set function \phi according to equation (4.39).
4: Reinitialize \phi using the updated zero level set.
5: Stop if the solution is achieved. Otherwise repeat from step 2.

and T_{R^c}. As the gradient can be computed, it is easy to design an efficient numeri-

cal algorithm to achieve the discontinuity preserving smoothing. Currently, we use a
fixed point iteration: equation (4.35) has the same form as equation (4.16), so it can
be solved in closed form as in equation (4.15) with the current A and B at each step.

4.6 Experimental Results

In this section, we present several sets of experiments on the application of our

DTI segmentation model. The first one is on 2D synthetic data sets, the second one

is on single slices of real DTIs estimated from DWIs, and the last one is on 3D real DTIs.

In the experiments below, if not explicitly stated, the segmentation model used is the

piecewise constant model.

4.6.1 Synthetic Tensor Field Segmentation

The purpose of these experiments is to demonstrate the need to use the full

information contained in the tensors for segmentation purposes. To this end, we

synthesize two tensor fields, both are 2 x 2 symmetric positive definite matrix valued

images on a 128 x 128 lattice and have two homogeneous regions. The two regions in

the first tensor field only differ in the orientations while the two regions in the second

tensor field only differ in the scales. These two tensor fields are visualized as ellipses

as shown in Fig. 4.3(a) and Fig. 4.4(a) respectively. Each ellipse's axes correspond

to the eigenvector directions of the diffusion tensor and are scaled by the eigenvalues.

With an arbitrary initialization of the geometric active contour, our proposed model

can yield high quality segmentation results as shown in Figs. 4.3 and 4.4. The evolving

boundaries of the segmentation are shown as curves in red. Note that the first tensor

field cannot be segmented by using only the scalar anisotropic properties of tensors as

in [36], and the second tensor field cannot be segmented by using only the dominant

eigenvectors of the tensors. These two examples show that one must use the full

information contained in tensors in achieving quality segmentation. As our model is

a region based segmentation method, it is more resistant to noise than the gradient-

based snakes. Figures 4.5 and 4.6 depict the segmentation

process for synthetic noisy tensor fields and the results are very satisfactory.

4.6.2 Single Slice DTI Segmentation

Figure 4.7 shows a slice of the DTI estimated from the DWIs of a normal rat

spinal cord. Each of the six independent components of the individual symmetric

positive definite diffusion tensors in the DTI is shown as a scalar image in the top

row. The components are arranged from left to right as Dxx, Dyy, Dzz, Dxy,

Dyz and Dxz. The off-diagonal terms are greatly enhanced by brightness and contrast

factors for better visualization. An ellipsoid visualization of the same slice as the top row

is shown in Fig. 4.7 (g). Each ellipsoid's axes correspond to the eigenvector directions

of the 3 x 3 diffusion tensor and are scaled by the eigenvalues, and its color from blue to

red shows the lowest to highest degree of anisotropy. For example, diffusion tensors

in free water region are shown as large round blue ellipsoids. Figure 4.8 demonstrates

the separation of the gray matter and white matter inside the normal rat spinal cord

with the evolving curve in red superimposed on the ellipsoid visualization of the DTI.

Similarly, Fig. 4.9 shows a slice of DTI of a normal rat brain in the region of corpus

callosum and Fig. 4.10 depicts the segmentation procedure of the corpus callosum. In

the final step, the essential part of the corpus callosum is captured by the proposed

piecewise constant segmentation model. To further get the bending horns of the

corpus callosum, we use the segmentation results of the piecewise constant model as

initialization and apply the piecewise smooth model. The result is shown in Fig. 4.11.

Now we have a much better refinement. In all the above experiments, we exclude the

free water regions which are not of interest in the biological context.

4.6.3 DTI Segmentation in 3D

Now we demonstrate some segmentation results for 3D DTI. Figure 4.12 depicts

the segmentation procedure for a normal rat spinal cord DTI of size 108 x 108 x 10.

Figure 4.12 (a)-(d) clearly depict the surface evolution in 3D and Fig. 4.12 (e)-(h)

depict the intersection of the propagating surface in (a)-(d) with a slice of the Dxx

component of the DTI. The separation of the gray matter and white matter inside


the normal rat spinal cord is achieved with ease. Similarly, Fig. 4.13 (a)-(h) show

the segmentation procedure for a normal rat brain DTI of size 114 x 108 x 12. In

addition, intersections of the final 3D segmentation with different slices of the Dxx

component of the DTI are shown in Fig. 4.13 (i)-(l). It is evident that a majority of

the corpus callosum inside this volume is captured. Again, we exclude the free water

regions which are not of interest in the biological context.

Figure 4.3: Segmentation of a synthetic tensor field where the two regions differ only

in the orientations. (b), (c) and (d) are the initial, intermediate and final steps of the

curve evolution process in the segmentation.

Figure 4.4: Segmentation of a synthetic tensor field where the two regions differ only

in scale. (b), (c) and (d) are the initial, intermediate and final steps of the curve

evolution process in the segmentation.

Figure 4.5: Segmentation of a synthetic tensor field with two roughly homogeneous

regions that differ only in the orientations. (b)-(d) are the initial, intermediate and

final steps of the curve evolution process for segmentation of (a).

Figure 4.6: Segmentation of a synthetic tensor field with two roughly homogeneous
regions that differ only in scale. (b)-(d) are the initial, intermediate and final steps of
the curve evolution process for segmentation of (a).

Figure 4.7: A slice of the DTI of a normal rat spinal cord. (a)-(f): viewed channel
by channel as gray scale images. (g): viewed using ellipsoids.

Figure 4.8: Segmentation of the slice of DTI shown in Fig. 4.7. (a)-(d): initial,
intermediate and final steps in separating the gray and white matter inside the rat
spinal cord.

Full Text

PAGE 1

DIFFUSION TENSOR FIELD RESTORA TION AND SEGMENT A TION By ZHIZHOU W ANG A DISSER T A TION PRESENTED TO THE GRADUA TE SCHOOL OF THE UNIVERSITY OF FLORID A IN P AR TIAL FULFILLMENT OF THE REQUIREMENTS F OR THE DEGREE OF DOCTOR OF PHILOSOPHY UNIVERSITY OF FLORID A 2004

PAGE 2

Cop yrigh t 2004 b y Zhizhou W ang

PAGE 3

I dedicate this w ork to m y family

PAGE 4

A CKNO WLEDGMENTS I w ould lik e to rst thank m y advisor, Dr. Baba C. V em uri, for ev erything he has done during m y do ctoral study This dissertation w ould not b e p ossible without him. Dr. V em uri in tro duced me to the eld of medical imaging and alw a ys helps and encourages me. His insigh t and exp erience ha v e guided me through m y researc h and he ga v e n umerous suggestions to this dissertation. I ha v e b een v ery luc ky to w ork with him. I w ould also lik e to thank m y excellen t committee, Dr. Y unmei Chen, Dr. Anand Rangara jan, Dr. Murali Rao, Dr. Sarta j K. Sahni and Dr. Baba C. V em uri, for pro viding v aluable advice. In addition, sp ecial thanks go to Dr. Aruna v a Banerjee for attending m y defense. My do ctoral researc h is a happ y co op eration with man y p eople. Dr. V em uri has b een in v olv ed with the whole progress, Dr. Chen has con tributed a lot in the DTI restoration part, Dr. Thomas H. Mareci and Evren Ozarslan ha v e b een m y resource of DT-MRI ph ysics and ha v e kindly pro vided the data for DTI segmen tation, Dr. Rac hid Deric he has pro vided excellen t commen ts on m y w ork in DTI segmen tation and Tim McGra w has pro vided b eautiful b er visualizations to mak e m y w ork more app ealing. I w ould lik e to thank the editors in the Editorial oce for their careful and thorough corrections. I w ould also lik e to thank the sta and the facult y mem b ers who ha v e b een v ery patien t and helpful during m y study here. In particular, Dr. Da vid Gu and Dr. Aruna v a Banerjee ha v e attended sev eral of m y seminars and sho w ed me quite a few in teresting p oin ts. I ha v e also b eneted a lot from the course taugh t b y Dr. Da vid Metzler and Dr. Anand Rangara jan. iv

PAGE 5

F riends in m y lab, m y departmen t and the Univ ersit y of Florida are m uc h appreciated for sharing m y jo ys and sadnesses. Sp ecial thanks go to Xiaobin W u, F angw en Chen, F u w ang Chen and Da ying Zhang who ha v e help ed me to pull through the busiest time in m y life. I am also fortunate to enjo y the friendship of Jundong Liu, Ju W ang, Haibin Lu, F ei W ang, Jie Zhou, Tim McGra w, Eric Sp ellman, Bing Jian, Neeti V ohra, Andy Shiue Le-Jeng, Lingyun Gu, Hongyu Guo and their families. Though it is a distan t memory I ha v e alw a ys c herished the men torship from some of m y advisors in the long past: Anan Sha, Yining Hu, Shengtao Qin, Y uan Zh u and Y uqing Gu. They guided me through dieren t stages of study and life, I w ould not ha v e b een able to ac hiev e what I ha v e to da y without them. Last, but not least, I w ould lik e to thank m y paren ts, m y sister and m y wife Jing for their understanding and lo v e during the past few y ears. Their supp ort and encouragemen t are m y source of strength. I am also thankful for m y son who has inspired me a lot with his little disco v eries. This researc h w as supp orted in part b y the gran ts NIH R01-NS42075. I also ha v e receiv ed tra v el gran ts from IEEE Computer So ciet y Conference on Computer Vision and P attern Recognition 2003 and the Departmen t of Computer and Information Science and Engineering, College of Engineering, Univ ersit y of Florida. Chapter 2 is cop yrigh ted b y IEEE and w as published in Pro ceedings of the 2004 IEEE In ternational Symp osium on Biomedical Imaging: F rom Nano to Macro, co-authored with Baba C. V em uri, Evren Ozarslan, Y unmei Chen and Thomas H. Mareci. Chapter 3 is cop yrigh ted b y IEEE and will app ear in IEEE T ransaction on Medical Imaging, co-authored with Baba C. V em uri, Y unmei Chen and Thomas H. Mareci. 
A preliminary v ersion of c hapter 4 is cop yrigh ted b y IEEE and will app ear in IEEE Computer So ciet y Conference on Computer Vision and P attern Recognition, co-authored with Baba C. V em uri. v

PAGE 6

T ABLE OF CONTENTS pageA CKNO WLEDGMENTS. . . . . . . . . . . . . . . ivLIST OF T ABLES. . . . . . . . . . . . . . . . . ixLIST OF FIGURES. . . . . . . . . . . . . . . . xABSTRA CT. . . . . . . . . . . . . . . . . . xiiCHAPTER 1 INTR ODUCTION. . . . . . . . . . . . . . . 11.1 ABC of Magnetic Resonance Imaging. . . . . . . . 11.1.1 Nature of a Proton Spin with a Magnetic Field. . . . 11.1.2 Net Magnetization and Its Detection. . . . . . . 21.1.3 Magnetic Resonance Imaging. . . . . . . . . 31.2 Bac kground of DT-MRI. . . . . . . . . . . . 51.2.1 Diusion Pro cess. . . . . . . . . . . . 51.2.2 Diusion W eigh ted Images. . . . . . . . . 51.2.3 Application of DT-MRI. . . . . . . . . . 71.3 Literature Review. . . . . . . . . . . . . . 91.3.1 Optimal b V alues. . . . . . . . . . . . 91.3.2 DTI Restoration. . . . . . . . . . . . 101.3.3 DTI Segmen tation. . . . . . . . . . . . 121.4 Con tributions. . . . . . . . . . . . . . . 142 ST A TISTICAL ANAL YSIS OF A NONLINEAR ESTIMA TOR F OR ADC AND ITS APPLICA TION TO OPTIMIZING b V ALUES. . . . 172.1 In tro duction. . . . . . . . . . . . . . . 172.2 Theory. . . . . . . . . . . . . . . . . 182.2.1 A Nonlinear Estimator for ADC. . . . . . . . 182.2.2 V ariance of the Nonlinear Estimate for ADC. . . . 202.2.3 W eigh ted CO V of the Nonlinear Estimator for ADC. . 212.2.4 Optimal Diusion W eigh ting F actors. . . . . . . 222.3 Results. . . . . . . . . . . . . . . . . 222.3.1 Sim ulation Results of Dieren t Estimators. . . . . 222.3.2 V ariance of Dieren t Estimates. . . . . . . . 232.3.3 W eigh ted CO V and Optimal Diusion W eigh ting F actors. 25 vi

PAGE 7

3 A CONSTRAINED VARIATIONAL PRINCIPLE FOR DIRECT ESTIMATION AND SMOOTHING OF THE DTI FROM COMPLEX DWIS 28

    3.1 Introduction 28
    3.2 Constrained Variational Principle Formulation 29
        3.2.1 The Complex Nonlinear Data Constraint 31
        3.2.2 Lp Smoothness Norm 32
        3.2.3 The Positive Definite Constraint 33
        3.2.4 Existence of a Solution 33
    3.3 Numerical Methods 39
        3.3.1 Discretized Constrained Variational Principle 40
        3.3.2 Augmented Lagrangian Method 40
        3.3.3 Limited Memory Quasi-Newton Method 41
        3.3.4 Line Search 44
        3.3.5 Implementation Issues 44
    3.4 Experimental Results 45
        3.4.1 Complex Valued Synthetic Data 45
        3.4.2 Complex DWI of a Normal Rat Brain 48

4 DTI SEGMENTATION USING AN INFORMATION THEORETIC TENSOR DISSIMILARITY MEASURE 58

    4.1 Introduction 58
    4.2 A New Tensor Distance and Its Properties 59
        4.2.1 Definitions 59
        4.2.2 Affine Invariant Property 61
        4.2.3 Tensor Field Mean Value 63
    4.3 Active Contour Models for Image Segmentation 66
        4.3.1 Snake Energy Functional 66
        4.3.2 Curve Evolution 69
        4.3.3 Level Set Formulation 70
    4.4 Piecewise Constant DTI Segmentation Model 70
        4.4.1 Curve Evolution Equation 72
        4.4.2 Level Set Formulation 72
        4.4.3 Implementation and Numerical Methods 73
    4.5 Piecewise Smooth DTI Segmentation Model 75
        4.5.1 Discontinuity Preserving Smoothing 75
        4.5.2 Curve Evolution Equation and Level Set Formulation 77
        4.5.3 Implementation and Numerical Methods 77
    4.6 Experimental Results 79
        4.6.1 Synthetic Tensor Field Segmentation 79
        4.6.2 Single Slice DTI Segmentation 80
        4.6.3 DTI Segmentation in 3D 80


5 CONCLUSIONS 92

    5.1 Optimal b Values 92
    5.2 DTI Restoration 92
    5.3 DTI Segmentation 94

REFERENCES 96

BIOGRAPHICAL SKETCH 102


LIST OF TABLES

Table  page
1.1 MRI properties of different tissues. 4
2.1 Optimum value for the typical ADC range of human brains with multiple measurements. 26
2.2 Optimum diffusion weighting factors with multiple measurements for normal rat brains. 26
3.1 Comparison of the accuracy of the estimated DEVs using different methods for different noise levels. 48


LIST OF FIGURES

Figure  page
1.1 Proton spin and its interaction. 2
1.2 Equilibrium magnetization of proton spins. 3
1.3 Detection of net magnetization. 4
1.4 Ellipsoid visualization of diffusion tensors. 5
1.5 A slice of DWI taken from a normal rat brain. 7
1.6 A slice of DWI taken from a normal rat brain with various diffusion weighting factors. 8
1.7 Fiber tract maps computed from diffusion tensor imaging. 9
1.8 A general framework of DT-MRI analysis. 10
2.1 Distribution of ADC values in the white matter and the gray matter of a normal rat brain. 22
2.2 Simulation results of different ADC estimates. 24
2.3 Variance of different ADC estimates at noise level 0.05. 25
2.4 Weighted COV of the nonlinear ADC estimate versus the diffusion weighting factors. 27
3.1 A slice of several volumetric complex DWIs. 50
3.2 A slice of the original (ground truth) and the estimated diffusion tensor fields. 51
3.3 A slice of the computed DEV field for the noisy synthetic data. 52
3.4 A slice of the normal rat brain DTI estimated using multivariate linear regression without smoothing, viewed channel by channel. 53
3.5 A slice of the normal rat brain DTI estimated using multivariate nonlinear regression without smoothing, viewed channel by channel. 54
3.6 A slice of the normal rat brain DTI estimated using our proposed method, viewed channel by channel. 55


3.7 A slice of the computed DEV field from a normal rat brain. 56
3.8 A slice of the normal rat brain DTIs in the region of the cerebellum, viewed channel by channel. 57
4.1 Transformation of tensor fields. 62
4.2 Evolution of a curve and its corresponding higher dimensional function. 71
4.3 Segmentation of a synthetic tensor field where two regions differ only in orientation. 82
4.4 Segmentation of a synthetic tensor field where two regions differ only in scale. 83
4.5 Segmentation of a synthetic tensor field with two roughly homogeneous regions that differ only in orientation. 84
4.6 Segmentation of a synthetic tensor field with two roughly homogeneous regions that differ only in scale. 85
4.7 A slice of the DTI of a normal rat spinal cord. 86
4.8 Segmentation of the slice of DTI shown in Fig. 4.7. 87
4.9 A slice of the DTI of a normal rat brain. 88
4.10 Segmentation of the corpus callosum from a slice of DTI shown in Fig. 4.9. 89
4.11 Segmentation of the corpus callosum from a slice of the DTI in Fig. 4.9 using the piecewise smooth model. 90
4.12 3D segmentation of the DTI of a normal rat spinal cord. 90
4.13 3D segmentation of the corpus callosum from the DTI of a normal rat brain. 91


Abstract of Dissertation Presented to the Graduate School of the University of Florida in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy

DIFFUSION TENSOR FIELD RESTORATION AND SEGMENTATION

By

Zhizhou Wang

August 2004

Chair: Baba C. Vemuri
Major Department: Computer and Information Science and Engineering

Diffusion Tensor Magnetic Resonance Imaging (DT-MRI) is a relatively new MRI modality that can be used to study tissue microstructure, e.g., white matter connectivity in the brain in vivo. It has attracted considerable attention during the past few years. In this dissertation, novel solutions to two important problems in DT-MRI analysis are presented: diffusion tensor field (also known as diffusion tensor image, DTI) restoration and segmentation.

For DTI restoration, we first develop a model for estimating the accuracy of a nonlinear estimator used in estimating the apparent diffusivity coefficient (ADC). We also use this model to design optimal diffusion weighting factors by accounting for the fact that the ground truth of the ADC is a distribution instead of a single value. The proposed method may be extended to study the statistical properties of DTI estimators and to design corresponding optimal acquisition parameters.

We then present a novel constrained variational principle for simultaneous smoothing and estimation of the DTI from complex valued diffusion weighted images (DWIs). The constrained variational principle involves the minimization of a regularization term of Lp smoothness norms, subject to a nonlinear inequality constraint on the data. The data term we employ is the original Stejskal-Tanner equation instead of


the linearized version usually employed in the literature. The complex valued nonlinear form leads to a more accurate estimate of the DTI (when compared to the linearized version). The inequality constraint requires that the nonlinear least squares data term be bounded from above by a known tolerance factor. Finally, in order to accommodate the positive definite constraint on the diffusion tensor, the tensor is expressed in terms of its Cholesky factors and estimated in that form. The constrained variational principle is solved using the augmented Lagrangian technique in conjunction with the limited memory quasi-Newton method.

For DTI segmentation, we present a novel definition of tensor "distance" grounded in concepts from information theory and incorporate it in the segmentation of DTI. The diffusion tensor is a symmetric positive definite (SPD) tensor at each point of a DTI and can be interpreted as the covariance matrix of a local Gaussian distribution. Thus, a natural measure of dissimilarity between diffusion tensors would be the Kullback-Leibler (KL) divergence or a relative of it. In particular, we define a tensor "distance" as the square root of the J-divergence (symmetrized KL) between the two Gaussian distributions corresponding to the tensors being compared. Our definition leads to novel closed form expressions for the "distance" itself and for the mean value of a DTI. We then incorporate this new tensor "distance" in a region based active contour model for DTI segmentation, and the closed form expressions we derived greatly simplify the computation.


CHAPTER 1
INTRODUCTION

In their seminal work [1], Basser et al. introduced diffusion tensor magnetic resonance imaging (DT-MRI) as a new MRI modality from which anisotropic water diffusion can be inferred quantitatively. Since then, DT-MRI has become a powerful method to study tissue microstructure, e.g., white matter connectivity in the brain in vivo. DT-MRI analysis consists of a variety of interesting problems: diffusion weighted image (DWI) acquisition parameter optimization, diffusion tensor field (also known as diffusion tensor image, DTI) restoration, DTI segmentation, DTI registration, DTI atlas construction, fiber tracking and visualization. Since this research area is still very young, most questions raised by the above problems have not been solved yet. In this chapter, the basics of MRI and DT-MRI are reviewed, followed by a literature review of the state of the art in DT-MRI analysis.

1.1 ABC of Magnetic Resonance Imaging

Magnetic resonance imaging (MRI) is a powerful noninvasive imaging modality used in the medical sciences mainly to "see" the soft tissues inside the human body. It has its beginnings in the early 1970s and is based on the physics of nuclear magnetic resonance, which was thoroughly studied in the first half of the 20th century. Here only the most fundamental concepts and ideas in MRI are briefly presented; readers are referred to Haacke et al. [2] for an excellent and complete coverage of this topic.

1.1.1 Nature of a Proton Spin with a Magnetic Field

1H or proton MR is the most common technique for imaging biological systems due to the high concentration of protons in the human body and their high sensitivity. A classical view of a proton is a tiny spinning positive charge. The high angular rotation of


Figure 1.1: Proton spin and its interaction. (a) Proton spin. (b) Interaction of a proton spin with an external magnetic field.

a proton causes a circular current loop I, which in turn brings a magnetic moment μ (see Fig. 1.1(a)). As shown in Fig. 1.1(b), a spinning proton immersed in an external magnetic field B_0 will have a precessional motion around the field direction if μ is not aligned with B_0. The frequency of this precession is called the Larmor frequency and is given by

    ω_0 = γ B_0    (1.1)

where the constant γ is called the gyromagnetic ratio.

1.1.2 Net Magnetization and Its Detection

Consider an object that contains numerous protons and place it in a large static magnetic field. Eventually the precessing protons will be aligned with the external magnetic field; however, the alignments for a group of protons might be parallel or antiparallel due to thermal agitation at the absolute temperature T. Still, the excess of parallel alignments will bring a net equilibrium magnetization M_0 that is proportional to the spin density ρ_0 everywhere (Fig. 1.2).

Even in a large body the non-vanishing net magnetization is not significant enough to be detected when it is aligned with the applied magnetic field. However,


Figure 1.2: Equilibrium magnetization of proton spins. The black colored spins are parallel while the gray colored ones are antiparallel to the external magnetic field.

as shown in Fig. 1.3, the magnetization vector can be tipped away by a nearby radiofrequency (rf) magnetic pulse; it then precesses around the static magnetic field at the frequency given by equation (1.1). It is now possible to detect this precessing magnetization using a closely placed coil in which an electronic signal is induced and measured.

There are three tissue properties that have an important impact on the strength of the signal. One of them is the spin density; the other two are the longitudinal relaxation time T_1 and the transversal relaxation time T_2. Both T_1 and T_2 are tissue-specific time constants for protons and measure the rate of change in net magnetization. While T_1 determines the time taken for spinning protons to realign with the external magnetic field, T_2 measures the decay of magnetization perpendicular to the main magnetic field.

1.1.3 Magnetic Resonance Imaging

Imaging aims to relate the signal with the spatial localization of various factors. This is achieved by using a spatially variant magnetic field that leads to a spatially changing distribution of resonance frequencies ω(x) = γ(B_0 + G·x), where G is the gradient of the magnetic field. Now the measured signal contains a spectrum of


Figure 1.3: Detection of net magnetization. (a) Net magnetization is tipped away by an rf pulse. (b) The precessing net magnetization is detected by a nearby coil.

Table 1.1: MRI properties of different tissues.

        white matter   gray matter   water    bone   air    tumor    infarct
  T1    bright         gray          dark     dark   dark   dark     dark
  T2    bright         interm.       bright   dark   dark   bright   bright

received signals:

    S(t) = \int M(x) e^{-i γ G·x t} dx    (1.2)

where M(x) is the magnetization, which is a combination of the spin density, longitudinal relaxation, transversal relaxation and other factors like diffusion. M(x) can be computed by inverting the received signals using the Fourier transformation:

    M(x) = \int S(k) e^{i k·x} dk    (1.3)

where k = γGt/(2π) is called the reciprocal space vector, introduced to make the Fourier transformation obvious. In practice, a particular slice in a 3D volume is selected using a magnetic field whose magnitude changes along a certain direction.

Different kinds of contrast images can be derived from the effective spin density. Table 1.1 shows the imaging properties of various normal and abnormal tissues in the brain [3], with the abbreviation interm. referring to an intermediate brightness value between dark and bright.
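Equation (1.3) says, in essence, that the image is recovered from the acquired signal by an inverse Fourier transform over the reciprocal (k-) space. A minimal numerical sketch of this idea on a synthetic 1D example (an illustration using NumPy, not drawn from the dissertation's data):

```python
import numpy as np

# Synthetic 1D "magnetization" profile: a simple box phantom.
m = np.zeros(64)
m[24:40] = 1.0

# Simulated acquisition: the discrete analogue of equation (1.2),
# sampling the signal over reciprocal (k-) space.
s = np.fft.fft(m)

# Reconstruction: the discrete analogue of equation (1.3).
m_rec = np.fft.ifft(s).real

assert np.allclose(m, m_rec)
```

In practice the k-space samples are corrupted by noise, which is one root cause of the noise in the diffusion weighted images discussed later.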


Figure 1.4: Ellipsoid visualization of diffusion tensors. (a) Anisotropic diffusion tensor. (b) Isotropic diffusion tensor.

1.2 Background of DT-MRI

1.2.1 Diffusion Process

Diffusion is a process of intermingling molecules as a result of random thermal agitation; in our context, it refers specifically to the random translational motion of water molecules in the part of the anatomy being imaged with MR. In three dimensional space, diffusivity can be described by a 3×3 matrix D called the diffusion tensor, which is intimately related to the geometry and organization of the microscopic environment. The diffusion tensor D can be easily visualized as ellipsoids as shown in Fig. 1.4, where the axes of each ellipsoid correspond to the eigenvector directions of the diffusion tensor and are scaled by the eigenvalues. The color of each ellipsoid, ranging from blue to red, shows the lowest to highest degree of anisotropy. For example, diffusion tensors in a free water region are shown as large round blue ellipsoids, which are just blue spheres.

1.2.2 Diffusion Weighted Images

A diffusion weighted image S_l and the diffusion tensor D are related through the Stejskal-Tanner equation [1]:

    S_l = S_0 e^{-b_l : D} = S_0 e^{-\sum_{i=1}^{3} \sum_{j=1}^{3} b_{l,ij} D_{ij}}    (1.4)


where b_l is the diffusion weighting matrix of the l-th magnetic gradient and ":" denotes the generalized inner product for matrices. Equation (1.4) is a compact form suitable for mathematical analysis; some authors prefer to use LeBihan's b-factor and the diffusion sensitizing gradient direction g = [g_x, g_y, g_z]^T to reflect the imaging physics as follows: S_l = S_0 e^{-b g^T D g}. It is not hard to verify that

    b_{l,11} = b g_x^2,     b_{l,12} = b g_x g_y,   b_{l,13} = b g_x g_z,
    b_{l,21} = b g_y g_x,   b_{l,22} = b g_y^2,     b_{l,23} = b g_y g_z,
    b_{l,31} = b g_z g_x,   b_{l,32} = b g_z g_y,   b_{l,33} = b g_z^2.

The popular phase-encoding method for acquiring DT-MRI images yields complex measurements; thus, S_l and S_0 will be complex variables and equation (1.4) still holds in such cases. In the following text, we assume S_l = R_l + iI_l and S_0 = R_0 + iI_0 are complex variables, where R_l = real(S_l), I_l = imaginary(S_l), R_0 = real(S_0) and I_0 = imaginary(S_0). Taking the magnitude on both sides of equation (1.4) yields

    ||S_l|| = ||S_0|| e^{-b_l : D} = ||S_0|| e^{-\sum_{i=1}^{3} \sum_{j=1}^{3} b_{l,ij} D_{ij}}.    (1.5)

Taking the log on both sides of equation (1.5) leads to the following linearized Stejskal-Tanner equation:

    log(||S_l||) = log(||S_0||) - b_l : D = log(||S_0||) - \sum_{i=1}^{3} \sum_{j=1}^{3} b_{l,ij} D_{ij}.    (1.6)

Figure 1.5 shows a slice of the magnitude, real part and imaginary part of a DWI. Note that in all of the published literature to date, only the magnitude of the complex measurements has been considered, and in particular the linearized equation (1.6).
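Since b_l = b·gg^T is a rank-one matrix, the generalized inner product b_l : D collapses to LeBihan's scalar form b·g^T D g. A short sketch verifying this equivalence numerically (illustrative values only, not taken from the dissertation):

```python
import numpy as np

b = 1000.0                          # diffusion weighting factor (s/mm^2 scale)
g = np.array([1.0, 1.0, 0.0])
g = g / np.linalg.norm(g)           # unit diffusion sensitizing direction

b_matrix = b * np.outer(g, g)       # b_{l,ij} = b * g_i * g_j

D = 1e-3 * np.array([[1.7, 0.0, 0.0],   # synthetic anisotropic tensor
                     [0.0, 0.3, 0.0],   # (units of 1e-3 mm^2/s)
                     [0.0, 0.0, 0.3]])

lhs = np.sum(b_matrix * D)          # generalized inner product b_l : D
rhs = b * (g @ D @ g)               # LeBihan's form b * g^T D g
assert np.isclose(lhs, rhs)

S0 = 100.0
S_l = S0 * np.exp(-lhs)             # signal predicted by equation (1.4)
```

Because b_l is determined entirely by b and g, a DWI acquisition protocol is specified by a list of (b, g) pairs, one per measured image.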


Figure 1.5: A slice of DWI taken from a normal rat brain. (a) Magnitude. (b) Real part. (c) Imaginary part. Negative values in the real part and the imaginary part are shown in blue.

1.2.3 Applications of DT-MRI

Given several non-collinear diffusion weighted intensity measurements, D can be estimated via multivariate regression models from equation (1.4) or its variants such as the linearized version in equation (1.6). To improve the accuracy, different b values are also employed to acquire the raw DWIs. Figure 1.6 shows the complex valued DWIs in different directions and with different b values; the changes in the DWIs with different b values are clearly evident. Though the difference in the DWIs is not as obvious with changing directions, it is still visible inside the corpus callosum, where the diffusion tensor is anisotropic, since the value of a DWI at a pixel changes with direction only when the diffusion tensor at that location is anisotropic.

A general principle is that water diffuses preferentially along ordered tissues, e.g., the brain white matter; thus, diffusion anisotropy can be computed to show microstructural and physiological features of tissues [4]. Especially in highly organized nerve tissue, like white matter, the diffusion tensor provides a complete characterization of the restricted motion of water through the tissue, which can be used to infer fiber tracts. Another possible application of the diffusion tensor field is segmentation of anisotropic regions.
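The multivariate regression idea can be made concrete with the linearized equation (1.6): each DWI measurement contributes one linear equation relating log||S_l|| to log||S_0|| and the six independent tensor entries, so with at least seven well-chosen measurements the system can be solved by least squares. A minimal sketch on noise-free synthetic data (an illustration of the log-linear estimator, not the estimator developed later in this dissertation):

```python
import numpy as np

def design_row(b, g):
    # Unknown vector: [log S0, Dxx, Dyy, Dzz, Dxy, Dxz, Dyz].
    # Off-diagonal terms carry a factor of 2 because D is symmetric.
    gx, gy, gz = g
    return [1.0, -b * gx * gx, -b * gy * gy, -b * gz * gz,
            -2 * b * gx * gy, -2 * b * gx * gz, -2 * b * gy * gz]

rng = np.random.default_rng(0)
D_true = 1e-3 * np.array([[1.7, 0.1, 0.0],
                          [0.1, 0.3, 0.0],
                          [0.0, 0.0, 0.3]])
S0 = 100.0

# One b = 0 image plus eleven non-collinear directions at b = 1000.
# (The b = 0 image is needed: with a single nonzero b value alone,
# log S0 and the trace of D are not separable.)
bvals = np.concatenate([[0.0], np.full(11, 1000.0)])
gs = rng.normal(size=(12, 3))
gs = gs / np.linalg.norm(gs, axis=1, keepdims=True)

A = np.array([design_row(b, g) for b, g in zip(bvals, gs)])
y = np.array([np.log(S0) - b * (g @ D_true @ g) for b, g in zip(bvals, gs)])

x, *_ = np.linalg.lstsq(A, y, rcond=None)
log_S0, Dxx, Dyy, Dzz, Dxy, Dxz, Dyz = x
```

With noise-free data this recovers D_true exactly; with noisy magnitude data the log transform distorts the noise distribution, which is one motivation for the nonlinear estimation pursued in later chapters.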


Figure 1.6: A slice of DWI taken from a normal rat brain with various diffusion weighting factors. Left to right: fixed direction, varying b values. Top to bottom: fixed b value, changing directions.

DT-MRI has recently become a powerful method to study tissue microstructure, e.g., white matter connectivity in the brain in vivo. To this end, Fig. 1.7 is an example depicting a fiber tract map that was traced using the diffusion tensor image inside the corpus callosum of a human brain and is visualized as colored stream tubes [5].

A general framework of DT-MRI analysis is shown in Fig. 1.8. The first step of DT-MRI analysis is the optimal acquisition of the diffusion weighted images, which is of particular interest to imaging physicists as well as analysts. The second step is the restoration of the DTI and may involve both estimation and smoothing. Given the restored DTI, a large amount of published work in the literature has focused on fiber tracking and visualization. However, with the rich information contained in the DTI, it is also beneficial to segment structures which might not be easy to extract from standard MRI images. This segmentation can also assist in focusing on a smaller region of interest (ROI) for fiber tracking, a useful task in many clinical


Figure 1.7: Fiber tract maps computed from diffusion tensor imaging, visualized as colored stream tubes.

applications. Finally, it is crucial to validate the tracked fiber pathways against higher resolution images, such as histological images [6].

1.3 Literature Review

1.3.1 Optimal b Values

In DT-MRI, the most important control parameters in data acquisition are the diffusion weighting factors (a.k.a. b-values) shown in equation (1.4). By choosing these b-values carefully, one can obtain high quality diffusion weighted images without additional cost. Xing et al. [7] proposed a scheme to find optimal b-values for measurement of the apparent diffusion coefficient (ADC) based on the use of two diffusion weighted images acquired with b_1 and b_2. Armitage and Bastin [8] extended Xing's result to DT-MRI. However, in [7] and [8], the reported methods use the linear equation (1.6). Though not aiming at optimizing b values, Pierpaoli and Basser [9], Basser and Pajevic [10], Anderson [11] and others have studied the effects of noise on


Figure 1.8: A general framework of DT-MRI analysis: from raw data to fiber tract map computation, visualization and segmentation.

DTI estimation. The earlier works [9, 10] are based on numerical simulations while the latest work [11] aims to provide a theoretical analysis. Anderson [11] uses a power series expansion of equation (1.5) to first order to obtain a noise model for the linearized equation and then evaluates the perturbation of the estimated eigenvalues and eigenvectors. Just recently, Brihuega-Moreno et al. [12] aimed to optimize diffusion measurements by minimizing the Cramer-Rao lower bound (CRLB), which is a function of the diffusion weighting factors. Their approach is independent of the estimation method; however, this poses a practical difficulty in choosing an estimator which can achieve the Cramer-Rao lower bound.

1.3.2 DTI Restoration

Currently there are two popular approaches. One involves smoothing the magnitude of the raw data ||S_l|| while preserving relevant detail and then estimating the


diffusion tensor D from the smoothed raw data [13, 14]. The raw data in this context consist of several diffusion weighted images acquired for varying magnetic field strengths and directions. Note that at least seven values at each 3D grid point in the data domain are required to estimate the six unknowns in the 3×3 symmetric tensor D and the one scale parameter ||S_0||. The raw data smoothing or de-noising can be formulated using variational principles, which in turn require the solution of PDEs, or at times directly using PDEs which are not necessarily arrived at from variational principles [15, 16, 17, 18, 19, 20].

Another approach to restoring the DTI is to smooth the dominant eigenvector (also referred to as the principal diffusion direction) after the diffusion tensor has been estimated from the raw noisy measurements ||S_l||. In Poupon et al. [21], an energy function based on a Markovian model was used to regularize the noisy dominant eigenvector field computed directly from the noisy estimates of D obtained from the measurements ||S_l|| using the linearized Stejskal-Tanner equation (1.6). Coulon et al. [22] proposed an iterative restoration scheme for the principal diffusion direction based on the direction map restoration work reported in Tang et al. [23]. Other sophisticated vector field restoration methods [24, 25, 26] can potentially be applied to the problem of restoring the dominant eigenvector fields computed from the noisy estimates of D. Weickert [27] used a nonlinear diffusion technique to restore a given tensor field. Their scheme was a generalization of the nonlinear anisotropic regularization of vector-valued images to the matrix-valued image case. The regularization term there involved the trace of a penalized measure applied to the sum of the component-wise structure tensors of the given matrix. Recently, Chefd'Hotel et al. [28, 29] presented an elegant geometric solution to the problem of smoothing a noisy D that was computed from ||S_l|| using the log-linearized model (1.6) described above. They assume that the given (computed) tensor field D from ||S_l|| is positive definite and develop a clever approach based on the differential geometry of manifolds to achieve constrained smoothing, where


the smoothed tensor field is constrained to be positive semi-definite. Interesting results of mapped fibers are shown for human brain MRI.

The idea of simultaneous estimation and smoothing of the diffusion tensors from DWIs was pioneered by our earlier work [30] and improved upon in [31]. This improvement involved methods to overcome the problem of manual choice of the regularization control parameters. In both of these works [30, 31], the estimated smooth tensor field was guaranteed to be positive semi-definite. Moreover, these works were the first reported use of the nonlinear Stejskal-Tanner equation in the estimation of diffusion tensors. Recently, in Tschumperlé and Deriche [32], a robust version of the linearized Stejskal-Tanner equation is used as the data term along with a robust regularization term in a unified variational principle to estimate a smooth D from the noisy signal measurements. Note that the data term uses a linearized version of the Stejskal-Tanner equation as in earlier works [22, 28, 21, 14].

1.3.3 DTI Segmentation

One key factor in tensor field analysis is a proper choice of a tensor distance that measures the similarity or dissimilarity between tensors; this choice is particularly important in DTI segmentation and registration. In general, any kind of matrix norm can be used to induce a tensor distance. One such example is the tensor Euclidean distance obtained by using the Frobenius norm. Due to its simplicity, the tensor Euclidean distance has been used extensively in tensor field restoration [28, 27]. Alexander et al. [33] considered a number of tensor similarity measures for matching diffusion tensor images and concluded empirically that the Euclidean difference measure yields the best results. Though not many sophisticated tensor distances have been proposed in tensor field analysis, there are quite a few in the machine learning literature. Miller and Chefd'hotel [34] proposed an interesting measure on transformation groups to design an invariant kernel for non-parametric density estimation. What is most closely related to our present work


was proposed by Tsuda et al. [35]. They introduced information geometry in the space of positive definite matrices to derive a Kullback-Leibler divergence for two matrices and then used it in an "em" algorithm (not the well known expectation maximization EM) to approximate an incomplete kernel.

In the context of DT-MRI segmentation, Zhukov et al. [36] recently proposed a level set segmentation method which is in fact a segmentation of a scalar anisotropy measure of the diffusion tensor. The fact that a scalar field computed from the diffusion tensor field is used implies that they have ignored the direction information contained in the tensor field. Thus, this method will fail if two homogeneous regions of a tensor field have the same anisotropy property but are oriented in totally different directions. To the best of our knowledge, before the acceptance of our work in [37], the only published work which aims to segment matrix-valued images was reported in Feddern et al. [38]. In their work, Feddern et al. extended the concept of a structure tensor to a tensor field and then used it to extend geodesic active contours to tensor fields. The stopping criterion in this case is chosen as a decreasing function of the trace of the sum of the structure tensors formed from the individual elements of the given tensor field. This amounts to taking the Frobenius norm of the tensor field formed by the gradient magnitudes of the individual channels of the given tensor field. This scheme is a gradient based active contour (snake) whose performance is lacking in the absence of significant gradients in the measurements. Moreover, the norm used there is not invariant to affine transformations of the input coordinates on which the original tensor field is defined. Affine invariance is a very desirable property for a segmentation scheme when dealing with data sets obtained from different hardware or from subjects exhibiting anatomical changes due to the development of pathology, etc., in medical imaging applications.


In [37], we applied a region based model for tensor field segmentation by incorporating a tensor distance based on the matrix Frobenius norm. Simultaneously, Rousson et al. [39] extended the classical surface evolution segmentation model by incorporating region statistics defined on the tensor fields for DT-MRI segmentation. In both works [37, 39], the diffusion tensor is treated as a simple matrix and every component is treated equally. However, the diffusion tensor is actually the covariance matrix of a local diffusion process, and different components have different importance/weight. In [40], we were the first to use this fact in tensor field segmentation. In particular, we proposed a novel tensor "distance" based on information theory and incorporated it in region based models for tensor field segmentation. Very recently, the concepts presented therein have been extended by Lenglet et al. [41] to the case of regions with piecewise constant non-unit variances. They also use the Fisher information metric on the manifold of local Gaussians to achieve segmentation of the diffusion tensor field.

1.4 Contributions

Statistical Analysis of a Nonlinear Estimator for ADC and Its Application to Optimizing b Values. We develop a model for estimating the accuracy of a nonlinear estimator used in computing the apparent diffusivity coefficient (ADC), which provides useful information about the structure of the tissue being imaged with diffusion weighted MR. Further, we study the statistical properties of the nonlinear estimator and use them to design optimal diffusion weighting factors. Specifically, we show that a weighted linear estimator can well approximate the nonlinear estimator and thus can be used to analyze the statistical properties of the nonlinear estimator. Furthermore, to account for the fact that the ground truth of the ADC is a distribution instead of a single value, a weighted coefficient of variation (COV) is proposed as the criterion to be minimized for the determination of the


optimal diffusion weighting factors. Our model has the potential to be extended to analyze the statistical properties of a nonlinear estimator for the diffusion tensor and thus to obtain the optimal b values for DTI acquisition.

A Constrained Variational Principle for DTI Restoration. We present a novel constrained variational principle for simultaneous smoothing and estimation of the diffusion tensor field from complex valued diffusion weighted images (DWIs). The constrained variational principle involves the minimization of a regularization term using an Lp smoothness norm, subject to a nonlinear inequality constraint on the data. The data term we employ is the original Stejskal-Tanner equation instead of the linearized version usually employed in the literature. We empirically show that the complex valued nonlinear form leads to a more accurate estimate of the DTI (when compared to the linearized version). The inequality constraint requires that the nonlinear least squares data term be bounded from above by a known tolerance factor which can be computed from the data. Finally, in order to accommodate the positive definite constraint on the diffusion tensor, the tensor is expressed in terms of its Cholesky factors and estimated in that form. The constrained variational principle is solved efficiently by using the augmented Lagrangian technique in conjunction with the limited memory quasi-Newton method.

DTI Segmentation Using an Information Theoretic Tensor Dissimilarity Measure. We aim to segment the DTI using all the information contained in the tensors defining the field: not only the scalar anisotropy properties, but also the orientation information contained therein. The key step is a novel definition of a matrix distance which can be used to measure the heterogeneity of the diffusion tensor field quantitatively. We present a novel definition of tensor "distance" grounded in concepts from information theory and incorporate it in

the segmentation of tensor-valued images. In DTI, the symmetric positive definite (SPD) diffusion tensor at each point can be interpreted as the covariance matrix of a local Gaussian distribution. Thus, a natural measure of dissimilarity between diffusion tensors would be the Kullback-Leibler (KL) divergence or one of its relatives. In particular, we define a tensor "distance" as the square root of the J-divergence (symmetrized KL divergence) between the two Gaussian distributions corresponding to the tensors being compared. Our definition leads to novel closed-form expressions for the "distance" itself and for the mean value of a DTI. Unlike the traditional Frobenius norm-based tensor distance, our "distance" is affine invariant, a desirable property in many applications. We then incorporate this new tensor "distance" in a region-based active contour model for DTI segmentation, and the closed-form expressions we derived greatly simplify the computation. To our knowledge, this is the first use of a region-based active contour model for diffusion tensor field segmentation.

CHAPTER 2
STATISTICAL ANALYSIS OF A NONLINEAR ESTIMATOR FOR ADC AND ITS APPLICATION TO OPTIMIZING b VALUES

2.1 Introduction

Prior to the invention of DT-MRI, the apparent diffusion coefficient (ADC) was used intensively as a contrast mechanism in clinical MRI. The principle underlying both ADC and DT-MRI is the sensitivity of the magnetic resonance (MR) signal to the random motion of water molecules. In the case of ADC, the diffusion weighted image (DWI) is likewise given by the Stejskal-Tanner equation:

$$S = S_0 e^{-bD} \qquad (2.1)$$

where $D$ is the ADC and $S_0$ is the non-diffusion-weighted image. DWIs are often corrupted by noise; the noise model for complex-valued data is [42]

$$S = S_0 e^{-bD} + n_r + i n_i \qquad (2.2)$$

where $n_r$ and $n_i$ are uncorrelated Gaussian noise terms with zero mean and standard deviation $\sigma$. Given a set of DWIs acquired with different b-values, the ADC can be estimated via linear or nonlinear regression. Linear estimators of the ADC and of the diffusion tensor have been well studied and used in designing optimal b-values; however, analysis of the nonlinear estimator has not been considered before. In this chapter, a statistical analysis of a nonlinear estimator for the ADC and its application to optimizing b-values are carried out. The method used here also sheds some light on how to analyze the nonlinear estimator for the diffusion tensor and design a corresponding optimal acquisition scheme.

2.2 Theory

2.2.1 A Nonlinear Estimator for ADC

Let $S_0 = R_0 + iI_0$ be the echo intensity without the magnetic gradient, $D$ be the ADC, and $S_k = R_k + iI_k$ be the measurement for the $k$-th magnetic gradient, $k = 1, \ldots, K$. Since $S_k = S_0 e^{-b_k D} + n_r + i n_i$ as in equation (2.2), the sum of squared errors (SSE) that measures the goodness of an estimate is

$$E(S_0, D) = \sum_k (R_k - R_0 e^{-b_k D})^2 + (I_k - I_0 e^{-b_k D})^2 \qquad (2.3)$$

The nonlinear estimate $(S_0^*, D^*)$ is then the solution of the nonlinear optimization problem

$$(S_0^*, D^*) = \arg\min_{S_0, D} E(S_0, D) \qquad (2.4)$$

Though there is no analytical form for the nonlinear estimate $(S_0^*, D^*)$, it can be computed efficiently by numerical methods. The Levenberg-Marquardt (LM) method [43] is ideal for nonlinear least squares problems like equation (2.4). Let

$$F(S_0, D) = \left[ R_1 - R_0 e^{-b_1 D}, \; \ldots, \; R_K - R_0 e^{-b_K D}, \; I_1 - I_0 e^{-b_1 D}, \; \ldots, \; I_K - I_0 e^{-b_K D} \right]^T$$

Then $E(S_0, D) = F(S_0, D)^T F(S_0, D)$.
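As a hedged illustration of this estimator, the residual vector $F$ can be handed directly to an off-the-shelf Levenberg-Marquardt solver; the sketch below uses SciPy's `least_squares` with `method='lm'` (an assumption of this example, not the dissertation's implementation), with parameters ordered as $(R_0, I_0, D)$:

```python
import numpy as np
from scipy.optimize import least_squares

def residuals(params, b, S_meas):
    """Residual vector F(S0, D): real and imaginary errors stacked, as in eq. (2.3)."""
    R0, I0, D = params
    model = (R0 + 1j * I0) * np.exp(-b * D)
    err = S_meas - model
    return np.concatenate([err.real, err.imag])

# Illustrative settings (b in s/mm^2, D in mm^2/s), with synthetic noisy data
b = np.array([100.0, 500.0, 1000.0])
rng = np.random.default_rng(1)
S_meas = (10 + 10j) * np.exp(-b * 1e-3) + rng.normal(0, 0.1, 3) + 1j * rng.normal(0, 0.1, 3)
fit = least_squares(residuals, x0=[9.0, 9.0, 5e-4], args=(b, S_meas), method='lm')
R0_hat, I0_hat, D_hat = fit.x
```

The starting point `x0` is a rough guess; for this well-conditioned problem LM converges to the least-squares minimum regardless of its exact value.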

The Jacobian matrix $J(S_0, D)$ of $F(S_0, D)$, with respect to the parameters $(R_0, I_0, D)$, is

$$J(S_0, D) = \begin{bmatrix} -e^{-b_1 D} & 0 & b_1 R_0 e^{-b_1 D} \\ \vdots & \vdots & \vdots \\ -e^{-b_K D} & 0 & b_K R_0 e^{-b_K D} \\ 0 & -e^{-b_1 D} & b_1 I_0 e^{-b_1 D} \\ \vdots & \vdots & \vdots \\ 0 & -e^{-b_K D} & b_K I_0 e^{-b_K D} \end{bmatrix}$$

Then the gradient $G(S_0, D)$ of $E(S_0, D)$ is given by

$$G(S_0, D) = 2 J(S_0, D)^T F(S_0, D)$$

If we denote the Hessian matrix of the $k$-th component of $F(S_0, D)$ by $H_k(S_0, D)$, then the Hessian matrix $H(S_0, D)$ of $E(S_0, D)$ is

$$H(S_0, D) = 2 J(S_0, D)^T J(S_0, D) + 2 \sum_{k} F_k(S_0, D) H_k(S_0, D)$$

where the sum runs over all $2K$ components of $F$. The essence of the LM method is to compute a search direction $d_n$ at iteration $n$ according to

$$(J_n^T J_n + \lambda_n I) d_n = -J_n^T F_n \qquad (2.5)$$

where $\lambda_n \geq 0$ is a control parameter. When $\lambda_n$ is zero, the search direction coincides with that of the Gauss-Newton method; as $\lambda_n$ goes to infinity, it approaches the steepest descent direction. With proper choices of $\lambda_n$, the LM method can behave like gradient descent when the current solution is far from the final solution and approximate the Gauss-Newton method when the current solution is close to it.
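A minimal sketch of the search-direction computation in eq. (2.5) (illustrative Python; `lm_step` and the toy data are assumptions of this example): for a linear residual $F(x) = Jx - y$, a single step with $\lambda = 0$ reduces to the Gauss-Newton step and lands exactly on the least-squares solution, while a very large $\lambda$ yields a scaled steepest-descent direction.

```python
import numpy as np

def lm_step(J, F, lam):
    """Solve (J^T J + lam*I) d = -J^T F for the LM search direction d, as in eq. (2.5)."""
    n = J.shape[1]
    return np.linalg.solve(J.T @ J + lam * np.eye(n), -J.T @ F)

# Toy linear least-squares problem: F(x) = J x - y
J = np.array([[1.0, 0.0], [0.0, 2.0], [1.0, 1.0]])
y = np.array([1.0, 2.0, 2.0])
x0 = np.zeros(2)
d = lm_step(J, J @ x0 - y, 0.0)   # lam = 0: Gauss-Newton limit
x1 = x0 + d
```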

2.2.2 Variance of the Nonlinear Estimate for ADC

The nonlinear estimate given in (2.4) can only be computed numerically; it is therefore impossible to compute its statistical properties analytically. We propose to use a weighted linear estimator to approximate the nonlinear estimator. Since the weighted linear estimator has a closed-form solution, we can analytically approximate the statistical behavior of the nonlinear estimator. The sum of weighted squared errors of the linearized form of the model in (2.1) is

$$E_w(\|S_0\|, D) = \sum_k w_k^2 \left( \log\|S_k\| - \log\|S_0\| + b_k D \right)^2 \qquad (2.6)$$

where $w_k = \|S_0 e^{-b_k D_g}\|$, $k = 1, \ldots, K$, are the weights and $D_g$ is the ground truth. A corresponding weighted linear estimate is given by

$$(\|S_0\|^*, D^*) = \arg\min_{\|S_0\|, D} E_w(\|S_0\|, D) \qquad (2.7)$$

The solution $(\|S_0\|^*, D^*)$ can be computed in closed form as

$$\begin{bmatrix} \ln\|S_0\|^* \\ D^* \end{bmatrix} = \gamma A \begin{bmatrix} \sum_k w_k^2 b_k \ln\|S_k\| \\ \sum_k w_k^2 \ln\|S_k\| \end{bmatrix} \qquad (2.8)$$

where

$$\gamma = \frac{1}{\left(\sum_k w_k^2\right)\left(\sum_k w_k^2 b_k^2\right) - \left(\sum_k w_k^2 b_k\right)^2}, \qquad A = \begin{bmatrix} -\sum_k w_k^2 b_k & \sum_k w_k^2 b_k^2 \\ -\sum_k w_k^2 & \sum_k w_k^2 b_k \end{bmatrix}$$
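The closed form (2.8) is short enough to transcribe directly; the following sketch (illustrative Python, not the dissertation's code; function name and settings are assumptions) implements it and, on noiseless data, recovers the ground truth exactly:

```python
import numpy as np

def weighted_linear_adc(S_mag, b, S0_mag, Dg):
    """Closed-form weighted linear ADC estimate, eq. (2.8).
    The weights w_k = |S0 exp(-b_k Dg)| depend on the ground truth Dg,
    so this is an analysis tool rather than a practical estimator."""
    w2 = (S0_mag * np.exp(-b * Dg)) ** 2
    y = np.log(S_mag)
    A1, A2, A3 = w2.sum(), (w2 * b).sum(), (w2 * b**2).sum()
    B1, B2 = (w2 * y).sum(), (w2 * b * y).sum()
    gamma = 1.0 / (A1 * A3 - A2**2)
    ln_S0 = gamma * (A3 * B1 - A2 * B2)
    D = gamma * (A2 * B1 - A1 * B2)
    return np.exp(ln_S0), D

b = np.array([100.0, 500.0, 1000.0])
S0_hat, D_hat = weighted_linear_adc(14.142 * np.exp(-b * 1e-3), b, 14.142, 1e-3)
```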

It is pointed out in [42] that $\|S_k\| \approx \|S_0\| e^{-b_k D} + n$ when $\mathrm{SNR} > 3$; then, using a Taylor expansion of $\ln\|S_k\|$, we have

$$\ln\|S_k\| \approx \ln(\|S_0\| e^{-b_k D} + n) \approx \ln(\|S_0\| e^{-b_k D}) + \frac{n}{\|S_0\| e^{-b_k D}} \qquad (2.9)$$

Thus $\ln\|S_k\|$ has approximate variance $\sigma^2 / (\|S_0\| e^{-b_k D})^2$, and we obtain

$$\mathrm{var}(D^*) = \frac{\sigma^2 \gamma^2}{\|S_0\|^2} \sum_k \left[ w_k^2 e^{b_k D} \sum_j (b_j - b_k) w_j^2 \right]^2 \qquad (2.10)$$

It is easy to prove that equation (2.10) coincides with the CRLB given in [12] for two b-values, and we found empirically that this also holds for more than two b-values; however, whether equation (2.10) and the CRLB given in [12] are actually identical remains to be explored. Note that this weighted linear estimator is NOT a realistic estimator, since the weights depend on the ground truth; however, it serves well as a tool for analyzing the nonlinear estimator, which is validated by the simulation experiments presented later.

2.2.3 Weighted COV of the Nonlinear Estimator for ADC

Equation (2.10) gives the variance of the weighted linear estimator when the ground truth is a single ADC value. However, the ground truth $D_g$ is usually a distribution instead of a single value, as shown in Fig. 2.1. Thus we propose to use the following weighted coefficient of variation (COV) of $D^*$:

$$\mathrm{COV}_w(D^*) = \int \mathrm{COV}(D^* \mid D_g)\, p(D_g)\, dD_g \qquad (2.11)$$

where $\mathrm{COV}(D^* \mid D_g) = \sqrt{\mathrm{Var}(D^* \mid D_g)} / D_g$ and $p(D_g)$ is the probability density function of the ground truth.
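Equation (2.10) is cheap to evaluate numerically. A sketch (illustrative Python; the function name and settings are assumptions of this example) that also exposes the exact $\sigma^2$ scaling of the variance:

```python
import numpy as np

def var_weighted_linear_D(S0_mag, D, b, sigma):
    """Variance of the weighted linear ADC estimate, eq. (2.10)."""
    w2 = (S0_mag * np.exp(-b * D)) ** 2
    gamma = 1.0 / (w2.sum() * (w2 * b**2).sum() - (w2 * b).sum() ** 2)
    # inner[k] = sum_j (b_j - b_k) w_j^2
    inner = np.array([(w2 * (b - bk)).sum() for bk in b])
    return (sigma * gamma / S0_mag) ** 2 * ((w2 * np.exp(b * D) * inner) ** 2).sum()

b = np.array([0.0, 1000.0, 2000.0])
v = var_weighted_linear_D(14.142, 1e-3, b, 0.05)
```

Since the formula is proportional to $\sigma^2$, doubling the noise level quadruples the predicted variance, which is a handy consistency check.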

Figure 2.1: Distribution of ADC values in the white matter and the gray matter of a normal rat brain. [Plot of probability density versus $D_g$ for the two tissue types.]

2.2.4 Optimal Diffusion Weighting Factors

The weighted COV of $D^*$ in equation (2.11) depends on the ground truth distribution, the total number of measurements and the choice of diffusion weighting factors. The ground truth distribution cannot be controlled. Though increasing the total number of measurements will decrease the weighted COV, it will also increase the acquisition time, which is not desirable. When the total number of measurements is fixed, what one can do is choose the diffusion weighting factors carefully so as to minimize the weighted COV.

2.3 Results

2.3.1 Simulation Results of Different Estimators

The simulation procedure that we developed consists of the following steps:

1. Choose a ground truth $D_g$ and $S_0$.
2. Choose a set of diffusion weighting factors $b_1, \ldots, b_N$.
3. Generate $n$ complex measurements using (2.2) with the selected set of b-values.

4. Estimate $D$ using the nonlinear estimator in (2.4), the weighted linear estimator in (2.7) and the linear estimator of Xing et al. [7], respectively.

The last two steps are executed for several trials. The following settings were used in our simulation: $D_g = 0.001\ \mathrm{mm^2/s}$, $S_0 = 10 + 10i$, $b_1 = 100\ \mathrm{s/mm^2}$, $b_2 = 500\ \mathrm{s/mm^2}$, $b_3 = 1000\ \mathrm{s/mm^2}$, and $\sigma = 0.5$ or $\sigma = 1.0$. The results of the simulation are shown in Fig. 2.2. It is evident that the nonlinear estimator yields the best results; the weighted linear least squares estimator produces outcomes similar to those of the nonlinear estimator, while the linear least squares model does not perform as well as the other two. This indicates that the nonlinear estimator should be used for better accuracy.

2.3.2 Variance of Different Estimates

The variance of the nonlinear estimate can only be estimated by simulation. The simulation is similar to the one described earlier; however, the total number of trials is set to 2000 for a stable and accurate estimate of the variance. The variance of the linear (LSQR) estimate is given in Bito et al. [44], and that of the weighted linear estimate (weighted LSQR) can be computed from equation (2.10). The settings used here are: $D_g = 0.001\ \mathrm{mm^2/s}$, $S_0 = 10 + 10i$, $b_1 = 0$, $b_3 = 2000\ \mathrm{s/mm^2}$, $b_2$ varying from 0 to $3000\ \mathrm{s/mm^2}$, and $\sigma = 0.05$. The results of the computation are shown in Fig. 2.3. It is clear that the variance of the weighted linear estimate is a good approximation of that of the nonlinear estimate. Also, in most cases the nonlinear estimate yields a smaller variance than the linear estimate; the two have the same variance only when $b_2 = b_1$ or $b_2 = b_3$.
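The simulation procedure above can be sketched in a few lines for the plain (unweighted) log-linear estimator alone (illustrative Python; this is a simplified stand-in for the three estimators compared in the text, and the helper names are assumptions of this example). Running it at the two noise levels used above shows the spread of the estimates growing with the noise:

```python
import numpy as np

def loglinear_D(S_mag, b):
    """Plain log-linear ADC estimate: minus the slope of ln|S| versus b."""
    slope, _ = np.polyfit(b, np.log(S_mag), 1)
    return -slope

def run_trials(sigma, n_trials=2000, seed=0):
    """Steps 1-4 of the simulation, for the log-linear estimator only."""
    rng = np.random.default_rng(seed)
    b = np.array([100.0, 500.0, 1000.0])   # s/mm^2, as in the text
    S0, Dg = 10 + 10j, 1e-3                # ground truth, D in mm^2/s
    est = np.empty(n_trials)
    for t in range(n_trials):
        S = S0 * np.exp(-b * Dg) + rng.normal(0, sigma, 3) + 1j * rng.normal(0, sigma, 3)
        est[t] = loglinear_D(np.abs(S), b)
    return est.mean(), est.std()

m05, s05 = run_trials(0.5)
m10, s10 = run_trials(1.0)
```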

Figure 2.2: Simulation results of different ADC estimates. Top: simulation results when $\sigma = 0.5$. Bottom: simulation results when $\sigma = 1.0$. [Each row plots the estimated $D$ over ten trials: linear vs. complex nonlinear on the left, weighted linear vs. complex nonlinear on the right.]

Figure 2.3: Variance of different ADC estimates when $\sigma = 0.05$. [Plot of $\mathrm{var}(D^*)$ versus $b_2$ for the LSQR, weighted LSQR and complex NLSQR (simulation) estimates.]

2.3.3 Weighted COV and Optimal Diffusion Weighting Factors

It is easy to compute the weighted COV in equation (2.11) by numerical integration, given the ground truth ADC distribution and the diffusion weighting factors, and it is also possible to find optimal diffusion weighting factors numerically. Currently we use the medium-scale sequential quadratic programming (SQP) algorithm in Matlab as the numerical optimization procedure to achieve this. For our first experiment, we use the typical ADC range found in the human brain as given in Brihuega-Moreno et al. [12] and assume the distribution to be uniform. In this case, we can use the dimensionless value $\beta$, defined as $\min(D_g) \cdot b$. The optimal $\beta$-values with up to five measurements are summarized in Table 2.1. Notice that our results are quite different from [12]; these differences are caused by the different precision of the numerical integration techniques used to compute the weighted COV. Our integration has higher accuracy than the one

used in [12] and hence yields more accurate results. For our second experiment, we use real distributions of the ADC values estimated from a normal rat brain; histograms of the ADC values from different parts of the brain are used as these distributions. They are not uniform, as is evident from Fig. 2.1. Results of the weighted COV versus $b_2$ and $b_3$, using $(0, b_2, b_3)$ as diffusion weighting factors, are shown in Fig. 2.4. Furthermore, the optimal b-values with up to five measurements are summarized in Table 2.2. Notice that we get two distinct b-values as the optimal diffusion weighting factors for these real distributions.

Table 2.1: Optimum $\beta$-values for the typical ADC range of human brains with multiple measurements. $m$ is the total number of measurements; the number of measurements at a particular $\beta$-value is shown in brackets when it is more than one; the unit of the $D$ values is $\mathrm{mm^2/s}$.

    D range [0.3e-3, 3e-3]              D range [0.1e-3, 3e-3]
    m   beta1   beta2      beta3        m   beta1   beta2      beta3
    2   0       0.210                   2   0       0.080
    3   0       0.223(2)                3   0       0.055      0.170
    4   0       0.190(2)   0.380        4   0       0.064(2)   0.276
    5   0       0.201(3)   0.519        5   0       0.070(3)   0.328

Table 2.2: Optimum diffusion weighting factors with multiple measurements for normal rat brains. $m$ is the total number of measurements; the number of measurements at a particular b-value is shown in brackets when it is more than one; the unit of the b-values is $\mathrm{s/mm^2}$.

    Grey matter                         White matter
    m   b1     b2                       m   b1   b2
    2   0      2370                     2   0    2830
    3   0      2530(2)                  3   0    3010(2)
    4   0      2650(3)                  4   0    3150(3)
    5   0(2)   2460(4)                  5   0    3250(4)
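The optimization described above can also be sketched in Python; the dissertation uses Matlab's medium-scale SQP, so the following use of SciPy's bounded scalar minimization for the two-measurement case $(0, b_2)$ with a uniform ground-truth distribution is an assumption of this example, not the original procedure:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def var_D(S0_mag, D, b, sigma):
    """Variance of the weighted linear ADC estimate, eq. (2.10)."""
    w2 = (S0_mag * np.exp(-b * D)) ** 2
    gamma = 1.0 / (w2.sum() * (w2 * b**2).sum() - (w2 * b).sum() ** 2)
    inner = np.array([(w2 * (b - bk)).sum() for bk in b])
    return (sigma * gamma / S0_mag) ** 2 * ((w2 * np.exp(b * D) * inner) ** 2).sum()

def weighted_cov(b2, D_lo=0.3e-3, D_hi=3e-3, sigma=0.05, S0_mag=14.14, n=60):
    """Eq. (2.11) for measurements at (0, b2), uniform p(D_g) on [D_lo, D_hi]."""
    b = np.array([0.0, float(b2)])
    Dg = np.linspace(D_lo, D_hi, n)
    return np.mean([np.sqrt(var_D(S0_mag, D, b, sigma)) / D for D in Dg])

res = minimize_scalar(weighted_cov, bounds=(100.0, 3000.0), method='bounded')
beta_opt = 0.3e-3 * res.x   # dimensionless beta = min(D_g) * b
```

The weighted COV blows up both as $b_2 \to 0$ (the two measurements become degenerate) and as $b_2 \to \infty$ (the signal decays away), so the optimum is interior to the bracket.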

Figure 2.4: Weighted COV of the nonlinear ADC estimate versus the diffusion weighting factors. [Surface plots of $\mathrm{COV}_w(D^*)$ for the gray matter (first row) and the white matter (second row) versus $b_2$ and $b_3$, using $(0, b_2, b_3)$ as diffusion weighting factors.]

CHAPTER 3
A CONSTRAINED VARIATIONAL PRINCIPLE FOR DIRECT ESTIMATION AND SMOOTHING OF THE DTI FROM COMPLEX DWIS

3.1 Introduction

We were the first to propose the idea of simultaneous estimation and smoothing of the diffusion tensors from DWIs [30], and later presented a significant improvement in [31]. Here we further extend our model [31] to the restoration of the DTI from complex-valued diffusion weighted images. Specifically, we propose a novel formulation of DTI estimation and smoothing as a constrained optimization problem. The specific approach we use is the augmented Lagrangian technique, which allows one to deal with inequality constraints. The novelty of our formulation lies in the ability to directly, in a single-step process, estimate a smooth $D$ from the noisy complex measurements $S_l$ while preserving its positive definiteness. The formulation does not require any ad hoc methods of setting tuning parameters to achieve the solution. These are the key features distinguishing our solution from the methods reported in the literature to date. In contrast to our solution (described subsequently in detail), most of the earlier approaches used a two-step method involving (i) computation of $D$ from $\|S_l\|$ using a linear least-squares approach and then (ii) computation of a smoothed $D$, via either smoothing of the eigenvalues and eigenvectors of $D$ or the matrix-flows approach of Chefd'hotel et al. [28]. The problem with the two-step approach is that the $D$ estimated in the first step using the log-linearized model need not be positive definite or even semi-definite. Moreover, it is hard to trust the fidelity of the eigenvalues and eigenvectors computed from such matrices, even if they are smoothed subsequently, prior to mapping out the nerve fiber tracts.
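The positive definiteness preservation mentioned above is achieved later (section 3.2.3) through a Cholesky parameterization, $D = LL^T$. A minimal sketch of the idea (illustrative Python; the function name is an assumption of this example) shows that any unconstrained parameter vector maps to a symmetric positive semi-definite tensor:

```python
import numpy as np

def tensor_from_cholesky(params):
    """Map six unconstrained reals to D = L L^T with L lower triangular, so
    every parameter vector yields a symmetric positive semi-definite D."""
    lxx, lyx, lyy, lzx, lzy, lzz = params
    L = np.array([[lxx, 0.0, 0.0],
                  [lyx, lyy, 0.0],
                  [lzx, lzy, lzz]])
    return L @ L.T

D = tensor_from_cholesky([1.0, 0.2, 0.8, -0.1, 0.3, 0.5])
```

Because positivity is built into the parameterization, an optimizer can move freely in the six Cholesky parameters without ever producing an indefinite tensor.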

Briefly, our model seeks to minimize a cost function involving the sum of an $L_p$ norm of the gradient of the Cholesky factor $L$ (which ensures the positive definiteness of $D$ through the Cholesky factorization $D = LL^T$) and an $L_p$ norm of the gradient of $S_0$, subject to a nonlinear data constraint based on the complex (not linearized) Stejskal-Tanner equation (1.4). The model is posed as a constrained variational principle, which can be minimized by discretizing either the variational principle itself or the associated Euler-Lagrange equation. We choose the former and use the augmented Lagrangian method together with the limited memory quasi-Newton method to achieve the solution.

This chapter is organized as follows: in section 3.2 the detailed variational formulation is described, along with the nonlinear data constraint, the positive definite constraint and the augmented Lagrangian solution. Section 3.3 contains a detailed description of the discretization as well as an algorithmic description of the augmented Lagrangian framework. In section 3.4 we present experiments applying our model to synthetic as well as real data. Synthetic data experiments are conducted to compare our tensor field restoration results with the recently presented work of Coulon et al. [22]. Moreover, results comparing the use of the linearized Stejskal-Tanner model with its nonlinear form are presented as well.

3.2 Constrained Variational Principle Formulation

Our solution to the recovery of a piecewise smooth diffusion tensor field from the complex measurements $S_l$ is posed as a constrained variational principle. We seek to minimize a measure of lack of smoothness in $S_0$ and in the diffusion tensor $D$ being estimated, using an $L_p$ norm of the gradient of $S_0$ and an $L_p$ norm of the gradient of the Cholesky factor $L$. This measure is then constrained by a nonlinear data fidelity term related to the complex Stejskal-Tanner equation (1.4). The nonlinear data term appears in an inequality constraint, which requires that it be bounded from above by a possibly known tolerance factor. The positive definiteness constraint on the diffusion tensor

being estimated is achieved via the Cholesky factorization theorem from computational linear algebra. The constrained variational principle is discretized and posed using the augmented Lagrangian technique [43]. Each subproblem in the augmented Lagrangian framework is then solved using the limited memory quasi-Newton scheme [43]. The novelty of our formulation lies in the unified framework for recovering and smoothing the diffusion tensor field from the raw data $S_l$. In addition, to our knowledge, this is the first formulation which allows for simultaneous estimation and smoothing of $D$, as well as one in which the regularization parameter is not set in an ad hoc way.

Let $S_0(\mathbf{x}) = R_0(\mathbf{x}) + iI_0(\mathbf{x})$ be the complex DWI when no diffusion-encoding magnetic gradient is present, $D(\mathbf{x})$ be the unknown symmetric positive definite diffusion tensor, $LL^T(\mathbf{x})$ be the Cholesky factorization of the diffusion tensor with $L$ a lower triangular matrix, and $S_l(\mathbf{x}) = R_l(\mathbf{x}) + iI_l(\mathbf{x})$, $l = 1, \ldots, N$, be the complex DWIs measured after application of diffusion-encoding magnetic gradients of known strength and direction, where $N$ is the total number of measured DWIs. The constrained variational principle is

$$\min_{S_0, L} E(S_0, L) = \int_\Omega \left[ |\nabla R_0|^{p_1} + |\nabla I_0|^{p_1} + |\nabla L|^{p_2} \right] d\mathbf{x}$$
$$\text{s.t.} \quad C(S_0, L) = \alpha\sigma^2 - \int_\Omega \sum_{l=1}^{N} (F_{R_l}^2 + F_{I_l}^2)\, d\mathbf{x} \geq 0 \qquad (3.1)$$

where $\Omega$ is the image domain, the first and second terms in the variational principle are $L_p$ smoothness norms of the real and imaginary parts of $S_0$, and the third term is an $L_p$ smoothness norm of $L$, with $p_1 > 6/5$ for $R_0$ and $I_0$ and $p_2 \geq 1$ for $L$. Here $|\nabla L|^{p_2} = \sum_d |\nabla L_d|^{p_2}$, where $d \in \mathcal{D} = \{xx, yy, zz, xy, yz, xz\}$ indexes the six nonzero components of $L$. The lower bounds on the values of $p_1$ and $p_2$ are chosen so as to make the proof of existence of a solution of this minimization (see the theorem in section 3.2.4 and its proof) mathematically tractable. $\alpha$ is a constant scale factor and $\sigma^2$ is

the variance of the noise in the measurements $S_l$. $F_{R_l}$ and $F_{I_l}$ are defined as

$$F_{R_l} = R_l - R_0 e^{-b_l : LL^T}, \qquad F_{I_l} = I_l - I_0 e^{-b_l : LL^T}$$

3.2.1 The Complex Nonlinear Data Constraint

The Stejskal-Tanner equation (1.4) gives the relation between the complex diffusion weighted image $S_l$ and the diffusion tensor $D$. However, multivariate linear regression based on (1.6) has typically been used to estimate the diffusion tensor $D$ [1]. It was pointed out in [1] that these results agree with nonlinear regression based on the magnitude Stejskal-Tanner equation (1.5). However, if the signal to noise ratio (SNR) is low and the number of images $S_l$ is not very large (unlike in [1], where $N = 315$ or $N = 294$), the result of multivariate linear regression will differ significantly from that of nonlinear regression. A robust estimator belonging to the M-estimator family was used by Poupon et al. [21]; however, its performance is not discussed in detail. In Westin et al. [45], an analytical solution is derived from (1.6) by using a dual tensor basis; it should be noted, however, that this can only be used to compute the diffusion tensor $D$ when there is no noise in the measurements $S_l$ or the SNR is extremely high. Our aim is to provide an accurate estimate of the diffusion tensor $D$ for practical use, where the SNR may not be high and the total number of DWIs $N$ is restricted to a moderate number. The nonlinear data fidelity term based on the complex Stejskal-Tanner equation (1.4) is fully justified for use in such situations. This nonlinear data term is part of an inequality constraint that imposes an upper bound on the closeness of the measurements $S_l$ to the mathematical model $S_0 e^{-b_l : LL^T}$. The bound $\alpha\sigma^2$ may be estimated automatically [46, 47].
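The next subsection motivates $L_p$ smoothness norms with $p$ close to 1. A tiny numerical illustration of the underlying point (illustrative Python; the helper name and signals are assumptions of this example): for a monotone 1-D signal the total variation ($p = 1$) depends only on the endpoints, so a sharp step and a gradual ramp cost exactly the same, whereas the quadratic norm ($p = 2$) penalizes the step far more heavily and therefore smears edges:

```python
import numpy as np

def lp_gradient_energy(u, p):
    """Discrete 1-D analogue of an L_p smoothness term: sum_i |u_{i+1} - u_i|^p."""
    return np.sum(np.abs(np.diff(u)) ** p)

step = np.repeat([0.0, 1.0], 50)       # sharp edge in the middle
ramp = np.linspace(0.0, 1.0, 100)      # same total rise, spread over the signal
tv_step, tv_ramp = lp_gradient_energy(step, 1), lp_gradient_energy(ramp, 1)
l2_step, l2_ramp = lp_gradient_energy(step, 2), lp_gradient_energy(ramp, 2)
```

Both signals rise from 0 to 1, so `tv_step == tv_ramp` (up to rounding), while `l2_step` is roughly two orders of magnitude larger than `l2_ramp`: a quadratic regularizer strongly prefers the smeared ramp, a TV-like one does not.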

3.2.2 The $L_p$ Smoothness Norm

There are numerous image smoothing techniques; however, not many of them are suitable for preserving sharp discontinuities. Partial differential equation (PDE) based methods are among the few that can capture edges; in particular, total variation (TV) norm based restoration [18], introduced by Rudin et al., is an excellent PDE-based edge-preserving denoising method. TV restoration minimizes the total variation functional

$$TV(u) = \int_\Omega |\nabla u| \, d\mathbf{x} \qquad (3.2)$$

subject to the noise constraint

$$\|Au - z\|^2 \leq \sigma^2 \qquad (3.3)$$

where $z$ is a noisy scalar image defined on the domain $\Omega$, $u$ is the smoothed image, $A$ is a bounded linear operator (the identity in the case of image denoising), and $\sigma$ quantifies the amount of Gaussian noise added to the observed image. TV-based restoration can be extended to an $L_p$ smoothness norm based method [48] by replacing the total variation functional with

$$L_p(u) = \int_\Omega |\nabla u|^p \, d\mathbf{x} \qquad (3.4)$$

where $p \geq 1$. When $p = 1$, equation (3.4) becomes the total variation norm; when $p = 2$, it becomes the $H^1$ norm. In Blomgren et al. [48], it was shown that $L_p$ smoothness norm based restoration does not admit discontinuous solutions as the TV norm does when $p > 1$; however, when $p$ is chosen close to 1, its behavior in restoring edges is close to that of the TV norm. We adopt the $L_p$ smoothness norm in our constrained model. In particular, we need $p_1 > 6/5$ for regularizing $S_0$ and $p_2 \geq 1$ for $L$ to ensure the existence of the solution described in section 3.2.4. Note that what is of importance here is the estimation of the diffusion tensor $D$, and therefore the edge-preserving property

in the estimation process is more relevant for $D$ than for $S_0$. In our experiments, we choose $p_1 = 1.205$ for $S_0$ and $p_2 = 1.00$ for $L$.

3.2.3 The Positive Definite Constraint

In general, a matrix $A \in \mathbb{R}^{n \times n}$ is said to be positive definite if $x^T A x > 0$ for all $x \neq 0$ in $\mathbb{R}^n$. The diffusion tensor $D$ is a symmetric positive definite matrix, but due to the noise in the data $S_l$ it is hard to recover a $D$ that retains this property unless one includes it explicitly as a constraint. One way to impose this constraint is via the Cholesky factorization theorem, which states that: if $A$ is a symmetric positive definite matrix, then there exists a unique factorization $A = LL^T$, where $L$ is a lower triangular matrix with positive diagonal elements.

After the Cholesky factorization, we have transferred the positive definiteness constraint on the matrix $D$ to an inequality constraint on the diagonal elements of $L$. This is still hard to satisfy theoretically, because the set on which the minimization takes place is an open set. In practice, however, with finite precision arithmetic, testing for positive definiteness is equivalent to testing for positive semi-definiteness. This is because for any symmetric positive definite matrix $D$, its machine representation is $\tilde{D} = D + E$ with $\|E\| \leq \epsilon \|D\|$, where $\epsilon$ is a small multiple of the machine precision. When $D$ is a symmetric positive definite matrix of small norm, $\tilde{D}$ can become a semi-definite matrix; it follows that in finite precision arithmetic, testing for definiteness is equivalent to testing for semi-definiteness. Thus, we repose the positive definiteness constraint on the diffusion tensor matrix as $x^T \tilde{D} x \geq 0$, which is satisfied when $\tilde{D} = LL^T$.

3.2.4 Existence of a Solution

A justification for using the augmented Lagrangian method for constrained problems is given in Nocedal and Wright [43]; thus we only need to prove that there is a solution

for the following subproblem:

$$\min_{(S_0, L) \in \mathcal{A}} \mathcal{L}(S_0, L; \lambda, \mu) = \begin{cases} E(S_0, L) - \lambda C(S_0, L) + \frac{\mu}{2} C^2(S_0, L) & \text{if } C(S_0, L) \leq \lambda/\mu \\ E(S_0, L) - \frac{\lambda^2}{2\mu} & \text{otherwise} \end{cases} \qquad (3.5)$$

where $\lambda \geq 0$ is an estimate of the Lagrange multiplier, $\mu > 0$ is a penalty parameter and

$$\mathcal{A} = \{ (S_0, L) \mid L \in L^p(\Omega) \text{ where } p \geq 1, \text{ and } R_0, I_0 \in W^{1,p}(\Omega) \text{ where } p > 6/5 \}$$

Here $\Omega \subset \mathbb{R}^3$, $L^p(\Omega)$ denotes the space of functions with bounded $L^p$ norm, $L^2(\Omega)$ is the space of square integrable functions on $\Omega$, and $W^{1,p}(\Omega)$ denotes the Sobolev space of order $p$ on $\Omega$ [49]. Consider the augmented Lagrangian formulation (3.5), which serves as a subproblem of (3.1); the existence theorem will be stated and proved after the following two lemmas. Unless pointed out otherwise, the definitions and theorems employed in the proofs can be found in Evans [49].

Lemma 1. Let $\mathcal{A}_1 = \{ (S_0, L) \mid (S_0, L) \in \mathcal{A} \text{ and } C(S_0, L) \leq \lambda/\mu \}$ and suppose $R_l, I_l \in L^2(\Omega)$. Then the following minimization problem (3.6) has a solution $(S_0^*, L^*) \in \mathcal{A}_1$ if $\mathcal{A}_1 \neq \emptyset$:

$$\min_{(S_0, L) \in \mathcal{A}_1} \mathcal{L}(S_0, L; \lambda, \mu) = E(S_0, L) - \lambda C(S_0, L) + \frac{\mu}{2} C^2(S_0, L) \qquad (3.6)$$

Proof. We will verify the following three statements one by one and then prove the lemma:

- The first term $E(S_0, L)$ in (3.6) is lower semi-continuous with respect to $L$ in $L^p(\Omega)$, $p \geq 1$, and weakly lower semi-continuous with respect to $S_0$ in $W^{1,p}(\Omega)$, $p \geq 1$.
- The second term $C(S_0, L)$ in (3.6) is continuous with respect to $S_0$ in $W^{1,p}(\Omega)$ when $p > 6/5$ and is continuous with respect to $L$ in $L^p(\Omega)$ when $p \geq 1$.
- The third term $C^2(S_0, L)$ in (3.6) has the same continuity property as the second term.

Since $\mathcal{L}$ in (3.6) is bounded below, there exists a minimizing sequence $(S_0^n, L^n)$ for it, where $L_d^n \in L^p(\Omega)$, $d \in \mathcal{D}$, $p \geq 1$, $R_0^n$ and $I_0^n \in W^{1,p}(\Omega)$, $p > 6/5$, and $C(S_0^n, L^n) \leq \lambda/\mu$. Then $\{L_d^n\}_{n=1}^\infty$, $d \in \mathcal{D}$, is bounded in $L^p(\Omega)$, $p \geq 1$, and $\{R_0^n\}_{n=1}^\infty$ and $\{I_0^n\}_{n=1}^\infty$ are bounded in $W^{1,p}(\Omega)$. Therefore there exist a subsequence $\{(S_0^{n_k}, L^{n_k})\}$ and limits $L_d^* \in L^p(\Omega)$, $p \geq 1$, $d \in \mathcal{D}$, and $R_0^*, I_0^* \in W^{1,p}(\Omega)$, $p > 6/5$, such that as $n_k \to \infty$:

- $\{L_d^{n_k}\} \to L_d^*$, $d \in \mathcal{D}$, in $L^1(\Omega)$ and a.e. on $\Omega$ (from the compactness property of $L^p(\Omega)$, $p \geq 1$);
- $\{R_0^{n_k}\} \rightharpoonup R_0^*$ and $\{I_0^{n_k}\} \rightharpoonup I_0^*$ weakly in $W^{1,p}(\Omega)$ (from the weak compactness of $W^{1,p}$);
- $\{R_0^{n_k}\} \to R_0^*$ and $\{I_0^{n_k}\} \to I_0^*$ in $L^2(\Omega)$ and a.e. on $\Omega$ (from the Rellich-Kondrachov compactness theorem when $p > 6/5$ and $n = 3$; this is why we need $p > 6/5$).

1) Now we are ready to demonstrate the lower semi-continuity of the first term $E(S_0, L)$ in (3.6). By the lower semi-continuity of the $L^p$ norm in $L^p(\Omega)$, $p \geq 1$, we have

$$\int_\Omega |\nabla L^*|^{p_2} d\mathbf{x} = \int_\Omega \sum_{d \in \mathcal{D}} |\nabla L_d^*|^{p_2} d\mathbf{x} \leq \liminf_{n_k \to \infty} \int_\Omega \sum_{d \in \mathcal{D}} |\nabla L_d^{n_k}|^{p_2} d\mathbf{x} = \liminf_{n_k \to \infty} \int_\Omega |\nabla L^{n_k}|^{p_2} d\mathbf{x}$$

By the weak lower semi-continuity of the $L^p$ norm in $W^{1,p}(\Omega)$, we have

$$\int_\Omega |\nabla R_0^*|^{p_1} d\mathbf{x} \leq \liminf_{n_k \to \infty} \int_\Omega |\nabla R_0^{n_k}|^{p_1} d\mathbf{x}, \qquad \int_\Omega |\nabla I_0^*|^{p_1} d\mathbf{x} \leq \liminf_{n_k \to \infty} \int_\Omega |\nabla I_0^{n_k}|^{p_1} d\mathbf{x}$$

Thus

$$E(S_0^*, L^*) = \int_\Omega \left[ |\nabla R_0^*|^{p_1} + |\nabla I_0^*|^{p_1} + |\nabla L^*|^{p_2} \right] d\mathbf{x} \leq \liminf_{n_k \to \infty} \int_\Omega \left[ |\nabla R_0^{n_k}|^{p_1} + |\nabla I_0^{n_k}|^{p_1} + |\nabla L^{n_k}|^{p_2} \right] d\mathbf{x} \qquad (3.7)$$

2) Next we claim

$$C(S_0^*, L^*) = \lim_{n_k \to \infty} C(S_0^{n_k}, L^{n_k}) \qquad (3.8)$$

Since $C(S_0, L) = \alpha\sigma^2 - \int_\Omega \sum_{l=1}^N \| S_l - S_0 e^{-b_l : LL^T} \|^2 d\mathbf{x}$, we only need to show

$$\int_\Omega \sum_{l=1}^N \| S_l - S_0^* e^{-b_l : L^* (L^*)^T} \|^2 d\mathbf{x} = \lim_{n_k \to \infty} \int_\Omega \sum_{l=1}^N \| S_l - S_0^{n_k} e^{-b_l : L^{n_k} (L^{n_k})^T} \|^2 d\mathbf{x} \qquad (3.9)$$

This will be proved in several stages.

(a) Let $F_{R_l} = R_l - R_0^* e^{-b_l : L^* (L^*)^T}$ and $F_{R_l, n_k} = R_l - R_0^{n_k} e^{-b_l : L^{n_k} (L^{n_k})^T}$; then

$$\int_\Omega [F_{R_l, n_k} - F_{R_l}]^2 d\mathbf{x} = \int_\Omega \left[ R_0^{n_k} e^{-b_l : L^{n_k} (L^{n_k})^T} - R_0^* e^{-b_l : L^* (L^*)^T} \right]^2 d\mathbf{x}$$
$$\leq 2 \int_\Omega \left[ R_0^{n_k} \left( e^{-b_l : L^{n_k} (L^{n_k})^T} - e^{-b_l : L^* (L^*)^T} \right) \right]^2 d\mathbf{x} + 2 \int_\Omega \left[ (R_0^{n_k} - R_0^*) e^{-b_l : L^* (L^*)^T} \right]^2 d\mathbf{x} = 2A + 2B$$

Since $\|R_0^{n_k}\|_{W^{1,p}(\Omega)}$ is uniformly bounded in $n_k$, by the Sobolev embedding theorem for $\mathbb{R}^3$ we have $\|R_0^{n_k}\|_{L^2(\Omega)} \leq C \|R_0^{n_k}\|_{W^{1,p}(\Omega)} \leq C$. Noting that $L^{n_k}$ converges strongly to $L^*$ in $L^1(\Omega)$, we have from the Dominated Convergence Theorem [49]

$$\lim_{n_k \to \infty} \left\| e^{-b_l : L^{n_k} (L^{n_k})^T} - e^{-b_l : L^* (L^*)^T} \right\|_{L^2(\Omega)} = 0$$

Thus we have

$$A = \int_\Omega \left[ R_0^{n_k} \left( e^{-b_l : L^{n_k} (L^{n_k})^T} - e^{-b_l : L^* (L^*)^T} \right) \right]^2 d\mathbf{x} \leq \| R_0^{n_k} \|_{L^2(\Omega)}^2 \, \left\| e^{-b_l : L^{n_k} (L^{n_k})^T} - e^{-b_l : L^* (L^*)^T} \right\|_{L^2(\Omega)}^2 \to 0 \quad \text{as } n_k \to \infty$$

From the strong convergence of $R_0^{n_k}$ to $R_0^*$ in $L^2(\Omega)$ and $e^{-b_l : L^* (L^*)^T} \leq 1$, we have

$$|B| \leq \int_\Omega |R_0^{n_k} - R_0^*|^2 d\mathbf{x} = \| R_0^{n_k} - R_0^* \|_{L^2(\Omega)}^2 \to 0 \quad \text{as } n_k \to \infty$$

Now, as $A \to 0$ and $B \to 0$, we have

$$\lim_{n_k \to \infty} \| F_{R_l} - F_{R_l, n_k} \|_{L^2(\Omega)}^2 = \lim_{n_k \to \infty} \int_\Omega [F_{R_l} - F_{R_l, n_k}]^2 d\mathbf{x} = 0 \qquad (3.10)$$

(b) We now prove

$$\int_\Omega F_{R_l}^2 d\mathbf{x} = \lim_{n_k \to \infty} \int_\Omega F_{R_l, n_k}^2 d\mathbf{x} \qquad (3.11)$$

First of all, if $f \in L^2(\Omega)$ and $g \in L^2(\Omega)$, then $f - g \in L^2(\Omega)$ and $f + g \in L^2(\Omega)$. Further, we have

$$\left| \|f\|_{L^2(\Omega)}^2 - \|g\|_{L^2(\Omega)}^2 \right| = \left| \int_\Omega (f^2 - g^2)\, d\mathbf{x} \right| \leq \int_\Omega |f^2 - g^2|\, d\mathbf{x} = \int_\Omega |f - g|\,|f + g|\, d\mathbf{x} \leq \|f - g\|_{L^2(\Omega)} \|f + g\|_{L^2(\Omega)} \quad \text{(H\"older's inequality)} \qquad (3.12)$$

It is easy to verify that $F_{R_l} \in L^2(\Omega)$ and $F_{R_l, n_k} \in L^2(\Omega)$, and that they are uniformly bounded in $L^2(\Omega)$. Thus, applying the above inequality, we have

$$\left| \int_\Omega F_{R_l}^2 d\mathbf{x} - \int_\Omega F_{R_l, n_k}^2 d\mathbf{x} \right| \leq \| F_{R_l} - F_{R_l, n_k} \|_{L^2(\Omega)} \| F_{R_l} + F_{R_l, n_k} \|_{L^2(\Omega)} \leq C \| F_{R_l} - F_{R_l, n_k} \|_{L^2(\Omega)} \to 0 \quad \text{as } n_k \to \infty$$

(c) As in the previous step, we can prove

$$\int_\Omega \left[ I_l - I_0^* e^{-b_l : L^* (L^*)^T} \right]^2 d\mathbf{x} = \lim_{n_k \to \infty} \int_\Omega \left[ I_l - I_0^{n_k} e^{-b_l : L^{n_k} (L^{n_k})^T} \right]^2 d\mathbf{x} \qquad (3.13)$$

Combining this with (b), it is easy to verify equation (3.9).

3) Now we will show that

$$C^2(S_0^*, L^*) = \lim_{n_k \to \infty} C^2(S_0^{n_k}, L^{n_k}) \qquad (3.14)$$

This is easily verified since $C(S_0^*, L^*)$ and $C(S_0^{n_k}, L^{n_k})$ are bounded and

$$C(S_0^*, L^*) = \lim_{n_k \to \infty} C(S_0^{n_k}, L^{n_k}) \qquad (3.15)$$

Finally, from 1), 2) and 3) we have

$$\mathcal{L}(S_0^*, L^*; \lambda, \mu) \leq \liminf_{n_k \to \infty} \mathcal{L}(S_0^{n_k}, L^{n_k}; \lambda, \mu) = \inf \mathcal{L}(S_0, L; \lambda, \mu) \qquad (3.16)$$

Therefore, $(S_0^*, L^*)$ is a minimizer of $\mathcal{L}(S_0, L; \lambda, \mu)$ as defined in (3.6).

Lemma 2. Let $\mathcal{A}_2 = \{ (S_0, L) \mid (S_0, L) \in \mathcal{A} \text{ and } C(S_0, L) > \lambda/\mu \}$ and suppose $R_l, I_l \in L^2(\Omega)$. Then the following minimization problem (3.17) has a solution $(S_0^*, L^*) \in \mathcal{A}_2$ if $\mathcal{A}_2 \neq \emptyset$:

$$\min_{(S_0, L) \in \mathcal{A}_2} \mathcal{L}(S_0, L; \lambda, \mu) = E(S_0, L) - \frac{\lambda^2}{2\mu} \qquad (3.17)$$

The proof is similar to that of Lemma 1.

Theorem 1. Suppose $R_l, I_l \in L^2(\Omega)$. Then the augmented Lagrangian formulation (3.5) has a solution $(S_0^*, L^*) \in \mathcal{A}$.

Proof. It is easy to see that $\mathcal{A} \neq \emptyset$; as a matter of fact, constant functions are members of $\mathcal{A}$. Thus, there are three cases:

1. $\mathcal{A}_1 \neq \emptyset$ and $\mathcal{A}_2 = \emptyset$;
2. $\mathcal{A}_2 \neq \emptyset$ and $\mathcal{A}_1 = \emptyset$;


3. A_1 \ne \emptyset and A_2 \ne \emptyset.

Here we provide a proof for case 3; cases 1 and 2 are trivial to prove. Let (S_0^1, L^1) be the solution for (3.6) and (S_0^2, L^2) be the solution for (3.17); it is not hard to see that the solution of (3.5) is

    (S_0^*, L^*) = (S_0^1, L^1)  if  L(S_0^1, L^1; \mu, \lambda) \le L(S_0^2, L^2; \mu, \lambda),
    (S_0^*, L^*) = (S_0^2, L^2)  otherwise.

Finding a solution of the constrained variational principle (3.1) involves solving a sequence of problems (3.5) with \mu and \lambda fixed at each stage. This is much more difficult than dealing with the problems of recovering and smoothing separately. However, there are benefits to posing the problem in this constrained, unified framework: namely, one does not accumulate the errors of a two-stage process. Moreover, this framework incorporates the nonlinear data term, which is more appropriate for the low SNR values prevalent when the magnitude of the diffusion-encoded magnetic gradient is high. Also, the noise model is correct for the nonlinear complex data model, unlike the log-linearized case. Lastly, in the constrained formulation it is now possible to pose mathematical questions of existence and uniqueness of the solution, which was not possible in earlier formulations reported in the literature.

3.3 Numerical Methods

The minimization problem given by (3.1) can only be solved numerically. Here, we discretize the constrained variational principle (3.1), transform it into a sequence of unconstrained problems by using the augmented Lagrangian method, and then employ the limited-memory quasi-Newton technique [43] to solve them. Note that this framework allows us to solve the minimization without resorting to ad hoc methods of choosing the "tuning" parameters. Also, limited-memory quasi-Newton is the method of choice here due to the advantages it affords in the context of memory/storage savings.


A proper line search technique is employed once the search direction is found by the limited-memory quasi-Newton method.

3.3.1 Discretized Constrained Variational Principle

We use the standard finite difference method to discretize the problem. Let

    F_{R_l,ijk} = R_{l,ijk} - R_{0,ijk} e^{-b_l : L_{ijk} L_{ijk}^T}
    F_{I_l,ijk} = I_{l,ijk} - I_{0,ijk} e^{-b_l : L_{ijk} L_{ijk}^T}
    F_{l,ijk} = F_{R_l,ijk} + i F_{I_l,ijk}
    |\nabla R_0|_{ijk} = \left[ \sqrt{ (\Delta_x^+ R_0)^2 + (\Delta_y^+ R_0)^2 + (\Delta_z^+ R_0)^2 + \epsilon } \right]_{ijk}
    |\nabla I_0|_{ijk} = \left[ \sqrt{ (\Delta_x^+ I_0)^2 + (\Delta_y^+ I_0)^2 + (\Delta_z^+ I_0)^2 + \epsilon } \right]_{ijk}
    |\nabla L_d|_{ijk} = \left[ \sqrt{ (\Delta_x^+ L_d)^2 + (\Delta_y^+ L_d)^2 + (\Delta_z^+ L_d)^2 + \epsilon } \right]_{ijk}
    |\nabla L|^p_{ijk} = \sum_{d \in D} |\nabla L_d|^p_{ijk}

where \Delta_x^+, \Delta_y^+ and \Delta_z^+ are forward difference operators, and \epsilon is a small positive number used to avoid singularities of the L^p norm when p < 2. Now the discretized constrained variational principle can be written as

    \min_{S_0, L} E(S_0, L) = \sum_{i,j,k} \left( |\nabla R_0|^{p_1}_{ijk} + |\nabla I_0|^{p_1}_{ijk} + |\nabla L|^{p_2}_{ijk} \right)
    \text{s.t.} \quad C(S_0, L) = \sigma^2 - \sum_{i,j,k} \sum_{l=1}^N \| F_{l,ijk} \|^2 \ge 0    (3.18)

3.3.2 Augmented Lagrangian Method

The above problem is now posed using the augmented Lagrangian method, where a sequence of related unconstrained subproblems is solved, and the limit of these solutions is the solution to (3.18). Following the description in [43], the k-th


subproblem of (3.18) is given by

    \min L(S_0, L, s; \mu_k, \lambda_k) = E(S_0, L) - \lambda_k [ C(S_0, L) - s ] + \frac{1}{2\mu_k} [ C(S_0, L) - s ]^2    (3.19)

where s \ge 0 is a slack variable, and \mu_k and \lambda_k are the barrier parameter and the Lagrange multiplier estimate for the k-th subproblem, respectively. One can explicitly compute the slack variable s at the minimum as s = \max( C(S_0, L) - \lambda_k \mu_k, 0 ) and substitute it in (3.19) to get an equivalent subproblem in (S_0, L) given by

    \min L(S_0, L; \mu_k, \lambda_k) =
        E(S_0, L) - \lambda_k C(S_0, L) + C^2(S_0, L)/(2\mu_k)    if C(S_0, L) \le \lambda_k \mu_k
        E(S_0, L) - \mu_k \lambda_k^2/2                           otherwise    (3.20)

The following algorithm summarizes the procedure to find the solution of (3.18):

Algorithm 1 Augmented Lagrangian Algorithm
1: Initialize S_0(0), L(0) using the nonlinear regression; choose an initial \mu_0 and \lambda_0;
2: for k = 1, 2, ...
3:   Find an approximate minimizer S_0(k), L(k) of L(., .; \mu_k, \lambda_k) as in (3.20), starting with S_0(k-1), L(k-1);
4:   If the final convergence test is satisfied,
5:     STOP with an approximate solution S_0(k), L(k);
6:   Update the Lagrange multiplier using \lambda_{k+1} = \max( \lambda_k - C(S_0, L)/\mu_k, 0 );
7:   Choose a new penalty parameter \mu_{k+1} = \mu_k / 2;
8:   Set the new starting point for the next iteration to S_0(k), L(k);
9: end(for)

3.3.3 Limited Memory Quasi-Newton Method

Due to the large number of unknown variables in the minimization, we solve the subproblem using the limited-memory quasi-Newton technique, which approximates the Hessian matrix at each iteration of the optimization by using only first derivative information. In the


Limited-Memory Broyden-Fletcher-Goldfarb-Shanno (BFGS) method, the search direction is computed without storing the approximated Hessian matrix, which in general can be a very large matrix (of size O(N^6) for O(N^3) unknowns). Let x = (S_0, L) be the vector of variables, and let f(x) = L(S_0, L; \mu, \lambda) denote the augmented Lagrangian function (3.20) to be minimized. For simplicity, we write f(x) = L(S_0, L), omitting the fixed parameters \mu and \lambda in the following description. At the k-th iteration, let s_k = x_{k+1} - x_k be the update of the variable vector x, y_k = \nabla f_{k+1} - \nabla f_k the update of the gradient, and H_k^{-1} the approximation of the inverse of the Hessian. The inverse of the approximate Hessian can be updated using the BFGS formula:

    H_{k+1}^{-1} = V_k^T H_k^{-1} V_k + \rho_k s_k s_k^T    (3.21)

where \rho_k = 1/(y_k^T s_k) and V_k = I - \rho_k y_k s_k^T. Then we can use the following L-BFGS two-loop recursion, which computes the search direction -H_k^{-1} \nabla f_k efficiently by using the last m pairs (s_i, y_i) [43].

Algorithm 2 Search Direction Update Algorithm
1: q <- \nabla f_k;
2: for i = k-1, k-2, ..., k-m
3:   \alpha_i <- \rho_i s_i^T q;
4:   q <- q - \alpha_i y_i;
5: end(for)
6: r <- (H_k^0)^{-1} q;
7: for i = k-m, k-m+1, ..., k-1
8:   \beta <- \rho_i y_i^T r;
9:   r <- r + s_i (\alpha_i - \beta);
10: end(for)
11: stop with result H_k^{-1} \nabla f_k = r

where \rho_i = 1/(y_i^T s_i), and (H_k^0)^{-1} is the initial approximation of the inverse of the Hessian; we set (H_k^0)^{-1} = \gamma_k I, where \gamma_k = s_{k-1}^T y_{k-1} / (y_{k-1}^T y_{k-1}).
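The two-loop recursion can be cross-checked numerically against the explicit update (3.21). The sketch below uses made-up random update pairs generated from a fixed SPD "Hessian" (an illustrative assumption, not DTI data), and verifies that the recursion reproduces H_k^{-1} applied to the gradient:

```python
import numpy as np

def two_loop(grad, s_list, y_list):
    """L-BFGS two-loop recursion (Algorithm 2): returns H_k^{-1} grad using
    the stored update pairs (s_i, y_i), ordered oldest first."""
    q = np.array(grad, dtype=float)
    alphas = []
    for s, y in reversed(list(zip(s_list, y_list))):       # first loop: i = k-1, ..., k-m
        rho = 1.0 / (y @ s)
        a = rho * (s @ q)
        alphas.append(a)
        q = q - a * y
    s_new, y_new = s_list[-1], y_list[-1]
    gamma = (s_new @ y_new) / (y_new @ y_new)              # (H_k^0)^{-1} = gamma * I
    r = gamma * q
    for (s, y), a in zip(zip(s_list, y_list), reversed(alphas)):  # second loop: i = k-m, ..., k-1
        rho = 1.0 / (y @ s)
        b = rho * (y @ r)
        r = r + s * (a - b)
    return r

# cross-check against the explicit inverse Hessian built with (3.21)
rng = np.random.default_rng(0)
n, m = 5, 3
W = rng.standard_normal((n, n))
Hess = W @ W.T + np.eye(n)                     # a fixed SPD "Hessian" (illustrative)
s_list = [rng.standard_normal(n) for _ in range(m)]
y_list = [Hess @ s for s in s_list]            # guarantees curvature y.s > 0
g = rng.standard_normal(n)

Hinv = ((s_list[-1] @ y_list[-1]) / (y_list[-1] @ y_list[-1])) * np.eye(n)
for s, y in zip(s_list, y_list):               # apply the m updates, oldest pair first
    rho = 1.0 / (y @ s)
    V = np.eye(n) - rho * np.outer(y, s)
    Hinv = V.T @ Hinv @ V + rho * np.outer(s, s)

direction = two_loop(g, s_list, y_list)
```

The recursion never forms the n-by-n matrix, which is the point of using it when n is the number of voxel unknowns.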


The gradient of our energy function is

    \nabla f(x) = \left( \frac{\partial L(S_0,L)}{\partial R_0}, \frac{\partial L(S_0,L)}{\partial I_0}, \frac{\partial L(S_0,L)}{\partial L_{xx}}, \frac{\partial L(S_0,L)}{\partial L_{yy}}, \frac{\partial L(S_0,L)}{\partial L_{zz}}, \frac{\partial L(S_0,L)}{\partial L_{xy}}, \frac{\partial L(S_0,L)}{\partial L_{yz}}, \frac{\partial L(S_0,L)}{\partial L_{xz}} \right)    (3.22)

where

    \frac{\partial L(S_0,L)}{\partial R_{0,ijk}} =
        \sum_{i'j'k'} \frac{\partial |\nabla R_0|^{p_1}_{i'j'k'}}{\partial R_{0,ijk}}    if C(S_0,L) - \lambda_k \mu_k > 0
        \sum_{i'j'k'} \frac{\partial |\nabla R_0|^{p_1}_{i'j'k'}}{\partial R_{0,ijk}} + 2\left( \lambda - \frac{C(S_0,L)}{\mu} \right) \sum_{l=1}^N F_{R_l,ijk} \frac{\partial F_{R_l,ijk}}{\partial R_{0,ijk}}    otherwise

    \frac{\partial L(S_0,L)}{\partial I_{0,ijk}} =
        \sum_{i'j'k'} \frac{\partial |\nabla I_0|^{p_1}_{i'j'k'}}{\partial I_{0,ijk}}    if C(S_0,L) - \lambda_k \mu_k > 0
        \sum_{i'j'k'} \frac{\partial |\nabla I_0|^{p_1}_{i'j'k'}}{\partial I_{0,ijk}} + 2\left( \lambda - \frac{C(S_0,L)}{\mu} \right) \sum_{l=1}^N F_{I_l,ijk} \frac{\partial F_{I_l,ijk}}{\partial I_{0,ijk}}    otherwise

    \frac{\partial L(S_0,L)}{\partial L_{d,ijk}} =
        \sum_{i'j'k'} \frac{\partial |\nabla L_d|^{p_2}_{i'j'k'}}{\partial L_{d,ijk}}    if C(S_0,L) - \lambda_k \mu_k > 0
        \sum_{i'j'k'} \frac{\partial |\nabla L_d|^{p_2}_{i'j'k'}}{\partial L_{d,ijk}} + 2\left( \lambda - \frac{C(S_0,L)}{\mu} \right) \sum_{l=1}^N \left( F_{R_l,ijk} \frac{\partial F_{R_l,ijk}}{\partial L_{d,ijk}} + F_{I_l,ijk} \frac{\partial F_{I_l,ijk}}{\partial L_{d,ijk}} \right)    otherwise

    d = xx, yy, zz, xy, yz, xz    (3.23)

Here \sum_{i'j'k'} is computed over the neighborhood of the voxel (i, j, k) in which the forward differences involve the variables R_{0,ijk}, I_{0,ijk} or L_{d,ijk}. Each term in equation (3.23) can be computed analytically; for example,

    \frac{\partial F_{R_l,ijk}}{\partial L_{xx,ijk}} = R_{0,ijk} \, e^{-b_l : L_{ijk} L_{ijk}^T} \, \frac{\partial (b_l : L_{ijk} L_{ijk}^T)}{\partial L_{xx,ijk}}
    \frac{\partial F_{I_l,ijk}}{\partial L_{xx,ijk}} = I_{0,ijk} \, e^{-b_l : L_{ijk} L_{ijk}^T} \, \frac{\partial (b_l : L_{ijk} L_{ijk}^T)}{\partial L_{xx,ijk}}
    \frac{\partial (b_l : L_{ijk} L_{ijk}^T)}{\partial L_{xx,ijk}} = 2 b_{l,xx} L_{xx,ijk} + 2 b_{l,xy} L_{xy,ijk} + 2 b_{l,xz} L_{xz,ijk}
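Putting (3.20) and the updates of Algorithm 1 together, the outer loop can be sketched on a toy one-variable problem. The objective E(x) = (x - 2)^2 and constraint C(x) = 1 - x >= 0 below are illustrative assumptions (the real E and C are the discretized functionals above), and a generic SciPy minimizer stands in for the L-BFGS inner solver:

```python
import numpy as np
from scipy.optimize import minimize

def E(x):
    return (x - 2.0) ** 2          # toy objective (illustrative stand-in)

def C(x):
    return 1.0 - x                 # toy inequality constraint, C(x) >= 0

def L(x, mu, lam):
    # subproblem (3.20): the slack variable has been eliminated in closed form
    c = C(x)
    if c <= lam * mu:
        return E(x) - lam * c + c ** 2 / (2.0 * mu)
    return E(x) - mu * lam ** 2 / 2.0

mu, lam, x = 5.0, 1.0, 0.0         # mu_0 = 5.0, lam_0 = 1.0 as in the implementation notes
while mu >= 0.01:
    # inner unconstrained subproblem (Algorithm 1, step 3)
    x = minimize(lambda z: L(z[0], mu, lam), [x]).x[0]
    lam = max(lam - C(x) / mu, 0.0)    # multiplier update (step 6)
    mu /= 2.0                          # penalty schedule (step 7)
# the constrained minimum is x = 1 with KKT multiplier lam = 2
```

The iterates approach the constrained minimizer while the multiplier estimate converges, illustrating the robustness to the initial mu_0, lam_0 mentioned in the implementation notes.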


3.3.4 Line Search

The limited-memory quasi-Newton method computes a search direction along which the minimum (or maximum) is estimated to lie, yielding the one-dimensional optimization problem

    \min_\alpha f(x + \alpha d)    (3.24)

where d is computed by Algorithm 2, described in the previous subsection. Equation (3.24) is solved by a search procedure in which the solution is found by the update iteration x_{k+1} = x_k + \alpha_k d. The many existing line search techniques can be roughly categorized as line searches without derivatives, such as the Fibonacci method, and line searches with derivatives, such as polynomial methods. As we have an analytical form of the derivative, we use cubic interpolation [43], which is generally the most efficient for continuous functions like our energy function.

3.3.5 Implementation Issues

The constraint in (3.18) is directly related to the standard deviation of the noise, which can be computed as in Seber [47]. Since there are N complex measurements and 8 unknown parameters at each voxel (note: S_0 is complex-valued, so it is treated as 2 unknowns), we have

    \sigma^2 = \frac{ \sum_{l=1}^N \| S_l(x) - S_0(x) e^{-b_l : L(x) L(x)^T} \|^2 }{ 2N - 8 }

Initialization is very crucial for nonlinear optimization. In our case, we use the following nonlinear regression with positive definiteness constraint as the initial guess:

    \min E(S_0(x), L(x)) = \sum_{l=1}^N \| S_l(x) - S_0(x) e^{-b_l : L(x) L(x)^T} \|^2    (3.25)
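The initialization (3.25) at a single voxel can be sketched with SciPy's Levenberg-Marquardt solver. The gradient directions, b-values, ground-truth tensor, noise level and starting point below are illustrative assumptions, not the dissertation's acquisition protocol; the complex residual is split into its 2N real components, matching the 2N real observations against 8 unknowns:

```python
import numpy as np
from scipy.optimize import least_squares

# 7 gradient directions x 3 b-values = 21 b matrices (illustrative protocol)
dirs = np.array([[1., 0, 0], [0, 1, 0], [0, 0, 1],
                 [1, 1, 0], [1, 0, 1], [0, 1, 1], [1, 1, 1]])
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
bvals = np.array([100., 500., 1000.])
B = np.array([b * np.outer(g, g) for g in dirs for b in bvals])

def model(p):
    # unknowns: complex S0 (2 reals) + 6 entries of the Cholesky factor L
    S0 = p[0] + 1j * p[1]
    L = np.zeros((3, 3))
    L[np.tril_indices(3)] = p[2:]
    D = L @ L.T                                    # positive semidefinite by construction
    return S0 * np.exp(-np.einsum('lij,ij->l', B, D))   # S_l = S0 exp(-b_l : D)

rng = np.random.default_rng(2)
D_true = np.diag([1.5e-3, 1.0e-3, 0.4e-3])
S_meas = 10.0 * np.exp(1j * np.pi / 4) * np.exp(-np.einsum('lij,ij->l', B, D_true))
S_meas = S_meas + 0.05 * (rng.standard_normal(21) + 1j * rng.standard_normal(21))

def residuals(p):
    r = model(p) - S_meas
    return np.concatenate([r.real, r.imag])        # 2N real residuals, 8 unknowns

p0 = np.array([7.0, 7.0, 0.03, 0.0, 0.03, 0.0, 0.0, 0.02])   # illustrative starting guess
fit = least_squares(residuals, p0, method='lm')    # Levenberg-Marquardt
L_hat = np.zeros((3, 3)); L_hat[np.tril_indices(3)] = fit.x[2:]
D_est = L_hat @ L_hat.T
S0_est = fit.x[0] + 1j * fit.x[1]
```

Parameterizing the tensor through its Cholesky factor is what enforces the positive definiteness constraint without any explicit inequality handling.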


The above minimization is a simple nonlinear least squares problem and can be efficiently solved by the Levenberg-Marquardt method [43], using the results of the corresponding linearized least squares problem as the initial guess.

There are a few practical issues in implementing the augmented Lagrangian method and the quasi-Newton method; these are settled by following the suggestions in Nocedal and Wright [43] or empirically. For example, in the augmented Lagrangian method (see Algorithm 1), we start with a penalty control parameter \mu = 5.0 and decrease it by a factor of 2 in each step until it is less than 0.01. We also choose \lambda_0 = 1.0. Note that the augmented Lagrangian method is quite robust with respect to the choice of \mu_0 and \lambda_0, since \mu_k eventually decreases to 0 and \lambda_k approaches the Lagrange multiplier. The final convergence test has two criteria: the subproblem converges, and \mu < 0.01. As the subproblem is just a standard unconstrained minimization problem, whether it has converged is checked using any of the standard criteria in iterative optimization schemes [43]; for the line search, we employ cubic interpolation and the Wolfe convergence criterion (see Nocedal and Wright [43] for more details). For the limited-memory quasi-Newton method, we use the last 5 update pairs to update the search direction.

3.4 Experimental Results

In this section, we present three sets of experiments on the application of our direct tensor smoothing and estimation model: one on complex-valued synthetic data sets and two on complex-valued DWI data acquired from a normal rat brain. For the synthetic data example, we compare the results obtained from our estimation procedure with competing methods published in the literature.

3.4.1 Complex-Valued Synthetic Data

We synthesized an anisotropic tensor field on a 3D lattice of size 32 x 32 x 8.
The volume consists of two homogeneous regions with the following values for S_0 and


D:

    Region 1:  S_0 = 10.0 e^{i\theta},  D = 0.001 x [0.970, 1.751, 0.842, 0.0, 0.0, 0.0];
    Region 2:  S_0 = 8.0 e^{i\theta},   D = 0.001 x [1.556, 1.165, 0.842, 0.338, 0.0, 0.0];

where the tensor D is depicted as D = [d_{xx}, d_{yy}, d_{zz}, d_{xy}, d_{xz}, d_{yz}]. The dominant eigenvector (DEV) of the first region is along the y axis, while that of the second region lies in the xy plane, inclined at 60 degrees to the y axis. \theta is chosen to be 45 degrees to yield an even distribution of the real and the imaginary parts. The complex diffusion weighted images S_l are generated using the Stejskal-Tanner equation at each voxel x, given by

    S_l(x) = S_0(x) e^{-b_l : D(x)} + n(x),    (3.26)

where n(x) ~ N(0, \sigma_N) + i N(0, \sigma_N), and N(0, \sigma_N) is zero-mean Gaussian noise with standard deviation \sigma_N. As the signal measured before the Fourier transform in MRI is complex, it is reasonable to assume the noise is additive complex Gaussian noise; the noise in the DWIs remains complex-valued after the Fourier transform. Thus our simulated data reflects the physics of MR imaging. Note that the noise in the magnitude of the complex DWIs has a Rician distribution, which approximates a Gaussian distribution when the SNR is high [42]. We choose the 7 commonly used configurations (x, y, z, xy, xz, yz, xyz) for the directions of the diffusion-encoded magnetic gradient, as in Basser et al. [1], and use 3 different field strengths in each direction (100, 500 and 1000 s/mm^2). Thus we have a total of 21 different b matrices. A slice of the generated data set is shown in Fig. 3.1; note that the DWIs are different


when either the directions or the magnitudes of the diffusion-encoded magnetic gradient differ. For better illustration of the superior performance of our model, we compare against the following methods in our experiments: (i) Linear: linear regression on (1.6), as used in [1]; (ii) Nonlinear: nonlinear regression applied to (1.4); (iii) Linear + EVS (eigenvector smoothing): linear regression followed by the DEV smoothing method described in Coulon et al. [22]; (iv) Nonlinear + EVS: nonlinear regression plus the smoothing as in (iii); and (v) Ours: our method. Note that the EVS method [22] is a direction field restoration scheme that preserves discontinuities and is based on Chan and Shen's work [50].

Figure 3.2 shows an ellipsoid visualization of the restored diffusion tensor field for the synthetic data set with \sigma_N = 0.5. It is evident that our method restored the noisy tensor field quite well, in comparison to the nonlinear regression method, which did not perform well, and the linear regression technique, which performed worst. For further comparison, Fig. 3.3 shows the DEV computed from the original and the restored diffusion tensor fields using all five methods mentioned before. This figure clearly shows that our model yielded the best estimate of the original DEV field. The method in Coulon et al. [22], however, did not work well at voxels where the estimated DEVs are almost orthogonal to those in their neighborhoods. In such cases, Coulon et al.'s method treats them as discontinuities and does not smooth them. Though it is possible to treat these locations as outliers, it is difficult to set a reasonable criterion for doing so. It is interesting to note that the Nonlinear+EVS method, which serves as an improvement of the existing Linear+EVS method, can diminish this problem. Additional quantitative measures, described below, confirm the visual comparison results.
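Returning to the noise model (3.26), a quick numerical sanity check confirms the Rician-magnitude remark above; the signal value and SNR here are arbitrary illustrations:

```python
import numpy as np

rng = np.random.default_rng(3)
sigma_N = 0.5
signal = 10.0 * np.exp(1j * np.pi / 4) * np.exp(-0.5)   # S0 e^{-b_l : D}; |signal| ~ 6.07
noise = rng.normal(0, sigma_N, 100_000) + 1j * rng.normal(0, sigma_N, 100_000)
S = signal + noise                  # additive complex Gaussian noise, as in (3.26)
mag = np.abs(S)                     # magnitude DWI: Rician-distributed
# at SNR = |signal| / sigma_N ~ 12, the Rician magnitude is close to a Gaussian
# with mean ~ |signal| and standard deviation ~ sigma_N
```

At low SNR the Rician bias in the magnitude becomes substantial, which is one reason the chapter works directly with the complex data rather than the log-linearized magnitude.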
To quantitatively assess the proposed model, we compare the accuracy of the DEV computed using the previously mentioned methods. Let \psi be the angle (in


Table 3.1: Comparison of the accuracy of the estimated DEVs for different noise levels (mean and standard deviation, in degrees, of the angle \psi between the estimated and the original DEV).

    \sigma_n = 0.5    Linear   Nonlinear   Linear+EVS   Nonlinear+EVS   Ours
    mean              9.57     7.44        1.63         1.19            0.80
    std               6.93     5.08        1.57         0.84            0.96

    \sigma_n = 1.0    Linear   Nonlinear   Linear+EVS   Nonlinear+EVS   Ours
    mean              22.28    16.94       6.67         3.78            1.99
    std               17.46    13.86       13.06        8.58            2.74

    \sigma_n = 1.5    Linear   Nonlinear   Linear+EVS   Nonlinear+EVS   Ours
    mean              33.14    26.40       14.55        9.09            4.08
    std               22.60    20.19       22.39        17.46           4.70

degrees) between the estimated DEV and the original DEV. Table 3.1 shows the mean and standard deviation of \psi for the different methods on the synthetic data with different levels of additive Gaussian noise. A better method is one that yields smaller values. From this table, we can see that our model yields lower error values than all other methods under various noise levels. It is also clear from this table that the methods using the original nonlinear complex Stejskal-Tanner equation (1.4) are more accurate than those using the linearized one (1.6). The advantage of our method and of the nonlinear approaches is more apparent when the noise level is higher, which supports our discussion in section 3.2.1.

3.4.2 Complex DWI of a Normal Rat Brain

The normal rat brain data is imaged using a 17.6T (750 MHz) Bruker Avance Imaging Spectrometer system with the following settings: TR = 3058 ms, TE = 28.8 ms, \Delta = 17.8 ms, \delta = 2.4 ms, diffusion time = 17.0 ms, bandwidth = 40 kHz. The field of view is 15 x 15 x 21 mm^3 with a resolution of 117 x 117 x 270 um^3. The same set of 7 diffusion-encoded magnetic gradient directions as for the synthetic data is used


with two different magnitudes (100, 500 s/mm^2). With the number of averages equal to 8 for each signal measurement S_l, the raw data is a set of 14 complex DWI volumes, each with a size of 128 x 128 x 78. We extract a 128 x 128 x 10 volume in the region of the corpus callosum for our first experiment. Figure 3.6 depicts the restored images of the six independent components of the estimated diffusion tensor. As a comparison, Figs. 3.4 and 3.5 show the same images computed from the raw data using linear regression applied to (1.6) and nonlinear regression applied to (1.4), respectively. For display purposes, we use the same brightness and contrast enhancement for the corresponding images in all three figures. We also present the computed DEV of the estimated diffusion tensor in Fig. 3.7. We did not compare with the EVS methods because the eigenvector sorting problem is very severe in free water and other isotropic regions; it is thus necessary to exclude those regions to make effective use of the EVS methods, which involves a non-trivial segmentation issue. We then extracted a 10 x 127 x 78 volume in the region of the cerebellum and show the sagittal view of the results in Fig. 3.8. The brightness and contrast enhancement of the figures are the same as in the previous experiment. In Figs. 3.6 and 3.8, the edge-preserving smoothing is evident, especially in the off-diagonal terms of the diffusion tensor D, which are essential in evaluating the structural anisotropy. We also notice that there are some differences in the region of free water between Figs. 3.4 and 3.5, visible in the off-diagonal terms, while there is no visible difference inside the corpus callosum between these two figures. However, Fig. 3.7 gives more insight into this via the depiction of the computed DEVs. Note that the smoothing effect on the DEV is evident in the shadowed box in Fig.
3.7. Our model for estimating a smooth diffusion tensor field from the noisy DWIs may be used to achieve better accuracy in fiber tractography. Our future efforts will focus on applying the model described here to achieve better fiber tract maps.


Figure 3.1: A slice of several volumetric complex DWIs generated with \sigma_N = 0.5. Left to right: real and imaginary pairs of complex DWIs with varying magnitude of the diffusion-encoded magnetic gradient. Top to bottom: complex DWIs for varying directions of the diffusion-encoded magnetic gradient.


(a) Original. (b) Linear. (c) Nonlinear. (d) Our model. Figure 3.2: A slice of the original (ground truth) and the estimated diffusion tensor fields for the noisy synthetic data with \sigma_N = 0.5.


Figure 3.3: A slice of the computed DEV field for the noisy synthetic data with \sigma_N = 1.5. The top left image is the DEV field computed from the original tensor field; the other images, arranged from left to right and top to bottom, are the DEV fields computed from the tensor fields estimated using the following methods: linear, nonlinear, linear+EVS, nonlinear+EVS and our model.


Figure 3.4: A slice of the normal rat brain DTI estimated using multivariate linear regression without smoothing, viewed channel by channel. First row, left to right: D_{xx}, D_{xy} and D_{xz}. Second row, left to right: S_0, D_{yy} and D_{yz}. Last row, left to right: FA, <D> and D_{zz}.


Figure 3.5: A slice of the normal rat brain DTI estimated using multivariate nonlinear regression without smoothing, viewed channel by channel. The arrangement of the figures is the same as in Fig. 3.4.


Figure 3.6: A slice of the normal rat brain DTI estimated using our proposed method, viewed channel by channel. The arrangement of the figures is the same as in Fig. 3.4.


Figure 3.7: A slice of the computed DEV field from a normal rat brain. Top to bottom: linear regression, nonlinear regression and our model. Left column: region of interest for depicting the DEV, indicated by the white box superposed on the D_{xx} image. Right column: the computed DEV field inside the white rectangle on the left; the smoothing effect of our model is clearly visible in the shaded box.


(a) (b) Figure 3.8: A slice of the normal rat brain DTIs in the region of the cerebellum, viewed channel by channel. The DTIs are estimated using (a) multivariate nonlinear regression without smoothing and (b) our proposed method (bottom row). Both (a) and (b) are sagittal views and are arranged as in Fig. 3.4.


CHAPTER 4
DTI SEGMENTATION USING AN INFORMATION THEORETIC TENSOR DISSIMILARITY MEASURE

4.1 Introduction

We tackle the DTI segmentation problem using a region-based active contour model, incorporating an information theoretic tensor dissimilarity measure. Geometric active contour models have long been used in scalar and vector image segmentation [51, 52, 53]. Our work is based on the active contour models derived from the Mumford-Shah functional [54], and can be viewed as a significant extension to diffusion tensor images of the work on active contours without edges by Chan and Vese [55] and of the curve evolution implementation of the Mumford-Shah functional by Tsai et al. [53]. Our key contributions are: (i) the definition of a new discriminant of tensors based on information theory; (ii) a theorem and its proof that allow the mean value of an SPD tensor field, required in the active contour model, to be computed in closed form, facilitating efficient segmentation of the DTI; (iii) the extension of the popular region-based active contour model to handle matrix-valued images, e.g., DTI.

The rest of this chapter is organized as follows. In section 4.2, the new definition of tensor distance is introduced and its properties are discussed in detail. In section 4.3, active contour models for image segmentation are reviewed. Then, in section 4.4, the piecewise constant region-based active contour model for DTI segmentation is discussed; we describe the associated Euler-Lagrange equation, the curve evolution equation, the level set formulation and the implementation details. Similarly, in section 4.5, we discuss the piecewise smooth region-based active contour model for DTI


segmentation. Finally, in section 4.6, we present experiments applying our model to synthetic tensor fields as well as to real DTIs.

4.2 A New Tensor Distance and Its Properties

4.2.1 Definitions

In the context of DT-MRI, the diffusion of water molecules may be characterized by a 2-tensor D which is symmetric positive definite. This D is related to the displacement r of the water molecules at each lattice point in the volumetric data at time t via

    p(r \mid t, D) = \frac{1}{\sqrt{(2\pi)^n |2tD|}} \, e^{- r^T D^{-1} r / (4t)}    (4.1)

Note that the mean of r is E(r) = 0 and the covariance matrix of r is D(r) = 2tD [47]. Thus, it is natural to use a distance measure between Gaussian distributions to induce a distance between these tensors. The most frequently used information theoretic "distance" measure is the Kullback-Leibler divergence, defined as

    KL(p \| q) = \int p(x) \log \frac{p(x)}{q(x)} dx    (4.2)

for two given densities p(x) and q(x). The KL divergence is not symmetric, and a popular way to symmetrize it is

    J(p, q) = \frac{1}{2} \left( KL(p \| q) + KL(q \| p) \right)    (4.3)

which is called the J-divergence. We propose a novel definition of tensor "distance" as the square root of the J-divergence, i.e.

    d(T_1, T_2) = \sqrt{ J( p(r \mid t, T_1), p(r \mid t, T_2) ) }    (4.4)

It is known that twice the KL divergence, and thus twice the J-divergence, is the squared distance between two infinitesimally nearby points on a Riemannian manifold of parameterized distributions [56]. Thus, taking the square root in equation (4.4) is


justified. Furthermore, equation (4.4) turns out to have a very simple form, given by

    d(T_1, T_2) = \frac{1}{2} \sqrt{ tr( T_1^{-1} T_2 + T_2^{-1} T_1 ) - 2n }    (4.5)

where tr(.) is the matrix trace operator and n is the size of the square matrices T_1 and T_2.

Derivation 1. For Gaussian distributions p(r \mid t, T_1) and p(r \mid t, T_2) given as in equation (4.1), we have

    KL( p(r \mid t, T_1) \| p(r \mid t, T_2) )
    = \int p(r \mid t, T_1) \log \frac{p(r \mid t, T_1)}{p(r \mid t, T_2)} dr
    = \int p(r \mid t, T_1) \log \left[ \sqrt{ \frac{|T_2|}{|T_1|} } \, e^{ - r^T T_1^{-1} r / (4t) + r^T T_2^{-1} r / (4t) } \right] dr
    = \int p(r \mid t, T_1) \left[ \frac{1}{2} \log \frac{|T_2|}{|T_1|} - \frac{r^T T_1^{-1} r}{4t} + \frac{r^T T_2^{-1} r}{4t} \right] dr
    = \frac{1}{2} \log \frac{|T_2|}{|T_1|} \int p(r \mid t, T_1) dr - \int p(r \mid t, T_1) \frac{r^T T_1^{-1} r}{4t} dr + \int p(r \mid t, T_1) \frac{r^T T_2^{-1} r}{4t} dr
    = \frac{1}{2} \log \frac{|T_2|}{|T_1|} - E\!\left( r^T \frac{T_1^{-1}}{4t} r \right) + E\!\left( r^T \frac{T_2^{-1}}{4t} r \right)
    = \frac{1}{2} \log \frac{|T_2|}{|T_1|} - tr\!\left( \frac{T_1^{-1}}{4t} (2tT_1) \right) + tr\!\left( \frac{T_2^{-1}}{4t} (2tT_1) \right)
    = \frac{1}{2} \log \frac{|T_2|}{|T_1|} - tr\!\left( \frac{I_n}{2} \right) + tr\!\left( \frac{T_2^{-1} T_1}{2} \right)
    = \frac{1}{2} \left[ \log \frac{|T_2|}{|T_1|} - n + tr( T_2^{-1} T_1 ) \right]    (4.6)

Switching the roles of p(r \mid t, T_1) and p(r \mid t, T_2) in the above steps leads to

    KL( p(r \mid t, T_2) \| p(r \mid t, T_1) ) = \frac{1}{2} \left[ \log \frac{|T_1|}{|T_2|} - n + tr( T_1^{-1} T_2 ) \right]    (4.7)


Combined with the definition of the J-divergence, we have

    J( p(r \mid t, T_1), p(r \mid t, T_2) )
    = \frac{1}{2} \left[ KL( p(r \mid t, T_1) \| p(r \mid t, T_2) ) + KL( p(r \mid t, T_2) \| p(r \mid t, T_1) ) \right]
    = \frac{1}{4} \left[ \log \frac{|T_2|}{|T_1|} - n + tr( T_2^{-1} T_1 ) \right] + \frac{1}{4} \left[ \log \frac{|T_1|}{|T_2|} - n + tr( T_1^{-1} T_2 ) \right]
    = \frac{1}{4} \left[ tr( T_1^{-1} T_2 + T_2^{-1} T_1 ) - 2n \right]    (4.8)

Thus we have equation (4.5).

Now we give an example of the above "distance". Suppose T_1 and T_2 can both be diagonalized by the same orthogonal matrix O, which means T_1 = O D O^T and T_2 = O E O^T, where D = diag\{d_1, ..., d_n\} and E = diag\{e_1, ..., e_n\}. Then we have

    tr( T_1^{-1} T_2 ) = \sum_{i=1}^n \frac{e_i}{d_i}, \qquad tr( T_2^{-1} T_1 ) = \sum_{i=1}^n \frac{d_i}{e_i}

Thus

    d(T_1, T_2) = \frac{1}{2} \sqrt{ \sum_{i=1}^n \left( \frac{e_i}{d_i} + \frac{d_i}{e_i} - 2 \right) }
                = \frac{1}{2} \sqrt{ \sum_{i=1}^n \frac{d_i^2 + e_i^2 - 2 d_i e_i}{d_i e_i} }
                = \frac{1}{2} \sqrt{ \sum_{i=1}^n \frac{(d_i - e_i)^2}{d_i e_i} }    (4.9)

4.2.2 Affine Invariance Property

When a coordinate system undergoes an affine transformation, the DTI is also transformed. If the coordinate system undergoes an affine transform y = Ax + b, then the displacement of the water molecules is transformed as \hat{r} = Ar. Since r has a Gaussian distribution with covariance matrix 2tT, the transformed displacement \hat{r} has a covariance matrix of 2tATA^T. Thus, the transformed DTI will be

    \hat{T}(y) = A T(x) A^T, \qquad y = Ax + b    (4.10)
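Equation (4.5) and the simultaneously diagonalizable special case (4.9) are easy to check numerically; the eigenvalues and random orthogonal frame below are illustrative choices:

```python
import numpy as np

def tensor_dist(T1, T2):
    """Tensor 'distance' (4.5): sqrt of the J-divergence between the two
    zero-mean Gaussians with covariances proportional to T1 and T2."""
    n = T1.shape[0]
    tr = np.trace(np.linalg.solve(T1, T2) + np.linalg.solve(T2, T1))
    return 0.5 * np.sqrt(tr - 2 * n)

d_eig = np.array([1.5, 1.0, 0.4])
e_eig = np.array([0.9, 0.7, 0.5])
rng = np.random.default_rng(4)
O, _ = np.linalg.qr(rng.standard_normal((3, 3)))       # common orthogonal frame
T1 = O @ np.diag(d_eig) @ O.T
T2 = O @ np.diag(e_eig) @ O.T

# closed form (4.9) for simultaneously diagonalizable tensors
closed = 0.5 * np.sqrt(np.sum((d_eig - e_eig) ** 2 / (d_eig * e_eig)))
```

Note that the formula only needs two linear solves and a trace, so no eigendecomposition is required to evaluate the distance itself.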


Figure 4.1: Transformation of tensor fields. Left: original tensor field. Right: transformed tensor field.

Figure 4.1 shows an example with a 32 x 32 2D diffusion tensor field, where the affine transformation is

    A = [ 0.8  0.0
          0.0  0.8 ],    b = [ 2.0
                               2.0 ]

Our definition of tensor "distance" is invariant to such transformations, i.e.

    d(T_1, T_2) = d( A T_1 A^T, A T_2 A^T )    (4.11)

Though the transformation of the diffusion tensor is actually a congruent transformation, we still refer to the above invariance as "affine" invariance because, in the context of segmentation, the invariance property has been linked with the transformation of the coordinate system.
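The invariance (4.11) can be verified numerically, and contrasted with the Frobenius norm of the tensor difference, which is not invariant; the random SPD tensors and transform below are illustrative:

```python
import numpy as np

def tensor_dist(T1, T2):
    # the J-divergence based "distance" of (4.5)
    n = T1.shape[0]
    tr = np.trace(np.linalg.solve(T1, T2) + np.linalg.solve(T2, T1))
    return 0.5 * np.sqrt(tr - 2 * n)

rng = np.random.default_rng(5)

def rand_spd():
    M = rng.standard_normal((3, 3))
    return M @ M.T + 3 * np.eye(3)        # symmetric positive definite

T1, T2 = rand_spd(), rand_spd()
A = rng.standard_normal((3, 3))           # a (generically invertible) transform

lhs = tensor_dist(T1, T2)
rhs = tensor_dist(A @ T1 @ A.T, A @ T2 @ A.T)   # distance after congruent transform

frob_before = np.linalg.norm(T1 - T2)           # Frobenius norm is NOT invariant
frob_after = np.linalg.norm(A @ (T1 - T2) @ A.T)
```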


Derivation 2. First of all, we have

    tr\left[ (A T_1 A^T)^{-1} (A T_2 A^T) \right] = tr( A^{-T} T_1^{-1} A^{-1} A T_2 A^T )
                                                 = tr( A^{-T} T_1^{-1} T_2 A^T )
                                                 = tr( T_1^{-1} T_2 A^T A^{-T} )
                                                 = tr( T_1^{-1} T_2 )    (4.12)

Similarly, we get tr[ (A T_2 A^T)^{-1} (A T_1 A^T) ] = tr( T_2^{-1} T_1 ); thus we have

    d( A T_1 A^T, A T_2 A^T ) = \frac{1}{2} \sqrt{ tr\left[ (A T_1 A^T)^{-1} (A T_2 A^T) + (A T_2 A^T)^{-1} (A T_1 A^T) \right] - 2n }
                              = \frac{1}{2} \sqrt{ tr( T_1^{-1} T_2 + T_2^{-1} T_1 ) - 2n }
                              = d(T_1, T_2)    (4.13)

Thus our "distance" is invariant under affine coordinate transformations. It is easy to show that the Frobenius norm of the tensor difference, commonly used in published work [28, 27, 33], does not have this property.

4.2.3 Tensor Field Mean Value

We now develop a theorem that allows us to compute the mean value of a tensor field. This is essential for the region-based active contour model used in the segmentation algorithm.

Theorem 2. The mean value of a tensor field, defined as

    M(T, R) = \arg\min_{M \in SPD(n)} \int_R d^2[ M, T(x) ] dx    (4.14)


64 is given by M = p B 1 q p BA p B p B 1 (4.15) wher e A = R R T ( x ) d x B = R R T 1 ( x ) d x and S P D ( n ) denotes the set of symmetric p ositive denite matric es of size n Pro of 3 L et E ( M ) = R R d 2 [ M ; T ( x )] d x we have E ( M ) = Z R d 2 [ M ; T ( x )] d x = Z R 1 4 tr ( M 1 T ( x ) + T 1 ( x ) M ) n 2 d x = 1 4 tr M 1 ( Z R T ( x ) d x ) + ( Z R T 1 ( x ) d x ) M n 2 j R j = 1 4 tr ( M 1 A + BM ) n 2 j R j F or a smal l p erturb ation in the neighb orho o d N ( M ) S P D ( n ) r epr esente d by M + t V wher e t is a smal l enough p ositive numb er, V is symmetric matrix, we have ( M + t V ) 1 = M 1 ( I t VM 1 + o ( t )) Thus E ( M + t V ) = 1 4 tr ( M + t V ) 1 A + B ( M + t V ) n 2 j R j = 1 4 tr M 1 ( I t VM 1 + o ( t )) A + B ( M + t V ) n 2 j R j = 1 4 tr ( M 1 A + BM ) n 2 j R j + 1 4 tr ( BV M 1 VM 1 A ) t + o ( t ) = E ( M ) + 1 4 tr ( BV M 1 AM 1 V ) t + o ( t ) Thus at the critic al p oint, we ne e d tr ( BV M 1 A M 1 V ) = 0 ; 8 sy mmetr ic V (4.16)


This is actually equivalent to M B M = A and can be solved in closed form [57], yielding the desired result in equation (4.15). Since A and B are both SPD matrices, M is also an SPD matrix; thus it is indeed a solution to the minimization problem (4.14).

In some special cases, the tensor field mean value can be computed easily. If the tensor field is constant, T(x) \equiv T_c, then we have A = T_c |R| and B = T_c^{-1} |R|. Substituting into equation (4.15), we have M = T_c. So the mean value of a constant tensor field is the constant itself, which makes perfect sense. Now, if all the tensors in the tensor field are diagonal, T(x) \equiv D(x) = diag\{ d_1(x), ..., d_n(x) \}, we have

    A = \int_R D(x) dx = \int_R diag\{ d_1(x), ..., d_n(x) \} dx = diag\left\{ \int_R d_1(x) dx, ..., \int_R d_n(x) dx \right\}

Similarly,

    B = diag\left\{ \int_R \frac{1}{d_1(x)} dx, ..., \int_R \frac{1}{d_n(x)} dx \right\}

As both A and B are diagonal matrices, their polynomial forms are all diagonal and the multiplications between these terms commute. Then we have

    M = \sqrt{B}^{-1} \sqrt{ \sqrt{B} \, A \, \sqrt{B} } \, \sqrt{B}^{-1}
      = B^{-1/2} B^{1/4} A^{1/2} B^{1/4} B^{-1/2}
      = A^{1/2} B^{-1/2} \; ( = B^{-1/2} A^{1/2} )
      = diag\left( \sqrt{ \frac{ \int_R d_1(x) dx }{ \int_R \frac{1}{d_1(x)} dx } }, ..., \sqrt{ \frac{ \int_R d_n(x) dx }{ \int_R \frac{1}{d_n(x)} dx } } \right)

In general, A and B do not commute (AB \ne BA), so equation (4.15) cannot be simplified further, and one must resort to matrix diagonalization.
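For a discrete tensor field the integrals in A and B become sums, and the closed form (4.15) is a few matrix square roots. The sketch below (with made-up random SPD tensors) computes the mean and verifies the characterizing equation M B M = A and the SPD property:

```python
import numpy as np
from scipy.linalg import sqrtm

def field_mean(tensors):
    """Closed-form mean (4.15) of a discrete SPD tensor field:
    M = sqrt(B)^{-1} sqrt( sqrt(B) A sqrt(B) ) sqrt(B)^{-1},
    with A = sum of T and B = sum of T^{-1}."""
    A = sum(tensors)
    B = sum(np.linalg.inv(T) for T in tensors)
    rB = sqrtm(B)
    rBi = np.linalg.inv(rB)
    # sqrtm may return a complex array with negligible imaginary part
    return np.real(rBi @ sqrtm(rB @ A @ rB) @ rBi)

rng = np.random.default_rng(6)
tensors = []
for _ in range(10):
    W = rng.standard_normal((3, 3))
    tensors.append(W @ W.T + 2 * np.eye(3))   # random SPD "voxels" (illustrative)

M = field_mean(tensors)
A = sum(tensors)
B = sum(np.linalg.inv(T) for T in tensors)
```

The same computation is what makes the piecewise constant active contour model cheap: the region statistics are closed-form, so no inner optimization over M is needed at each curve evolution step.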


4.3 Active Contour Models for Image Segmentation

Active contour models (a.k.a. snakes) have been successfully used in numerous applications, including computer vision, medical imaging and computer graphics. Generally speaking, active contour models deal with hypersurfaces in R^n; they are often referred to as curve evolution models in 2D and surface propagation models in 3D. The basic idea of active contour models is simple: it is the movement of a hypersurface in R^n (representing the boundary of regions) in the direction of the normal, with an application-driven velocity. Level set methods serve as a powerful mathematical tool to analyze and implement the hypersurface evolution equations in active contour models. They involve expressing the geometric hypersurface evolution in an Eulerian form, allowing the use of efficient and robust numerical methods. In addition, topology change of the hypersurface is transparent. The time dependent level set formulation was proposed independently by Dervieux and Thomasset [58] to study multifluid incompressible flows, and by Osher and Sethian [59] to study front propagation. In the context of computer vision, the history of active contour models began with the pioneering work by Kass, Witkin and Terzopoulos [60]. A few years later, Malladi et al. [61] and Caselles et al. [62] independently proposed the seminal idea of modeling image segmentation with geometric snakes in a level set formulation. There are also many other significant contributions to image segmentation using active contour models and level set methods; a brief review can be found in Vemuri et al. [63]. For a complete discussion of general curve evolution and level set methods, the reader is referred to two outstanding books, one by Sethian [64] and the other by Osher and Paragios [65].
In this section, only the basic concepts and methods of 2D active contour models are presented, for the sake of self-containedness.

4.3.1 Snake Energy Functional

Most active contour models aim to minimize a snake energy functional. The minimization of a functional is also called a variational principle and offers several


advantages over other methods in image segmentation. The most significant one is that the incorporation of prior shape information into this framework is straightforward. The vast majority of active contour models can be categorized into two types: edge-based snakes and region-based snakes.

The classical edge-based snake was first proposed by Kass et al. [60] and involves minimizing the following energy functional:

$E(C) = \int_0^1 E_{int}(C(p)) + E_{image}(C(p)) + E_{con}(C(p))\,dp$   (4.17)

where $C$ is a parameterized curve that separates the different regions, $E_{int} = \alpha|C_p|^2 + \beta|C_{pp}|^2$ is the internal energy that measures the length and the stiffness of the curve, $E_{image}$ represents the image force that attracts the curve to locations with large image gradient (such as edges), $E_{con}$ is given by the external constraint forces that usually refine the curve $C$ in the segmentation and is not used in non-interactive segmentation tasks, and $\alpha$ and $\beta$ in the internal energy are control parameters that balance the geometric behavior of the snake.

Though the classical snake solves the problem of linking edges to form a segmentation, it has several disadvantages. The most severe one is that it cannot converge when far away from the true solution. The balloon model introduced by Cohen [66] incorporates an additional constant inflating or shrinking external force, as well as stable numerical techniques, to achieve a larger range of convergence. This constant force was shown to be part of an area energy minimization term by K. Siddiqi et al. [67]. The resulting energy functional is

$E(C) = \int_0^1 E_{int}(C(p)) + E_{image}(C(p))\,dp + \nu\int_{R_i} d\mathbf{x}$   (4.18)

where $R_i$ is the region inside the boundary $C$.

The geometric active contour models were introduced by Malladi et al. [61, 51] and Caselles et al. [62] in the curve evolution context by adding an image stopping


term, and can also be derived as part of a weighted length energy functional [52]:

$E(C) = \int_0^1 g(|\nabla I|)\,|C'(p)|\,dp$   (4.19)

where $g(\cdot)$ is an inverse (decreasing) function of the image gradient magnitude $|\nabla I|$.

The above three types of active contour models are all edge-based, since the stopping criterion in all of them is a function of the image gradient. A more robust type of active contour model is the region-based model, which is based on the Mumford-Shah functional [54]:

$E(f, C) = \nu\int_\Omega (f - g)^2\,d\mathbf{x} + \int_{\Omega\setminus C} |\nabla f|^2\,d\mathbf{x} + \mu|C|$   (4.20)

where $\Omega$ is the image domain, $g$ is the original noisy image data, $f$ is a piecewise smooth approximation of $g$ with discontinuities only along $C$, $|C|$ is the arclength of the curve $C$, and $\mu$ and $\nu$ are control parameters. The first term models the data fidelity, and $\nu$ is inversely proportional to the variance of the noise process; the second term measures the smoothness of the approximation $f$ and can be viewed as a prior model of $f$ given the discontinuity at $C$; the third term aims to remove tiny disconnected regions and keep a smooth object boundary. With these three terms, the Mumford-Shah functional provides an elegant framework for simultaneous image segmentation and smoothing. However, solving the original Mumford-Shah functional is a difficult problem, and numerous methods were proposed to tackle its various approximations. For example, Chan and Vese [55] and Tsai et al. [53] presented a two-stage implementation of (4.20), where the first stage involves, for a fixed $C$, constructing a constant or smooth approximation to the image function inside and outside of $C$, and the second stage evolves $C$ for a fixed $f$.


4.3.2 Curve Evolution

The general form of curve evolution is

$\frac{\partial C(\mathbf{x}, t)}{\partial t} = \alpha(\mathbf{x})T(\mathbf{x}) + \beta(\mathbf{x})N(\mathbf{x})$   (4.21)

where $\alpha$ and $\beta$ are the speeds of the curve along the tangential and normal directions respectively, $T$ is the tangent and $N$ is the unit outward normal to the curve. However, without loss of generality, equation (4.21) can be rewritten as a deformation along the normal only, in the form of the following PDE [68]:

$\frac{\partial C(\mathbf{x}, t)}{\partial t} = F(\mathbf{x})N(\mathbf{x})$   (4.22)

where $F(\mathbf{x}) = \beta(\mathbf{x})$ is the speed of the curve at location $\mathbf{x}$ along the normal, and might depend on various factors like local curvature, global properties of the curve, etc. In the following text we will not explicitly specify the domain of a function when it is clear from the context; for example, we will use $F$ instead of $F(\mathbf{x})$, $N$ instead of $N(\mathbf{x})$, etc. The particular form of $F$ is the focus of much current research and is usually derived from application-related variational principles. With proper design of the speed function, curve evolution can be used in the context of image segmentation, where $F$ is related to the image data and a curve smoothing term.

Though curve evolution can be used directly in various applications, a more natural and elegant way is to derive it from first principles as a gradient flow that minimizes the snake energy functional. For example, the gradient flow that minimizes (4.19) yields [52]

$F = g\kappa + \nabla g \cdot N$

For computing the curve evolution equation from region-based active contour energy functionals, we refer the reader to Aubert et al. [69] for the shape gradient method.
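Equation (4.22) can be illustrated directly with a Lagrangian (marker point) discretization — the very scheme whose drawbacks motivate the level set formulation of the next subsection. A minimal sketch with a constant normal speed (the function name, point count and step size are illustrative):

```python
import numpy as np

def evolve_polygon(pts, F, steps, dt=0.1):
    """Move each vertex of a closed CCW polygon along its outward normal
    with constant speed F: explicit Euler on equation (4.22)."""
    pts = pts.copy()
    for _ in range(steps):
        # central-difference tangent, then rotate by -90 deg to get the
        # outward normal of a counter-clockwise curve
        t = np.roll(pts, -1, axis=0) - np.roll(pts, 1, axis=0)
        n = np.stack([t[:, 1], -t[:, 0]], axis=1)
        n /= np.linalg.norm(n, axis=1, keepdims=True)
        pts += dt * F * n
    return pts
```

With a negative $F$ a circle shrinks uniformly: 20 steps of size 0.1 with $F = -1$ take a circle of radius 10 to radius 8, which is a convenient sanity check on the normal orientation.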


4.3.3 Level Set Formulation

Though traditional techniques for solving (4.22), including the marker/string methods and the volume-of-fluid techniques [64], are useful under certain circumstances, they have quite a few drawbacks. For example, the marker/string methods provide numerical schemes based on parameterized curves and have instability and topological limitations; the volume-of-fluid techniques are not accurate. In contrast, level set methods [59] offer several advantages at the same time:

1. Topological changes as the evolving curves merge and split are transparent in an implicit formulation.
2. Efficient and accurate numerical methods can yield stable solutions when shocks form.
3. Extension from 2D curve evolution to 3D surface propagation is easy.
4. Fast solution techniques such as narrow banding and fast marching can be used for speedup.

The key idea of the level set methods is very simple. Given a higher dimensional function $\phi$ whose zero level set is the curve $C$ (see Fig. 4.2), it is not hard to derive the corresponding update equation for $\phi$:

$\frac{\partial\phi(\mathbf{x}, t)}{\partial t} = F|\nabla\phi|$   (4.23)

The higher dimensional function $\phi$ is usually chosen to be the signed distance function of the curve $C$. In such a case, $|\nabla\phi| = 1$, which ensures numerical stability.

4.4 Piecewise Constant DTI Segmentation Model

Our model for DTI segmentation in $\mathbb{R}^2$ is posed as the minimization of the following variational principle based on the Mumford-Shah functional [54]:

$E(T, C) = \int_\Omega d^2(T(\mathbf{x}), T_0(\mathbf{x}))\,d\mathbf{x} + \alpha\int_{\Omega\setminus C} |\nabla_d T(\mathbf{x})|^2\,d\mathbf{x} + \beta|C|$   (4.24)


Figure 4.2: Evolution of a curve and its corresponding higher dimensional function. (a) An evolving curve with speed $FN$. (b) Higher dimensional function $\phi$ whose zero level set ($\phi(\mathbf{x}) = 0$) is the evolving curve in (a).

where the curve $C$ is the boundary of the desired unknown segmentation, $\Omega \subset \mathbb{R}^2$ is the image domain, $T_0$ is the original noisy tensor field, $T$ is a piecewise smooth approximation of $T_0$ with discontinuities only along $C$, $|C|$ is the arclength of the curve $C$, $\alpha$ and $\beta$ are control parameters, $d(\cdot, \cdot)$ is a measure of the distance between two tensors, and $\nabla_d T(\mathbf{x}) \in \mathbb{R}^2$ is defined by

$\nabla_d T(\mathbf{x}) \cdot \mathbf{v} = \lim_{dt \to 0} \frac{d(T(\mathbf{x}), T(\mathbf{x} + dt\,\mathbf{v}))}{dt}$

where $\mathbf{v} \in \mathbb{R}^2$ is a unit vector. $\nabla_d T$ can be called the gradient of $T$ under the tensor distance $d(\cdot, \cdot)$, and its magnitude is a measure of the smoothness of the tensor field $T$ using $d(\cdot, \cdot)$ as a tensor distance. The extension of the Mumford-Shah functional to 3D is straightforward, replacing the curve $C$ with a surface $S$; the implementation in 3D is similar to that in 2D.

The above variational principle (4.24) will capture piecewise smooth regions while maintaining a smooth boundary; the balance between the smoothness of the DTI in each region and the boundaries is controlled by $\alpha$ and $\beta$. When $\alpha$ is extremely large, equation (4.24) reduces to a simplified form which aims to capture piecewise


constant regions of two types:

$E(C, T_1, T_2) = \int_R d^2(T(\mathbf{x}), T_1)\,d\mathbf{x} + \int_{R^c} d^2(T(\mathbf{x}), T_2)\,d\mathbf{x} + \beta|C|$   (4.25)

where $R$ is the region enclosed by $C$ and $R^c$ is the region outside $C$; $T_1$ and $T_2$ are the mean values of the diffusion tensor field in regions $R$ and $R^c$ respectively. The above model can be viewed as a modification of the active contour model without edges for scalar valued images by Chan and Vese [55]. It can segment tensor fields with two constant regions (each region type, however, can have disconnected parts) in a very efficient way. We incorporate our new tensor "distance" into this active contour model to achieve DTI segmentation.

4.4.1 Curve Evolution Equation

The Euler-Lagrange equation for the variational principle (4.25) is

$\left[\beta\kappa - d^2(T, T_1) + d^2(T, T_2)\right]N = 0, \qquad T_1 = M(T, R),\; T_2 = M(T, R^c)$

where $\kappa$ is the curvature of the curve $C$ and $N$ is the outward normal to the curve. When $T_1$ and $T_2$ are fixed, we have the following curve evolution form of the above equation:

$\frac{\partial C}{\partial t} = \left[\beta\kappa - d^2(T, T_1(t)) + d^2(T, T_2(t))\right]N, \qquad T_1 = M(T, R),\; T_2 = M(T, R^c)$   (4.26)

4.4.2 Level Set Formulation

The curve evolution equation (4.26) can be easily implemented in a level set framework. As shown in section 4.3, we have

$\frac{\partial\phi}{\partial t} = \left[\beta\,\nabla\cdot\left(\frac{\nabla\phi}{|\nabla\phi|}\right) - d^2(T, T_1) + d^2(T, T_2)\right]|\nabla\phi|$   (4.27)


where $\phi$ is the signed distance function of $C$.

4.4.3 Implementation and Numerical Methods

We follow the two-stage implementation described in Chan and Vese [55]; our modified procedure is as follows:

Algorithm 3 Two-Stage Piecewise Constant Segmentation of DTIs
1: Set the initial curve $C_0$ and compute its signed distance function $\phi_0$.
2: Compute $T_1$ and $T_2$ according to equation (4.15).
3: Update the signed distance function $\phi$ according to equation (4.27).
4: Reinitialize $\phi$ using the updated zero level set.
5: Stop if the solution is achieved, else go to step 2.

The key to the computation of $T_1$ and $T_2$ is the computation of the square root of an SPD matrix. We use matrix diagonalization to achieve this: a real symmetric matrix $A$ can be diagonalized as $A = ODO^T$, where $O$ is an orthogonal matrix and $D = \mathrm{diag}\{d_1, d_2, \dots, d_n\}$ is a diagonal matrix. Then the polynomial forms of $A$ are given by $A^p = OD^pO^T$, where $D^p = \mathrm{diag}\{d_1^p, d_2^p, \dots, d_n^p\}$. Note that in equation (4.15), $\sqrt{\sqrt{B}\,A\,\sqrt{B}} \neq B^{\frac{1}{4}}A^{\frac{1}{2}}B^{\frac{1}{4}}$ in general; it has to be computed as follows:

Algorithm 4 Computing $\sqrt{\sqrt{B}\,A\,\sqrt{B}}$
1: Diagonalize $B$ as $O_B D_B O_B^T$.
2: Compute $P = \sqrt{B}$ as $O_B\sqrt{D_B}\,O_B^T$.
3: Compute $Q$ as $PAP$.
4: Diagonalize $Q$ as $O_Q D_Q O_Q^T$.
5: Compute $\sqrt{Q}$ as $O_Q\sqrt{D_Q}\,O_Q^T$.

Equation (4.27) can be easily discretized using an explicit Euler scheme. Assuming the spatial grid size to be 1, the finite differences of the partial


derivatives are

$\delta^x_{i,j} = \tfrac{1}{2}(\phi_{i+1,j} - \phi_{i-1,j}), \qquad \delta^y_{i,j} = \tfrac{1}{2}(\phi_{i,j+1} - \phi_{i,j-1})$
$\delta^{x-}_{i,j} = \phi_{i,j} - \phi_{i-1,j}, \qquad \delta^{y-}_{i,j} = \phi_{i,j} - \phi_{i,j-1}$
$\delta^{x+}_{i,j} = \phi_{i+1,j} - \phi_{i,j}, \qquad \delta^{y+}_{i,j} = \phi_{i,j+1} - \phi_{i,j}$
$\delta^{xx}_{i,j} = \phi_{i+1,j} - 2\phi_{i,j} + \phi_{i-1,j}, \qquad \delta^{yy}_{i,j} = \phi_{i,j+1} - 2\phi_{i,j} + \phi_{i,j-1}$
$\delta^{xy}_{i,j} = \tfrac{1}{4}(\phi_{i+1,j+1} - \phi_{i+1,j-1} - \phi_{i-1,j+1} + \phi_{i-1,j-1})$

In this case, we have the following update equation:

$\frac{\phi^{n+1}_{i,j} - \phi^n_{i,j}}{\Delta t} = \left[\beta\kappa^n_{i,j} - d^2(T_{i,j}, T_1^n) + d^2(T_{i,j}, T_2^n)\right]\sqrt{(\delta^x_{i,j})^2 + (\delta^y_{i,j})^2}$   (4.28)

where the curvature $\kappa^n_{i,j}$ of $\phi^n$ can be computed as

$\kappa^n_{i,j} = \frac{\delta^{yy}_{i,j}(\delta^x_{i,j})^2 - 2\,\delta^x_{i,j}\,\delta^y_{i,j}\,\delta^{xy}_{i,j} + \delta^{xx}_{i,j}(\delta^y_{i,j})^2}{\left[(\delta^x_{i,j})^2 + (\delta^y_{i,j})^2\right]^{3/2}}$

Updating $\phi$ according to equation (4.28) on the whole domain $\Omega$ has a complexity of $O(|\Omega|)$ per iteration and is going to be slow when the domain is huge. As we are most interested in the evolving zero level set, updating only a narrow band around the zero level set is sufficient, and this can be achieved using the narrow band method described in [70, 71]. In order to maintain $\phi$ as a signed distance function of $C$, it is necessary to reinitialize $\phi$; this can also be done only within a narrow band. There are several other efficient numerical schemes that one may employ, for example the multigrid scheme used in Tsai et al. [53]. At this time, our explicit Euler scheme with the narrow band method yielded reasonably fast solutions (3-5 secs. for the 2D synthetic data examples and several minutes for the 3D real DTI examples on a 1 GHz Pentium-3 CPU).
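The curvature expression above can be checked numerically on a signed distance function, for which the curvature of the level set through a point at distance $\rho$ from the center of a circle is $1/\rho$. A sketch using the central differences just listed (array layout and grid spacing of 1 are assumptions of this illustration):

```python
import numpy as np

def curvature(phi, i, j):
    """Curvature of the level set of phi passing through grid point (i, j),
    computed with the central finite differences of equation (4.28)."""
    dx  = 0.5 * (phi[i + 1, j] - phi[i - 1, j])
    dy  = 0.5 * (phi[i, j + 1] - phi[i, j - 1])
    dxx = phi[i + 1, j] - 2.0 * phi[i, j] + phi[i - 1, j]
    dyy = phi[i, j + 1] - 2.0 * phi[i, j] + phi[i, j - 1]
    dxy = 0.25 * (phi[i + 1, j + 1] - phi[i + 1, j - 1]
                  - phi[i - 1, j + 1] + phi[i - 1, j - 1])
    denom = (dx * dx + dy * dy) ** 1.5
    return (dyy * dx * dx - 2.0 * dx * dy * dxy + dxx * dy * dy) / denom
```

On the signed distance function of a circle, a grid point at distance 20 from the center gives a curvature very close to 0.05.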


4.5 Piecewise Smooth DTI Segmentation Model

In certain cases the piecewise constant assumption is violated, and the piecewise smooth model (4.24) has to be employed. Following the curve evolution implementation of the Mumford-Shah functional by Tsai et al. [53], we have the following two-stage scheme. In the smoothing stage, the curve is fixed and a smoothing inside the curve and outside the curve is performed while preserving the discontinuity across the curve. In the curve evolution stage, the inside and outside of the smoothed tensor field are fixed while the curve is allowed to move.

4.5.1 Discontinuity Preserving Smoothing

If the curve is fixed, we have

$E_C(T) = \int_\Omega d^2(T(\mathbf{x}), T_0(\mathbf{x}))\,d\mathbf{x} + \alpha\int_{\Omega\setminus C} |\nabla_d T(\mathbf{x})|^2\,d\mathbf{x}$   (4.29)

For implementation, the above variational principle is discretized as follows:

$E_C(T) = \sum_{\mathbf{x}} d^2(T(\mathbf{x}), T_0(\mathbf{x})) + \alpha\sum_{(\mathbf{x},\mathbf{y}) \in N_C} d^2(T(\mathbf{x}), T(\mathbf{y}))$   (4.30)

where $N_C$ defines a collection of neighboring pixels; if a pair $(\mathbf{x}, \mathbf{y})$ cuts across the boundary, it is excluded from $N_C$.

As we have an energy functional of a tensor field on a discrete grid, we can compute its gradient with respect to this discrete tensor field. A straightforward way to do this is to treat all the independent components of the tensors as the components of a vector and compute the gradient of this energy function with respect to this vector. However, the form of the gradient will not be compact. Instead, we use the derivative of a matrix function $f(A)$ with respect to its matrix variable, defined entrywise:

$\frac{\partial f(A)}{\partial A} = \left[\frac{\partial f(A)}{\partial A(i,j)}\right]$   (4.31)


It is not hard to see that

$\frac{\partial f(A)}{\partial A(i,j)} = \lim_{dt \to 0} \frac{f(A + dt\,E_{ij}) - f(A)}{dt}$   (4.32)

where $E_{ij}$ is a matrix with 1 at location $(i, j)$ and 0 elsewhere. The directional derivative with respect to a perturbation $V$ of $T(\mathbf{x})$ is given by

$\lim_{t \to 0}\frac{E_C(T(\mathbf{x}) + tV) - E_C(T(\mathbf{x}))}{t} = \frac{1}{4}\,\mathrm{tr}\left[\left(T_0^{-1}(\mathbf{x}) - T^{-1}(\mathbf{x})\,T_0(\mathbf{x})\,T^{-1}(\mathbf{x})\right)V\right] + \frac{\alpha}{4}\sum_{\mathbf{y} \in N_C(\mathbf{x})} \mathrm{tr}\left[\left(T^{-1}(\mathbf{y}) - T^{-1}(\mathbf{x})\,T(\mathbf{y})\,T^{-1}(\mathbf{x})\right)V\right] = \frac{1}{4}\,\mathrm{tr}\left[\left(B - T^{-1}(\mathbf{x})\,A\,T^{-1}(\mathbf{x})\right)V\right]$   (4.33)

where

$A = \alpha\sum_{\mathbf{y} \in N_C(\mathbf{x})} T(\mathbf{y}) + T_0(\mathbf{x}), \qquad B = \alpha\sum_{\mathbf{y} \in N_C(\mathbf{x})} T^{-1}(\mathbf{y}) + T_0^{-1}(\mathbf{x})$

From equations (4.32) and (4.33) we have the gradient

$\frac{\partial E_C}{\partial T(\mathbf{x})} = \frac{1}{4}\left(B - T^{-1}(\mathbf{x})\,A\,T^{-1}(\mathbf{x})\right)$   (4.34)

So the minimizer of the variational principle (4.30) satisfies

$B = T^{-1}(\mathbf{x})\,A\,T^{-1}(\mathbf{x})$   (4.35)
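Equation (4.35) can be rewritten as $T(\mathbf{x})\,B\,T(\mathbf{x}) = A$, the same Riccati form as the mean-value equation, so it admits the same closed-form solution as equation (4.15). A sketch of one smoothing update at a pixel under this assumption (the weight `alpha` and the helper names are illustrative):

```python
import numpy as np

def spd_sqrt(S):
    """Square root of an SPD matrix via diagonalization."""
    w, V = np.linalg.eigh(S)
    return V @ np.diag(np.sqrt(w)) @ V.T

def smooth_tensor(T0, neighbors, alpha=1.0):
    """Minimizer of the per-pixel terms of (4.30): solves T B T = A with
    A = alpha*sum T(y) + T0 and B = alpha*sum T(y)^-1 + T0^-1,
    using the closed form of equation (4.15)."""
    A = alpha * sum(neighbors) + T0
    B = alpha * sum(np.linalg.inv(Ty) for Ty in neighbors) + np.linalg.inv(T0)
    P = spd_sqrt(B)
    P_inv = np.linalg.inv(P)
    return P_inv @ spd_sqrt(P @ A @ P) @ P_inv
```

A quick sanity check: when all neighbors equal the data tensor $T_0$, the update leaves the tensor unchanged, as the fixed-point condition requires.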


4.5.2 Curve Evolution Equation and Level Set Formulation

Once the boundary discontinuity preserving smoothed tensor field is fixed, we have

$E_T(C) = \int_R d^2(T_R(\mathbf{x}), T_0(\mathbf{x}))\,d\mathbf{x} + \int_{R^c} d^2(T_{R^c}(\mathbf{x}), T_0(\mathbf{x}))\,d\mathbf{x} + \alpha\int_R |\nabla_d T_R(\mathbf{x})|^2\,d\mathbf{x} + \alpha\int_{R^c} |\nabla_d T_{R^c}(\mathbf{x})|^2\,d\mathbf{x} + \beta|C|$   (4.36)

Thus the curve evolution equation will be

$\frac{\partial C}{\partial t} = \left[-d^2(T_R, T_0) + d^2(T_{R^c}, T_0) + \alpha\left(|\nabla_d T_{R^c}|^2 - |\nabla_d T_R|^2\right) + \beta\kappa\right]N$   (4.37)

Again, for implementation we have

$\frac{\partial C}{\partial t} = \left[-d^2(T_R, T_0) + d^2(T_{R^c}, T_0)\right]N + \alpha\left[\sum_{\mathbf{y} \in N_{R^c}(\mathbf{x})} d^2(T_{R^c}, T_{R^c}(\mathbf{y})) - \sum_{\mathbf{y} \in N_R(\mathbf{x})} d^2(T_R, T_R(\mathbf{y}))\right]N + \beta\kappa N$   (4.38)

The level set formulation of (4.38) is then given by

$\frac{\partial\phi}{\partial t} = \beta\,\nabla\cdot\left(\frac{\nabla\phi}{|\nabla\phi|}\right)|\nabla\phi| + F|\nabla\phi|$   (4.39)

where $\phi$ is the signed distance function of $C$ and the data dependent speed $F$ is given by

$F = -d^2(T_R, T_0) + d^2(T_{R^c}, T_0) + \alpha\left[\sum_{\mathbf{y} \in N_{R^c}(\mathbf{x})} d^2(T_{R^c}, T_{R^c}(\mathbf{y})) - \sum_{\mathbf{y} \in N_R(\mathbf{x})} d^2(T_R, T_R(\mathbf{y}))\right]$
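All the speeds above are driven by squared tensor distances $d^2$. The concrete form sketched below — the symmetrized Kullback-Leibler divergence between the zero-mean Gaussians with covariances $T_1$ and $T_2$, $d^2(T_1, T_2) = \frac{1}{4}\left[\mathrm{tr}(T_1^{-1}T_2 + T_2^{-1}T_1) - 2n\right]$ — is stated here as an assumption, chosen to be consistent with the affine invariance and information-theoretic origin claimed for the dissimilarity in this work:

```python
import numpy as np

def d2(T1, T2):
    """Squared tensor dissimilarity between SPD tensors: the symmetrized
    KL divergence of the associated zero-mean Gaussians (assumed form)."""
    n = T1.shape[0]
    return 0.25 * (np.trace(np.linalg.inv(T1) @ T2)
                   + np.trace(np.linalg.inv(T2) @ T1) - 2.0 * n)
```

This quantity vanishes for $T_1 = T_2$, is symmetric in its arguments, and is invariant under the congruence $T \mapsto WTW^T$ for any invertible $W$, i.e., under affine transformations of the coordinates.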


Algorithm 5 Two-Stage Piecewise Smooth Segmentation for DTI
1: Set the initial curve $C_0$ and compute its signed distance function $\phi_0$.
2: Compute $T_R$ and $T_{R^c}$.
3: Update the level set $\phi$ according to equation (4.39).
4: Reinitialize $\phi$ using the updated zero level set.
5: Stop if the solution is achieved. Otherwise repeat from step 2.

and $T_{R^c}$. As the gradient can be computed, it is easy to design an efficient numerical algorithm to achieve the discontinuity preserving smoothing. Currently we use gradient descent with an adaptive step size, due to its simplicity.
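The adaptive step size mentioned above can be realized with simple backtracking: accept a trial step only if the energy decreases, shrinking the step on rejection and cautiously growing it on success. A generic sketch (the 1D objective in the usage line is purely illustrative):

```python
def adaptive_gradient_descent(energy, grad, x0, step=1.0, iters=100):
    """Gradient descent with a backtracking (adaptive) step size."""
    x, e = x0, energy(x0)
    for _ in range(iters):
        trial = x - step * grad(x)
        e_trial = energy(trial)
        if e_trial < e:          # accept and cautiously grow the step
            x, e = trial, e_trial
            step *= 1.2
        else:                    # reject and shrink the step
            step *= 0.5
    return x

x_min = adaptive_gradient_descent(lambda t: (t - 3.0) ** 2,
                                  lambda t: 2.0 * (t - 3.0), x0=0.0)
```

In the smoothing stage the same loop would run over the discrete tensor field, with `energy` given by (4.30) and `grad` by (4.34).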


4.6 Experimental Results

In this section we present several sets of experiments on the application of our DTI segmentation model. The first is on 2D synthetic data sets, the second on single slices of real DTIs estimated from DWIs, and the last on 3D real DTIs. In the experiments below, if not explicitly stated, the segmentation model used is the piecewise constant model.

4.6.1 Synthetic Tensor Field Segmentation

The purpose of these experiments is to demonstrate the need to use the full information contained in the tensors for segmentation purposes. To this end, we synthesize two tensor fields; both are 2×2 symmetric positive definite matrix valued images on a 128×128 lattice with two homogeneous regions. The two regions in the first tensor field differ only in their orientations, while the two regions in the second tensor field differ only in their scales. These two tensor fields are visualized as ellipses, as shown in Fig. 4.3(a) and Fig. 4.4(a) respectively. Each ellipse's axes correspond to the eigenvector directions of the diffusion tensor and are scaled by the eigenvalues. With an arbitrary initialization of the geometric active contour, our proposed model yields high quality segmentation results, as shown in Figs. 4.3 and 4.4. The evolving boundaries of the segmentation are shown as curves in red. Note that the first tensor field cannot be segmented using only the scalar anisotropic properties of the tensors, as in [36], and the second tensor field cannot be segmented using only the dominant eigenvectors of the tensors. These two examples show that one must use the full information contained in the tensors to achieve quality segmentation. As our model is a region based segmentation method, it is more resistant to noise than gradient-based snakes. Figures 4.5 and 4.6 depict the segmentation process for synthetic noisy tensor fields; the results are very satisfactory.
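A field of the first kind can be generated as follows (the sizes, eigenvalues and region shape are illustrative, not the exact ones used in the experiments): both regions share the eigenvalues {2, 1} — and hence identical scalar anisotropy — and differ only by a rotation of the eigenvectors.

```python
import numpy as np

def rotated_tensor(theta, evals=(2.0, 1.0)):
    """2x2 SPD tensor with fixed eigenvalues and eigenvectors at angle theta."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    return R @ np.diag(evals) @ R.T

def two_region_field(n=128):
    """Two homogeneous regions differing only in orientation (not scale)."""
    field = np.empty((n, n, 2, 2))
    inner = rotated_tensor(np.pi / 4)
    outer = rotated_tensor(0.0)
    for i in range(n):
        for j in range(n):
            inside = (n // 4 <= i < 3 * n // 4) and (n // 4 <= j < 3 * n // 4)
            field[i, j] = inner if inside else outer
    return field
```

Any segmentation driven only by eigenvalue-based anisotropy measures is blind to the boundary of this field, since the per-pixel eigenvalues are identical everywhere.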


4.6.2 Single Slice DTI Segmentation

Figure 4.7 shows a slice of the DTI estimated from the DWIs of a normal rat spinal cord. Each of the six independent components of the individual symmetric positive definite diffusion tensors in the DTI is shown as a scalar image in the top row. The arrangement of the components from left to right is $D_{xx}$, $D_{yy}$, $D_{zz}$, $D_{xy}$, $D_{yz}$ and $D_{xz}$. The off-diagonal terms are greatly enhanced by brightness and contrast factors for better visualization. An ellipsoid visualization of the same slice as the top row is shown in Fig. 4.7(g). Each ellipsoid's axes correspond to the eigenvector directions of the 3×3 diffusion tensor and are scaled by the eigenvalues, and its color from blue to red shows the lowest to highest degree of anisotropy. For example, diffusion tensors in the free water region are shown as large round blue ellipsoids. Figure 4.8 demonstrates the separation of the gray matter and white matter inside the normal rat spinal cord, with the evolving curve in red superimposed on the ellipsoid visualization of the DTI. Similarly, Fig. 4.9 shows a slice of the DTI of a normal rat brain in the region of the corpus callosum, and Fig. 4.10 depicts the segmentation procedure for the corpus callosum. In the final step, the essential part of the corpus callosum is captured by the proposed piecewise constant segmentation model. To further capture the bending horns of the corpus callosum, we use the segmentation result of the piecewise constant model as the initialization and apply the piecewise smooth model. The result, shown in Fig. 4.11, is a much better refinement. In all the above experiments, we exclude the free water regions, which are not of interest in the biological context.

4.6.3 DTI Segmentation in 3D

Now we demonstrate some segmentation results for 3D DTI. Figure 4.12 depicts the segmentation procedure for a normal rat spinal cord DTI of size 108×108×10.
Figure 4.12(a)-(d) clearly depicts the surface evolution in 3D, and Fig. 4.12(e)-(h) depicts the intersection of the propagating surface in (a)-(d) with a slice of the $D_{xx}$ component of the DTI. The separation of the gray matter and white matter inside


the normal rat spinal cord is achieved with ease. Similarly, Fig. 4.13(a)-(h) shows the segmentation procedure for a normal rat brain DTI of size 114×108×12. In addition, intersections of the final 3D segmentation with different slices of the $D_{xx}$ component of the DTI are shown in Fig. 4.13(i)-(l). It is evident that a majority of the corpus callosum inside this volume is captured. Again, we exclude the free water regions, which are not of interest in the biological context.


Figure 4.3: Segmentation of a synthetic tensor field where the two regions differ only in their orientations. (b), (c) and (d) are the initial, intermediate and final steps of the curve evolution process in the segmentation.


Figure 4.4: Segmentation of a synthetic tensor field where the two regions differ only in scale. (b), (c) and (d) are the initial, intermediate and final steps of the curve evolution process in the segmentation.


Figure 4.5: Segmentation of a synthetic noisy tensor field with two roughly homogeneous regions that differ only in their orientations. (b)-(d) are the initial, intermediate and final steps of the curve evolution process for the segmentation of (a).


Figure 4.6: Segmentation of a synthetic noisy tensor field with two roughly homogeneous regions that differ only in scale. (b)-(d) are the initial, intermediate and final steps of the curve evolution process for the segmentation of (a).


Figure 4.7: A slice of the DTI of a normal rat spinal cord. (a)-(f): viewed channel by channel ($D_{xx}$, $D_{yy}$, $D_{zz}$, $D_{xy}$, $D_{yz}$, $D_{xz}$) as gray scale images. (g): viewed using ellipsoids.


Figure 4.8: Segmentation of the slice of the DTI shown in Fig. 4.7. (a)-(d): initial, intermediate and final steps in separating the gray and white matter inside the rat spinal cord.


Figure 4.9: A slice of the DTI of a normal rat brain. (a)-(f): viewed channel by channel ($D_{xx}$, $D_{yy}$, $D_{zz}$, $D_{xy}$, $D_{yz}$, $D_{xz}$) as gray scale images. (g): viewed using ellipsoids.


Figure 4.10: Segmentation of the corpus callosum from the slice of the DTI shown in Fig. 4.9. (a)-(d): initial, intermediate and final steps in segmenting the corpus callosum.


Figure 4.11: Segmentation of the corpus callosum from the slice of the DTI in Fig. 4.9 using the piecewise smooth model. (a)-(b): initial and final steps in separating the corpus callosum.

Figure 4.12: 3D segmentation of the DTI of a normal rat spinal cord. (a)-(d): initial, intermediate and final steps in separating the gray and white matter inside the rat spinal cord. (e)-(h): a 2D slice of the corresponding evolving segmentation in (a)-(d) superimposed on the $D_{xx}$ component.


Figure 4.13: 3D segmentation of the corpus callosum from the DTI of a normal rat brain. (a)-(d): initial, intermediate and final steps in separating the corpus callosum. (e)-(h): a 2D slice of the corresponding evolving 3D segmentation in (a)-(d) superimposed on the $D_{xx}$ component. (i)-(l): different 2D slices of the final segmentation superimposed on the $D_{xx}$ component.


CHAPTER 5
CONCLUSIONS

5.1 Optimal b Values

We first proposed a weighted linear estimator to analyze the statistical properties of a nonlinear estimator of the ADC. This allowed us to derive a closed form expression to approximate the variance of the nonlinear estimator; our method was later validated by simulation experiments. Next, we showed that the ground truth ADC is often a distribution instead of a simple interval of values or a single value, and proposed to use the weighted COV as a criterion for determining the optimal diffusion weighting factors. Application of this approach to human brains using a uniform distribution yielded results compatible with previous work, though ours are more precise as we use a more accurate numerical integration step. In addition, we applied our approach to the case of a normal rat brain with the real distributions obtained from segmented regions of gray and white matter, and computed the optimal diffusion weighting factors, which are not provided in any of the existing literature.

5.2 DTI Restoration

We presented a novel constrained variational principle formulation for simultaneous smoothing and estimation of the positive definite diffusion tensor field from complex diffusion weighted images (DWI). To our knowledge, this is the first attempt at simultaneous smoothing and estimation of the positive definite diffusion tensor field from complex DWI data. We used the Cholesky decomposition to incorporate the positive definiteness constraint on the diffusion tensor to be estimated. The constrained variational principle formulation is transformed into a sequence of


unconstrained problems using the augmented Lagrangian technique and solved numerically. A proof of the existence of a solution for the minimization problem posed in the augmented Lagrangian framework is presented. Results of comparison between our method and a representative [22] of the competing schemes are shown for synthetic data under a variety of situations involving the use of linearized and nonlinear data acquisition models, depicting the influence of the choice of the data acquisition model on the estimation. It was concluded that using the complex nonlinear data model yields better accuracy in comparison to the log-linearized model. Also, superior performance of our method in estimating the tensor field over the chosen competing method was demonstrated for the synthetic data experiment. The estimated diffusion tensors are quite smooth, without loss of essential features, when inspected visually via ellipsoid visualization as well as principal diffusion direction (PDD) field visualization. The superior performance is borne out via a quantitative statistical comparison of the angle between the estimated PDD and the ground truth PDD.
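The Cholesky-based positivity constraint summarized above can be sketched as follows: parameterizing the tensor as $D = LL^T$ with $L$ lower triangular guarantees that any unconstrained update of the entries of $L$ yields a symmetric positive definite $D$. The specific construction below (exponentiated diagonal, hypothetical names) is an illustration of the idea, not the dissertation's exact implementation:

```python
import numpy as np

def tensor_from_cholesky(params):
    """Build a 3x3 SPD tensor D = L L^T from 6 unconstrained parameters.
    The diagonal of L is exponentiated so that D is positive definite for
    any real-valued params (an illustrative choice of parameterization)."""
    a, b, c, d, e, f = params
    L = np.array([[np.exp(a), 0.0,       0.0],
                  [d,         np.exp(b), 0.0],
                  [e,         f,         np.exp(c)]])
    return L @ L.T
```

An optimizer can then move freely in the 6-dimensional parameter space without ever leaving the SPD cone, which is exactly the property the constrained variational formulation exploits.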
Additional quantitative validation may be performed by comparing the estimated fiber tracts from the smooth tensor field obtained here to those obtained from histology, as was done in our earlier work [6]; this will be the focus of our future efforts. Though the presented work focuses on diffusion tensor imaging, the constrained variational principle and the applied numerical methods can be easily tailored to processing high angular resolution diffusion imaging (HARDI) data and other image


datasets. Our ongoing efforts are in fact focused on the application of the constrained variational principle formulation presented here to HARDI data sets. Three significant advantages of our approach will carry over to any application of our method: (i) a unified variational framework for direct smoothing and estimation, (ii) no ad hoc methods for choosing the smoothing control parameters, and (iii) efficient numerical methods applied in solving the nonlinear large scale problem. Finally, in the context of tensor field estimation, the Cholesky decomposition method employed here to maintain the positive definiteness of the diffusion tensor provides an easy and effective way to satisfy such a constraint if required in other tensor field processing applications.

5.3 DTI Segmentation

We presented a novel tensor field segmentation method by incorporating a new discriminant for tensors into the region based active contour models [55, 53]. The particular discriminant we employed is based on information theory, which offers several advantages: it naturally follows from the physical phenomenon of diffusion, is affine invariant, and is computationally tractable. The computational tractability follows from a theorem that we proved, which allows for the computation of the mean of the tensor field in closed form. By using a discriminant on tensors, as opposed to either the eigenvalues or the eigenvectors of these tensors, we make full use of all the information contained in the tensors. The proposed model is then implemented in a level set framework to take advantage of the ease with which this framework handles topology changes when desired. Our approach was applied to synthetic and real diffusion tensor field segmentation.
The experimental results are very promising: the essential parts of the regions are well captured and topological changes are handled naturally. The tensor "distance" we used is not a true distance, as it violates the triangle inequality. We could use a true tensor distance (e.g., Rao's distance [72]), defined as the geodesic distance between Gaussian distributions in an information


geometry framework. However, the computational difficulties associated with such a tensor distance in the context of segmentation remain to be investigated. The active contour model can be extended by incorporating shape statistics to improve robustness in situations where the object of interest is partially occluded. In addition, we can apply similar ideas to other tensor fields, such as the structure tensor field for texture image segmentation. These will be our future directions of research.


REFERENCES

[1] P. J. Basser, J. Mattiello, and D. LeBihan, "Estimation of the effective self-diffusion tensor from the NMR spin echo," J. Magn. Reson., vol. 103, no. 3, pp. 247-254, 1994.

[2] E. M. Haacke, R. W. Brown, M. R. Thompson, and R. Venkatesan, Magnetic Resonance Imaging: Physical Principles and Sequence Design. Springer-Verlag Telos, 1999.

[3] K. A. Johnson and J. A. Becker, "The whole brain atlas," 1999, http://www.med.harvard.edu/AANLIB/home.html

[4] P. J. Basser and C. Pierpaoli, "Microstructural and physiological features of tissue elucidated by quantitative diffusion-tensor MRI," J. Magn. Reson., vol. 111, no. 3, pp. 209-219, 1996.

[5] T. McGraw, B. C. Vemuri, Y. Chen, M. Rao, and T. H. Mareci, "DT-MRI denoising and neuronal fiber tracking," Med. Image Anal., vol. 8, no. 2, pp. 95-111, 2004.

[6] B. C. Vemuri, Y. Chen, M. Rao, Z. Wang, T. McGraw, T. H. Mareci, S. J. Blackband, and P. Reier, "Automatic fiber tractography from DTI and its validation," in IEEE International Symposium on Biomedical Imaging, 2002, pp. 501-504.

[7] D. Xing, N. G. Papadakis, C. L. Huang, V. M. Lee, T. A. Carpenter, and L. D. Hall, "Optimized diffusion-weighting for measurements of apparent diffusion coefficient (ADC) in human brain," Magn. Reson. Imaging, vol. 15, no. 7, pp. 771-784, 1997.

[8] P. A. Armitage and M. E. Bastin, "Utilising the diffusion-to-noise ratio to optimise magnetic resonance diffusion tensor acquisition strategies for improving measurements of diffusion anisotropy," Magn. Reson. Med., vol. 45, no. 6, pp. 1056-1065, 2001.

[9] C. Pierpaoli and P. J. Basser, "Toward a quantitative assessment of diffusion anisotropy," Magn. Reson. Med., vol. 36, pp. 893-906, 1996.

[10] P. J. Basser and S. Pajevic, "Statistical artifacts in diffusion tensor MRI caused by background noise," Magn. Reson. Med., vol. 44, pp. 41-50, 2000.

[11] A. W. Anderson, "Theoretical analysis of the effects of noise on diffusion tensor imaging," Magn. Reson. Med., vol. 46, no. 6, pp. 1174-1188, 2001.


[12] O. Brihuega-Moreno, F. P. Heese, and L. D. Hall, "Optimization of diffusion measurements using Cramer-Rao lower bound theory and its application to articular cartilage," Magn. Reson. Med., vol. 50, no. 5, pp. 1069–1076, 2003.
[13] G. J. M. Parker, J. A. Schnabel, M. R. Symms, D. J. Werring, and G. J. Barker, "Nonlinear smoothing for reduction of systematic and random errors in diffusion tensor imaging," J. Magn. Reson. Imaging, vol. 11, no. 6, pp. 702–710, 2000.
[14] B. C. Vemuri, Y. Chen, M. Rao, T. McGraw, Z. Wang, and T. H. Mareci, "Fiber tract mapping from diffusion tensor MRI," in Proc. IEEE Workshop on Variational and Level Set Methods in Computer Vision, 2001, pp. 81–88.
[15] L. Alvarez, P.-L. Lions, and J.-M. Morel, "Image selective smoothing and edge detection by nonlinear diffusion. II," SIAM J. Numer. Anal., vol. 29, no. 3, pp. 845–866, Jun. 1992.
[16] T. F. Chan, G. Golub, and P. Mulet, "A nonlinear primal-dual method for TV-based image restoration," in Proc. 12th Intl. Conf. Analysis and Optimization of Systems: Images, Wavelets, and PDE's, 1996, pp. 241–252.
[17] P. Perona and J. Malik, "Scale-space and edge detection using anisotropic diffusion," IEEE Trans. Pattern Anal. Machine Intell., vol. 12, no. 7, pp. 629–639, 1990.
[18] L. I. Rudin, S. Osher, and E. Fatemi, "Nonlinear total variation based noise removal algorithms," Physica D, vol. 60, pp. 259–268, 1992.
[19] J. Weickert, "A review of nonlinear diffusion filtering," in Proc. First Intl. Conf. Scale-Space Theory in Computer Vision, ser. LNCS, B. M. ter Haar Romeny, L. Florack, J. J. Koenderink, and M. A. Viergever, Eds., vol. 1252. Springer, 1997, pp. 3–28.
[20] V. Caselles, J. M. Morel, G. Sapiro, and A. Tannenbaum, IEEE Trans. Image Processing, Special Issue on PDEs and Geometry-Driven Diffusion in Image Processing and Analysis, vol. 7, no. 3, 1998.
[21] C. Poupon, J. F. Mangin, C. A. Clark, V. Frouin, J. Regis, D. Le Bihan, and I. Bloch, "Towards inference of human brain connectivity from MR diffusion tensor data," Med. Image Anal., vol. 5, no. 1, pp. 1–15, Mar. 2001.
[22] O. Coulon, D. C. Alexander, and S. R. Arridge, "A regularization scheme for diffusion tensor magnetic resonance images," in Proc. Information Processing in Medical Imaging, ser. LNCS, M. F. Insana and R. M. Leahy, Eds., vol. 2082. Springer, 2001, pp. 92–105.
[23] B. Tang, G. Sapiro, and V. Caselles, "Diffusion of general data on non-flat manifolds via harmonic maps theory: the direction diffusion case," Int. J. Comput. Vision, vol. 36, no. 2, pp. 149–161, 2000.


[24] D. Tschumperlé and R. Deriche, "Orthonormal vector sets regularization with PDE's and applications," Int. J. Comput. Vision, vol. 50, no. 3, pp. 237–252, Dec. 2002.
[25] R. Kimmel, R. Malladi, and N. A. Sochen, "Images as embedded maps and minimal surfaces: movies, color, texture, and volumetric medical images," Int. J. Comput. Vision, vol. 39, no. 2, pp. 111–129, 2000.
[26] P. Perona, "Orientation diffusions," IEEE Trans. Image Processing, vol. 7, no. 3, pp. 457–467, 1998.
[27] J. Weickert, "Diffusion and regularization methods for tensor-valued images," in First SIAM-EMS Conf. Appl. Math. in Our Changing World, 2001.
[28] C. Chefd'hotel, O. Faugeras, D. Tschumperlé, and R. Deriche, "Constrained flows of matrix-valued functions: application to diffusion tensor regularization," in Proc. 7th Europ. Conf. on Comput. Vision, ser. LNCS, A. Heyden, G. Sparr, M. Nielsen, and P. Johansen, Eds., vol. 2351. Springer, 2002, pp. 251–265.
[29] C. Chefd'hotel, D. Tschumperlé, R. Deriche, and O. Faugeras, "Regularizing flows for constrained matrix-valued images," J. Math. Imaging and Vision, vol. 20, no. 1–2, pp. 147–162, 2004.
[30] Z. Wang, B. C. Vemuri, Y. Chen, and T. H. Mareci, "Simultaneous smoothing and estimation of the tensor field from diffusion tensor MRI," in IEEE Comp. Soc. Conf. on Comput. Vision and Patt. Recog., vol. I. IEEE Computer Society Press, 2003, pp. 461–466.
[31] Z. Wang, B. C. Vemuri, Y. Chen, and T. H. Mareci, "A constrained variational principle for direct estimation and smoothing of the diffusion tensor field from DWI," in Proc. Information Processing in Medical Imaging, ser. LNCS, C. J. Taylor and J. A. Noble, Eds., vol. 2732. Springer, 2003, pp. 660–671.
[32] D. Tschumperlé and R. Deriche, "Variational frameworks for DT-MRI estimation, regularization and visualization," in Proc. 9th Intl. Conf. on Comput. Vision, vol. 2. IEEE Computer Society Press, 2003, pp. 116–121.
[33] D. C. Alexander, J. C. Gee, and R. Bajcsy, "Similarity measures for matching diffusion tensor images," in Proc. BMVC, T. P. Pridmore and D. Elliman, Eds. British Machine Vision Association, 1999, pp. 93–102.
[34] E. G. Miller and C. Chefd'hotel, "Practical non-parametric density estimation on a transformation group for vision," in IEEE Comp. Soc. Conf. on Comput. Vision and Patt. Recog., vol. II. IEEE Computer Society Press, 2003, pp. 114–121.
[35] K. Tsuda, S. Akaho, and K. Asai, "The EM algorithm for kernel matrix completion with auxiliary data," J. Mach. Learn. Res., vol. 4, pp. 67–81, 2003.


[36] L. Zhukov, K. Museth, D. Breen, R. Whitaker, and A. Barr, "Level set modeling and segmentation of DT-MRI brain data," J. Electron. Imaging, vol. 12, no. 1, pp. 125–133, 2003.
[37] Z. Wang and B. C. Vemuri, "Tensor field segmentation using region based active contour model," in Proc. of the 8th Europ. Conf. on Comput. Vision (4), ser. LNCS, T. Pajdla and J. Matas, Eds., vol. 3024. Springer, 2004, pp. 304–315.
[38] C. Feddern, J. Weickert, and B. Burgeth, "Level-set methods for tensor-valued images," in Proc. 2nd IEEE Workshop on Variational, Geometric and Level Set Methods in Computer Vision, 2003, pp. 65–72.
[39] M. Rousson, C. Lenglet, and R. Deriche, "Level set and region based surface propagation for diffusion tensor MRI segmentation," in Proc. Computer Vision Approaches to Medical Image Analysis, 2004.
[40] Z. Wang and B. C. Vemuri, "An affine invariant tensor dissimilarity measure and its applications to tensor-valued image segmentation," in IEEE Comp. Soc. Conf. on Comput. Vision and Patt. Recog. IEEE Computer Society Press, 2004, in press.
[41] C. Lenglet, M. Rousson, and R. Deriche, "Toward segmentation of 3D probability density fields by surface evolution: Application to diffusion MRI," in Seventh Intl. Conf. Medical Image Computing and Computer-Assisted Intervention, 2004, in press.
[42] J. Sijbers, "Signal and noise estimation from magnetic resonance images," Ph.D. dissertation, Universitaire Instelling Antwerpen, Departement Natuurkunde, May 1998.
[43] J. Nocedal and S. J. Wright, Numerical Optimization. Springer, 2000.
[44] Y. Bito, S. Hirata, and E. Yamamoto, "Optimum gradient factors for apparent diffusion coefficient measurements," in Book of Abstracts: SMR/ESMRMB Joint Meeting, vol. 2, Berkeley, 1995, p. 913.
[45] C. F. Westin, S. E. Maier, H. Mamata, A. Nabavi, F. A. Jolesz, and R. Kikinis, "Processing and visualization for diffusion tensor MRI," Med. Image Anal., vol. 6, no. 2, pp. 93–108, 2002.
[46] B. W. Lindgren, Statistical Theory. Chapman & Hall/CRC, 1993.
[47] G. A. F. Seber, Linear Regression Analysis. John Wiley & Sons, 1977.
[48] P. Blomgren, T. F. Chan, and P. Mulet, "Extensions to total variation denoising," UCLA, USA, Tech. Rep. 97-42, Sept. 1997.
[49] L. C. Evans, Partial Differential Equations. American Mathematical Society, 1997.


[50] T. F. Chan and J. Shen, "Variational restoration of non-flat image features: models and algorithms," SIAM J. Appl. Math., vol. 61, no. 4, pp. 1338–1361, 2000.
[51] R. Malladi, J. A. Sethian, and B. C. Vemuri, "Shape modeling with front propagation: A level set approach," IEEE Trans. Pattern Anal. Machine Intell., vol. 17, no. 2, pp. 158–175, 1995.
[52] V. Caselles, R. Kimmel, and G. Sapiro, "Geodesic active contours," in Proc. 5th Intl. Conf. on Comput. Vision. IEEE Computer Society Press, 1995, pp. 694–699.
[53] A. Tsai, A. Yezzi, Jr., and A. S. Willsky, "Curve evolution implementation of the Mumford-Shah functional for image segmentation, denoising, interpolation, and magnification," IEEE Trans. Image Processing, vol. 10, no. 8, pp. 1169–1186, Aug. 2001.
[54] D. Mumford and J. Shah, "Optimal approximations by piecewise smooth functions and associated variational problems," Comm. on Pure and Appl. Math., vol. 42, no. 5, pp. 577–685, 1989.
[55] T. F. Chan and L. A. Vese, "Active contours without edges," IEEE Trans. Image Processing, vol. 10, no. 2, pp. 266–277, 2001.
[56] S. Amari, "Information geometry on hierarchy of probability distributions," IEEE Trans. Inform. Theory, vol. 47, no. 5, pp. 1701–1711, 2001.
[57] T. Ando, "Concavity of certain maps on positive definite matrices and applications to Hadamard products," Linear Alg. Appl., no. 26, pp. 203–241, 1979.
[58] A. Dervieux and F. Thomasset, "Multifluid incompressible flows by a finite element method," in Seventh International Conference on Numerical Methods in Fluid Dynamics, ser. LNP, W. Reynolds and R. W. MacCormack, Eds., vol. 141. Springer-Verlag, 1980, pp. 158–163.
[59] S. Osher and J. A. Sethian, "Fronts propagating with curvature dependent speed: algorithms based on Hamilton-Jacobi formulations," J. Comput. Phys., vol. 79, pp. 12–49, 1988.
[60] M. Kass, A. Witkin, and D. Terzopoulos, "Snakes: Active contour models," Int. J. Comput. Vision, vol. 1, no. 4, pp. 321–331, 1988.
[61] R. Malladi, J. A. Sethian, and B. C. Vemuri, "A topology independent shape modeling scheme," in Proc. SPIE Conf. on Geometric Methods in Computer Vision II, vol. 2031, Jul. 1993, pp. 246–256.
[62] V. Caselles, F. Catte, T. Coll, and F. Dibos, "A geometric model for active contours in image processing," Numerische Mathematik, vol. 66, pp. 1–31, 1993.


[63] B. C. Vemuri, Y. Guo, and Z. Wang, "Deformable pedal curves and surfaces: Hybrid geometric active models for shape recovery," Int. J. Comput. Vision, vol. 44, pp. 137–155, Sept. 2001.
[64] J. A. Sethian, Level Set Methods: Evolving Interfaces in Geometry, Fluid Mechanics, Computer Vision, and Materials Science. Cambridge University Press, 1996.
[65] S. Osher and N. Paragios, Geometric Level Set Methods in Imaging, Vision and Graphics. Springer Verlag, 2002.
[66] L. D. Cohen, "On active contour models and balloons," CVGIP: Image Underst., vol. 53, no. 2, pp. 211–218, 1991.
[67] K. Siddiqi, Y. Lauziere, A. Tannenbaum, and S. Zucker, "Area and length minimizing flows for shape segmentation," IEEE Trans. Image Processing, vol. 7, no. 3, pp. 433–444, 1998.
[68] M. Gage, "On an area-preserving evolution equation for plane curves," Contemp. Math., vol. 51, pp. 51–62, 1986.
[69] G. Aubert, M. Barlaud, O. Faugeras, and S. Jehan-Besson, "Image segmentation using active contours: calculus of variations or shape gradients?" I3S Laboratory, France, Tech. Rep. RR-2002-18-FR, 2002.
[70] D. Adalsteinsson and J. A. Sethian, "A fast level set method for propagating interfaces," J. Comput. Phys., vol. 118, no. 2, pp. 269–277, 1995.
[71] R. Malladi, J. A. Sethian, and B. C. Vemuri, "A fast level set based algorithm for topology-independent shape modeling," J. Math. Imaging and Vision, vol. 6, no. 2/3, pp. 269–289, 1996.
[72] C. Atkinson and A. F. S. Mitchell, "Rao's distance measure," Sankhya: The Indian Journal of Statistics, no. 43, pp. 345–365, 1981.


BIOGRAPHICAL SKETCH

Zhizhou Wang was born in Wu Wei, AnHui, P. R. China. He received his Bachelor of Science degree from the Special Class for Gifted Young at the University of Science and Technology of China, P. R. China, in 1996. He earned his Master of Science degree from the Institute of Software, Chinese Academy of Science, in 1999. He received his Doctor of Philosophy degree in computer engineering from the University of Florida, Gainesville, in August 2004. His research interests include medical imaging, computer vision, scientific visualization and computer graphics.