Group Title: Department of Computer and Information Science and Engineering Technical Reports
Title: A constrained variational principle for direct estimation and smoothing of the diffusion tensor field from DWI
Permanent Link: http://ufdc.ufl.edu/UF00095589/00001
 Material Information
Title: A constrained variational principle for direct estimation and smoothing of the diffusion tensor field from DWI
Series Title: Department of Computer and Information Science and Engineering Technical Report ; 03-004
Physical Description: Book
Language: English
Creator: Wang, Z.
Vemuri, B. C.
Chen, Y.
Mareci, T.
Publisher: Department of Computer and Information Science and Engineering, University of Florida
Place of Publication: Gainesville, Fla.
Copyright Date: 2003
 Record Information
Bibliographic ID: UF00095589
Volume ID: VID00001
Source Institution: University of Florida
Holding Location: University of Florida
Rights Management: All rights reserved by the source institution and holding location.

A Constrained Variational Principle for Direct Estimation and Smoothing of the Diffusion Tensor Field from DWI *


Z. Wang^1, B. C. Vemuri^1, Y. Chen^2 and T. Mareci^3

1 Department of Computer & Information Sciences & Engr., University of Florida, Gainesville, FL 32611
2 Department of Mathematics, University of Florida, Gainesville, FL 32611
3 Department of Biochemistry, University of Florida, Gainesville, FL 32610



Abstract. In this paper, we present a novel constrained variational principle for simultaneous smoothing and estimation of the diffusion tensor field from diffusion weighted imaging (DWI). The constrained variational principle involves the minimization of a regularization term in an L^p norm, subject to a nonlinear inequality constraint on the data. The data term we employ is the original Stejskal-Tanner equation instead of the linearized version usually employed in the literature. The original nonlinear form leads to a more accurate (when compared to the linearized form) estimated tensor field. The inequality constraint requires that the nonlinear least squares data term be bounded from above by a possibly known tolerance factor. Finally, in order to accommodate the positive definite constraint on the diffusion tensor, it is expressed in terms of its Cholesky factors and estimated. The constrained variational principle is solved using the augmented Lagrangian technique in conjunction with the limited memory quasi-Newton method. Both synthetic and real data experiments are shown to depict the performance of the tensor field estimation algorithm. Fiber tracts in a rat brain are then mapped using a particle system based visualization technique.


Index Terms: Diffusion Tensor MRI, Stejskal-Tanner equation, nonlinear programming, augmented Lagrangian method, limited memory quasi-Newton.


1 Introduction

Diffusion is a process of movement of molecules as a result of random thermal agitation and, in our context, refers specifically to the random translational motion of water molecules in the part of the anatomy being imaged with MR. In three dimensions, water diffusivity can be described by a 3 x 3 matrix D called the diffusion tensor, which is intimately related to the geometry and organization of the microscopic environment.

* This research was supported in part by the grant NIH R01-NS42075. Manuscript submitted to IPMI'03, also a Tech. Report, CISE-TR-03-004.
The general principle is that water diffuses preferentially along ordered tissues, e.g., the brain white matter.
Diffusion tensor MRI is a relatively new MR imaging modality from which the anisotropy of water diffusion can be inferred quantitatively [2], thus providing a method to study the tissue microstructure, e.g., white matter connectivity in the brain, in vivo. The diffusion weighted echo intensity image S_l and the diffusion tensor D are related through the Stejskal-Tanner equation [2] as given by:

S_l = S_0 \, e^{-b_l : D} = S_0 \, e^{-\sum_{i=1}^{3}\sum_{j=1}^{3} b_{l,ij} D_{ij}}    (1)

where b_l is the diffusion weighting of the l-th magnetic gradient and ":" denotes the generalized inner product for matrices.
Taking the log on both sides of equation (1) yields the following transformed linear Stejskal-Tanner equation:

\log(S_l) = \log(S_0) - b_l : D = \log(S_0) - \sum_{i=1}^{3}\sum_{j=1}^{3} b_{l,ij} D_{ij}    (2)
Given several (at least seven) non-collinear diffusion weighted intensity measurements, D can be estimated via multivariate regression models from either of the above two equations. Diffusion anisotropy can then be computed to show microstructural and physiological features of tissues [3]. Especially in highly organized nerve tissue, like white matter, the diffusion tensor provides a complete characterization of the restricted motion of water through the tissue that can be used to infer fiber tracts. The development of diffusion tensor acquisition, processing, and analysis methods provides the framework for creating fiber tract maps based on this complete diffusion tensor analysis [8, 11].
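For illustration, the log-linearized estimation based on equation (2) amounts to an ordinary least-squares fit per voxel. The following minimal sketch in Python/NumPy is our own illustration, not code from this report; the function name fit_tensor_lls and the argument layout are assumptions.

```python
import numpy as np

def fit_tensor_lls(signals, bmats):
    """Log-linear least-squares fit of one voxel's diffusion tensor.

    signals : (N,) measured intensities S_l (must be > 0)
    bmats   : (N, 3, 3) diffusion weighting matrices b_l
    Returns (S0, D) with D the symmetric 3x3 tensor estimate.
    """
    N = len(signals)
    # Design matrix for log(S_l) = log(S0) - sum_ij b_{l,ij} D_ij  (eq. 2).
    # Unknowns: [log S0, Dxx, Dyy, Dzz, Dxy, Dxz, Dyz].
    A = np.zeros((N, 7))
    A[:, 0] = 1.0
    A[:, 1] = -bmats[:, 0, 0]
    A[:, 2] = -bmats[:, 1, 1]
    A[:, 3] = -bmats[:, 2, 2]
    A[:, 4] = -2.0 * bmats[:, 0, 1]   # off-diagonal terms appear twice in b:D
    A[:, 5] = -2.0 * bmats[:, 0, 2]
    A[:, 6] = -2.0 * bmats[:, 1, 2]
    y = np.log(signals)
    x, *_ = np.linalg.lstsq(A, y, rcond=None)
    S0 = np.exp(x[0])
    dxx, dyy, dzz, dxy, dxz, dyz = x[1:]
    D = np.array([[dxx, dxy, dxz],
                  [dxy, dyy, dyz],
                  [dxz, dyz, dzz]])
    return S0, D
```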
For automatic fiber tract mapping, the diffusion tensor field must be smoothed without losing relevant features. Currently there are two popular approaches. One involves smoothing the raw data S_l while preserving relevant detail and then estimating the diffusion tensor D from the smoothed raw data (Parker et al. [16, 23]). The raw data in this context consist of several diffusion weighted images acquired for varying magnetic field strengths and directions. Note that at least seven values at each 3D grid point in the data domain are required to estimate the six unknowns in the symmetric 2-tensor D and one scale parameter S_0. The raw data smoothing or de-noising can be formulated using variational principles, which in turn require the solution of PDEs, or at times directly using PDEs which are not necessarily arrived at from variational principles (see [17, 1, 24, 20, 7] and others in [6]).
Another approach to restoring the diffusion tensor field is to smooth the principal diffusion direction after the diffusion tensor has been estimated from the raw noisy measurements S_l. In Poupon et al. [19], an energy function based on a Markovian model was used to regularize the noisy dominant eigenvector field computed directly from the noisy estimates of D obtained from the measurements S_l using the linearized Stejskal-Tanner equation (2). Coulon et al. [9] proposed an iterative restoration scheme for the principal diffusion direction based on the direction map restoration work reported in [21]. Other sophisticated vector field restoration methods [22, 12, 18] can potentially be applied to the problem of restoring the dominant eigenvector fields computed from the noisy estimates of D. Recently, Chefd'Hotel et al. [5] presented an elegant geometric solution to the problem of smoothing a noisy D that was computed from S_l using the log-linearized model (2) described above. They assume that the given (computed) tensor field D from S_l is positive definite and develop a clever approach based on the differential geometry of manifolds to achieve constrained smoothing, where the smoothed tensor field is constrained to be positive semi-definite. Interesting results of mapped fibers are shown for human brain MRI.

We propose a novel formulation of diffusion tensor field estimation and smoothing as a constrained optimization problem. The specific approach we use is called the augmented Lagrangian technique, which allows one to deal with inequality constraints. The novelty of our formulation lies in the ability to directly, in a single step process, estimate a smooth D from the noisy measurements S_l with the preservation of the positiveness constraint on D. The formulation does not require any ad hoc methods of setting parameter values to achieve the solution. These are the key features distinguishing our solution method from methods reported in the literature to date.

In contrast to our solution (to be described subsequently in detail), most of the earlier approaches used a two step method involving (i) computation of a D from S_l using a linear least-squares approach and then (ii) computation of a smoothed D via either smoothing of the eigenvalues and eigenvectors of D or using the matrix flows approach in [5]. The problem with the two step approach to computing D is that the estimated D in the first step using the log-linearized model need not be positive definite or even semi-definite. Moreover, it is hard to trust the fidelity of the eigenvalues and eigenvectors computed from such matrices even if they are smoothed subsequently prior to mapping out the nerve fiber tracts. Also, the noise model used in the log-linearized scheme is not consistent with the physics.

Briefly, our model seeks to minimize a cost function involving the sum of an L^p norm of the gradient of the diffusion tensor D, whose positiveness is ensured via a Cholesky factorization D = LL^T, and an L^p norm of the gradient of S_0, subject to a nonlinear data constraint based on the original (not linearized) Stejskal-Tanner equation (1). The model is posed as a constrained variational principle which can be minimized by discretizing either the variational principle itself or the associated Euler-Lagrange equation. We choose the former and use the augmented Lagrangian method together with the limited memory quasi-Newton method to achieve the solution.

The rest of the paper is organized as follows: in section 2, the detailed variational formulation is described along with the nonlinear data constraint, the positive definite constraint and the augmented Lagrangian solution. Section 3 contains the detailed description of the discretization as well as the algorithmic description of the augmented Lagrangian framework. In section 4, we present experiments on the application of our model to synthetic as well as real data. Synthetic data experiments are conducted to present a comparison of tensor field restoration results with the recently presented work of Coulon et al. [9]. Moreover, results of a comparison between the use of the linearized Stejskal-Tanner model and the nonlinear form of the same are presented as well.


2 Constrained Variational Principle Formulation

Our solution to the recovery of a piecewise smooth diffusion tensor field from the measurements S_l is posed as a constrained variational principle. We seek to minimize a measure of lack of smoothness in the diffusion tensor D being estimated, using the L^p norm of its gradient. This measure is then constrained by a nonlinear data fidelity term related to the original Stejskal-Tanner equation (1). The nonlinear data term enters through an inequality which requires that it be bounded from above by a possibly known tolerance factor. The positiveness constraint on the diffusion tensor being estimated is achieved via the use of the Cholesky factorization theorem from computational linear algebra. The constrained variational principle is discretized and posed using the augmented Lagrangian technique [14]. The augmented Lagrangian is then solved using the limited memory quasi-Newton scheme. The novelty of our formulation lies in the unified framework for recovering and smoothing of the tensor field from the data S_l. In addition, to our knowledge, this is the first formulation which allows for simultaneous estimation and smoothing of D, as well as one in which the regularization parameter is not set in an ad hoc way. The approach presented here describes a principled way to determine the regularization parameter.
Let S_0(X) be the response intensity when no diffusion-encoding gradient is present, D(X) the unknown symmetric positive definite tensor, LL^T(X) the Cholesky factorization of the diffusion tensor with L a lower triangular matrix, S_l, l = 1, ..., N the response intensity image measured after application of a magnetic gradient of known strength and direction, and N the total number of intensity images, each corresponding to a direction and strength of the applied magnetic gradient.

\min_{S_0, L} S(S_0, L) = \int_\Omega \left( |\nabla S_0(X)|^p + |\nabla L(X)|^p \right) dX

subject to \quad C(S_0, L) = \alpha\sigma^2 - \int_\Omega \sum_{l=1}^{N} \left( S_l - S_0 \, e^{-b_l : LL^T} \right)^2 dX \geq 0    (3)

where Ω is the image domain, b_l is the diffusion weighting of the l-th magnetic gradient, and ":" is the generalized inner product of matrices. The first and the second terms in the variational principle are L^p smoothness constraints on S_0 and L respectively, where p > 12/7 for S_0 and p >= 1 for L. |\nabla L|^p = \sum_{d \in \mathcal{D}} |\nabla L_d|^p, where d ∈ 𝒟 = {xx, yy, zz, xy, yz, xz} indexes the six nonzero components of L. The lower bounds on the value of p are chosen so as to make the proof of existence of a solution for a subproblem of this minimization (see section 2.4) mathematically tractable. α is a constant scale factor and σ² is the noise variance in the measurements S_l.


2.1 The Nonlinear Data Constraint

The Stejskal-Tanner equation (1) shows the relation between the diffusion weighted echo intensity image S_l and the diffusion tensor D. However, multivariate linear regression based on equation (2) has commonly been used to estimate the diffusion tensor D [2]. It was pointed out in [2] that these results agree with nonlinear regression based on the original Stejskal-Tanner equation (1). However, if the signal to noise ratio (SNR) is low and the number of intensity images S_l is not very large (unlike in [2], where N = 315 or N = 294), the result from multivariate linear regression will differ significantly from the nonlinear regression. A robust estimator belonging to the M-estimator family was used by Poupon et al. [19]; however, its performance is not discussed in detail. In Westin et al. [25], an analytical solution is derived from equation (2) by using a dual tensor basis; however, it should be noted that this can only be used for computing the tensor D when there is no noise in the measurements S_l or the SNR is extremely high.
Our aim is to provide an accurate estimation of the diffusion tensor D for practical clinical use, where the SNR may not be high and the total number of intensity images N is restricted to a moderate number. The nonlinear data fidelity term based on the original Stejskal-Tanner equation (1) is fully justified for use in such situations. This nonlinear data term is part of an inequality constraint that imposes an upper bound on the closeness of the measurements S_l to the mathematical model S_0 e^{-b_l : LL^T}. The bound ασ² may be estimated automatically from the measurements using any variance estimation method from the literature [13].
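Reference [13] is a general statistics text and the report does not prescribe a particular estimator. As one plausible choice (an assumption on our part, not the authors' prescription), σ² can be estimated robustly from the residuals of an initial fit via the median absolute deviation:

```python
import numpy as np

def estimate_noise_variance(residuals):
    """Robust estimate of sigma^2 from per-voxel fit residuals using the
    median absolute deviation (MAD); assumes approximately Gaussian noise,
    for which sigma ~ 1.4826 * MAD."""
    r = np.asarray(residuals).ravel()
    mad = np.median(np.abs(r - np.median(r)))
    sigma = 1.4826 * mad
    return sigma ** 2
```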


2.2 The L^p Smoothness Constraint

In Blomgren et al. [4], it was shown that the L^p smoothness constraint does not admit discontinuous solutions, as the TV-norm does, when p > 1. However, when p is chosen close to 1, its behavior is close to that of the TV-norm for restoring edges. In our constrained model, we need p > 12/7 for regularizing S_0 and p >= 1 for L to ensure existence of the solution. Note that what is of importance here is the estimation of the diffusion tensor D; therefore, the edge-preserving property in the estimation process is more relevant for D than for S_0. Hence, we choose an appropriate p for S_0 and D permitted by the theorem below. In our experiments, we choose p = 1.705 for S_0 and p = 1.00 for L.


2.3 The Positive Definite Constraint

In general, a matrix A in R^{n x n} is said to be positive definite if x^T A x > 0 for all x != 0 in R^n. The diffusion tensor D happens to be a positive definite matrix, but due to the noise in the data S_l, it is hard to recover a D that retains this property unless one includes it explicitly as a constraint. One way to impose this constraint is using the Cholesky factorization theorem, which states that if A is a symmetric positive definite matrix then there exists a unique factorization A = LL^T, where L is a lower triangular matrix with positive diagonal elements. After the Cholesky factorization, we have transferred the inequality constraint on the matrix D to an inequality constraint on the diagonal elements of L. This is, however, still hard to satisfy theoretically because the set on which the minimization takes place is an open set. However, in practice, with finite precision arithmetic, testing for positive definiteness is equivalent to testing for positive semi-definiteness. To answer the question of positive semi-definiteness, a stable method should yield a positive response even for nearby symmetric matrices, i.e., for D + E with ||E|| <= ε||D||, where ε is a small multiple of the machine precision. Because, with an arbitrarily small perturbation, a semi-definite matrix can become definite, it follows that in finite precision arithmetic testing for definiteness is equivalent to testing for semi-definiteness. Thus, we repose the positive definiteness constraint on the diffusion tensor matrix as x^T D x >= 0, which is satisfied when D = LL^T.
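A minimal sketch of this Cholesky parameterization (our Python/NumPy illustration; function names are hypothetical) maps the six free entries of L to D = LL^T, so any iterate produced by the optimizer is symmetric positive semi-definite by construction:

```python
import numpy as np

def tensor_from_cholesky(l_params):
    """Map the six free entries of a lower triangular factor
    L = [[l0, 0, 0], [l1, l2, 0], [l3, l4, l5]] to D = L L^T.
    Any real values of l0..l5 yield a symmetric positive
    semi-definite D; positive diagonal entries give a definite D."""
    l0, l1, l2, l3, l4, l5 = l_params
    L = np.array([[l0, 0.0, 0.0],
                  [l1, l2,  0.0],
                  [l3, l4,  l5]])
    return L @ L.T

# Example: eigenvalues of the resulting tensor are never negative.
D = tensor_from_cholesky([1.0, 0.2, 0.8, -0.1, 0.3, 0.5])
assert np.all(np.linalg.eigvalsh(D) >= -1e-12)
```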


2.4 Comments on Existence of the Solution

Consider the augmented Lagrangian formulation which serves as a subproblem of (3):

\min_{(S_0, L) \in A} E(S_0, L; \lambda, \mu) = S(S_0, L) - \lambda C(S_0, L) + \frac{1}{2\mu} C^2(S_0, L)    (4)

where A = {(S_0, L) : L_d ∈ BV(Ω), L_d ∈ L²(Ω) for d ∈ 𝒟, and S_0 ∈ W^{1,p}(Ω), p > 12/7}. Here Ω ⊂ R³, BV(Ω) denotes the space of functions of bounded variation on the domain Ω, L²(Ω) is the space of square integrable functions on Ω, and W^{1,p}(Ω) denotes the Sobolev space of order p on Ω [10].


Theorem 1. Suppose S_l ∈ L⁴(Ω); then the augmented Lagrangian formulation (4) has a solution (S_0, L) ∈ A.

Proof Outline:
We can prove the following:
- Lower semi-continuity of the first term S(S_0, L) in (4).
- Continuity of the second term C(S_0, L) in (4) for S_0 ∈ W^{1,p}(Ω) when p > 6/5.
- Continuity of the third term C²(S_0, L) in (4) for S_0 ∈ W^{1,p}(Ω) when p > 12/7.

Thus, if (S_0^{(n)}, L^{(n)}) is a minimizing sequence, then it has a subsequence (S_0^{(n_k)}, L^{(n_k)}) where L^{(n_k)} converges weakly in BV(Ω) and S_0^{(n_k)} converges weakly in W^{1,p}(Ω) when p > 12/7. From the compact embedding theorem [10], L^{(n_k)} converges strongly in L² a.e. (almost everywhere) on Ω. Similarly, S_0^{(n_k)} converges strongly in L⁴ a.e. on Ω. Thus the minimizing sequence has a convergent subsequence, and its limit is the solution of the minimization problem (4) [10].
Finding a solution of the constrained variational principle (3) involves solving a sequence of problems of the form (4) with fixed λ and μ at each stage. This is much more difficult than dealing with the problems of recovering and smoothing separately. However, there are benefits to posing the problem in this constrained unified framework, namely, one does not accumulate the errors from a two stage process. Moreover, this framework incorporates the nonlinear data term, which is more appropriate for the low SNR values prevalent when b is high. Also, the noise model is correct for the nonlinear data model, unlike the log-linearized case. Lastly, in the constrained formulation, it is now possible to pose mathematical questions of existence and uniqueness of the solution, which was not possible in earlier formulations reported in the literature.


3 Numerical Methods

We discretize the constrained variational principle (3), transform it into a sequence of unconstrained problems by using the augmented Lagrangian method, and then employ the limited memory quasi-Newton technique [14] to solve them. Let

R_{l,ijk} = S_{l,ijk} - S_{0,ijk} \, e^{-b_l : L_{ijk} L_{ijk}^T}

|\nabla S_0|_{ijk} = \left[ (\Delta_x^+ S_0)^2 + (\Delta_y^+ S_0)^2 + (\Delta_z^+ S_0)^2 + \epsilon \right]^{1/2}_{ijk}

|\nabla L_d|_{ijk} = \left[ (\Delta_x^+ L_d)^2 + (\Delta_y^+ L_d)^2 + (\Delta_z^+ L_d)^2 + \epsilon \right]^{1/2}_{ijk}, \quad d \in \mathcal{D}    (5)

|\nabla L|^p_{ijk} = \sum_{d \in \mathcal{D}} |\nabla L_d|^p_{ijk}

where \Delta_x^+, \Delta_y^+ and \Delta_z^+ are forward difference operators and ε is a small positive number used to avoid singularities of the L^p norm when p < 2. Now the discretized constrained variational principle can be written as:

\min_{S_0, L} S(S_0, L) = \sum_{i,j,k} \left( |\nabla S_0|^p_{ijk} + |\nabla L|^p_{ijk} \right)

subject to \quad C(S_0, L) = \alpha\sigma^2 - \sum_{i,j,k} \sum_{l=1}^{N} R^2_{l,ijk} \geq 0    (6)
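A direct transcription of the discretized quantities in (5)-(6) might look as follows (our sketch; the array shapes, function names, and default parameter values are assumptions):

```python
import numpy as np

def forward_diff_mag(f, eps=1e-6):
    """|grad f| at each voxel via forward differences (eq. 5),
    with eps avoiding the singularity of the L^p norm for p < 2."""
    dx = np.diff(f, axis=0, append=f[-1:, :, :])
    dy = np.diff(f, axis=1, append=f[:, -1:, :])
    dz = np.diff(f, axis=2, append=f[:, :, -1:])
    return np.sqrt(dx**2 + dy**2 + dz**2 + eps)

def discrete_energy(S0, L_fields, p_s0=1.705, p_L=1.0):
    """S(S0, L) in (6): sum over voxels of |grad S0|^p plus sum_d |grad L_d|^p."""
    e = np.sum(forward_diff_mag(S0) ** p_s0)
    for Ld in L_fields:                      # six lower-triangular components
        e += np.sum(forward_diff_mag(Ld) ** p_L)
    return e

def data_constraint(S0, D, signals, bmats, alpha_sigma2):
    """C(S0, L) in (6): alpha*sigma^2 minus the summed squared residuals
    R_{l,ijk} of the nonlinear Stejskal-Tanner model (D = L L^T per voxel).
    signals: (N, X, Y, Z); D: (X, Y, Z, 3, 3); bmats: (N, 3, 3)."""
    model = S0[None] * np.exp(-np.einsum('lab,xyzab->lxyz', bmats, D))
    return alpha_sigma2 - np.sum((signals - model) ** 2)
```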
The above problem is now posed using the augmented Lagrangian method, in which a sequence of related unconstrained subproblems is solved and the limit of these solutions is the solution to (6). Following the description in [14], the k-th subproblem of (6) is given by:

\min_{S_0, L, s} E(S_0, L, s; \lambda_k, \mu_k) = S(S_0, L) - \lambda_k \left( C(S_0, L) - s \right) + \frac{1}{2\mu_k} \left( C(S_0, L) - s \right)^2    (7)

where s >= 0 is a slack variable, and μ_k, λ_k are the penalty parameter and the Lagrange multiplier for the k-th subproblem respectively.
One can explicitly compute the slack variable s at the minimum from

s = \max\left( C(S_0, L) - \mu_k \lambda_k, \; 0 \right)

and substitute it in (7) to get an equivalent subproblem in (S_0, L) given by:

\min_{S_0, L} E(S_0, L; \lambda_k, \mu_k) = \begin{cases} S(S_0, L) - \lambda_k C(S_0, L) + \frac{1}{2\mu_k} C^2(S_0, L), & \text{if } C(S_0, L) \leq \mu_k \lambda_k \\ S(S_0, L) - \frac{\mu_k \lambda_k^2}{2}, & \text{otherwise} \end{cases}    (8)

The following algorithm summarizes the procedure to find the solution of (6):

Initialize S_0^{(0)}, L^{(0)} using nonlinear regression; choose initial μ_0 and λ_0.
for k = 1, 2, ...
    Find an approximate minimizer S_0^{(k)}, L^{(k)} of E(·, ·; λ_k, μ_k), starting from S_0^{(k-1)}, L^{(k-1)};
    if the final convergence test is satisfied
        STOP with approximate solution S_0^{(k)}, L^{(k)};
    Update the Lagrange multiplier using λ_{k+1} = max(λ_k - C(S_0, L)/μ_k, 0);
    Choose a new penalty parameter μ_{k+1} = μ_k / 2;
    Set the starting point for the next iteration to S_0^{(k)}, L^{(k)};
endfor

Due to the large number of unknown variables in the minimization, we solve each subproblem using the limited memory quasi-Newton technique. Quasi-Newton methods compute an approximate Hessian matrix at each iteration of the optimization by using only first derivative information. In limited-memory BFGS (Broyden-Fletcher-Goldfarb-Shanno), the search direction is computed without storing the approximated Hessian matrix. Details can be found in Nocedal et al. [14].
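To make the interplay of the outer augmented Lagrangian loop and the limited memory quasi-Newton inner solver concrete, the following sketch (our illustration, not the authors' implementation) applies SciPy's L-BFGS-B to the piecewise objective (8), with gradients approximated by finite differences for brevity; energy_S and constraint_C are assumed to be supplied by the user.

```python
import numpy as np
from scipy.optimize import minimize

def augmented_lagrangian(x0, energy_S, constraint_C,
                         lam0=0.0, mu0=1.0, n_outer=10, tol=1e-4):
    """Outer augmented Lagrangian loop for minimizing S subject to C >= 0;
    each subproblem (8) is minimized with limited memory BFGS."""
    x, lam, mu = np.asarray(x0, float).copy(), lam0, mu0
    for _ in range(n_outer):
        def EA(z):
            S, C = energy_S(z), constraint_C(z)
            if C <= mu * lam:                 # first branch of eq. (8)
                return S - lam * C + C ** 2 / (2.0 * mu)
            return S - mu * lam ** 2 / 2.0    # second branch of eq. (8)
        res = minimize(EA, x, method='L-BFGS-B')
        x_new = res.x
        if np.linalg.norm(x_new - x) < tol:   # simple convergence test
            return x_new
        x = x_new
        lam = max(lam - constraint_C(x) / mu, 0.0)  # multiplier update
        mu = mu / 2.0                               # shrink penalty parameter
    return x
```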


4 Experimental Results

In this section, we present two sets of experiments on the application of our smoothing and tensor estimation model. One is on synthetic data sets and the other is on a real data set consisting of DWI acquired from a normal rat brain.
We synthesized an anisotropic tensor field on a 3D lattice of size 32x32x8. The volume consists of two homogeneous regions with the following values for S_0 and D:

Region 1: S_0 = 10.00, D = 0.001 x [0.9697 1.7513 0.8423 0.0 0.0 0.0]
Region 2: S_0 = 8.33,  D = 0.001 x [1.5559 1.1651 0.8423 0.3384 0.0 0.0]

where the tensor D is listed as [d_xx, d_yy, d_zz, d_xy, d_xz, d_yz]. The dominant eigenvector of the first region is along the y axis, while that of the second region lies in the xy plane and is inclined at 60 degrees to the y axis.
The diffusion weighted images S_l are generated using the Stejskal-Tanner equation at each voxel X as given by:

S_l(X) = S_0(X) \, e^{-b_l : D(X)} + n(X), \quad n(X) \sim N(0, \sigma_N)    (9)

where N(0, σ_N) is zero mean Gaussian noise with standard deviation σ_N. We choose the 7 commonly used gradient configurations ([25]) and 3 different field strengths in each direction for the b_l values.
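A minimal sketch of this synthetic data generation (our Python/NumPy illustration; the seven directions shown are a plausible scheme and not necessarily the exact configuration of [25]) is:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_b_matrices(b_values):
    """Build b_l = b * g g^T for a 7-direction gradient scheme
    (one zero gradient plus six unit directions; an assumption here)."""
    dirs = np.array([[0, 0, 0],
                     [1, 0, 0], [0, 1, 0], [0, 0, 1],
                     [1, 1, 0], [0, 1, 1], [1, 0, 1]], float)
    dirs[1:] /= np.linalg.norm(dirs[1:], axis=1, keepdims=True)
    return np.array([b * np.outer(g, g) for b in b_values for g in dirs])

def synthesize_dwi(S0, D, bmats, sigma_n):
    """Generate noisy measurements for one voxel per equation (9)."""
    clean = S0 * np.exp(-np.einsum('lij,ij->l', bmats, D))
    return clean + rng.normal(0.0, sigma_n, size=clean.shape)
```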
Figure 1 shows the results for a synthetic data set with σ_N = 1.5. We display the dominant eigenvectors computed from the original and the restored diffusion tensor fields using the following methods: (i) Linear: linear regression from (2) as in [2]; (ii) Nonlinear: nonlinear regression from (1); (iii) Linear + EVS (eigenvector smoothing): linear regression followed by the dominant eigenvector smoothing method described in Coulon et al. [9]; (iv) Nonlinear + EVS: nonlinear regression plus the smoothing as in (iii); and (v) our method. It is evident from this figure that our new model yields very good estimates of the dominant eigenvector field. The method in Coulon et al., however, does not work well at voxels where the estimated dominant eigenvectors are almost orthogonal to the ones in their neighborhoods. In such cases, Coulon et al.'s method treats them as discontinuities and does not smooth them. Though it is possible to treat these locations as outliers in Coulon et al.'s method, it is difficult to set a reasonable criterion. Further quantitative measures (described below) also show the superiority of our model.


Fig. 1. A slice of the results for the synthetic data with σ_N = 1.5: The top left image is the dominant eigenvector computed from the original tensor field, and the other images, arranged from left to right and top to bottom, are the dominant eigenvectors computed from the estimated tensor field using the following methods: linear regression, nonlinear regression, linear + EVS, nonlinear + EVS and our model.


To quantitatively assess the proposed model, we compare the accuracy of the dominant eigenvector computed from the previously mentioned methods. Let θ be the angle (in degrees) between the dominant eigenvector of the estimated diffusion tensor field and the dominant eigenvector of the original tensor field. Table 1 shows the mean (μ_θ) and standard deviation (σ_θ) of θ for the different methods applied to the synthetic data with different levels of additive Gaussian noise. A better method is one that yields smaller values. From this table, we can see that our model yields lower values than all other methods under various noise levels. It is also clear from this table that methods using the original nonlinear Stejskal-Tanner equation (1) are more accurate than those using the linearized one (2). The advantage of our method and the nonlinear approaches is more apparent when the noise level is higher, which supports our discussion in section 2.1.
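The error measure θ can be computed, for example, as follows (our sketch; taking the absolute value of the dot product removes the sign ambiguity of eigenvectors):

```python
import numpy as np

def dominant_eigvec(D):
    """Unit eigenvector of the largest eigenvalue of a symmetric 3x3 tensor."""
    w, v = np.linalg.eigh(D)          # eigenvalues in ascending order
    return v[:, -1]

def angle_error_deg(D_est, D_true):
    """Angle (degrees) between dominant eigenvectors, sign-invariant."""
    e1, e2 = dominant_eigvec(D_est), dominant_eigvec(D_true)
    c = np.clip(abs(np.dot(e1, e2)), -1.0, 1.0)
    return np.degrees(np.arccos(c))
```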



σ_N = 0.5
        Linear   Nonlinear   Linear+EVS   Nonlinear+EVS   Our method
μ_θ     10.00    8.22        1.76         1.46            0.76
σ_θ      7.29    5.90        2.38         1.44            1.17

σ_N = 1.0
        Linear   Nonlinear   Linear+EVS   Nonlinear+EVS   Our method
μ_θ     22.69    18.85       6.87         4.75            2.19
σ_θ     17.77    15.15       14.59        10.64           2.52

σ_N = 1.5
        Linear   Nonlinear   Linear+EVS   Nonlinear+EVS   Our method
μ_θ     34.10    30.26       16.29        12.50           6.47
σ_θ     23.11    22.22       24.37        22.19           9.58

Table 1. Comparison of the accuracy of the estimated dominant eigenvectors using different methods for different noise levels.






The normal rat brain data we used here consist of 21 diffusion weighted images measured using the same configuration of b_l as in the previous example; each image is a 128x128x78 volume. We extract 10 slices in the region of interest, namely the corpus callosum, for our experiment. Figure 2(b) depicts images of the six independent components of the estimated diffusion tensor, the computed FA, the trace(D) and S_0 (echo intensity without applied gradient) obtained using our proposed model. As a comparison, figure 2(a) shows the same images computed using linear least squares fitting based on the linearized Stejskal-Tanner equation from the raw data. For display purposes, we use the same brightness and contrast enhancement for the corresponding images in the two figures. The effectiveness of edge preservation in our method is clearly evident in the off-diagonal components of D. In addition, fiber tracts were estimated as integral curves of the dominant eigenvector field of the estimated D and are visualized using the particle systems technique (Pang et al. [15]). The mapped fiber tracts are found to follow the expected tracts quite well from a neuroanatomical perspective, as shown in figure 3.
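As an aside on how such integral curves can be traced, a very simple fixed-step tracker (our illustration only; the report uses the particle-system technique of Pang et al. [15], which is not reproduced here) is:

```python
import numpy as np

def track_fiber(tensor_field, seed, step=0.5, n_steps=200):
    """Trace an integral curve of the dominant eigenvector field by Euler
    integration with nearest-neighbor lookup and sign consistency.
    tensor_field: (X, Y, Z, 3, 3) array of estimated tensors."""
    pos = np.asarray(seed, float)
    prev_dir = None
    path = [pos.copy()]
    shape = tensor_field.shape[:3]
    for _ in range(n_steps):
        idx = tuple(np.clip(np.round(pos).astype(int), 0, np.array(shape) - 1))
        w, v = np.linalg.eigh(tensor_field[idx])
        direction = v[:, -1]                       # dominant eigenvector
        if prev_dir is not None and np.dot(direction, prev_dir) < 0:
            direction = -direction                 # keep consistent orientation
        pos = pos + step * direction
        if np.any(pos < 0) or np.any(pos >= shape):
            break                                  # left the volume
        path.append(pos.copy())
        prev_dir = direction
    return np.array(path)
```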

In the above presented results, what is to be noted is that we have demonstrated a proof of concept for the proposed simultaneous recovery and smoothing of D in the case of the synthetic data and the normal rat brain respectively. The quality of the results obtained for the normal rat brain is reasonably satisfactory for visual inspection purposes; however, intensive quantitative validation of the mapped fibers needs to be performed and will be the focus of our future efforts.

(a) (b)
Fig. 2. (a) Results for the normal rat brain estimated using multivariate linear regression without smoothing. (b) Results for the normal rat brain estimated using our proposed method. Both (a) and (b) are arranged as follows: first row, left to right: D_xx, D_xy and D_xz; second row, left to right: S_0, D_yy and D_yz; third row, left to right: FA, <D> and D_zz.

Fig. 3. Rat brain fiber tracts in and around the corpus callosum visualized using particle systems superimposed on the S_0 image. The particles are shown in bright orange. Left: an intermediate frame of an animation sequence depicting fiber growth; Right: the last frame of the same sequence from a different viewpoint. [This is a color figure]

















5 Conclusions

We presented a novel constrained variational principle formulation for simultaneous smoothing and estimation of the positive definite diffusion tensor field from diffusion weighted images (DWI). To our knowledge, this is the first attempt at simultaneous smoothing and estimation of the positive definite diffusion tensor field from the raw data. We used the Cholesky decomposition to incorporate the positive definiteness constraint on the diffusion tensor to be estimated. The constrained variational principle formulation is transformed into a sequence of unconstrained problems using the augmented Lagrangian technique and solved numerically. A proof of the existence of a solution for each problem in the sequence of unconstrained problems is outlined.
Results of a comparison between our method and a competing scheme [9] are shown for synthetic data under different situations involving the use of linearized and nonlinear data acquisition models, depicting the influence of the choice of the data acquisition model on the estimation. It was concluded that using the nonlinear data model yields better accuracy in comparison to the log-linearized model. Also, the superiority of our method in estimating the tensor field over the chosen competing method was demonstrated in the synthetic data experiment.
Finally, fiber tract maps of a normal rat brain were depicted using a particle system based visualization scheme. The estimated diffusion tensors are quite smooth without losing essential features when inspected visually. However, quantitative validation of the estimated fibers is essential and will be the focus of our future efforts.


Acknowledgments

The authors would like to thank Timothy McGraw for assistance in generating the fibers and Evren Ozarslan for providing the rat brain DWI and for discussing the details of the physics of imaging.


References

1. L. Alvarez, P. L. Lions, and J. M. Morel, "Image selective smoothing and edge detection by nonlinear diffusion. II," SIAM J. Numer. Anal., vol. 29, no. 3, pp. 845-866, June 1992.
2. P. J. Basser, J. Mattiello and D. LeBihan, "Estimation of the Effective Self-Diffusion Tensor from the NMR Spin Echo," J. Magn. Reson., series B 103, pp. 247-254, 1994.
3. P. J. Basser and C. Pierpaoli, "Microstructural and Physiological Features of Tissue Elucidated by Quantitative-Diffusion-Tensor MRI," J. Magn. Reson., series B 111, pp. 209-219, 1996.
4. P. Blomgren, T. F. Chan and P. Mulet, "Extensions to Total Variation Denoising," Tech. Rep. 97-42, UCLA, September 1997.
5. C. Chefd'hotel, D. Tschumperlé, R. Deriche and O. D. Faugeras, "Constrained Flows of Matrix-Valued Functions: Application to Diffusion Tensor Regularization," ECCV, Vol. 1, pp. 251-265, 2002.
6. V. Caselles, J. M. Morel, G. Sapiro, and A. Tannenbaum, IEEE TIP, special issue on PDEs and geometry-driven diffusion in image processing and analysis, Vol. 7, No. 3, 1998.
7. T. F. Chan, G. Golub, and P. Mulet, "A nonlinear primal-dual method for TV-based image restoration," in Proc. 12th Int. Conf. Analysis and Optimization of Systems: Images, Wavelets, and PDE's, Paris, France, June 26-28, 1996, M. Berger et al., Eds., no. 219, pp. 241-252.
8. T. E. Conturo, N. F. Lori, T. S. Cull, E. Akbudak, A. Z. Snyder, J. S. Shimony, R. C. McKinstry, H. Burton, and M. E. Raichle, "Tracking neuronal fiber pathways in the living human brain," in Proc. Natl. Acad. Sci. USA 96, 10422-10427 (1999).
9. O. Coulon, D. C. Alexander and S. R. Arridge, "A Regularization Scheme for Diffusion Tensor Magnetic Resonance Images," IPMI, 2001, pp. 92-105, Springer-Verlag.
10. L. C. Evans, "Partial Differential Equations," Graduate Studies in Mathematics, American Mathematical Society, 1997.
11. D. K. Jones, A. Simmons, S. C. R. Williams, and M. A. Horsfield, "Non-invasive assessment of axonal fiber connectivity in the human brain via diffusion tensor MRI," Magn. Reson. Med., 42, 37-41 (1999).
12. R. Kimmel, R. Malladi and N. A. Sochen, "Images as Embedded Maps and Minimal Surfaces: Movies, Color, Texture, and Volumetric Medical Images," IJCV, 39(2), pp. 111-129, 2000.
13. B. W. Lindgren, Statistical Theory, Chapman & Hall/CRC, 1993.
14. J. Nocedal and S. J. Wright, Numerical Optimization, Springer, 2000.
15. A. Pang and K. Smith, "Spray Rendering: Visualization Using Smart Particles," IEEE Visualization 1993 Conference Proceedings, 1993, pp. 283-290.
16. G. J. M. Parker, J. A. Schnabel, M. R. Symms, D. J. Werring, and G. J. Barker, "Nonlinear smoothing for reduction of systematic and random errors in diffusion tensor imaging," Magn. Reson. Imag. 11, 702-710, 2000.
17. P. Perona and J. Malik, "Scale-space and edge detection using anisotropic diffusion," IEEE TPAMI, vol. 12, no. 7, pp. 629-639, 1990.
18. P. Perona, "Orientation diffusions," IEEE TIP, vol. 7, no. 3, pp. 457-467, 1998.
19. C. Poupon, J. F. Mangin, C. A. Clark, V. Frouin, J. Regis, D. Le Bihan and I. Bloch, "Towards inference of human brain connectivity from MR diffusion tensor data," Med. Image Anal., vol. 5, pp. 1-15, 2001.
20. L. I. Rudin, S. Osher, and E. Fatemi, "Nonlinear total variation based noise removal algorithms," Physica D, vol. 60, pp. 259-268, 1992.
21. B. Tang, G. Sapiro and V. Caselles, "Diffusion of General Data on Non-Flat Manifolds via Harmonic Maps Theory: The Direction Diffusion Case," IJCV, 36(2), pp. 149-161, 2000.
22. D. Tschumperlé and R. Deriche, "Regularization of orthonormal vector sets using coupled PDE's," Proceedings of the IEEE Workshop on Variational and Level Set Methods in Computer Vision, pp. 3-10, July 2001.
23. B. C. Vemuri, Y. Chen, M. Rao, T. McGraw, Z. Wang and T. Mareci, "Fiber Tract Mapping from Diffusion Tensor MRI," Proceedings of the IEEE Workshop on Variational and Level Set Methods in Computer Vision, pp. 81-88, July 2001.
24. J. Weickert, "A review of nonlinear diffusion filtering," Scale-Space Theory in Computer Vision, 1997, vol. 1252 of Lecture Notes in Computer Science, pp. 3-28, Springer-Verlag.
25. C. F. Westin, S. E. Maier, H. Mamata, A. Nabavi, F. A. Jolesz and R. Kikinis, "Processing and visualization for diffusion tensor MRI," Med. Image Anal., vol. 6, pp. 93-108, 2002.



