An O(N) Iterative Solution
to the Poisson Equation
in Low-level Vision Problems¹
S. H. Lai and B. C. Vemuri
UF-CIS Technical Report TR-93-035
Computer and Information Sciences Department
University of Florida
Bldg. CSE, Room 301
Gainesville, Florida 32611
November 15, 1993
¹This work was supported in part by the NSF grant ECS-9210648.
In Proc. of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, Washington 1994.
In this paper, we present a novel iterative numerical solution to the Poisson equation whose solution is needed in a variety of low-level vision problems. Our algorithm is an O(N) (N being the number of discretization points) iterative technique and does not make any assumptions on the shape of the input domain, unlike the polygonal domain assumption in the proof of convergence of multigrid techniques [5, 29]. We present two major results, namely, a generalized version of the capacitance matrix theorem [6] and a theorem on O(N) convergence of the alternating direction implicit (ADI) method used in our algorithm. Using this generalized theorem, we express the linear system corresponding to the discretized Poisson equation as a Lyapunov matrix equation and a capacitance matrix equation. The former is solved using the ADI method while the solution to the latter is obtained using a modified biconjugate gradient algorithm. We demonstrate the algorithm performance on synthetic data for the surface reconstruction and the SFS problems.
1 Introduction
The problem of reconstructing and representing three-dimensional shapes has received an enormous
amount of attention in vision research for the past decade. In this paper we will be concerned with
the problem of surface shape recovery from shading and sparse depth information. These problems
may be formulated in the framework of variational principles which lead to solving Euler-Lagrange equations as the necessary condition for a minimum. In these problems, there is a need to solve one or more Poisson equations of the form Δv = f. Several other problems in low-level vision can be formulated in the variational principle framework and lead to solving one or more discrete Poisson equations. Our selection of the surface reconstruction and shape from shading problems in this paper reflects no predisposed preference. The Poisson equation (an elliptic PDE), when discretized, leads to a
large sparse linear system which may be solved by using either direct or iterative numerical methods.
In the iterative solution category, we have numerous techniques, namely, Gauss-Seidel [7], successive over-relaxation (SOR), conjugate gradient [9], hierarchical conjugate gradient (HCG) [20], and multigrid methods. The slowest is Gauss-Seidel, which takes O(N²) time to converge for an N x N problem [9], while the fastest are multigrid methods, which take O(N) time to converge [10].
Although the multigrid techniques have been applied successfully to a general class of problems,
the proofs of convergence are restricted to a special class of problems [10]. Hackbusch [10] presents
a general concept for proving the convergence of multigrid iterations under some assumptions on
the regularity of the domain boundary. Braess [5] proved the convergence of the multigrid solution to the Poisson equation on a uniform grid and showed that the results are independent of the shape of the domain as long as it is polygonal and convex. Recently, Xu [29] presented a unified
theory for classifying a class of iterative solution methods for symmetric positive definite problems.
An abstract convergence theory is established and applied to a particular problem by specifying a decomposition of the underlying space and the corresponding subspace solvers. With multigrid
methods, the subspaces are given by multiple coarser grids. As a direct consequence of the abstract
theory, optimal convergence estimates for a given algorithm can be obtained via the estimation
of only two parameters (see Xu [29] for details). Although results on the convergence of multigrid methods for complicated domains have been discussed in [5, 29], the analysis is limited to domains with Lipschitz boundaries and has to be carried out on a case-by-case basis.
Direct solutions to the discretized Poisson equation (for irregular regions) are based on a theorem called the capacitance matrix theorem, which was stated and proved in Buzbee et al. [6].
The direct solutions using the capacitance matrix theorem employ the Fourier-Toeplitz method in combination with the LU decomposition [19], the Cholesky decomposition [6] or the conjugate gradient [18] technique. The Fourier-Toeplitz method requires O(N log N) time, and the LU decomposition, Cholesky factorization or conjugate gradient require O(n³) = O(N√N), making the overall complexity O(N√N).
In this paper, we present an iterative O(N) solution to the Poisson equation which does not make any assumptions on the domain shape or its boundary, and treats them all in a uniform manner. Our algorithm is based on the generalized version of the capacitance matrix theorem which we state and prove in this paper. In our algorithm, we first transform the linear system of equations into a Lyapunov matrix equation and an associated linear system with a capacitance matrix. The former is solved using the alternating direction implicit (ADI) method while the solution to the latter is obtained using a modified biconjugate gradient technique. The total time complexity of
our algorithm is the sum of the time complexities of the ADI and the biconjugate gradient methods.
We prove that the ADI method takes a constant number of iterations with each iteration taking
O(N) time. In our proof, we make use of the spectral characteristics of the Laplacian operator which appears on the left-hand side of the Poisson equation, and impose a very mild restriction on the spectral characteristics of the input data. This restriction amounts to requiring that the frequency content of the data be bounded above by an increasing function of ‖ω‖₂ (ω is the frequency-domain vector-valued variable) of degree less than 1. In all low-level vision problems considered in this
paper and many others, this restriction is not violated. For the biconjugate gradient algorithm,
which requires a (dense) matrix-vector multiplication in each iteration, we propose to carry out this operation in a wavelet basis. Approximating the matrix-vector multiplication in a wavelet basis requires only O(√N) = O(n) time. Thus, the biconjugate gradient algorithm with the aforementioned wavelet-basis approximation to the matrix-vector products will require O(N) time, since there are O(√N) = O(n) iterations in the algorithm. Hence, the total time complexity of our
algorithm is O(N). We will present the details of our algorithm in subsequent sections.
The rest of the paper is organized as follows: in section 2, we present various low-level vision problems whose formulations lead to solving the discretized Poisson equation; section 3 contains our formulation for the solution to the Poisson equation, and a "flow chart" of our algorithm along with a description of its individual blocks is presented in section 3.3. In section 4, we present a numerical solution to the discretized Poisson equation and prove the convergence of the ADI iterative technique applied to our problem. A modified biconjugate gradient algorithm for solving the capacitance matrix equation is also presented. In section 5, we present examples of our algorithm applied to synthetic data for the surface reconstruction and the shape from shading problems, and finally conclude in section 6.
2 Low-level Vision Problems and Previous Work
There are many problems in low-level vision which, when formulated as variational principles, lead to solving one or more discretized Poisson equations. Some of these problems are: surface reconstruction [3, 23, 26, 4], shape from shading [12, 19], lightness [11] and optical flow [13, 19]. In
this paper, we will very briefly discuss the formulation of the first two of these problems as variational
principles and without dwelling much on them, will point out the structure of the matrices that
appear in the linear system which needs to be solved for surface shape recovery.
2.1 Variational Principle Formulation
Variational formulations for various low-level vision problems have been reported in Poggio et al. [17]. These formulations make use of the popular theory of regularization. In a regularization framework, generic smoothness assumptions are imposed on the solution space prior to attempting any functional minimization. The smoothness constraints are well characterized by a class of generalized multidimensional spline functionals [22]. The formulations involve minimization of an energy functional E, which is the sum of the energy contributions from the smoothness constraint S and the data constraint P, i.e.,
Find u such that E(u) = inf_{v ∈ H} E(v), (1)

where H defines the linear admissible space of smooth functions defined on R² and the functional

E(v) = λ S(v) + P(v), (2)

where λ is called the regularization parameter that controls the contribution of the smoothness term, S(v) : H → R is a functional on H that is a measure of the smoothness of an admissible function v(x, y), and P(v) : H → R is a functional that measures the discrepancy between the function and the given data [22].
We will now proceed to discuss the specific functionals used for S(v) and P(v) in the surface reconstruction and the SFS problems.
2.1.1 Surface Reconstruction
Surface reconstruction from range data has been intensely studied for the past decade by many
researchers in the computer vision community [3, 23, 20, 26, 4, 25, 24]. Variational Splines [22] have
emerged as the single most popular solution to the surface reconstruction problem. In the following,
we give the precise expression for the smoothness and data constraints used in the variational
formulation of the surface reconstruction problem. The smoothness constraint using only the lowest-order derivative term can be written as

S(v) = ∫∫_Ω (v_x² + v_y²) dx dy, (3)

where v(x, y) is the admissible function and v_x, v_y are its partial derivatives, assumed to be small.
The above energy expression can be interpreted as the deflection energy in a membrane (e.g., a rubber sheet) and serves as a stabilizer in the overall variational principle for the surface reconstruction problem. To the stabilizer, we add data constraints via what are known as penalty terms. The following penalty term, which measures the discrepancy between the surface and the data weighted by the uncertainty in the data, may be used:

P(v) = Σ_i c_i (v(x_i, y_i) − d_i)², (4)

where d_i are the depth data points specified in the domain Ω and c_i are the uncertainties associated with the data. The total energy is E(v) = λS(v) + P(v), where λ is the regularization parameter that controls the amount of smoothing performed. The goal is to find the u that minimizes the total energy E(v).
To compute a numerical solution to the above minimization problem, we first discretize the functionals S(v) and P(v) using finite element techniques [23]. The energy due to the data compatibility term in discrete form becomes

E_d(x, d) = (x − d)^T K_d (x − d), (5)

where x is the discretized surface, d are the data points, and K_d is a diagonal matrix (for uncorrelated noise in the data) containing the uncertainties c_i associated with the data points. The smoothness energy in discrete form is

E_s(x) = x^T K_s x, (6)

where K_s is a very large (n² × n²), sparse and banded matrix called the stiffness matrix. The resulting energy function is a quadratic in x given by

E(x) = x^T K x − 2 x^T b + c, (7)

with K = λK_s + K_d and b = K_d d. The minimum u of this energy function is found by solving the
large sparse linear system Kx = b. The structure of the stiffness matrix K can be analyzed using
the computational molecules [23]. We present our fast solution to this problem in a subsequent
section.
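As a concrete illustration of the discrete formulation above, the following sketch assembles K = λK_s + K_d and b = K_d d on a small grid and solves Kx = b. This is not the paper's implementation: the grid size, regularization parameter, unit confidence weights, and the dense Dirichlet membrane stiffness are all illustrative assumptions.

```python
import numpy as np

# Minimal sketch (hypothetical sizes/weights) of the surface reconstruction
# linear system: E(x) = x^T K x - 2 x^T b + c with K = lam*Ks + Kd (eqn 7).

def make_membrane_stiffness(n):
    """Dense 5-point membrane stiffness matrix on an n x n grid (Dirichlet)."""
    N = n * n
    Ks = np.zeros((N, N))
    for i in range(n):
        for j in range(n):
            k = i * n + j
            Ks[k, k] = 4.0
            for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ii, jj = i + di, j + dj
                if 0 <= ii < n and 0 <= jj < n:
                    Ks[k, ii * n + jj] = -1.0
    return Ks

n, lam = 8, 0.5                     # illustrative grid size and lambda
Ks = make_membrane_stiffness(n)
rng = np.random.default_rng(0)
obs = rng.choice(n * n, size=10, replace=False)  # sparse depth observations
Kd = np.zeros((n * n, n * n))
Kd[obs, obs] = 1.0                  # uncertainty weights c_i (all 1 here)
d = np.zeros(n * n)
d[obs] = rng.standard_normal(10)    # observed depth values

K = lam * Ks + Kd                   # combined stiffness matrix
b = Kd @ d                          # right-hand side b = Kd d
x = np.linalg.solve(K, b)           # surface minimizing the quadratic energy
```

In the paper K stays sparse and is never factored directly; the dense solve above only keeps this small example self-contained.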
2.1.2 Shape from Shading
Like the surface reconstruction problem, the shape from shading problem has received an enormous amount of attention in the past decade. Numerous solution methods using variational principle
formulations have been proposed. For a comprehensive set of papers on this topic, we refer the
reader to the book edited by Horn and Brooks [12] and the work in [15, 21, 19].
In this problem, it is required to recover the shape of surfaces from image irradiance which
depends on surface geometry and reflectance, scene illuminance and imaging geometry. The image
irradiance can be expressed directly as a function of the surface orientation if illuminance, reflectance
and imaging geometry are assumed to be constant. The shape from shading problem is commonly posed as a nonlinear, first-order partial differential equation in two unknowns, called the image irradiance equation, E(x, y) = R(p, q) [12], where E(x, y) is the image irradiance at a point (x, y), p = v_x, q = v_y are the first partials of the surface function v(x, y), and R(p, q) is the relation between surface orientation (p, q) and image irradiance E(x, y). To overcome the ambiguity caused along occluding contours in the gradient space parameterization (p, q), a reparameterization of surface orientation in terms of a stereographic mapping, f = 2ap, g = 2aq with a = 1/(1 + √(1 + p² + q²)), is
used. With this reparameterization, the overall energy function (sum of stabilizer and data energies)
to be minimized is expressed as
E(f, g) = ∫∫ λ(f_x² + f_y² + g_x² + g_y²) + [E(x, y) − R(f, g)]² dx dy. (8)
The Euler-Lagrange equations which express the necessary condition for a minimum are given by the system of coupled PDEs

λ Δf = [R(f, g) − E(x, y)] R_f, (9)
λ Δg = [R(f, g) − E(x, y)] R_g.
The above equations may be discretized and solved using the algorithm of Simchony et al. [19], which enforces the integrability condition on p and q. The method involves solving three different discretized Poisson equations, namely, Kx₁ = b₁, Kx₂ = b₂, and K₁x₃ = b₃, where K is the stiffness matrix obtained by discretizing the Laplacian operator (on the LHS of equation 9) with Dirichlet boundary conditions, and K₁ is the stiffness matrix obtained by discretizing the Poisson equation for the depth from orientation problem, i.e., Δz = p_x + q_y, with Neumann boundary conditions. b₁, b₂ are the appropriate RHSs obtained by discrete computation of the RHS of equation 9 at the current estimated value of (f, g), and b₃ is the discrete form of the RHS of the depth from orientation equation for the estimated (p, q) from equation 9. The structure of the stiffness matrices K can be analyzed using the computational molecules [23]. In the next section, we propose a fast solution to the above discretized Poisson equations.
3 Proposed Algorithm for Solving the Poisson Equation
In this section, we will propose a new algorithm to solve the algebraic system of equations resulting
from the discretization of the Poisson equation or the minimization of the discrete version of the
constrained variational principle discussed in the previous section. This algorithm is based on the
generalized version of the capacitance matrix theorem that we state and prove in this section.
3.1 Capacitance Matrix Technique
The capacitance matrix technique [6] has been used to solve discrete elliptic PDEs on irregular domains for the past several decades. This technique involves expressing the stiffness matrix K as the sum of Ko and UV^T, where Ko is chosen to be a well-structured matrix that is slightly different from the original stiffness matrix K for the given PDE with certain boundary conditions, and UV^T encodes the difference between K and Ko. Thus, the solution to the linear system Kx = b can be obtained by solving the alternate linear system Kox = b' and an associated linear system with a capacitance matrix, which will be discussed subsequently. O(N log N) numerical methods may be used to solve the linear system Kox = b', for example, the Fourier-Toeplitz method [19, 6]. In this paper, we will present an O(N) ADI method for solving the same. As for the capacitance matrix equation, the solution cost depends on the dimensionality p of the capacitance matrix (the rank of UV^T), which is typically O(√N). In the past, Buzbee et al. [6] have used the LU decomposition (O(n³) = O(N√N)) to solve the capacitance matrix equation. We present a modified biconjugate gradient technique that takes O(N) time for solving the same.
The Sherman-Morrison-Woodbury formula [9] can be applied to solve (Ko + UV^T)x = b only when K and Ko are nonsingular. When rank(Ko) = N − 1 and K is nonsingular, the capacitance matrix theorem as stated in Buzbee et al. [6] can be applied to obtain the solution. However, in many applications, K may be singular and rank(Ko) < N − 1; e.g., the Poisson equation with Neumann boundary conditions yields a singular K matrix, and for higher-order PDEs such as the biharmonic equation, rank(Ko) ≤ N − 3. These problems can be solved via the application of the generalized capacitance matrix theorem, which is stated and proved below.
Theorem 1 Given K = Ko + UV^T, where K, Ko ∈ R^{N×N}, U = [u₀, ..., u_{p−1}], V = [v₀, ..., v_{p−1}], with u₀, ..., u_{p−1} being linearly independent vectors, and p = O(n), where n = √N. Assume rank(Ko) = N − m, and let q_i, i = 0, ..., m − 1, be the eigenvectors of Ko corresponding to the zero eigenvalue, i.e., Ko q_i = 0, such that q_i^T q_j = δ(i, j), 0 ≤ i, j ≤ m − 1, and let q_i^T u_j = a_i δ(i, j) for 0 ≤ i, j ≤ m − 1, where the constants a_i ≠ 0. Let x̂ be a solution to

Ko x̂ = b − Σ_{j=0}^{m−1} [(q_j^T b)/(q_j^T u_j)] u_j, (10)

where b ∈ range(K). For 0 ≤ i ≤ p − 1, let η_i be a solution to

Ko η_i = u_i − Σ_{j=0}^{m−1} [(q_j^T u_i)/(q_j^T u_j)] u_j, (11)

such that η₀, ..., η_{m−1} are nonzero vectors. Define the capacitance matrix C as

C = I + V^T η − Σ_{j=0}^{m−1} e_j (q_j^T U)/(q_j^T u_j), (12)

where η = [η₀, ..., η_{p−1}] and e_j ∈ R^{p×1} is the unit vector with 1 at the j-th component and 0 elsewhere. Assume there is a solution β to

Cβ = V^T x̂ − Σ_{j=0}^{m−1} [(q_j^T b)/(q_j^T u_j)] e_j; (13)

then x̂ − ηβ is a solution to the linear system Kx = b.
Proof: To prove this theorem, we will show that K(x̂ − ηβ) = b. Expanding by substituting Ko + UV^T for K, we have

K(x̂ − ηβ) = Ko x̂ − Ko ηβ + U V^T x̂ − U V^T ηβ. (14)

In this equation, we can use equations 10 and 11 for Ko x̂ and Ko ηβ respectively. For V^T ηβ, we use equations 12 and 13 to derive

V^T ηβ = V^T x̂ − Σ_{j=0}^{m−1} [(q_j^T b)/(q_j^T u_j)] e_j − β + Σ_{j=0}^{m−1} e_j (q_j^T U β)/(q_j^T u_j), (15)

which is then substituted in the last term of equation 14. This gives us K(x̂ − ηβ) = b. □
In the above theorem, solutions to equations 10 and 11 exist since their RHSs are in the range of Ko. This can be seen by showing that the RHSs of these equations are orthogonal to the q_j, the eigenvectors spanning null(Ko). The solution to equation 13 needs more discussion. We will show that a solution to equation 13 always exists in the two cases of interest (in this paper), the Poisson equation with Neumann and Dirichlet boundary conditions respectively, by proving that the capacitance matrix C is nonsingular.
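To make the mechanics of the theorem concrete, the sketch below checks its simplest specialization, m = 0 (Ko nonsingular), where equations 10-13 reduce to Ko x̂ = b, Ko η = U, C = I + V^T η, and Cβ = V^T x̂. All matrices are small random stand-ins, not the vision stiffness matrices of this paper.

```python
import numpy as np

# Capacitance-matrix solve for K = Ko + U V^T in the m = 0 (nonsingular Ko)
# specialization of theorem 1. Sizes and matrices are illustrative only.

rng = np.random.default_rng(1)
N, p = 12, 3
Ko = 5.0 * np.eye(N) + 0.1 * rng.standard_normal((N, N))  # well-conditioned Ko
U = 0.3 * rng.standard_normal((N, p))   # low-rank correction factors
V = 0.3 * rng.standard_normal((N, p))
K = Ko + U @ V.T
b = rng.standard_normal(N)

x_hat = np.linalg.solve(Ko, b)          # equation 10 (sums vanish when m = 0)
eta = np.linalg.solve(Ko, U)            # p solves with Ko, equation 11
C = np.eye(p) + V.T @ eta               # capacitance matrix, equation 12
beta = np.linalg.solve(C, V.T @ x_hat)  # capacitance system, equation 13
x = x_hat - eta @ beta                  # theorem 1: x solves K x = b
```

Every solve involves only the structured matrix Ko or the small p x p matrix C, which is the point of the method.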
For the surface reconstruction and SFS problems, we choose Ko to be the matrix corresponding to the discrete Laplacian operator on an embedded rectangular domain with periodic boundary conditions. For this particular choice, rank(Ko) = N − 1. In the case of the Poisson equation with Dirichlet boundary conditions (e.g., the SR and SFS problems), the stiffness matrix K is nonsingular, and the following corollary concludes that the capacitance matrix C will be nonsingular, implying that equation 13 always has a solution.
Corollary 1 In theorem 1, if rank(K) = N and rank(Ko) = N − 1, then the matrix C is nonsingular.
Proof: To prove that C is nonsingular, we show that Cβ = 0 ⇒ β = 0. When m = 1, we have the condition rank(Ko) = N − 1. Suppose Cβ = 0; then from equation 12 we have

V^T ηβ = −β + e₀ (q₀^T U β)/(q₀^T u₀). (16)

By definition, Kηβ = Ko ηβ + U V^T ηβ. Substituting equations 11 and 16 into this equation, we get Kηβ = 0. Since K is nonsingular, this implies ηβ = 0. Hence, we have Ko ηβ = 0. Substituting for Ko η from equation 11 yields

Uβ − [(q₀^T U β)/(q₀^T u₀)] u₀ = 0. (17)

After rearranging, this yields

Σ_{i=1}^{p−1} β_i u_i = [Σ_{i=1}^{p−1} β_i (q₀^T u_i)/(q₀^T u₀)] u₀. (18)

The assumption that u₀, ..., u_{p−1} are linearly independent combined with equation 18 forces β_i = 0 for 1 ≤ i ≤ p − 1. Since ηβ = β₀ η₀ = 0, we have β₀ = 0 because η₀ was assumed to be a nonzero vector in theorem 1. Thus β = 0, and this leads to the conclusion that C is nonsingular. □
Now let us consider the Poisson equation with Neumann boundary conditions. The matrix Ko is the same as in the Dirichlet case, but the stiffness matrix K is singular here. To be more precise, rank(K) = rank(Ko) = N − 1 and null(K) = null(Ko) = span(q₀). In addition, after discretizing the boundary condition, we have q₀^T u_i = q₀^T v_i = 0 for all i. This leads to the following corollary.
Corollary 2 In theorem 1, if rank(K) = rank(Ko) = N − 1, null(Ko) = null(K), and q₀^T u_i = q₀^T v_i = 0 for all i, then equation 10 simplifies to

Ko x̂ = b, (19)

and equation 11 simplifies to

Ko η_i = u_i, (20)

for 0 ≤ i ≤ p − 1. The capacitance matrix C = I + V^T η is nonsingular and equation 13 becomes

Cβ = V^T x̂. (21)

As before, x̂ − ηβ is a solution to the linear system Kx = b.
Proof: Since b ∈ range(K), b is orthogonal to null(K) = null(Ko), i.e., q₀^T b = 0. So equation 10 simplifies to equation 19. Similarly, q₀^T u_i = 0 for all i; therefore equation 11 simplifies to equation 20. To prove that C is nonsingular, we show that Cβ = 0 ⇒ β = 0. Suppose Cβ = 0; then it is obvious that V^T ηβ = −β. Thus we have

Kηβ = Ko ηβ + U V^T ηβ = Uβ + U(−β) = 0.

This means ηβ ∈ null(K) = null(Ko). Consequently, Ko ηβ = 0. Substituting for Ko η from equation 20 yields

[u₀ u₁ ... u_{p−1}] β = 0. (22)

Since u₀, ..., u_{p−1} are assumed to be linearly independent, we have β = 0, and this proves that C is nonsingular. □
In summary, we propose theorem 1 as a general theorem for the capacitance matrix method. It can be used for elliptic PDEs of any order as long as equation 13 has a solution. In this paper, we are particularly interested in solving the low-level vision problems corresponding to the 2D Poisson equation with Dirichlet and Neumann boundary conditions. Consequently, corollaries 1 and 2 are presented to show that the capacitance matrix equation has a unique solution in the Dirichlet and Neumann cases.
3.2 The choice of Ko
The capacitance matrix method requires solving several linear systems of the form Kox = b, as shown in equations 10 and 11. In particular, equation 11 must be solved p times, where p is the rank of K − Ko. Therefore, it is crucial to choose a well-structured Ko so that the solutions required by the capacitance matrix method can be computed efficiently.
In general, three criteria dictate the choice of Ko. First, the difference between Ko and K must be small in the sense that rank(K − Ko) is small. Second, Ko must possess some nice structure so that the associated linear systems admit a fast numerical solution. Lastly, Ko should be translation invariant; this property saves computational cost in solving equation 11 for each u_i, 0 ≤ i ≤ p − 1, since there are only one or two nonzero components in each u_i. We then need to compute only the solution h₀ of a single linear system consisting of the matrix Ko and an RHS vector which is the projection of a unit vector onto the range of Ko. Because Ko is translation invariant, we can obtain the solutions η_i in equation 11 from shifted copies of h₀, without having to solve any of the linear systems associated with each η_i.
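This saving can be checked numerically: for the doubly periodic Ko used in this paper, the solution for an impulse at any grid location is a circular shift of the single impulse response h₀. The sketch below uses an illustrative grid size and a minimum-norm least-squares solve in place of the ADI solver.

```python
import numpy as np

# Translation invariance of the doubly periodic Ko: the minimum-norm solution
# for a shifted impulse equals the circular shift of the impulse response h0,
# so equation 11 never has to be re-solved. Sizes are illustrative.

def periodic_laplacian_2d(n):
    A = 2.0 * np.eye(n)
    for i in range(n):
        A[i, (i - 1) % n] -= 1.0
        A[i, (i + 1) % n] -= 1.0
    I = np.eye(n)
    return np.kron(A, I) + np.kron(I, A)

n = 5
N = n * n
Ko = periodic_laplacian_2d(n)
q0 = np.ones(N) / np.sqrt(N)          # normalized null vector (flat function)

def solve_impulse(k):
    """Minimum-norm solution of Ko h = e_k - (q0^T e_k) q0."""
    e = np.zeros(N)
    e[k] = 1.0
    rhs = e - (q0 @ e) * q0           # project the RHS onto range(Ko)
    return np.linalg.lstsq(Ko, rhs, rcond=None)[0]

h0 = solve_impulse(0).reshape(n, n)   # impulse response at grid point (0, 0)
di, dj = 2, 3                         # any other grid location
h_shift = np.roll(np.roll(h0, di, axis=0), dj, axis=1)  # shifted copy, no solve
```

The shifted copy agrees with a fresh solve for the impulse at (di, dj), because Ko acts as a 2-D circular convolution and both solutions are orthogonal to the null vector.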
For discretized Poisson equations on irregular domains, we use the standard technique of imbedding irregular domains in rectangular regions [6]. Thus, the boundary conditions can be easily incorporated into UV^T. Following the three criteria given above, we can choose Ko as the discrete Laplacian operator on the doubly periodic imbedded rectangular domain. The exact form of Ko is given by

Ko = [  D  -I   0  ...   0  -I ]
     [ -I   D  -I  ...   0   0 ]
     [  .   .   .   .    .   . ]      (23)
     [  0   0  ...  -I   D  -I ]
     [ -I   0  ...   0  -I   D ]

where

D = [  4  -1   0  ...   0  -1 ]
    [ -1   4  -1  ...   0   0 ]
    [  .   .   .   .    .   . ]       (24)
    [  0   0  ...  -1   4  -1 ]
    [ -1   0  ...   0  -1   4 ]
The matrix Ko can be regarded as the result of the 5-point approximation to the Laplacian operator on a doubly periodic imbedded rectangular domain. From a computational molecule [23] point of view, only the membrane molecule is used at every node of the doubly periodic rectangular domain. Therefore, the data constraint molecule, the derivative (Neumann boundary) constraint molecule, and all molecular inhibitions due to depth discontinuities and the imbedding boundary are included in the matrix UV^T. Since UV^T = Σ_{i=0}^{p−1} u_i v_i^T, each u_i v_i^T can be considered as a data constraint molecule, a derivative constraint molecule or a molecule inhibition at some location. Thus, we can see that the structures of u_i and v_i are very simple. Each vector contains either one or two nonzero components: one nonzero component in the case of the data constraint molecule, and two nonzero components in the other two cases.
We can see that this particular choice of Ko makes it circulant Toeplitz and thus translation invariant. In addition, it can be decomposed as the sum of two tensor products of matrices A and I, i.e.,

Ko = A ⊗ I + I ⊗ A, (25)

with

A = [  2  -1   0  ...   0  -1 ]
    [ -1   2  -1  ...   0   0 ]
    [  .   .   .   .    .   . ]  ∈ R^{n×n},
    [  0   0  ...  -1   2  -1 ]
    [ -1   0  ...   0  -1   2 ]

where ⊗ is the tensor (Kronecker) product. By using this special structure of the matrix Ko, we can rewrite any linear system of the form Koz = f as the following Lyapunov matrix equation

AZ + ZA = F, (26)

where Z and F are the n × n matrices corresponding to the concatenated n² × 1 vectors z and f respectively. It should be noted that n² = N, and that the matrix A in equation 26 is circulant Toeplitz and symmetric positive semidefinite. We can use the ADI method to solve this Lyapunov
matrix equation in O(N) operations. We will prove this O(N) computational complexity of the
ADI method in the next section.
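The equivalence between equations 25 and 26 can be verified directly; the sketch below uses a small illustrative n and a random Z.

```python
import numpy as np

# Check of equations 25-26: with Ko = A (x) I + I (x) A for the periodic 1-D
# second-difference matrix A, applying Ko to the stacked rows of Z is the
# same as forming A Z + Z A. Sizes are illustrative.

def periodic_second_difference(n):
    A = 2.0 * np.eye(n)
    for i in range(n):
        A[i, (i - 1) % n] -= 1.0
        A[i, (i + 1) % n] -= 1.0
    return A

n = 6
A = periodic_second_difference(n)
I = np.eye(n)
Ko = np.kron(A, I) + np.kron(I, A)    # equation 25

Z = np.random.default_rng(2).standard_normal((n, n))
F = A @ Z + Z @ A                     # Lyapunov form, equation 26
f = Ko @ Z.reshape(-1)                # Ko applied to the vectorized Z
```

With row-major stacking, kron(A, I) applies A down the columns and kron(I, A) along the rows, so the two computations of f and F agree entry by entry.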
3.3 Structure of the proposed Algorithm
In section 3.1, we proved that the capacitance matrix technique can be used to solve the discretized
Poisson equation with Dirichlet and Neumann boundary conditions. We now describe the structure
of the proposed algorithm via a flowchart given in figure 1.
The flow of the algorithm follows from theorem 1. For linear systems with the matrix Ko, we use the ADI method to solve the corresponding Lyapunov matrix equations. To form the capacitance matrix C, we need to know η_i, 0 ≤ i ≤ p − 1, in equation 11. In fact, we do not need to solve equation 11 for each η_i. By using the translation invariance property of Ko, we just solve the linear system (using the ADI method)

Ko h₀ = e₁ − [(q₀^T e₁)/(q₀^T q₀)] q₀, (27)
Figure 1: The flow chart of the proposed algorithm
where the unit vector e₁ has a single nonzero element at location 1. Thus, the solution h₀ can be regarded as the impulse response of the pseudoinverse operator Ko⁺. Since Ko is singular with a simple zero eigenvalue, the general form of the solution can be written as h₀ + αq₀, for any α ∈ R. The solution η_i can be easily obtained from the impulse response h₀ because Ko⁺ is a linear shift-invariant operator. In fact, we do not need to explicitly compute and store every η_i, since they are needed only for the computation of the C matrix using equation 12. This is achieved by obtaining the (i, j)-th component of V^T η, i.e., v_i^T η_j, from the values of h₀ corresponding to the relative locations of the nonzero components of v_i and u_j.
In addition, we do not explicitly compute ηβ for the final solution, since this matrix-vector multiplication takes O(pn²) = O(n³) operations (as p = O(n)). Instead, we solve the linear system

Ko y = Uβ − [(q₀^T Uβ)/(q₀^T q₀)] q₀ (28)

to get the solution y, which has the same projection on the range of Ko as ηβ. As for the projection of ηβ on the null space of Ko, it can be easily computed from the inner product of β and a = (a₀, ..., a_{p−1}), where a_i = q₀^T η_i is the projection of η_i on the null space of Ko. The coefficients a_i can be chosen arbitrarily as long as a₀ ≠ 0. To simplify the computations in our implementation, we assign a_i = 1 for all i. Thus,

ηβ = y + [(a^T β)/(q₀^T q₀)] q₀. (29)
The capacitance matrix C is a dense matrix of size p × p. To solve for β in equation 13, we employ the BCG (biconjugate gradient) method with a wavelet-basis approximation to the matrix-vector multiplication in each iteration. The details of this modified BCG method and the ADI method are discussed in the next section.
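For reference, the baseline BCG iteration on a dense capacitance-sized system can be sketched as follows. The paper's modification, approximating each dense matrix-vector product in a wavelet basis, is omitted here; the system below is a random well-conditioned stand-in for C.

```python
import numpy as np

# Plain biconjugate gradient (BCG) for a dense p x p system C beta = rhs.
# The wavelet-basis approximation of the products C p and C^T pt used in the
# paper's modified BCG is not included in this baseline sketch.

def bicg(A, b, tol=1e-10, max_iter=200):
    x = np.zeros_like(b)
    r = b - A @ x
    rt = r.copy()                 # shadow residual driving the A^T recurrence
    p, pt = r.copy(), rt.copy()
    rho = rt @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rho / (pt @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rt -= alpha * (A.T @ pt)
        if np.linalg.norm(r) < tol:
            break
        rho_new = rt @ r
        beta = rho_new / rho
        p = r + beta * p
        pt = rt + beta * pt
        rho = rho_new
    return x

rng = np.random.default_rng(3)
p_dim = 20                        # stand-in for p = O(sqrt(N))
C = np.eye(p_dim) + 0.05 * rng.standard_normal((p_dim, p_dim))
rhs = rng.standard_normal(p_dim)
beta = bicg(C, rhs)
```

BCG is used rather than CG because C is not symmetric in general; each iteration needs one product with C and one with C^T.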
4 Numerical Solution
In this section, we discuss the convergence of the ADI method applied to the Lyapunov matrix equation and show that the computational cost for solving this equation is O(N), by proving that the ADI method converges in a constant number of iterations independent of N. In addition, we present a modified biconjugate gradient method that is used to solve the capacitance linear system.
4.1 ADI Method for the Lyapunov Equation
The ADI method for solving a Lyapunov matrix equation is described in [16]. For any Lyapunov matrix equation of the form AX + XA = B, the ADI method involves the following steps in each iteration j = 1, 2, ..., J:

(A + p_j I) X_{j−1/2} = B − X_{j−1} (A − p_j I), (30)
X_j (A + p_j I) = B − (A − p_j I) X_{j−1/2}. (31)
For our problem, note that the matrix A is circulant Toeplitz and symmetric positive semidefinite, as discussed in section 3.2. The special structure of A helps in advancing the ADI iterations very efficiently in each step of the technique. On the left-hand side (LHS) of equations 30 and 31 respectively, the matrix A + p_j I is very close to being tridiagonal, except for the nonzero entries at the corners, namely the (1, n) and (n, 1) entries. These nonzero corner entries are caused by the doubly periodic boundary conditions discussed in the last section. In addition, A + p_j I is SPD for any positive parameter p_j. Therefore, we can compute the LU decomposition of this matrix and then use forward and back substitution to compute the updated solution. Due to the special structure of the matrix A + p_j I, the solution update requires only O(N) time per iteration.
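A minimal dense version of iterations 30-31 can be written as below. It replaces the O(n) near-tridiagonal factorization with dense solves and uses a single repeated shift (the classical one-parameter choice), both purely for illustration; a Dirichlet 1-D Laplacian is used for A so that the solution is unique.

```python
import numpy as np

# Dense sketch of the ADI sweep (equations 30-31) for A X + X A = B.
# Shift choice and matrix sizes are illustrative, not the paper's scheme.

def adi_lyapunov(A, B, shifts):
    n = A.shape[0]
    X = np.zeros((n, n))          # X_0 = 0
    I = np.eye(n)
    for p in shifts:
        # half step (eqn 30): (A + p I) X_half = B - X (A - p I)
        X = np.linalg.solve(A + p * I, B - X @ (A - p * I))
        # full step (eqn 31): X_new (A + p I) = B - (A - p I) X_half
        X = np.linalg.solve((A + p * I).T, (B - (A - p * I) @ X).T).T
    return X

n = 8
# SPD 1-D Laplacian (Dirichlet), so the Lyapunov equation is nonsingular
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
rng = np.random.default_rng(4)
X_true = rng.standard_normal((n, n))
B = A @ X_true + X_true @ A

lam = np.linalg.eigvalsh(A)
p_opt = float(np.sqrt(lam[0] * lam[-1]))  # geometric mean of extreme eigenvalues
X = adi_lyapunov(A, B, [p_opt] * 60)
```

Each sweep damps the (k, l) error mode by ((λ_k − p)(λ_l − p))/((λ_k + p)(λ_l + p)), so with the geometric-mean shift the iterate converges to X_true at a fixed linear rate.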
We now examine the convergence of the ADI method for the Lyapunov matrix equation in our algorithm. Let the initial X₀ = 0 (the zero matrix), and let ΔX_j = X_j − X*, where X* is the true solution of the Lyapunov matrix equation. Let λ₀, λ₁, ..., λ_{n−1} and q₀, q₁, ..., q_{n−1} be the eigenvalues and eigenvectors of A respectively. For this symmetric matrix A, we have

A = Q D Q^T, (32)

where Q = [q₀ q₁ ... q_{n−1}], D = diag[λ₀, λ₁, ..., λ_{n−1}], and λ_k = 4 sin²(⌈k/2⌉ π/n) for k = 0, ..., n − 1. We express the eigenvalues λ_k in this form so as to have nondecreasing eigenvalues with respect to the indices. The eigenvalues range between 0 and 4, and the zero eigenvalue (for k = 0) is simple. The orthogonal matrix Q contains a discrete sine or cosine function in each column, with the frequency increasing as the eigenvalue (i.e., the index) increases. For example, the eigenvector corresponding to the simple zero eigenvalue is a flat function whose frequency is 0.
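The stated spectrum is easy to check numerically: before reindexing, the eigenvalues of the periodic second-difference matrix are 4 sin²(kπ/n), k = 0, ..., n − 1, with a simple zero. The sketch below verifies this for an illustrative odd n.

```python
import numpy as np

# Spectrum check for the circulant periodic second-difference matrix A:
# eigenvalues 4 sin^2(k*pi/n), k = 0..n-1 (the same multiset as the
# nondecreasing reindexing used in the text), with a simple zero eigenvalue.

n = 9
A = 2.0 * np.eye(n)
for i in range(n):
    A[i, (i - 1) % n] -= 1.0
    A[i, (i + 1) % n] -= 1.0

eig = np.sort(np.linalg.eigvalsh(A))
expected = np.sort(4.0 * np.sin(np.pi * np.arange(n) / n) ** 2)
```

For odd n every nonzero eigenvalue appears exactly twice (the sine/cosine pair at each frequency), which is why the nondecreasing reindexing pairs the indices.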
By taking tensor products of the eigenvectors, q_k ⊗ q_l, k, l = 0, ..., n − 1, we can generate an orthogonal basis for R^{n×n} and express the true solution X* in this basis as follows:

X* = Σ_{k=0}^{n−1} Σ_{l=0}^{n−1} a_kl (q_k ⊗ q_l). (33)
Subtracting each of the equations 30 and 31 from the Lyapunov equation AX + XA = B, and expressing X in the above tensor product basis, we have the following result for the error after t iterations:

ΔX_t = Σ_{k=0}^{n−1} Σ_{l=0}^{n−1} [ Π_{j=1}^{t} ((λ_k − p_j)(λ_l − p_j)) / ((λ_k + p_j)(λ_l + p_j)) ] a_kl (q_k ⊗ q_l). (34)

The function inside the brackets is the error reduction factor for the basis element q_k ⊗ q_l. Notice that the Lyapunov matrix equation has an infinite number of solutions due to the singularity of the matrix A. The error reduction in each step is always less than 1 for eigenvectors with positive eigenvalues and positive parameters p_j, while it is always 1 for the basis element q₀ ⊗ q₀ (which is evident from equation 34 after substituting k = l = 0 and λ₀ = 0). A solution to the Lyapunov matrix equation can be obtained without reducing this factor at the component k = l = 0.
The classical ADI minimax parameter problem is to find the ADI parameters p_j that minimize the maximum of the error reduction function excluding the zero eigenvalue, i.e., to minimize the function

φ_t = max_{(k,l) ≠ (0,0)} | Π_{j=1}^{t} ((λ_k − p_j)(λ_l − p_j)) / ((λ_k + p_j)(λ_l + p_j)) |. (35)
Applying the result from the classical ADI minimax analysis [16] leads to O(log N) iterations for convergence [24]. This in turn yields an overall computational complexity of O(N log N) for the ADI method, since O(N) time is required per ADI iteration. An O(N log N) computational complexity is unacceptable for our purposes (recall that we seek an O(N) solution). Thus, in order to design an O(N) algorithm, it is essential to use additional knowledge about the problem, e.g., the spectral characteristics of the differential operator in the PDE. The classical ADI convergence analysis does not make use of any such knowledge. In this paper, we reformulate the ADI convergence analysis by incorporating the spectral characteristics of the Laplacian operator. This is facilitated by making the following assumption.
Assumption 1 Let the projection of B, the RHS of the Lyapunov matrix equation, on the basis element q_k \otimes q_l be b_{kl}. Assume there exist constants a \in \mathbb{R}^+, \alpha > 0 and m > 0.5 such that \forall (k,l) \neq (0,0), |b_{kl}| \leq a/(k^2 + l^2)^m, and \sum_{(k,l) \neq (0,0)} b_{kl}^2/(k^2 + l^2)^2 \geq \alpha a^2.

For our Lyapunov matrix equation AX + XA = B, the relationship between the solution X and the data B in the eigenbasis of the matrix K_0 is \alpha_{kl} = b_{kl}/(\lambda_k + \lambda_l). Therefore the bound |b_{kl}| \leq a/(k^2+l^2)^m is equivalent to |\alpha_{kl}| \leq a/((\lambda_k + \lambda_l)(k^2+l^2)^m) for some m > 0.5. Using the relation (2/\pi)x \leq \sin x \leq x for x \in [0, \pi/2], we have c_1 k^2 \leq \lambda_k \leq c_2 k^2 with c_1 = 4/n^2 and c_2 = \pi^2/n^2. Thus the assumption amounts to requiring that the solution satisfy a decay constraint of the form |\alpha_{kl}| \leq a' n^2/(k^2 + l^2)^{m+1} for some m > 0.5.
This assumption can be given the following spectral interpretation. Smaller values of k or l correspond to lower frequencies along the vertical or horizontal direction, and vice versa. Consequently, the assumption means that the overall frequency content of the input data (the right-hand side) decays at least like an inverse power of |(k,l)| = (k^2 + l^2)^{0.5} of degree greater than 1. This is easily satisfied when the overall frequency distribution of the input data is decreasing or uniform, which holds in practice since the low-frequency content of signals is normally much higher than the high-frequency content.
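As an illustration of why this assumption is mild (this example is ours, not part of the paper's argument), the sketch below projects a smooth synthetic right-hand side, a Gaussian bump, onto the basis q_k \otimes q_l and compares the energy in the lowest- and highest-frequency quarters of the basis:

```python
import numpy as np

n = 64
# Neumann-type 1-D Laplacian (assumed form of A).
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
A[0, 0] = A[-1, -1] = 1.0
lam, Q = np.linalg.eigh(A)            # columns of Q: increasing frequency

# A smooth synthetic right-hand side: a Gaussian bump on the unit square.
x = np.linspace(0, 1, n)
B = np.exp(-((x[:, None] - 0.4) ** 2 + (x[None, :] - 0.6) ** 2) / 0.02)

b = np.abs(Q.T @ B @ Q)               # |b_kl|: projections onto q_k (x) q_l

# Energy in the lowest-frequency quarter vs. the highest-frequency quarter.
low = np.sum(b[: n // 4, : n // 4] ** 2)
high = np.sum(b[3 * n // 4 :, 3 * n // 4 :] ** 2)
print(low, high)                      # low-frequency energy dominates by far
```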
We first define two matrices X' and \Delta X'_t for use in the convergence analysis of the ADI method. Let X' and \Delta X'_t be the projections of the true solution X^* and of the error in the computed solution at iteration t, \Delta X_t, on the range of the matrix K_0 (= A \otimes I + I \otimes A), respectively. Note that X' and X^* are almost the same, except that X' has no component along the basis element q_0 \otimes q_0. In the ADI literature [16], the convergence criterion is normally defined as follows: the ADI method for the Lyapunov matrix equation is said to have converged when the relative Frobenius norm of the error is reduced to within a specified error tolerance \epsilon lying between 0 and 1, i.e., \|\Delta X'_t\|_F \leq \epsilon \|X'\|_F. This means we do not need to reduce the error component along the basis element corresponding to the zero eigenvalue, i.e., in the null space of K_0. The problem is to find the number of iterations required by the ADI method, and the associated ADI parameter set, to achieve \|\Delta X'_t\|_F \leq \epsilon \|X'\|_F.
We make use of this convergence criterion and Assumption 1 to reformulate the convergence theory of the ADI method as follows. The Frobenius norm \|\Delta X'_t\|_F can be written as

\|\Delta X'_t\|_F^2 = \sum_k \sum_l \alpha_{kl}^2 \prod_{j=1}^{t} \left( \frac{\lambda_k - p_j}{\lambda_k + p_j} \right)^2 \left( \frac{\lambda_l - p_j}{\lambda_l + p_j} \right)^2,   (36)

where the double summation takes every k and l between 0 and n-1 except k = l = 0 (simultaneously). As a consequence of Assumption 1, the ADI method for our Lyapunov matrix equation can converge in a constant number of iterations. We state and prove this convergence result in the following theorem.
Theorem 2 Given Assumption 1 and an error tolerance \epsilon between 0 and 1, there exists a constant integer t such that \forall n, \|\Delta X'_t\|_F \leq \epsilon \|X'\|_F.

Proof: From the discussion of Assumption 1, we have the following inequality for \|X'\|_F:

\|X'\|_F^2 = \sum_k \sum_l \left( \frac{b_{kl}}{\lambda_k + \lambda_l} \right)^2 \geq \frac{n^4}{\pi^4} \sum_k \sum_l \frac{b_{kl}^2}{(k^2 + l^2)^2} \geq \frac{\alpha a^2 n^4}{\pi^4}.   (37)
Applying assumption 1 to equation 36, we have the following inequality
2 A P Al pj
II~AX' F lP 2 2)2m)2
k I (k + 2(Ak + A j=1 Ak +pj l +pj
W2
4m2 I )R(p p, P. ., A),
n K + (((k)c2 +()2m (Ak + A1)2q n21
19
where q is a positive number between 0 and 2m + 1 and
1 k Pj 2 A j 2
R(p, t)= r +) (38)
Using the fact that \lambda_k = 4\sin^2(k\pi/2n) \geq 4(k/n)^2 and substituting into the above inequality, we have

\|\Delta X'_t\|_F^2 \leq \frac{a^2 n^{4-2q}}{4^{2-q}} \sum_k \sum_l \frac{1}{(k^2+l^2)^{2m+2-q}}\, R(p_1, p_2, \ldots, p_t, \lambda_k, \lambda_l).   (39)

Then we have

\|\Delta X'_t\|_F^2 \leq \frac{a^2}{4^{2-q} n^{4m}} \sum_k \sum_l \frac{1}{((k/n)^2 + (l/n)^2)^{2m+2-q}}\ \max_{(k,l) \neq (0,0)} R(p_1, p_2, \ldots, p_t).   (40)
Note that the double summation on the RHS of equation 40 takes every integer k and l between 0 and n-1 except k = l = 0 simultaneously. Let the set of all pairs (k,l) in this double summation be denoted by A. By splitting the set A, we have

\sum_{(k,l) \in A} f(k,l) = f(0,1) + f(1,0) + f(1,1) + \sum_{k=2}^{n-1} f(k,0) + \sum_{l=2}^{n-1} f(0,l) + \sum_{(k,l) \in B} f(k,l),   (41)

where f(k,l) = \frac{1}{((k/n)^2 + (l/n)^2)^{2m+2-q}} and B = \{(k,l) : k, l = 1, \ldots, n-1\} \setminus \{(1,1)\}. Using the fact that f(k,l) is a strictly decreasing function of k and l, we can bound each sum in equation 41 by a corresponding integral as follows:

\sum_{(k,l) \in A} f(k,l) \leq \{f(0,1) + f(1,0) + f(1,1)\} + n \int_{1/n}^{1} \frac{dx}{x^{4m+4-2q}} + n \int_{1/n}^{1} \frac{dy}{y^{4m+4-2q}} + n^2 \iint_S \frac{dx\,dy}{(x^2 + y^2)^{2m+2-q}},   (42)

where the integration domain is S = \{(x,y) \in [0,1] \times [0,1] \setminus [0, 1/n) \times [0, 1/n)\}. By transforming from Cartesian to polar coordinates, the term with the double integral in equation 42 can be bounded by the following integral:

n^2 \iint_S \frac{dx\,dy}{(x^2+y^2)^{2m+2-q}} \leq n^2 \int_0^{\pi/2} \int_{1/n}^{\sqrt{2}} \frac{r\,dr\,d\theta}{r^{4m+4-2q}} = \frac{\pi n^2 \left( n^{4m+2-2q} - 2^{-(2m+1-q)} \right)}{4(2m+1-q)} \leq \frac{\pi n^{4m+4-2q}}{4(2m+1-q)}.   (43)

Substituting equation 43 into equation 42 and evaluating the remaining integrals, we obtain the inequality

\sum_k \sum_l \frac{1}{((k/n)^2 + (l/n)^2)^{2m+2-q}} \leq \beta n^{4m+4-2q},   (44)

where \beta = 3 + \frac{2}{4m+3-2q} + \frac{\pi}{4(2m+1-q)} is a constant.
Using equations 37, 40 and 44, the convergence criterion \|\Delta X'_t\|_F \leq \epsilon \|X'\|_F can be met by imposing the following derived condition:

\max_{(k,l) \neq (0,0)} R(p_1, p_2, \ldots, p_t) \leq M n^{2q},   (45)

where M is the constant defined as M := \epsilon^2 \alpha\, 4^{2-q} / (\pi^4 \beta). Furthermore, we can bound the function \max_{(k,l) \neq (0,0)} R(p_1, \ldots, p_t) by a function of \lambda_k alone as follows:

\max_{(k,l) \neq (0,0)} R(p_1, \ldots, p_t) = \max_{(k,l) \neq (0,0)} \frac{1}{(\lambda_k + \lambda_l)^q} \prod_{j=1}^{t} \left( \frac{\lambda_k - p_j}{\lambda_k + p_j} \right)^2 \left( \frac{\lambda_l - p_j}{\lambda_l + p_j} \right)^2
\leq \max_{k \neq 0} \frac{1}{\lambda_k^q} \prod_{j=1}^{t} \left( \frac{\lambda_k - p_j}{\lambda_k + p_j} \right)^2.

The maximum of this function over the nonzero eigenvalues can be bounded by the maximum of the same function over an interval containing all the nonzero eigenvalues. Then
the convergence can be reached by using the following modified requirement:

\max_{\lambda \in [4\sin^2(\pi/2n),\ 4]} \frac{1}{\lambda^q} \prod_{j=1}^{t} \left( \frac{\lambda - p_j}{\lambda + p_j} \right)^2 \leq M n^{2q}.   (46)

For any positive parameters p_j, the function on the left-hand side is bounded above by 1/\lambda^q, which is already below the right-hand side whenever \lambda \geq 1/(M^{1/q} n^2). Therefore, in equation 46, the interval [4\sin^2(\pi/2n), 4] can be replaced by S = \left( 4\sin^2(\pi/2n),\ \min\!\left(4,\ \frac{1}{M^{1/q} n^2}\right) \right).
The convergence requirement in equation 46 can be made less stringent by using the inequality

\max_{x \in S} \frac{1}{x^q} \prod_{j=1}^{t} \left( \frac{x - p_j}{x + p_j} \right)^2 \leq \left( \max_{x \in S} \prod_{j=1}^{t} \left( \frac{x - p_j}{x + p_j} \right)^2 \right) \left( \max_{x \in S} \frac{1}{x^q} \right),   (47)

leading to the following modified requirement, which, when satisfied, automatically makes inequality 46 true:

\max_{x \in S} \prod_{j=1}^{t} \left( \frac{x - p_j}{x + p_j} \right)^2 \leq (M n^{2q}) \frac{1}{\max_{x \in S} x^{-q}} = (M n^{2q}) \left( 4\sin^2\!\frac{\pi}{2n} \right)^q =: M'.   (48)
This bound M' approaches a constant as n \to \infty. Thus, the problem becomes one of finding the number of iterations t and the parameters p_j such that the requirement in 48 is satisfied. This requirement is of the same form as the classical ADI minimax problem, which was solved by Jordan as quoted in [27]. We can use the result in [27] to determine the number of iterations required for convergence. The number of iterations needed to meet this requirement is

J = \left\lceil \frac{\ln(4/M')}{\ln(1/k')} \right\rceil,   (49)
k' = \frac{1}{v' + \sqrt{v'^2 - 1}},   (50)
v' = \frac{1}{2} \left( v + \frac{1}{v} \right),   (51)

where \lceil z \rceil denotes the smallest integer larger than z and v is the expansion factor of an interval (a,b), defined as b/a. The expansion factor v of the interval of interest S remains bounded as n approaches infinity, i.e.,

\lim_{n \to \infty} v = \frac{1}{\pi^2 M^{1/q}}.   (52)
In the classical ADI minimax formulation, v is the ratio of the extreme eigenvalues being considered in A, which is shown to be of the order of n^2 [24]. Since we use a different formulation for convergence and impose the spectral assumption on the data, the resulting v becomes \min(4, 1/(M^{1/q} n^2)) / (4\sin^2(\pi/2n)). When n \to \infty, the ratio v approaches a constant, as given in equation 52, and M' also approaches a constant, as mentioned above. Therefore, we conclude that \lim_{n \to \infty} J is a constant, i.e., there is a constant t such that \|\Delta X'_t\|_F \leq \epsilon \|X'\|_F is satisfied for all n. \blacksquare
In [27], an optimal set of ADI parameters for the general ADI iterations was given. We may use
these parameters to solve the minimax problem in equation 48. Although this set of ADI parameters
is not the optimal set to meet our convergence requirement, it provides us with a parameter set
which can achieve the convergence criterion in a constant number of iterations. Therefore, we use
the result in [27] to select a suboptimal ADI parameter set. Solving for the optimal ADI parameters
in our problem requires complicated analysis using transformation theory [27] and is currently under
investigation.
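The quantity constrained in equation 48 can be evaluated numerically. The sketch below uses a simple geometric parameter spacing as a stand-in for the elliptic-function parameters of [27]; it only illustrates how the error-reduction bound falls as the number of parameters t grows:

```python
import numpy as np

def error_reduction(params, interval, samples=20000):
    """Max over the interval of prod_j ((x - p_j)/(x + p_j))^2, i.e. the
    left-hand side of the modified requirement (equation 48)."""
    x = np.geomspace(interval[0], interval[1], samples)
    prod = np.ones_like(x)
    for p in params:
        prod *= ((x - p) / (x + p)) ** 2
    return prod.max()

n = 64
# Interval containing all nonzero eigenvalues of A.
a, b = 4 * np.sin(np.pi / (2 * n)) ** 2, 4.0

for t in (5, 10, 15):
    params = np.geomspace(a, b, t)     # illustrative geometric spacing
    print(t, error_reduction(params, (a, b)))
```

The printed bound decreases rapidly with t; the optimal parameters of [27] achieve a still smaller bound for the same t.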
4.2 Modified BCG Method for the Capacitance Matrix Equation
In this section we present an algorithm to solve the capacitance matrix equation defined earlier.
The conjugate gradient (CG) method is a very powerful iterative scheme for solving symmetric positive definite linear systems. A natural generalization of CG-type methods to a general nonsymmetric linear system is the biconjugate gradient (BCG) method [14]. The linear system with the capacitance matrix is nonsymmetric and indefinite. The size of the capacitance matrix is determined by the rank of the matrix UV^T, which contains the difference between K and K_0. With the K_0 proposed in section 3.2, the size of the associated capacitance matrix is usually O(n). The BCG method requires the computation of a matrix-vector product in each iteration. Since the capacitance matrix C is dense, this matrix-vector product takes O(N) operations. In this
section, we present a modified biconjugate gradient method to solve this linear system by applying
the wavelet transform technique to approximate the multiplication and reduce the computational
cost to O(n). Since the BCG converges in O(n) iterations, the computational complexity for the
modified BCG method is O(n2) = O(N).
The BCG algorithm [14] for the nonsymmetric linear system C\theta = f is as follows.

0. Choose \theta_0, and set q_0 = r_0 = f - C\theta_0; choose \tilde{r}_0 \neq 0, and set \tilde{q}_0 = \tilde{r}_0.
1. Compute \alpha_k^N = \tilde{r}_{k-1}^T r_{k-1}, \alpha_k^D = \tilde{q}_{k-1}^T C q_{k-1}, and \alpha_k = \alpha_k^N / \alpha_k^D.
2. Update \theta_k = \theta_{k-1} + \alpha_k q_{k-1}.
3. Set r_k = r_{k-1} - \alpha_k C q_{k-1}, and \tilde{r}_k = \tilde{r}_{k-1} - \alpha_k C^T \tilde{q}_{k-1}.
4. Compute \beta_k^N = \tilde{r}_k^T r_k, and \beta_k = \beta_k^N / \alpha_k^N.
5. Set q_k = r_k + \beta_k q_{k-1}, and \tilde{q}_k = \tilde{r}_k + \beta_k \tilde{q}_{k-1}.
6. If r_k \approx 0 or \tilde{r}_k \approx 0, stop; else go to 1.
Like the CG method, the operations and storage requirements per step of BCG are constant. The convergence behavior is similar to that of CG, except that the BCG algorithm is susceptible to breakdowns and numerical instabilities. To be more specific, division by 0 (or a number very close to 0) may occur in computing \alpha_k and \beta_k. These two different breakdowns may be avoided by using the look-ahead Lanczos algorithm or the quasi-minimal residual (QMR) algorithm [8].
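The steps above translate directly into code. The following is a minimal dense-matrix sketch of BCG without the wavelet acceleration or breakdown safeguards, exercised on a well-conditioned nonsymmetric test matrix of our own choosing (the capacitance system itself is indefinite and may need the safeguards just mentioned):

```python
import numpy as np

def bcg(C, f, x0=None, tol=1e-10, max_iter=500):
    """Biconjugate gradient for a general nonsymmetric system C x = f.
    Follows steps 0-6 above; no look-ahead/QMR breakdown safeguards."""
    n = C.shape[0]
    x = np.zeros(n) if x0 is None else x0.copy()
    r = f - C @ x
    rt = r.copy()                    # shadow residual r~_0 (any r~_0^T r_0 != 0)
    q, qt = r.copy(), rt.copy()
    rho = rt @ r                     # alpha^N of step 1
    for _ in range(max_iter):
        alpha = rho / (qt @ (C @ q))     # steps 1-2
        x += alpha * q
        r -= alpha * (C @ q)             # step 3
        rt -= alpha * (C.T @ qt)
        if np.linalg.norm(r) < tol:      # step 6
            break
        rho_new = rt @ r                 # step 4
        beta = rho_new / rho
        rho = rho_new
        q = r + beta * q                 # step 5
        qt = rt + beta * qt
    return x

rng = np.random.default_rng(2)
n = 40
# Nonsymmetric, well-conditioned test matrix (diagonally dominant).
C = np.diag(2.0 + rng.random(n)) + 0.1 * rng.standard_normal((n, n))
f = rng.standard_normal(n)
x = bcg(C, f)
print(np.linalg.norm(C @ x - f))     # residual near machine precision
```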
In the above algorithm, two matrix-vector multiplications, C q_{k-1} and C^T \tilde{q}_{k-1}, are required in each iteration. Since the matrix C is dense, direct multiplication takes O(N) operations, which is not acceptable in our pursuit of an O(N) algorithm for the Poisson equation. Thus, we propose to carry out the multiplication in a wavelet basis to reduce the computational cost [1, 2]. With an appropriate wavelet basis [2], the transformed matrix can be approximated to high accuracy by a sparse matrix. After this approximation, the multiplication of the sparse matrix with the transformed vector requires only O(n) or O(n log n) operations [2]. Consequently, the modified BCG method is computationally efficient.
The wavelet transforms of a matrix D and a vector q can be written as

\tilde{D} = P D P^T, \quad \tilde{q} = P q,   (53)

where P is the wavelet transform matrix, with each row corresponding to a wavelet basis vector. If we use an orthogonal wavelet transform, then Dq = P^T(\tilde{D}\tilde{q}). Therefore, we compute the matrix-vector multiplication in the wavelet domain and then transform back to the original domain using the wavelet reconstruction (inverse wavelet transform) to obtain the product Dq.
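A minimal sketch with the orthonormal Haar transform (the actual basis from [2] used in our implementation may differ) verifies the identity Dq = P^T(\tilde{D}\tilde{q}):

```python
import numpy as np

def haar_matrix(n):
    """Orthonormal Haar wavelet transform matrix (n must be a power of 2)."""
    if n == 1:
        return np.array([[1.0]])
    H = haar_matrix(n // 2)
    top = np.kron(H, [1.0, 1.0])                  # scaling (average) rows
    bot = np.kron(np.eye(n // 2), [1.0, -1.0])    # wavelet (detail) rows
    return np.vstack([top, bot]) / np.sqrt(2.0)

n = 8
P = haar_matrix(n)
print(np.allclose(P @ P.T, np.eye(n)))            # P is orthonormal

rng = np.random.default_rng(3)
D = rng.standard_normal((n, n))
q = rng.standard_normal(n)

Dt = P @ D @ P.T                                  # equation 53: D~ = P D P^T
qt = P @ q                                        # q~ = P q
print(np.allclose(D @ q, P.T @ (Dt @ qt)))        # Dq recovered exactly
```

In practice \tilde{D} is thresholded to a sparse matrix before the product, which is where the O(n) to O(n log n) cost arises.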
It is crucial to know the structure of the capacitance matrix in order to approximate it accurately and efficiently. In equation 12, the capacitance matrix is composed of three matrices: the identity matrix I, a sparse matrix formed from outer products of standard basis vectors, and a dense matrix involving V^T. The first two matrices are extremely sparse, but the third is dense. When computing the matrix-vector product of the capacitance matrix with a vector, we compute the products with the two sparse matrices directly, while the aforementioned wavelet approximation scheme is used only for the dense matrix.
5 Experimental Results
In this section, we present the results of applying our algorithm to the surface reconstruction and
shape from shading problems. Our algorithm can also be applied to other lowlevel vision problems
such as the lightness, and optic flow problems, however, due to lack of space we chose to limit our
discussion to the aforementioned two problems.
Surface Reconstruction: In this problem we applied our algorithm to solve the linear system
Kx = b which was derived in section 2.1.1. We synthesized a sparse range data set on a 64 x 64
grid and introduced a depth discontinuity along the line between coordinates (1, 32) and (30, 32).
The input data set is very sparse, containing 15 data points randomly distributed in the plane, as shown in figure 2(a). We apply our algorithm to this data to obtain the interpolated surface. We can apply equation 49 to compute the number of iterations required to guarantee an error tolerance \epsilon within 10^{-5}. In this example, we observe that 10 ADI iterations are needed to attain this tolerance. We depict the surfaces after 1 and 10 ADI iterations in figures 2(b) and 2(c), respectively. In figure 2(b), we can see that just one ADI iteration produces the global shape of the exact surface. This is because we advance the ADI iterations with the parameters p_j starting from a small value and proceeding toward larger values. The smaller parameters recover the low-frequency content of the surface, and the larger values refine the shape with high-frequency components. Consequently, it is not surprising that the global shape is recovered in one iteration.
This phenomenon of global-to-local shape recovery is quite different from that of other iterative numerical methods, most of which do not exhibit this type of spectral characteristic during the iterations. In addition to the fast convergence rate, another advantage of the ADI method is that we can make use of the spectral properties of the differential operator in the PDE (in our case, the Laplacian) to speed up the convergence.
The SFS Problem: In this problem, we tested our algorithm on a synthetically generated image
of a Lambertian sphere illuminated by a distant point source and viewed from the same location as
the point source. Figure 3(a) depicts the 64 x 64 original shaded image with 8-bit gray-level resolution per pixel. The sphere is assumed to have Lambertian reflectance properties with a constant
Figure 2: Surface reconstruction example; (a) original sparse data, (b) solution after 1 iteration of the ADI method, and (c) after 10 iterations.

Figure 3: Shape from shading example; (a) shaded sphere image, (b) irregular occluding boundary embedded in a rectangle, (c) recovered surface orientation, (d) reconstructed surface shape.
albedo. Both depth and orientation data are specified on the occluding contour depicted in figure 3(b). Note that we have to solve three systems of equations, namely K x_1 = b_1, K x_2 = b_2 and K_1 x_3 = b_3 (see section 2.1.2). The first two systems recover the surface normals, and the last recovers the depth from the surface orientation map. We start the ADI iterations for these linear systems with the initial values p = q = z = 0. The recovered surface orientation is shown in figure 3(c), and the final surface shape obtained using the orientation information is depicted in figure 3(d). For each computed (p, q) on the RHS of equation 9, the ADI error tolerance was set to 10^{-5}. The same tolerance was used for the ADI iterations in the depth-from-orientation problem.
Although we have not compared the computational time of existing solution methods for the discretized Poisson equation with the computational performance of our algorithm, we believe that our method will outperform most existing methods due to the O(N) complexity of our algorithm. Future research efforts will focus on empirically demonstrating this claim.
6 Conclusion
In this paper, we presented a novel and fast (0(N)) algorithm for solving the discretized Poisson
equation whose solution is required in numerous lowlevel vision problems. We presented two major
results namely, a generalized capacitance matrix theorem and an 0(N) convergence result for the
ADI method. Our algorithm does not make any assumptions on the shape of the input domain unlike
the polyhedral domain assumption in the convergence of multigrid techniques "', 29]. Thus, this
makes our method the first O(N) solution to the discretized Poisson equation in literature, with no
assumptions on the input domain boundary.
The implication of the generalized capacitance matrix theorem presented in this paper is that we can now apply the capacitance matrix technique to PDEs with higher-order differential operators, such as the biharmonic operator. Also, we can handle cases such as the SFS problem on an irregular domain with Neumann boundary conditions, which yields a singular K matrix. This case cannot be solved using the capacitance matrix technique of Buzbee et al. [6], as mentioned in [19], because the capacitance matrix theorem of [6] requires a nonsingular K. Our algorithm gives a solution to this problem (recovers a surface) within a scale factor in depth.
The techniques presented here will be useful for obtaining fast solutions to problems in other engineering fields that lead to the Poisson equation. The framework of our solution holds strong promise for solving higher-order PDEs in O(N) time. Our future research efforts will be focused on extending our O(N) solution to PDEs with higher-order differential operators.
References
[1] B.K. Alpert. "Wavelets and other bases for fast numerical linear algebra,". Academic Press
Inc., 1992.
[2] G. Beylkin, R. R. Coifman, and V. Rokhlin. Fast wavelet transforms and numerical algorithms I. Comm. on Pure and Applied Math., 44:141-183, 1991.
[3] A. Blake and A. Zisserman. Visual Reconstruction. The MIT Press, Cambridge, MA, 1987.
[4] R.M. Bolle and B.C. Vemuri. "On threedimensional surface reconstruction methods,". IEEE
Trans. Pattern Anal. Machine Intell., 13(1):113, 1991.
[5] Dietrich Braess. "The convergence of a multigrid method with GaussSeidel relaxation for the
Poisson equation,". Mathematics of Computation, 42(166):505519, April 1984.
[6] B. L. Buzbee, F. W. Dorr, J. A. George, and G. H. Golub. "The direct solution of the discrete Poisson equation on irregular regions,". SIAM Journal on Numerical Analysis, 8(4):722-736, December 1971.
[7] G. Dahlquist and A. Bjorck. Numerical Methods. PrenticeHall Inc., Englewood Cliffs, NJ,
1974.
[8] R.W. Freund and N.M. Nachtigal. "QMR: a quasiminimal residual method for nonHermitian
matrices,". Technical Report 90.51, RIACS, NASA Ames Research Center, Moffett Field, 1990.
[9] G.H. Golub and C.F. Van Loan. Matrix Computations. The Johns Hopkins University Press,
2nd edition, 1989.
[10] W. Hackbusch and U. Trottenberg, editors. Multigrid Methods (Lecture Notes in Mathematics),
volume 960. SpringerVerlag, New York, 1982.
[11] B. K. P. Horn. "Determining lightness from an image,". Computer Graphics and Image Processing, 3:277-299, 1974.
[12] B. K. P. Horn and M. J. Brooks, editors. Shape from Shading. The MIT Press, Cambridge, MA, 1989.
[13] B. K. P. Horn and B. G. Schunck. "Determining optical flow,". Artificial Intelligence, 17:185-203, 1981.
[14] C. Lanczos. "An iteration method for the solution of the eigenvalue problem of linear differential and integral operators,". J. Res. Natl. Bur. Stand., 45:255-282, 1950.
[15] D. Lee. "A provably convergent algorithm for shape from shading,". In DARPA Image Understanding Workshop, pages 489-496, Miami. DARPA.
[16] A. Lu and E. L. Wachspress. "Solution of Lyapunov equations by alternating direction implicit
iteration,". Computers Math. Applic., 21(9):4358, 1991.
[17] T. Poggio and V. Torre. "Ill-posed problems and regularization analysis in early vision,". Technical Report AI Memo 773, MIT A.I. Lab, Cambridge, MA, 1984.
[18] W. Proskurowski and O. Widlund. "On the numerical solution of Helmholtz's equation by the capacitance matrix method,". Mathematics of Computation, 30(135):433-468, 1976.
[19] T. Simchony, R. Chellappa, and M. Shao. "Direct analytical methods for solving Poisson equations in computer vision problems,". IEEE Transactions on Pattern Analysis and Machine Intelligence, 12(5), May 1990.
[20] R. Szeliski. "Fast surface interpolation using hierarchical basis functions,". IEEE Trans.
Pattern Anal. Machine Intell., 12(6):513528, 1990.
[21] D. Terzopoulos. "Image analysis using multigrid relaxation methods,". IEEE Trans. Pattern Anal. Machine Intell., 8:129-139, 1986.
[22] D. Terzopoulos. "Regularization of inverse visual problems involving discontinuities,". IEEE Trans. Pattern Anal. Machine Intell., 8(4):413-424, 1986.
[23] D. Terzopoulos. "The computation of visible-surface representations,". IEEE Trans. Pattern Anal. Machine Intell., 10(4):417-438, 1988.
[24] B. C. Vemuri and S. H. Lai. "A fast solution to the surface reconstruction problem,". In SPIE
conference on Mathematical Imaging: Geometric Methods in Computer Vision, pages 2737,
San Diego, CA, July 1993. SPIE.
[25] B. C. Vemuri and R. Malladi. "Constructing intrinsic parameters with active models for invariant surface reconstruction,". IEEE Transactions on Pattern Analysis and Machine Intelligence, 15(7):668-681, July 1993.
[26] B.C. Vemuri, A. Mitiche, and J.K. Aggarwal. "Curvature-based representation of objects from range data,". Image and Vision Comput., 4(2):107-114, 1986.
[27] E. L. Wachspress. "Extended application of alternating direction implicit iteration model
problem theory,". Journal of SIAM, 11(4):9941016, December 1963.
[28] Jinchao Xu. "Iterative methods by space decomposition and subspace correction,". SIAM Review, 34(4):581-613, December 1992.
[29] Xuejun Zhang. "Multilevel Schwarz methods,". Numerische Mathematik, 63:521-539, 1992.
Acknowledgements
This research was supported in part by the NSF grant ECS9210648. We thank Dr. M. Shah of
UCF for supplying the SFS data.