Generalized Capacitance Matrix Theorems for Solving Linear Systems
S. H. Lai and B. C. Vemuri
email: hong@cis.ufl.edu and vemuri@cis.ufl.edu
UF-CIS Technical Report TR-94-026
Computer and Information Sciences Department
University of Florida
Bldg. CSE, Room 301
Gainesville, Florida 32611
August 19, 1994
This research was supported in part by the National Science Foundation under grant NSF:ECS-9210648.
Abstract
The capacitance matrix method has been widely used as an efficient numerical tool for solving boundary value problems on irregular regions. Initially, this method was based on the Sherman-Morrison-Woodbury formula, an expression for the inverse of the matrix (A + UV^T) with A ∈ ℝ^{n×n} and U, V ∈ ℝ^{n×p}. Extensions to the capacitance matrix method with restrictive assumptions on the matrices A and (A + UV^T) have been reported in the past. In this paper, we present several theorems which are generalizations of the capacitance matrix theorem [2] and are suited for very general matrices A and (A + UV^T). A modified capacitance matrix method is developed from these theorems; it holds the promise of being applicable to more general boundary value problems, and it gives ample freedom to choose the matrix A for developing very efficient numerical algorithms.
1 Introduction
There are numerous applications of the Sherman-Morrison-Woodbury formula in various fields
[5][14]. These applications can be classified into two categories. One is to compute the inverse
of a matrix that has low-rank difference w.r.t. another matrix whose inverse is available or can be
efficiently computed; the other is to solve a linear system whose matrix differs from a well-structured
matrix by a low-rank matrix and very efficient methods exist to solve the linear system with the
well-structured matrix. The latter case is also called the capacitance matrix method. This method
has been widely used for solving linear systems arising from the discretization of boundary value problems, primarily elliptic partial differential equations. In this paper, we state and prove
some very general theorems which are then applied in developing a new capacitance matrix method.
When A ∈ ℝ^{n×n}, U, V ∈ ℝ^{n×p}, and both the matrices A and (I + V^T A^{-1} U) are nonsingular, the Sherman-Morrison-Woodbury formula [4] gives an expression for the inverse of (A + UV^T) as

    (A + UV^T)^{-1} = A^{-1} − A^{-1} U (I + V^T A^{-1} U)^{-1} V^T A^{-1}.   (1)

The above assumptions imply that the matrix (A + UV^T) is also nonsingular. The matrix (I + V^T A^{-1} U) is called the capacitance matrix and is denoted by C. Note that a similar formula can be obtained for the inverse of (A + UGV^T), where the matrix G ∈ ℝ^{p×p} is nonsingular [5].
Let B = A + UV^T, with A, B, and C being nonsingular. Then the linear system Bx = b can be solved using the traditional capacitance matrix method, which involves the following steps:
1. Solve A x̃ = b for x̃.
2. Compute W = A^{-1} U by solving AW = U column by column.
3. Form the capacitance matrix C = I + V^T W.
4. Solve Cβ = V^T x̃ for β.
5. Form the solution x = x̃ − Wβ.
This capacitance matrix method is derived directly from the Sherman-Morrison-Woodbury formula,
and therefore it can be applied only when A, B, and C are all nonsingular, making it unsuitable for general linear systems. In addition, the choice of a well-structured matrix A close to the original matrix is limited to nonsingular matrices.
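For a concrete illustration, the five steps above can be sketched with dense NumPy linear algebra. The matrices here are randomly generated stand-ins, not from any particular PDE, and np.linalg.solve stands in for the fast structured solver one would use for A in practice:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 8, 2

# A: a well-conditioned "well-structured" stand-in (diagonally dominant);
# in practice A would admit a fast solver (e.g., a fast Poisson solver).
A = rng.standard_normal((n, n)) + n * np.eye(n)
U = rng.standard_normal((n, p))
V = rng.standard_normal((n, p))
B = A + U @ V.T
b = rng.standard_normal(n)

x_t = np.linalg.solve(A, b)           # step 1: A x~ = b
W = np.linalg.solve(A, U)             # step 2: A W = U (column by column)
C = np.eye(p) + V.T @ W               # step 3: capacitance matrix
beta = np.linalg.solve(C, V.T @ x_t)  # step 4: C beta = V^T x~
x = x_t - W @ beta                    # step 5: assemble the solution

assert np.allclose(B @ x, b)
```

The point of the method is that only p + 1 solves with A (plus a small p × p solve) are needed, so it pays off whenever solving with A is much cheaper than solving with B directly.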
Buzbee et al. [2] proposed a fast algorithm to solve the Poisson equation (a second-order elliptic PDE) on irregular regions. Their algorithm used a domain imbedding technique to imbed the irregular region in a rectangular region and utilized a fast Poisson solver (on the rectangular region) based on the capacitance matrix method. The discrete Poisson operator on the rectangular domain is the Laplacian matrix, which is singular with rank n − 1. Therefore, they presented a theorem generalizing the traditional capacitance matrix method to the case when rank(A) = n − 1 and B is nonsingular. This generalization is based on modifying the right-hand sides (RHSs) of the linear systems appearing in the first and second steps.
The capacitance matrix method has been further employed as a discrete analog of potential theory for partial differential equations by Proskurowski and Widlund [9][10], Shieh [12][13], O'Leary and Widlund [8], Dryja [3], Borgers and Widlund [1], and others. They used the framework of the capacitance matrix method to solve second-order elliptic boundary value problems.
Although the matrix B could be singular for the Neumann problem, the chosen matrix A was
assumed to be nonsingular. This limits the choice of the well-structured matrix A in designing a
more efficient algorithm.
In this paper, we present several theorems which we call generalized capacitance matrix theorems
that lead to a modified capacitance matrix method which can deal with very general linear systems.
In this method, there are no restrictions on the rank of the matrices A or B and it can be applied
to solve any linear system as long as it has a solution. In addition, with this generalized capacitance
matrix method, we have more freedom to choose the well-structured matrix A.
In the next section, we will present the generalized capacitance matrix theorems for different types
of relationships between the matrices A and B. Some discussion regarding how these theorems may
be used to modify the traditional capacitance matrix method will be given in the final section.
2 Generalized Capacitance Matrix Theorems
The generalized capacitance matrix theorems are distinguished according to the type of relationship between the matrices A and B. The first theorem is for the case when null(B) ⊆ null(A) and range(A) ⊆ range(B), while the second theorem is for the assumption null(A) ⊆ null(B) and range(B) ⊆ range(A). There are no assumptions on A and B in the third theorem. The generalized capacitance matrix theorem for the first case is given as follows.
Theorem 1 Given B = A + UGV^T, where A, B ∈ ℝ^{n×n}, G ∈ ℝ^{p×p} is nonsingular, U = [u_1, ..., u_p] ∈ ℝ^{n×p}, V = [v_1, ..., v_p] ∈ ℝ^{n×p}, with both the sets {u_1, ..., u_p} and {v_1, ..., v_p} containing linearly independent vectors. Assume null(B) ⊆ null(A) and range(A) ⊆ range(B). Let rank(B) = n − m_B and rank(A) = n − m_A, 0 ≤ m_B ≤ m_A ≤ n. Then there exist two sets of orthogonal bases, {q_1, ..., q_{m_A}} and {q'_1, ..., q'_{m_A}}, for null(A^T) and null(A) respectively, such that null(B^T) = span{q_{m_A−m_B+1}, ..., q_{m_A}} and null(B) = span{q'_{m_A−m_B+1}, ..., q'_{m_A}}. Let q_i^T u_j = a_i δ(i, j), a_i ≠ 0, for 1 ≤ i, j ≤ m_A − m_B.

Let x̃ be a solution to

    A x̃ = b − Σ_{j=1}^{m_A−m_B} (q_j^T b / q_j^T u_j) u_j,   (2)

where b ∈ range(B). For 1 ≤ i ≤ p, let η_i be a solution to

    A η_i = u_i − Σ_{j=1}^{m_A−m_B} (q_j^T u_i / q_j^T u_j) u_j,   (3)

and choose η_1, ..., η_{m_A−m_B} to be linearly independent linear combinations of q'_1, ..., q'_{m_A−m_B}. Define the capacitance matrix C as

    C = G^{-1} + V^T η − Σ_{j=1}^{m_A−m_B} (1 / q_j^T u_j) G^{-1} e_j q_j^T U,   (4)

where η = [η_1, ..., η_p], and e_j is the j-th standard basis vector in ℝ^p. Then there exists a unique solution β to the capacitance matrix equation

    Cβ = V^T x̃ − Σ_{j=1}^{m_A−m_B} (q_j^T b / q_j^T u_j) G^{-1} e_j,   (5)

and x̃ − ηβ is a solution to the linear system Bx = b.
We will use the following proposition to prove this theorem.
Proposition 1 Under the assumptions in Theorem 1, we have v_i^T q'_j = u_i^T q_j = 0 for 1 ≤ i ≤ p and m_A − m_B + 1 ≤ j ≤ m_A. In addition, V^T[q'_1 ... q'_{m_A−m_B}] and U^T[q_1 ... q_{m_A−m_B}] are of full rank.

Proof: From the assumptions in Theorem 1, Bq'_j = Aq'_j = 0 for m_A − m_B + 1 ≤ j ≤ m_A. Since Bq'_j = (A + UGV^T)q'_j, we have UGV^T q'_j = 0 for m_A − m_B + 1 ≤ j ≤ m_A. From the assumption that U ∈ ℝ^{n×p}, p ≤ n, has full rank and G is nonsingular, it is obvious that V^T q'_j = 0. Therefore, we have v_i^T q'_j = 0 for 1 ≤ i ≤ p and m_A − m_B + 1 ≤ j ≤ m_A. Similarly, we can show that u_i^T q_j = 0 for 1 ≤ i ≤ p and m_A − m_B + 1 ≤ j ≤ m_A.

We use proof by contradiction to show that the matrix V^T[q'_1 ... q'_{m_A−m_B}] is of full rank. Assume this matrix is rank deficient, i.e., there exists a nonzero vector α = (α_1, ..., α_{m_A−m_B})^T such that V^T[q'_1 ... q'_{m_A−m_B}]α = 0. Let r = [q'_1 ... q'_{m_A−m_B}]α; then r ∈ null(A) and r is orthogonal to null(B). Post-multiplying both sides of the equation B = A + UGV^T by r yields Br = Ar + UGV^T r. Since Ar = 0 and V^T r = 0, we obtain Br = 0. But, from the above, r is a nonzero vector orthogonal to null(B), i.e., Br ≠ 0, leading to a contradiction. Consequently, the assumption that V^T[q'_1 ... q'_{m_A−m_B}] is rank deficient is not true, implying that V^T[q'_1 ... q'_{m_A−m_B}] is of full rank. Similarly, we can prove that U^T[q_1 ... q_{m_A−m_B}] is of full rank. ∎
We are now equipped to prove Theorem 1. The proof is in four parts; we will show that (1) there exist two sets of orthogonal bases, {q_1, ..., q_{m_A}} and {q'_1, ..., q'_{m_A}}, for null(A^T) and null(A) respectively, such that null(B^T) = span{q_{m_A−m_B+1}, ..., q_{m_A}} and null(B) = span{q'_{m_A−m_B+1}, ..., q'_{m_A}}, (2) there exist solutions to equations 2 and 3, (3) the capacitance matrix C is nonsingular, implying that equation 5 has a unique solution, and (4) B(x̃ − ηβ) = b.
Proof for Theorem 1:
(1) Since rank(A) = n − m_A and rank(B) = n − m_B, it is obvious from their singular value decompositions that there exist m_A orthogonal eigenvectors, φ'_1, ..., φ'_{m_A}, for A and m_B orthogonal eigenvectors, ψ'_1, ..., ψ'_{m_B}, for B such that null(A) = span{φ'_1, ..., φ'_{m_A}} and null(B) = span{ψ'_1, ..., ψ'_{m_B}}. From the assumption null(B) ⊆ null(A), we can write each ψ'_i, i = 1, ..., m_B, as a linear combination of the basis vectors φ'_1, ..., φ'_{m_A}. Thus, we have

    [ψ'_1 ... ψ'_{m_B}]^T = T [φ'_1 ... φ'_{m_A}]^T,   (6)

where T ∈ ℝ^{m_B×m_A} is of full rank. By using the Householder or Givens transformation [4], we can obtain the factorization of T as follows:

    T = [0  T'] Q,   (7)

where 0 is an m_B × (m_A − m_B) zero matrix, T' ∈ ℝ^{m_B×m_B} is nonsingular, and Q ∈ ℝ^{m_A×m_A} is orthogonal. Define the vectors q'_1, ..., q'_{m_A} by the following relation:

    [q'_1 ... q'_{m_A}]^T = Q [φ'_1 ... φ'_{m_A}]^T.   (8)

We can see that {q'_1, ..., q'_{m_A}} is a set of orthogonal basis vectors for null(A) and its subset {q'_{m_A−m_B+1}, ..., q'_{m_A}} forms an orthogonal basis for null(B). Thus, the existence of the basis {q'_1, ..., q'_{m_A}} which satisfies the aforementioned requirements is proved. Similarly, we can prove that there exists an orthogonal basis {q_1, ..., q_{m_A}} for null(A^T) such that null(B^T) = span{q_{m_A−m_B+1}, ..., q_{m_A}} is satisfied.

(2) From Proposition 1, we have u_i^T q_j = 0 for 1 ≤ i ≤ p and m_A − m_B + 1 ≤ j ≤ m_A. The assumption b ∈ range(B) implies b^T q_j = 0 for m_A − m_B + 1 ≤ j ≤ m_A. Combining these facts and the orthogonality of the eigenvectors q_j, j = 1, ..., m_A, we can see that the RHSs of equations 2 and 3 are orthogonal to q_j for j = 1, ..., m_A; thus, the RHSs are orthogonal to null(A^T). Consequently, the RHSs of equations 2 and 3 are both in the range of A, which means there always exist solutions to equations 2 and 3.
(3) To prove that C is nonsingular, we show that Cβ = 0 ⟹ β = 0. Suppose Cβ = 0; then, from equation 4, we have

    V^T ηβ = −G^{-1}β + Σ_{j=1}^{m_A−m_B} (q_j^T Uβ / q_j^T u_j) G^{-1} e_j.   (9)

By definition, Bηβ = Aηβ + UGV^T ηβ. Substituting equations 3 and 9 into this equation, we obtain Bηβ = 0. Thus, ηβ ∈ null(B). Since null(B) ⊆ null(A), we have ηβ ∈ null(A), i.e., Aηβ = 0. Substituting for Aη_i from equation 3 and denoting β = (β_1, ..., β_p)^T, we obtain

    Σ_{i=1}^{p} β_i u_i − Σ_{i=1}^{p} Σ_{j=1}^{m_A−m_B} β_i (q_j^T u_i / q_j^T u_j) u_j = 0.   (10)

After rearranging, equation 10 becomes

    Σ_{j=1}^{m_A−m_B} (β_j − Σ_{i=1}^{p} (q_j^T u_i / q_j^T u_j) β_i) u_j + Σ_{j=m_A−m_B+1}^{p} β_j u_j = 0.   (11)

Since u_1, ..., u_p are linearly independent, equation 11 implies the following conditions for β_j:

    β_j = Σ_{i=1}^{p} (q_j^T u_i / q_j^T u_j) β_i,   j = 1, ..., m_A − m_B,   (12)

    β_j = 0,   j = m_A − m_B + 1, ..., p.   (13)

Applying these conditions to the RHS of equation 9, we obtain

    V^T [η_1 ... η_{m_A−m_B}] (β_1, ..., β_{m_A−m_B})^T = 0.   (14)

For i = 1, ..., m_A − m_B, the RHS in equation 3 equals 0; thus, η_i ∈ null(A). From the assumption that η_1, ..., η_{m_A−m_B} are chosen to be linearly independent linear combinations of q'_1, ..., q'_{m_A−m_B}, we can write [η_1 ... η_{m_A−m_B}] in terms of [q'_1 ... q'_{m_A−m_B}] as follows:

    [η_1 ... η_{m_A−m_B}] = [q'_1 ... q'_{m_A−m_B}] T̃,   (15)

where T̃ ∈ ℝ^{(m_A−m_B)×(m_A−m_B)} is nonsingular. By substituting equation 15 into equation 14, equation 14 becomes

    V^T [q'_1 ... q'_{m_A−m_B}] T̃ (β_1, ..., β_{m_A−m_B})^T = 0.   (16)

From Proposition 1, the matrix V^T[q'_1 ... q'_{m_A−m_B}] is of full rank and it is of size p × (m_A − m_B) with p ≥ m_A − m_B, which implies

    T̃ (β_1, ..., β_{m_A−m_B})^T = 0.   (17)

Since T̃ is nonsingular, this makes β_i = 0, i = 1, ..., m_A − m_B. Combining this with equation 13, we obtain β = 0. Thus Cβ = 0 implies β = 0; therefore, C is nonsingular.
(4) Now, we will show that B(x̃ − ηβ) = b. Substituting A + UGV^T for B and expanding, we have

    B(x̃ − ηβ) = A x̃ − A ηβ + UGV^T x̃ − UGV^T ηβ.   (18)

Rewriting V^T ηβ using equations 4 and 5 and some algebraic manipulation, we get

    V^T ηβ = V^T x̃ − Σ_{j=1}^{m_A−m_B} (q_j^T b / q_j^T u_j) G^{-1} e_j − G^{-1}β + Σ_{j=1}^{m_A−m_B} (q_j^T Uβ / q_j^T u_j) G^{-1} e_j.   (19)

Substituting for A x̃, A η, and V^T ηβ from equations 2, 3, and 19 respectively, equation 18 leads to B(x̃ − ηβ) = b. ∎
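Theorem 1 can be checked numerically on a small contrived example with A singular and B nonsingular, so that m_A = 1 and m_B = 0. The specific matrices below are illustrative assumptions only; lstsq supplies a particular solution of the consistent singular systems:

```python
import numpy as np

# Contrived Theorem 1 setting: A singular, B nonsingular, so
# null(B) = {0} is contained in null(A) and range(A) in range(B) = R^3.
A = np.diag([1., 2., 0.])     # rank 2; e3 spans null(A) = null(A^T)
u = np.array([0., 0., 1.])    # q1^T u1 = 1 != 0, as the theorem requires
v = np.array([0., 0., 1.])
G = np.array([[1.]])
B = A + G[0, 0] * np.outer(u, v)   # = diag(1, 2, 1), nonsingular
b = np.array([1., 1., 1.])
q1 = np.array([0., 0., 1.])        # basis for null(A^T); q1' = q1 here

# eq. (2): subtract the null(A^T) component so the RHS lies in range(A)
rhs = b - (q1 @ b) / (q1 @ u) * u
x_t = np.linalg.lstsq(A, rhs, rcond=None)[0]

# eq. (3): for i <= m_A - m_B the RHS vanishes; choose eta_1 = q1'
eta = q1.copy()

# eq. (4) and eq. (5), written out for p = 1
C = 1.0 / G[0, 0] + v @ eta - (q1 @ u) / (q1 @ u) / G[0, 0]
beta = (v @ x_t - (q1 @ b) / (q1 @ u) / G[0, 0]) / C
x = x_t - eta * beta

assert np.allclose(B @ x, b)
```

Here C = 1, β = −1, and x = x̃ + q'_1 = (1, 0.5, 1)^T, which indeed satisfies Bx = b.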
In Theorem 1, we assume that the matrices U and V contain linearly independent columns and G is nonsingular, which is equivalent to saying that U, V, and G are all of full rank; in addition, the condition q_i^T u_j = a_i δ(i, j) must be satisfied for 1 ≤ i, j ≤ m_A − m_B. In fact, we can prove that given any two matrices A, B ∈ ℝ^{n×n}, there always exist full-rank matrices U, V ∈ ℝ^{n×p} and G ∈ ℝ^{p×p} such that UGV^T = B − A and p = rank(B − A). This can be verified by taking the singular value decomposition of the matrix B − A as ŪΣV̄^T, where both Ū and V̄ are orthogonal matrices and Σ is a diagonal matrix with rank p = rank(B − A). We can reduce the size of the matrix Σ by retaining the p nonzero diagonal entries and taking out the remaining rows and columns of zeros; this reduced matrix G ∈ ℝ^{p×p} is nonsingular. In addition, we take out the columns in Ū and V̄ corresponding to the zero singular values in Σ, obtaining the reduced matrices U, V ∈ ℝ^{n×p}. It is not difficult to see that UGV^T = B − A and the matrices U, V, and G are of full rank.

As for the condition q_i^T u_j = a_i δ(i, j), 1 ≤ i, j ≤ m_A − m_B, we can find such a U by transformation. If {U, G, V} satisfies UGV^T = B − A and each is of full rank, then the alternative set {UQ, Q^{-1}G, V} also satisfies these requirements when the matrix Q ∈ ℝ^{p×p} is nonsingular. From Proposition 1, U^T[q_1 ... q_{m_A−m_B}] has full rank. We can find a transformation matrix Q that makes the upper (m_A − m_B) × (m_A − m_B) block of the transformed matrix Q^T U^T[q_1 ... q_{m_A−m_B}] diagonal. Then, the condition q_i^T u_j = a_i δ(i, j), 1 ≤ i, j ≤ m_A − m_B, is satisfied.

There are an infinite number of choices for the matrices U, V, and G that satisfy the aforementioned requirements. The above choice from the singular value decomposition of B − A is just a particular one that leads to a diagonal G matrix, which makes the computation of G^{-1} in equation 4 trivial. The choice of A, U, V, and G is crucial for the efficiency of numerical algorithms based on the capacitance matrix method.
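The SVD-based construction of full-rank U, G, and V from B − A described above can be sketched as follows; the matrices here are random stand-ins with a known rank-2 difference:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
A = rng.standard_normal((n, n))
# B differs from A by a (generically) rank-2 matrix
B = A + rng.standard_normal((n, 2)) @ rng.standard_normal((2, n))

# SVD of B - A; retain only the p nonzero singular values
Uf, s, Vft = np.linalg.svd(B - A)
p = int(np.sum(s > 1e-10 * s[0]))   # numerical rank of B - A
U = Uf[:, :p]                       # n x p, full column rank
G = np.diag(s[:p])                  # p x p, diagonal hence nonsingular
V = Vft[:p, :].T                    # n x p, full column rank

assert np.allclose(U @ G @ V.T, B - A)
```

Because G comes out diagonal, the G^{-1} appearing in the capacitance matrix is trivial to apply, as noted above.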
Now let's turn to the other case, when null(A) ⊆ null(B) and range(B) ⊆ range(A). Note that these two constraints are equivalent to each other when both B and A are symmetric.
Theorem 2 Given B = A + UGV^T, where A, B ∈ ℝ^{n×n}, G ∈ ℝ^{p×p} is nonsingular, U = [u_1, ..., u_p] ∈ ℝ^{n×p}, V = [v_1, ..., v_p] ∈ ℝ^{n×p}, with both the sets {u_1, ..., u_p} and {v_1, ..., v_p} containing linearly independent vectors. Assume null(A) ⊆ null(B) and range(B) ⊆ range(A). Let x̃ be a solution to

    A x̃ = b,   (20)

where b ∈ range(B). For 1 ≤ i ≤ p, let η_i be a solution to

    A η_i = u_i.   (21)

Define the capacitance matrix C as C = G^{-1} + V^T η, where η = [η_1, ..., η_p]. Then there exists a solution β to the capacitance matrix equation

    Cβ = V^T x̃,   (22)

and x̃ − ηβ is a solution to the linear system Bx = b.
We will use the following proposition to prove the above theorem.
Proposition 2 In Theorem 2, let {q_1, ..., q_{m_A}} and {q'_1, ..., q'_{m_A}} be the sets of orthogonal eigenvectors in null(A^T) and null(A) respectively, where m_A = n − rank(A). Then q_i^T u_j = q'_i^T v_j = 0, for i = 1, ..., m_A and j = 1, ..., p.

Proof: From the assumption that null(A) ⊆ null(B), we can see that the eigenvectors q'_i, i = 1, ..., m_A, are also in null(B). Since Bq'_i = Aq'_i + UGV^T q'_i, we obtain

    UGV^T q'_i = 0,   (23)

for 1 ≤ i ≤ m_A. The matrix U is of size n × p, n ≥ p, and of full rank; consequently GV^T q'_i = 0. From the assumption that G is nonsingular, we have V^T q'_i = 0. Therefore, q'_i^T v_j = 0 for i = 1, ..., m_A and j = 1, ..., p.

Similarly, we obtain the equation VG^T U^T q_i = 0 from the condition B^T q_i = A^T q_i = 0, which is derived from the assumption range(B) ⊆ range(A). Using the same reasoning on this equation leads to the conclusion that q_i^T u_j = 0, i = 1, ..., m_A and j = 1, ..., p. ∎
We are now poised to prove Theorem 2.

Proof for Theorem 2: The proof is presented in three parts: (1) we will show that there exist solutions for equations 20 and 21, (2) we will establish that there exists a solution for equation 22, and (3) we will show that B(x̃ − ηβ) = b.
(1) To show that there exists a solution for equations 20 and 21, we prove that the RHSs of both equations are in the range of A. It is obvious that b ∈ range(A) for equation 20, since b ∈ range(B) and range(B) ⊆ range(A). For equation 21, from Proposition 2 we can see that u_i, 1 ≤ i ≤ p, is orthogonal to null(A^T), which means u_i ∈ range(A) for i = 1, ..., p.

(2) As for equation 22, we need to show that V^T x̃ is in the range of C. We use proof by contradiction. Assume that V^T x̃ is not in the range of C; then there exists a nonzero vector γ ∈ null(C^T), i.e., C^T γ = 0, such that

    γ^T (V^T x̃) ≠ 0.   (24)

Using the generalized inverse² A^+ of the matrix A, we can write x̃ as follows:

    x̃ = A^+ b + Σ_{i=1}^{m_A} a_i q'_i,   (25)

where m_A = n − rank(A), q'_1, ..., q'_{m_A} are the orthogonal eigenvectors in null(A), and a_i ∈ ℝ for all i. Then,

    V^T x̃ = V^T A^+ b + Σ_{i=1}^{m_A} a_i V^T q'_i.   (26)

From Proposition 2, we have V^T q'_i = 0 for i = 1, ..., m_A. Substituting into equation 26, we have V^T x̃ = V^T A^+ b. Substituting for V^T x̃ in the inequality 24 yields

    (A^{+T} V γ)^T b ≠ 0.   (27)

We can show that A^{+T} V γ ∈ null(B^T) as follows:

    B^T (A^{+T} V γ) = (A + UGV^T)^T A^{+T} V γ   (28)
                     = (A^T A^{+T} V + V G^T U^T A^{+T} V) γ.   (29)

Since A^T A^{+T} = I − Σ_{i=1}^{m_A} q'_i q'_i^T, using a result of Proposition 2, namely q'_i^T V = 0 for i = 1, ..., m_A, we have

    A^T A^{+T} V = V − Σ_{i=1}^{m_A} q'_i q'_i^T V = V.   (30)

Substituting equation 30 into equation 29 yields

    B^T (A^{+T} V γ) = V G^T (G^{-1} + V^T A^+ U)^T γ.   (31)

Again, using Proposition 2, we can show that C = G^{-1} + V^T A^+ U. Consequently, we obtain B^T (A^{+T} V γ) = V G^T C^T γ = 0 since C^T γ = 0 by assumption. Thus, A^{+T} V γ ∈ null(B^T). Combining this with the inequality 27 leads to the conclusion that b is not in range(B). This is a contradiction to the assumption b ∈ range(B). Therefore, the assumption that V^T x̃ is not in the range of C is not true. Hence, we can conclude that V^T x̃ ∈ range(C). This proves that there exists a solution to equation 22.

(3) The proof for B(x̃ − ηβ) = b is very similar to part (4) of the proof for Theorem 1 and is omitted here. ∎
²Let the singular value decomposition of A be A = ŪΣV̄^T, where Ū and V̄ are orthogonal matrices and Σ = diag(0, ..., 0, σ_{m_A+1}, ..., σ_n). Then its generalized inverse is A^+ = V̄Σ^+Ū^T, where Σ^+ = diag(0, ..., 0, σ_{m_A+1}^{-1}, ..., σ_n^{-1}) [4].
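As a sketch, the generalized inverse in the footnote can be formed from NumPy's SVD and checked against numpy.linalg.pinv; the rank-deficient matrix A below is an arbitrary example:

```python
import numpy as np

A = np.array([[1., 2., 0.],
              [2., 4., 0.],
              [0., 0., 3.]])       # rank 2: row 2 is twice row 1

# A = U Sigma V^T; invert only the nonzero singular values
U_svd, s, Vt = np.linalg.svd(A)
s_plus = np.array([1.0 / x if x > 1e-12 * s[0] else 0.0 for x in s])
A_plus = Vt.T @ np.diag(s_plus) @ U_svd.T

assert np.allclose(A_plus, np.linalg.pinv(A))
assert np.allclose(A @ A_plus @ A, A)   # a Moore-Penrose identity
```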
If we restrict the above constraints on the relation between B and A to null(B) = null(A) and
range(B) = range(A), then the capacitance matrix C is nonsingular, i.e., the capacitance matrix
equation has a unique solution. This result is stated in the following corollary.
Corollary 1 In Theorem 2, if null(A) = null(B) and range(A) = range(B), then the capacitance matrix C is nonsingular.
Proof: To prove that C is nonsingular, we show that Cβ = 0 ⟹ β = 0. Suppose Cβ = 0; then, from the definition of C, we have V^T ηβ = −G^{-1}β, which can be substituted into the equation Bηβ = Aηβ + UGV^T ηβ, along with the equation for Aη from (21), to get Bηβ = 0, i.e., ηβ ∈ null(B). Since null(A) = null(B), we have Aηβ = 0. Using equation 21 for Aη, we get Uβ = 0. From the assumption on the matrix U, namely that u_1, ..., u_p are linearly independent, we conclude that β = 0. Therefore, C is nonsingular. ∎
In the above theorems, we impose the constraints on the relationship between B and A either via null(B) ⊆ null(A) and range(A) ⊆ range(B) or via null(A) ⊆ null(B) and range(B) ⊆ range(A). Now we consider the general case, i.e., no constraint on the relationship between B and A is imposed. The following theorem is given for this general case and can be used for any matrices B and A in ℝ^{n×n} as long as there exists a solution for the associated capacitance matrix equation.
Theorem 3 Given B = A + UGV^T, where A, B ∈ ℝ^{n×n}, G ∈ ℝ^{p×p} is nonsingular, U = [u_1, ..., u_p] ∈ ℝ^{n×p}, V = [v_1, ..., v_p] ∈ ℝ^{n×p}, with both the sets {u_1, ..., u_p} and {v_1, ..., v_p} containing linearly independent vectors. Let rank(A) = n − m_A, dim(null(B) ∩ null(A)) = m', and dim(null(B^T) ∩ null(A^T)) = m, 0 ≤ m, m' ≤ m_A ≤ n. Then there exist two sets of orthogonal bases, {q_1, ..., q_{m_A}} and {q'_1, ..., q'_{m_A}}, for null(A^T) and null(A) respectively, such that null(B^T) ∩ null(A^T) = span{q_{m_A−m+1}, ..., q_{m_A}} and null(B) ∩ null(A) = span{q'_{m_A−m'+1}, ..., q'_{m_A}}. Let q_i^T u_j = a_i δ(i, j), a_i ≠ 0, for 1 ≤ i, j ≤ m_A − m.

Let x̃ be a solution to

    A x̃ = b − Σ_{j=1}^{m_A−m} (q_j^T b / q_j^T u_j) u_j,   (32)

where b ∈ range(B). For 1 ≤ i ≤ p, let η_i be a solution to

    A η_i = u_i − Σ_{j=1}^{m_A−m} (q_j^T u_i / q_j^T u_j) u_j.   (33)

Define the capacitance matrix C as

    C = G^{-1} + V^T η − Σ_{j=1}^{m_A−m} (1 / q_j^T u_j) G^{-1} e_j q_j^T U,   (34)

where η = [η_1, ..., η_p], and e_j is the j-th standard basis vector in ℝ^p. Assume there exists a solution β to the capacitance matrix equation

    Cβ = V^T x̃ − Σ_{j=1}^{m_A−m} (q_j^T b / q_j^T u_j) G^{-1} e_j;   (35)

then x̃ − ηβ is a solution to the linear system Bx = b.
The proof of this theorem is omitted here since it is similar to the proof of Theorem 1. It requires the use of the fact that q_i^T u_j = 0 for m_A − m + 1 ≤ i ≤ m_A and 1 ≤ j ≤ p, which can be shown along the lines of Proposition 1.
In general, Theorems 1 and 2 can be regarded as special cases of Theorem 3, with the exception of the existence of a solution to the capacitance matrix equation. In the first two theorems, there always exists a solution for the capacitance matrix equation, while the existence of such a solution is assumed in Theorem 3. We cannot guarantee that the capacitance matrix equation always has a solution in the general case; but, with certain constraints, we can prove that it does. This is given in the following corollary.
Corollary 2 In Theorem 3, assume q'_j ∈ range(V) for 1 ≤ j ≤ m_A − m'. Let x̃^T q'_j = a_j and η_i^T q'_j = d_{ij}, 1 ≤ i ≤ p and 1 ≤ j ≤ m_A − m'. If the coefficients a_j and d_{ij} are chosen such that a_j = 0 when q'_j^T b = 0 and the equation

    V G^T d_j = a'_j V G^T U^T q'_j − q'_j   (36)

is satisfied for j = 1, ..., m_A − m', where d_j = (d_{1j}, ..., d_{pj})^T and

    a'_j = a_j / q'_j^T b   when q'_j^T b ≠ 0,
    a'_j = 0                otherwise,

then there always exists a solution to equation 35.
Proof: To prove the existence of a solution for equation 35, we need to show that the RHS of equation 35 is in the range of C, for which we use proof by contradiction. Suppose the RHS of equation 35 is not in the range of C; then there exists a nonzero vector γ ∈ null(C^T), i.e., C^T γ = 0, such that

    γ^T (V^T x̃ − Σ_{j=1}^{m_A−m} (q_j^T b / q_j^T u_j) G^{-1} e_j) ≠ 0.   (37)

Denote the generalized inverse of the matrix A by A^+; then we have

    V^T x̃ = V^T A^+ b − Σ_{j=1}^{m_A−m} (q_j^T b / q_j^T u_j) V^T A^+ u_j + Σ_{j=1}^{m_A−m'} a_j V^T q'_j.   (38)

Substituting equation 38 into inequality 37 and using the definition of a'_j, the inequality can be rearranged as follows:

    (A^{+T} V γ − Σ_{j=1}^{m_A−m} (u_j^T A^{+T} V γ / q_j^T u_j) q_j + Σ_{j=1}^{m_A−m'} a'_j q'_j q'_j^T V γ − Σ_{j=1}^{m_A−m} (e_j^T G^{-T} γ / q_j^T u_j) q_j)^T b ≠ 0.   (39)

Denote the vector in the parentheses by ξ. In the following, we show that ξ ∈ null(B^T). Expanding B^T ξ yields

    B^T ξ = (A^T A^{+T} V + V G^T U^T A^{+T} V − V G^T Σ_{j=1}^{m_A−m} (U^T q_j u_j^T A^{+T} V / q_j^T u_j)) γ
            + Σ_{j=1}^{m_A−m'} a'_j V G^T U^T q'_j q'_j^T V γ − V G^T Σ_{j=1}^{m_A−m} (U^T q_j e_j^T G^{-T} / q_j^T u_j) γ.

By substituting the identity A^T A^{+T} V = V − Σ_{j=1}^{m_A−m'} q'_j q'_j^T V and the assumption of equation 36 into the above equation, we get

    B^T ξ = V G^T (G^{-1} + V^T A^+ U − V^T A^+ Σ_{j=1}^{m_A−m} (u_j q_j^T U / q_j^T u_j) + Σ_{j=1}^{m_A−m'} V^T q'_j d_j^T − Σ_{j=1}^{m_A−m} (G^{-1} e_j q_j^T U / q_j^T u_j))^T γ.   (40)

Substituting V^T η_i = V^T A^+ u_i − Σ_{j=1}^{m_A−m} (q_j^T u_i / q_j^T u_j) V^T A^+ u_j + Σ_{j=1}^{m_A−m'} d_{ij} V^T q'_j, we can see that the matrix in the parentheses of equation 40 is equivalent to the capacitance matrix C. From the assumption C^T γ = 0, we have B^T ξ = 0, i.e., ξ ∈ null(B^T). Combining this with the inequality 39 leads to the result that b is not in range(B). This contradicts the assumption b ∈ range(B). Therefore, the assumption at the beginning of the proof that the RHS of equation 35 is not in the range of C is not true. Thus we can conclude that there exists a solution to equation 35. ∎
3 Discussion and Summary
From the theorems presented in section 2, we can modify the capacitance matrix method as follows:
1. Solve equation 32 for x̃.
2. Solve equation 33 for η_i, for i = 1, ..., p.
3. Form the capacitance matrix C from equation 34.
4. Solve the capacitance matrix equation (i.e., eq. 35) for β.
5. Form the solution x = x̃ − ηβ.
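As an illustration, consider the setting of Theorem 2 (null(A) ⊆ null(B) and range(B) ⊆ range(A)), where the correction sums vanish and the steps reduce to equations 20-22 with both A and B singular. The small matrices below are contrived assumptions chosen only so that Bx = b can be checked; lstsq supplies particular solutions of the consistent singular systems:

```python
import numpy as np

# Contrived Theorem 2 setting: both A and B singular with null space span{e3}
A = np.diag([1., 2., 0.])         # rank 2, null(A) = span{e3}
u = np.array([[1.], [1.], [0.]])  # third component zero, so U^T e3 = 0
v = np.array([[1.], [-1.], [0.]]) # third component zero, so V^T e3 = 0
G = np.array([[2.]])
B = A + u @ G @ v.T               # singular, null(B) = span{e3}
b = np.array([1., 1., 0.])        # b lies in range(B)

# Singular solves: least squares gives a particular solution of A x = rhs
x_t = np.linalg.lstsq(A, b, rcond=None)[0]           # eq. (20)
eta = np.linalg.lstsq(A, u, rcond=None)[0]           # eq. (21)
C = np.linalg.inv(G) + v.T @ eta                     # capacitance matrix
beta = np.linalg.lstsq(C, v.T @ x_t, rcond=None)[0]  # eq. (22)
x = x_t - (eta @ beta).ravel()

assert np.allclose(B @ x, b)
```

Note that neither A nor B is invertible here, which is exactly the situation the traditional capacitance matrix method cannot handle.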
This modified capacitance matrix method can always be applied when null(B) ⊆ null(A) and range(A) ⊆ range(B) or when null(A) ⊆ null(B) and range(B) ⊆ range(A), as stated in Theorems 1 and 2 respectively. However, to apply this method to more general cases, we need to make sure that a solution exists to the capacitance matrix equation in step 4. Corollary 2 gives a sufficient condition for the existence of a solution to the capacitance matrix equation in the general case, but no necessary condition for the existence of such a solution is available at this time.
We have successfully applied this modified capacitance matrix method to a host of low-level vision problems which use the membrane or thin-plate spline to constrain the solution [14, 7]. These low-level vision problems include surface reconstruction, shape from orientation, optical flow, the lightness problem, etc. [14, 7]. These problems, when formulated as variational principles and discretized, lead to solving one or more large sparse linear systems. For the membrane spline, we can choose the well-structured matrix A, which is close to the matrix B (the stiffness matrix in the large linear system), to be the Laplacian matrix, thus making the matrix UV^T encode the components of the stiffness matrix corresponding to the data constraints, discontinuities, and boundary conditions. The resulting matrices A and B either satisfy the condition null(B) ⊆ null(A) and range(A) ⊆ range(B) in Theorem 1 (for surface reconstruction and optical flow) or satisfy the condition null(A) ⊆ null(B) and range(B) ⊆ range(A) in Theorem 2 (for shape from orientation and the lightness problem).
The traditional capacitance matrix method [2, 9, 12, 13, 8, 10, 3, 1] has been primarily applied to solve second-order elliptic boundary value problems, whereas our modified capacitance matrix method (presented above) can be used to solve higher-order elliptic boundary value problems.
Furthermore, efficient numerical algorithms can be designed [14, 7] based on an appropriate choice
of the matrix A.
Some work on extending the Sherman-Morrison-Woodbury formula in equation 1 to singular A, B, and the associated capacitance matrix C has been reported in [6], [11], and references therein. However, the generalized inverse forms of (A + UGV^T) have been derived under restrictive assumptions. Although we can make use of these generalized inverse formulas to design the
corresponding capacitance matrix methods, these assumptions restrict them from being applied
to general linear systems. Our generalized capacitance matrix theorems are developed for solving
general linear systems, and it is possible to derive the generalized inverse form of (A + UGVT) for
more general cases based on these theorems, which will be the focus of our future research.
References
[1] C. Borgers and O. B. Widlund. "On finite element domain imbedding methods,". SIAM J. Numer. Anal., 27(4):963-978, 1990.
[2] B. L. Buzbee, F. W. Dorr, J. A. George, and G. H. Golub. "The direct solution of the discrete Poisson equation on irregular regions,". SIAM Journal on Numerical Analysis, 8(4):722-736, December 1971.
[3] M. Dryja. "A finite element-capacitance matrix method for the elliptic problem,". SIAM J. Numer. Anal., 20(4):671-680, 1983.
[4] G.H. Golub and C.F. Van Loan. Matrix Computations. The Johns Hopkins University Press,
2nd edition, 1989.
[5] W. W. Hager. "Updating the inverse of a matrix,". SIAM Review, 31(2):221-239, June 1989.
[6] H. V. Henderson and S. R. Searle. "On deriving the inverse of a sum of matrices,". SIAM
Review, 23(1):53-60, 1981.
[7] S. H. Lai and B. C. Vemuri. "An O(N) iterative solution to the Poisson equations in low-level
vision problems,". In IEEE conference on Computer Vision & Pattern Recognition, Seattle,
Washington, June 1994.
[8] D. P. O'Leary and O. Widlund. "Capacitance matrix methods for the Helmholtz equation on general three-dimensional regions,". Math. Comp., 33(147):849-879, July 1979.
[9] W. Proskurowski and O. Widlund. "On the numerical solution of Helmholtz's equation by the capacitance matrix method,". Math. Comp., 30(135):433-468, July 1976.
[10] W. Proskurowski and O. Widlund. "A finite element capacitance matrix method for the Neumann problem for Laplace's equation,". SIAM J. Sci. Statist. Comput., 1(4):410-425, 1980.
[11] K. S. Riedel. "A Sherman-Morrison-Woodbury identity for rank augmenting matrices with
application to centering,". SIAM J. Matrix Anal. Appl., 13(2):659-662, 1992.
[12] A. Shieh. "On the convergence of the conjugate gradient method for singular capacitance matrix
equations from the Neumann problem of the Poisson equation,". Numer. Math., 29:307-327,
1978.
[13] A. Shieh. "Fast Poisson solvers on general two-dimensional regions for the Dirichlet problem,".
Numer. Math., 31:405-429, 1979.
[14] B. C. Vemuri and S. H. Lai. "A fast solution to the surface reconstruction problem,". In SPIE
conference on Mathematical Imaging: Geometric Methods in Computer Vision, pages 27-37,
San Diego, CA, July 1993. SPIE.