Citation
Pressure-based methods on single-instruction stream/multiple-data stream computers

Material Information

Title:
Pressure-based methods on single-instruction stream/multiple-data stream computers
Creator:
Blosch, Edwin L
Publication Date:
1994
Language:
English
Physical Description:
vii, 189 leaves : ill. ; 29 cm.

Subjects

Subjects / Keywords:
Algorithms ( jstor )
Cavity flow ( jstor )
Convection ( jstor )
Data smoothing ( jstor )
Mathematical procedures ( jstor )
Multigrid methods ( jstor )
Perceptron convergence procedure ( jstor )
Run time ( jstor )
Truncation errors ( jstor )
Velocity ( jstor )
Genre:
bibliography ( marcgt )
theses ( marcgt )
non-fiction ( marcgt )

Notes

Thesis:
Thesis (Ph. D.)--University of Florida, 1994.
Bibliography:
Includes bibliographical references (leaves 182-188).
General Note:
Typescript.
General Note:
Vita.
Statement of Responsibility:
Edwin L. Blosch.

Record Information

Source Institution:
University of Florida
Holding Location:
University of Florida
Rights Management:
Copyright Edwin L. Blosch. Permission granted to the University of Florida to digitize, archive and distribute this item for non-profit research and educational purposes. Any reuse of this item in excess of fair use or other copyright exemptions requires permission of the copyright holder.
Resource Identifier:
021584257 ( ALEPH )
AKN2725 ( NOTIS )
33373637 ( OCLC )

Full Text

PRESSURE-BASED METHODS ON SINGLE-INSTRUCTION
STREAM/MULTIPLE-DATA STREAM COMPUTERS

By

EDWIN L. BLOSCH


A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL
OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT
OF THE REQUIREMENTS FOR THE DEGREE OF
DOCTOR OF PHILOSOPHY

UNIVERSITY OF FLORIDA


1994

ACKNOWLEDGEMENTS


I would like to express my thanks to my advisor Dr. Wei Shyy for reflecting

carefully on my results and for directing my research toward interesting issues. I

would also like to thank him for the exceptional personal support and flexibility he

offered me during my last year of study, which was done off-campus. I would also like

to acknowledge the contributions of the other members of my Ph.D. committee, Dr.

Chen-Chi Hsu, Dr. Bruce Carroll, Dr. David Mikolaitis, and Dr. Sartaj Sahni. Dr.

Hsu and Dr. Carroll supervised my B.S. and M.S. degree research studies, respectively,

and Dr. Mikolaitis, in the role of graduate coordinator, enabled me to obtain financial

support from the Department of Energy.

Also I would like to thank Madhukar Rao, Rick Smith and H.S. Udaykumar, for

paying fees on my behalf and for registering me for classes while I was in California.

Jeff Wright, S. Thakur, Shin-Jye Liang, Guobao Guo and Pedro Lopez-Fernandez

have also made direct and indirect contributions for which I am grateful.

Special thanks go to Dr. Jamie Sethian, Dr. Alexandre Chorin and Dr. Paul Con-

cus of Lawrence Berkeley Laboratory for allowing me to visit LBL and use their

resources, for giving personal words of support and constructive advice, and for the

privilege of interacting with them and their graduate students in the applied mathe-

matics branch.

Last but not least I would like to thank my wife, Laura, for her patience, her

example, and her frank thoughts on "cups with sliding lids," "flow through straws,"

and numerical simulations in general.

My research was supported in part by the Computational Science Graduate Fel-

lowship Program of the Office of Scientific Computing in the Department of Energy.

The CM-5s used in this study were partially funded by National Science Foundation

Infrastructure Grant CDA-8722788 (in the computer science department of the Uni-

versity of California-Berkeley), and a grant of HPC time from the DoD HPC Shared

Resource Center, Army High-Performance Computing Research Center, Minneapolis,

Minnesota.

TABLE OF CONTENTS


ACKNOWLEDGEMENTS

ABSTRACT

CHAPTERS

1 INTRODUCTION
1.1 Motivations
1.2 Governing Equations
1.3 Numerical Methods for Viscous Incompressible Flow
1.4 Parallel Computing
1.4.1 Data-Parallelism and SIMD Computers
1.4.2 Algorithms and Performance
1.5 Pressure-Based Multigrid Methods
1.6 Description of the Research

2 PRESSURE-CORRECTION METHODS
2.1 Finite-Volume Discretization on Staggered Grids
2.2 The SIMPLE Method
2.3 Discrete Formulation of the Pressure-Correction Equation
2.4 Well-Posedness of the Pressure-Correction Equation
2.4.1 Analysis
2.4.2 Verification by Numerical Experiments
2.5 Numerical Treatment of Outflow Boundaries
2.6 Concluding Remarks

3 EFFICIENCY AND SCALABILITY ON SIMD COMPUTERS
3.1 Background
3.1.1 Speedup and Efficiency
3.1.2 Comparison Between CM-2, CM-5, and MP-1
3.1.3 Hierarchical and Cut-and-Stack Data Mappings
3.2 Implementational Considerations
3.3 Numerical Experiments
3.3.1 Efficiency of Point and Line Solvers for the Inner Iterations
3.3.2 Effect of Uniform Boundary Condition Implementation
3.3.3 Overall Performance
3.3.4 Isoefficiency Plot
3.4 Concluding Remarks

4 A NONLINEAR PRESSURE-CORRECTION MULTIGRID METHOD
4.1 Background
4.1.1 Terminology and Scheme for Linear Equations
4.1.2 Full-Approximation Storage Scheme for Nonlinear Equations
4.1.3 Extension to the Navier-Stokes Equations
4.2 Comparison of Pressure-Based Smoothers
4.3 Stability of Multigrid Iterations
4.3.1 Defect-Correction Method
4.3.2 Cost of Different Convection Schemes
4.4 Restriction and Prolongation Procedures
4.5 Concluding Remarks

5 IMPLEMENTATION AND PERFORMANCE ON THE CM-5
5.1 Storage Problem
5.2 Multigrid Convergence Rate and Stability
5.2.1 Truncation Error Convergence Criterion for Coarse Grids
5.2.2 Numerical Characteristics of the FMG Procedure
5.2.3 Influence of Initial Guess on Convergence Rate
5.2.4 Remarks
5.3 Performance on the CM-5
5.4 Concluding Remarks

REFERENCES

BIOGRAPHICAL SKETCH

Abstract of Dissertation
Presented to the Graduate School of the University of Florida
in Partial Fulfillment of the Requirements for the
Degree of Doctor of Philosophy



PRESSURE-BASED METHODS ON SINGLE-INSTRUCTION
STREAM/MULTIPLE-DATA STREAM COMPUTERS

By

Edwin L. Blosch


Chairman: Dr. Wei Shyy
Major Department: Aerospace Engineering, Mechanics and Engineering Science


Computationally and numerically scalable algorithms are needed to exploit emerg-

ing parallel-computing capabilities. In this work pressure-based algorithms which

solve the two-dimensional incompressible Navier-Stokes equations are developed for

single-instruction stream/multiple-data stream (SIMD) computers.

The implications of the continuity constraint for the proper numerical treatment

of open boundary problems are investigated. Mass must be conserved globally so that

the system of linear algebraic pressure-correction equations is numerically consistent.

The convergence rate is poor unless global mass conservation is enforced explicitly.

Using an additive-correction technique to restore global mass conservation, flows

which have recirculating zones across the open boundary can be simulated.

The performance of the single-grid algorithm is assessed on three massively-

parallel computers, MasPar's MP-1 and Thinking Machines' CM-2 and CM-5. Paral-

lel efficiencies approaching 0.8 are possible with speeds exceeding that of traditional

vector supercomputers. The following issues relevant to the variation of parallel ef-

ficiency with problem size are studied: the suitability of the algorithm for SIMD

computation; the implementation of boundary conditions to avoid idle processors;

the choice of point versus line-iterative relaxation schemes; the relative costs of the

coefficient computations and solving operations, and the variation of these costs with

problem size; the effect of the data-array-to-processor mapping; and the relative

speeds of computation and communication of the computer.

A nonlinear pressure-correction multigrid algorithm which has better convergence

rate characteristics than the single-grid method is formulated and implemented on

the CM-5. On the CM-5, the components of the multigrid algorithm are tested over a

range of problem sizes. The smoothing step is the dominant cost. Pressure-correction

methods and the locally-coupled explicit method are equally efficient on the CM-5.

V cycling is found to be much cheaper than W cycling, and a truncation-error based

"full-multigrid" procedure is found to be a computationally efficient and convenient

method for obtaining the initial fine-grid guess. The findings presented enable further

development of efficient, scalable pressure-based parallel computing algorithms.

CHAPTER 1
INTRODUCTION

1.1 Motivations

Computational fluid dynamics (CFD) is a growing field which brings together

high-performance computing, physical science, and engineering technology. The dis-

tinctions between CFD and other fields such as computational physics and computa-

tional chemistry are largely semantic now, because increasingly more interdisciplinary

applications are coming within range of the computational capabilities. CFD algo-

rithms and techniques are mature enough that the focus of research is expected to

shift in the next decade toward the development of robust flow codes, and toward the

application of these codes to numerical simulations which do not idealize either the

physics or the geometry and which take full account of the coupling between fluid

dynamics and other areas of physics [65]. These applications will require formidable

resources, particularly in the areas of computing speed, memory, storage, and in-

put/output bandwidth [78].

At the present time, the computational demands of the applications are still

at least two orders-of-magnitude beyond the computing technology. For example,

NASA's grand challenges for the 1990s are to achieve the capability to simulate vis-

cous, compressible flows with two-equation turbulence modelling over entire aircraft

configurations, and to couple the fluid dynamics simulation with the propulsion and

aircraft control systems modelling. To meet this challenge it is estimated that 1 ter-

aflops computing speed and 50 gigawords of memory will be required [24]. Current

massively-parallel supercomputers, for example, the CM-5 manufactured by Thinking

Machines, have peak speeds of O(10 gigaflops) and memories of O(1 gigaword).

Optimism is sometimes circulated that teraflop computers may be expected by

1995 [68]. In view of the two orders-of-magnitude disparity between the speed of

present-generation parallel computers and teraflops, such optimism should be dimmed

somewhat. Expectations are not being met in part because the applications, which

are the driving force behind the progress in hardware, have been slow to develop. The

numerical algorithms which have seen two decades of development on traditional vec-

tor supercomputers are not always easy targets for efficient parallel implementation.

Better understanding of the basic concepts and more experience with the present

generation of parallel computers is a prerequisite for improved algorithms and imple-

mentations.

The motivation of the present work has been the opportunity to investigate issues

related to the use of parallel computers in CFD, with the hope that the knowledge

gained can assist the transition to the new computing technology. The context of the

research is the numerical solution of the 2-d incompressible Navier-Stokes equations,

by a popular and proven numerical method known as the pressure-correction tech-

nique. A specific objective emerged as the research progressed, namely to develop

and analyze the performance of pressure-correction methods on the single-instruction

stream/multiple-data stream (SIMD) type of parallel computer. Single-grid compu-

tations were studied first, then a multigrid method was developed and tested.

SIMD computers were chosen because they are easier to program than multiple-

instruction stream/multiple-data stream (MIMD) computers (explicit message-passing

is not required), because synchronization of the processors is not an issue, and be-

cause the factors affecting the parallel run time and computational efficiency are

easier to identify and quantify. Also, these are arguably the most powerful machines

available right now-Los Alamos National Laboratory has a 1024-node CM-5 with 32

Gbytes of processor memory and is capable of 32 Gflops peak speed. Thus, the code,

the numerical techniques, and the understanding which are the contribution of this

research can be immediately useful for applications on massively parallel computers.

1.2 Governing Equations

The governing equations for 2-d, constant property, time-dependent viscous in-

compressible flow are the Navier-Stokes equations. They express the principles of

conservation of mass and momentum. In primitive variables and cartesian coordi-

nates, they may be written

\frac{\partial u}{\partial x} + \frac{\partial v}{\partial y} = 0 \qquad (1.1)

\frac{\partial(\rho u)}{\partial t} + \frac{\partial(\rho u^2)}{\partial x} + \frac{\partial(\rho u v)}{\partial y} = -\frac{\partial p}{\partial x} + \mu\left(\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2}\right) \qquad (1.2)

\frac{\partial(\rho v)}{\partial t} + \frac{\partial(\rho u v)}{\partial x} + \frac{\partial(\rho v^2)}{\partial y} = -\frac{\partial p}{\partial y} + \mu\left(\frac{\partial^2 v}{\partial x^2} + \frac{\partial^2 v}{\partial y^2}\right) \qquad (1.3)

where u and v are cartesian velocity components, ρ is the density, μ is the fluid's

molecular viscosity, and p is the pressure. Eq. 1.1 is the mass continuity equation, also

known as the divergence-free constraint since its coordinate-free form is div \vec{u} = 0.

The Navier-Stokes equations 1.1-1.3 are a coupled set of nonlinear partial differ-

ential equations of mixed elliptic/parabolic type. Mathematically, they differ from

the compressible Navier-Stokes equations in two important respects that lead to dif-

ficulties for devising numerical solution techniques.

First, the role of the continuity equation is different in incompressible flow. In-

stead of a time-dependent equation for the density, in incompressible fluids the conti-

nuity equation is a constraint on the admissible velocity solutions. Numerical meth-

ods must be able to integrate the momentum equations forward in time while simul-

taneously maintaining satisfaction of the continuity constraint. On the other hand,


numerical methods for compressible flows can take advantage of the fact that in the

unsteady form each equation has a time-dependent term. The equations are cast

in vector form-any suitable method for time-integration can be employed on the

system of equations as a whole.

The second problem, assuming that a primitive-variable formulation is desired, is

that there is no equation for pressure. For compressible flows, the pressure can be de-

termined from the equation of state of the fluid. For incompressible flow, an auxiliary

"pressure-Poisson" equation can be derived by taking the divergence of the vector

form of the momentum equations; the continuity equation is invoked to eliminate

the unsteady term in the result. The formulation of the pressure-Poisson equation

requires manipulating the discrete forms of the momentum and continuity equations.

A particular discretization of the Laplacian operator is therefore implied in the pressure-

Poisson equation, depending on the discrete gradient and divergence operators. This

operator may not be implementable at boundaries, and solvability constraints can

be violated [30]. Also, the differentiation of the governing equations introduces the

need for additional unphysical boundary conditions on the pressure. Physically, the

pressure in incompressible flow is only defined relative to an (arbitrary) constant.

Thus, the correct boundary conditions are Neumann. However, if the problem has

an open boundary, the governing equations should be supplemented with a boundary

condition on the normal traction [29, 32],

F_n = -p + \frac{1}{Re}\frac{\partial u_n}{\partial n} \qquad (1.4)

where F_n is the normal traction force, Re is the Reynolds number, and the subscript n

indicates the normal direction. However, F_n may be difficult to prescribe.

In practice, a zero-gradient or linear extrapolation for the normal velocity com-

ponent is a more popular outflow boundary condition. Many outflow boundary con-

ditions have been analyzed theoretically for incompressible flow (see [30, 31, 38, 56]).

There are even more boundary condition procedures in use. The method used and its

impact on the "solvability" of the resulting numerical systems of equations depend

on the discretization and the numerical method. This issue is treated in Chapter 2.

1.3 Numerical Methods for Viscous Incompressible Flow

Numerical algorithms for solving the incompressible Navier-Stokes system of equa-

tions were first developed by Harlow and Welch [39] and Chorin [15, 16]. Descendants

of these approaches are popular today. Harlow and Welch introduced the important

contribution of the staggered-grid location of the dependent variables. On a stag-

gered grid, the discrete Laplacian appearing in the derivation of the pressure-Poisson

equation has the standard five-point stencil. On colocated grids it still has a five-

point form but, if the central point is located at (i,j), the other points which are

involved are located at (i+2,j), (i-2,j), (i,j+2), and (i,j-2). Without nearest-neighbor

linkages, two uncoupled ("checkerboard") pressure fields can develop independently.

This pressure-decoupling can cause stability problems, since nonphysical discontinu-

ities in the pressure may develop [50]. In the present work, the velocity components

are staggered one-half of a control volume to the west and south of the pressure which

is defined at the center of the control volume as shown in Figure 1.1. Figure 1.1 also

shows the locations of all boundary velocity components involved in the discretization

and numerical solution, and representative boundary control volumes for u, v, and p.
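
As a concrete picture of this storage arrangement, the short sketch below allocates staggered arrays under one possible set of NumPy index conventions; the conventions and names are illustrative assumptions, not code from this dissertation:

import numpy as np

# A minimal sketch of staggered-grid storage (illustrative index conventions).
# p lives at the centers of an nx-by-ny grid of pressure control volumes;
# u is staggered half a cell to the west, v half a cell to the south.
nx, ny = 8, 6                      # pressure control volumes in x and y
p = np.zeros((nx, ny))             # cell-centered pressure
u = np.zeros((nx + 1, ny))         # u on vertical (west/east) cell faces
v = np.zeros((nx, ny + 1))         # v on horizontal (south/north) cell faces

# The normal face velocities of pressure cell (i, j) are available directly:
i, j = 3, 2
u_w, u_e = u[i, j], u[i + 1, j]    # west and east faces
v_s, v_n = v[i, j], v[i, j + 1]    # south and north faces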

In Chorin's artificial compressibility approach [15] a time-derivative of pressure is

added to the continuity equation. In this manner the continuity equation becomes

an equation for the pressure, and all the equations can be integrated forward in time,


either as a system or one at a time. The artificial compressibility method is closely

related to the penalty formulation used in finite-element methods [41]. The equations

are solved simultaneously in finite-element formulations. Penalty methods and the

artificial compressibility approach suffer from ill-conditioning when the equations

have strong nonlinearities or source terms. Because the pressure term is artificial,

they are not time-accurate either.

Projection methods [16, 62] are two-step procedures which first obtain a velocity

field by integrating the momentum equations, and then project this vector field into

a divergence-free space by subtracting the gradient of the pressure. The pressure-

Poisson equation is solved to obtain the pressure. The solution must be obtained

to a high degree of accuracy in unsteady calculations in order to obtain the correct

long-term behavior [76]-every step may therefore be fairly expensive. Furthermore,

the time-step size is limited by stability considerations, depending on the implicitness

of the treatment used for the convection terms.

"Pressure-based" methods for the incompressible Navier-Stokes equations include

SIMPLE [61] and its variants, SIMPLEC [19], SIMPLER [60], and PISO [43]. These

methods are similar to projection methods in the sense that a non-mass-conserving

velocity field is computed first, and then corrected to satisfy continuity. However, they

are not implicit in two steps because the nonlinear convection terms are linearized

explicitly. Instead of a pressure-Poisson equation, an approximate equation for the

pressure or pressure-correction is derived by manipulating the discrete forms of the

momentum and continuity equations. A few iterations of a suitable relaxation method

are used to obtain a partial solution to the system of correction equations, and

then new guesses for pressure and velocity are obtained by adding the corrections

to the old values. This process is iterated until all three equations are satisfied.

The iterations require underrelaxation because of the sequential coupling between


variables. Compared to projection methods, pressure-based methods are less implicit

when used for time-dependent problems. However, they can be used to seek the

steady-state directly if desired.

Compared to a fully coupled strategy, the sequential pressure-based approach

typically has slower convergence and less robustness with respect to Reynolds num-

ber. However, the sequential approach has the important advantage that additional

complexities, for example, chemical reaction, can be easily accommodated by simply

adding species-balance equations to the stack. The overall run time increases since

each governing equation is solved independently, and the total storage requirements

scale linearly with the number of equations solved. On the other hand, the computer

time and storage requirements escalate faster in a fully coupled solution strategy. The

typical way around this problem is to solve simultaneously the continuity and momen-

tum equations, then solve any additional equations in a sequential fashion. Without

knowing beforehand that the pressure-velocity coupling is the strongest among all the

various flow variables, however, the extra computational effort spent in simultaneous

solution of these equations is unwarranted.

There are other approaches for solving the incompressible Navier-Stokes equa-

tions, notably methods based on vorticity-streamfunction (ω-ψ) or velocity-vorticity

(u-ω) formulations, but pressure-based methods are easier, especially with regard to

boundary conditions and possible extension to 3-d domains. Furthermore, they have

demonstrated considerable robustness in computing incompressible flows. A broad

range of applications of pressure-based methods is demonstrated in [73].

1.4 Parallel Computing

General background of parallel computers and their application to the numeri-

cal solution of partial differential equations is given in Hockney and Jesshope [40]

and Ortega and Voigt [58]. Fischer and Patera [23] gave a recent review of parallel

computing from the perspective of the fluid dynamics community. Their "indirect

cost," the parallel run time, is of primary interest here. The "direct cost" of parallel

computers and their components is another matter entirely. For the iteration-based

numerical methods developed here, the parallel run time is the cost per iteration

multiplied by the number of iterations. The latter is affected by the characteristics of

the particular parallel computer used and the algorithms and implementations em-

ployed. Parallel computers come in all shapes and sizes, and it is becoming virtually

impossible to give a thorough taxonomy. The background given here is limited to a

description of the type of computer used in this work.

1.4.1 Data-Parallelism and SIMD Computers

Single-instruction stream/multiple-data stream (SIMD) computers include the

connection machines manufactured by the Thinking Machines Corporation, the CM-1

and CM-2, and the MP-1 and MP-2 computers produced by the MasPar Cor-

poration. These are massively-parallel machines consisting of a front-end computer

and many processor/memory pairs, figuratively, the "back-end." The back-end pro-

cessors are connected to each other by a "data network." The topology of the data

network is a major feature of distributed-memory parallel computers.

The schematic in Figure 1.2 gives the general idea of the SIMD layout. The

program executes on the serial front-end computer. The front-end triggers the syn-

chronous execution of the "back-end" processors by sending "code blocks" simul-

taneously to all processors. Actually, the code blocks are sent to an intermediate

"control processorss)" The control processor broadcasts the instructions contained

in the code block, one at a time, to the computing processors. These "front-end-

to-processor" communications take time. This time is an overhead cost not present

when the program runs on a serial computer.

The operands of the instructions, the data, are distributed among the processors'

memories. Each processor operates on its own locally-stored data. The "data" in

grid-based numerical methods are the arrays, 2-d in this case, of dependent variables,

geometric quantities, and equation coefficients. Because there are usually plenty

of grid points and the same governing equations apply at each point, most CFD

algorithms contain many operations to be performed at every grid point. Thus this

"data-parallel" approach is very natural to most CFD algorithms.

Many operations may be done independently on each grid point, but there is cou-

pling between grid points in physically-derived problems. The data network enters

the picture when an instruction involves another processor's data. Such "interpro-

cessor" communication is another overhead cost of solving the problem on a parallel

computer. For a given algorithm, the amount of interprocessor communication de-

pends on the "data mapping," which refers to the partitioning of the arrays and the

assignment of these "subgrids" to processors. For a given machine, the speed of the

interprocessor communication depends on the pattern of communication (random or

regular) and the distance between the processors (far away or nearest-neighbor).
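
The influence of the data mapping can be made concrete with a small sketch. The block ("hierarchical") partitioning below is only a NumPy illustration of how subgrids might be assigned; on the real machines the mapping is performed by the compiler and runtime, not by user code like this:

import numpy as np

# A minimal sketch of a block ("hierarchical") data mapping: each processor in
# a pr-by-pc logical processor grid owns one contiguous subgrid of the array.
def block_map(a, pr, pc):
    """Partition the 2-d array a into pr*pc contiguous subgrids."""
    rows = np.array_split(np.arange(a.shape[0]), pr)
    cols = np.array_split(np.arange(a.shape[1]), pc)
    return {(r, c): a[np.ix_(rows[r], cols[c])]
            for r in range(pr) for c in range(pc)}

a = np.arange(64).reshape(8, 8)        # a small 8x8 "grid" of data
subgrids = block_map(a, 2, 2)          # map it onto a 2x2 processor grid
# A 5-point stencil needs interprocessor communication only for points on a
# subgrid's perimeter, so larger subgrids raise the ratio of computation to
# communication.
print(subgrids[(0, 0)].shape)          # (4, 4)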

The run time of a parallel program depends first on the amount of front-end and

parallel computation in the algorithm, and the speeds of the front-end and back-

end for doing these computations. In the programs developed here, the front-end

computations are mainly the program control statements (IF blocks, DO loops, etc.).

The front-end work is not sped up by parallel processing. The parallel computations

are the useful work, and by design one hopes to have enough parallel computation

to amortize both the front-end computation and the interprocessor and front-end-to-

processor communication, which are the other factors that contribute to the parallel

run time.

From this brief description it should be clear that SIMD computers have four char-

acteristic speeds: the computation speed of the processors, the communication speed

between processors, and the speed of the front-end-to-processor communication, i.e.

the speed that code blocks are transferred, and the speed of the front-end. These

machine characteristics are not under the control of the programmer. However, the

amount of computation and communication a program contains is determined by the

programmer because it depends on the algorithm selected and the algorithm's imple-

mentation (the choice of the data mapping, for example). Thus, the key to obtaining

good performance from SIMD computers is to pick a suitable algorithm, "matched"

in a sense to the architecture, and to develop an implementation which minimizes

and localizes the interprocessor communication. Then, if there is enough parallel

computation to amortize the serial content of the program and the communication

overheads, the speedup obtained will be nearly the number of processors. The actual

performance, because it depends on the computer, the algorithm, and the imple-

mentation, must be determined by numerical experiment on a program-by-program

basis.

SIMD computers are restricted to exploiting data-parallelism, as opposed to the

parallelism of the tasks in an algorithm. The task-parallel approach is more com-

monly used, for example, on the Cray C90 supercomputer. Multiple-instruction

stream/multiple-data stream (MIMD) computers, on the other hand, are composed of

more-or-less autonomous processor/memory pairs. Examples include the Intel series

of machines (iPSC/2, iPSC/860, and Paragon), workstation clusters, and the connec-

tion machine CM-5. However, in CFD, the data-parallel approach is the prevalent

one even on MIMD computers. The front-end/back-end programming paradigm is

implemented by selecting one processor to initiate programs on the other processors,

accumulate global results, and enforce synchronization when necessary, a strategy

called single-program-multiple-data (SPMD) [23]. The CM-5 has a special "control

network" to provide automatic synchronization of the processor's execution, so a

SIMD programming model can be supported as well as MIMD. SIMD is the manner

in which the CM-5 has been used in the present work. The advantage to using the

CM-5 in the SIMD mode is that the programmer does not have to explicitly specify

message-passing. This simplification saves effort and increases the effective speed of

communication because certain time-consuming protocols for the data transfer can

be eliminated.

1.4.2 Algorithms and Performance

The previous subsection discussed data-parallelism and SIMD computers, i.e.

what parallel computing means in the present context and how it is carried out

by SIMD-type computers. To develop programs for SIMD computers requires one

to recognize that unlike serial computers, parallel computers are not black boxes. In

addition to the selection of an algorithm with ample data-parallelism, consideration

must be given to the implementation of the algorithm in specific ways in order to

achieve the desired benefits (speedups over serial computations).

The success of the choice of algorithm and the implementation on a particular

computer is judged by the "speedup" (S) and "efficiency" (E) of the program. The

communications mentioned above, front-end-to-processor and interprocessor, are es-

sentially overhead costs associated with the SIMD computational model. They would

not be present if the algorithm were implemented on a serial computer, or if such

communications were infinitely fast. If the overhead cost was zero, a parallel program

executing on n_p processors would run n_p times faster than on a single processor, a

speedup of n_p. This idealized case would also have a parallel efficiency of 1. The

parallel efficiency E measures the actual speedup in comparison with the ideal.
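
In symbols, if T_1 is the single-processor run time, T_p the parallel run time, and n_p the number of processors, then S = T_1/T_p and E = S/n_p. A minimal sketch of this bookkeeping (illustrative code, not part of the original dissertation):

def speedup(t_serial, t_parallel):
    """S = T_1 / T_p."""
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, n_procs):
    """E = S / n_p; equals 1 only when there is no overhead."""
    return speedup(t_serial, t_parallel) / n_procs

# Example: a 100 s serial run that takes 0.8 s on 128 processors.
print(speedup(100.0, 0.8))            # 125.0
print(efficiency(100.0, 0.8, 128))    # ~0.98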

One is also interested in how speedup, efficiency, and the parallel run time (Tp)

scale with problem size, and with the number of processors used. The objective in

using parallel computers is more than just obtaining a good speedup on a particular

problem size and a particular number of processors. For parallel CFD, the goals are

to either (1) reduce the time (the indirect cost [23]) to solve problems of a given

complexity, to satisfy the need for rapid turnaround times in design work, or (2)

increase the complexity of problems which can be solved in a fixed amount of time.

For the iteration-based numerical methods studied here, there are two considerations:

the cost per iteration, and the number of iterations, respectively, computational and

numerical factors. The total run time is the product of the two.

Gustafson [35] has presented fixed-size and scaled-size experiments whose results

describe how the cost per iteration scales on a particular machine. In the fixed-

size experiment, the efficiency is measured for a fixed problem size as processors are

added. The hope is that the run time is halved when the number of processors is

doubled. However, the run time obviously cannot be reduced indefinitely by adding

more processors because at some point the parallelism runs out-the limit to the

attainable speedup is the number of grid points. In the scaled-size experiment, the

problem size is increased along with the number of processors, to maintain a constant

local problem size for each of the parallel processors. Care must be taken to make

timings on a per iteration basis if the number of iterations to reach the end of the

computation increases with the problem size. The hope in such an experiment is that

the program will maintain a certain high level of parallel efficiency E. The ability

to maintain E in the scaled-size experiment indicates that the additional processors

increased the speedup in a one-for-one trade.

1.5 Pressure-Based Multigrid Methods

Multigrid methods are a potential route to both computationally and numerically

scalable programs. Their cost per iteration on parallel computers and convergence

rate is the subject of Chapters 4-5. For sufficiently smooth elliptic problems, the

convergence rate of multigrid methods is independent of the problem size-their op-

eration count is O(N). In practice, good convergence rates are maintained as the

problem size increases for Navier-Stokes problems, also, provided suitable multigrid

components-the smoother, restriction and prolongation procedures-and multigrid

techniques are employed. The standard V-cycle full-multigrid (FMG) algorithm has

an almost optimal operation count, O(log^2 N) for Poisson equations, on parallel com-

puters. Provided the multigrid algorithm is implemented efficiently and that the cost

per iteration scales well with the problem size and the number of processors, the

multigrid approach seems to be a promising way to exploit the increased computa-

tional capabilities that parallel computers offer.
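
The grid scheduling behind these cost estimates can be stated in a few lines. The recursion below is a generic V-cycle and full-multigrid skeleton with the smoothing, restriction, and prolongation operators left as placeholders; it is a sketch of the control flow only, the names and signatures are assumptions, and the actual components are developed in Chapters 4 and 5:

def v_cycle(level, guess, problem, smooth, restrict, prolong, nu1=2, nu2=1):
    """One V cycle; level 0 is the coarsest grid."""
    u = smooth(level, guess, problem, nu1)           # pre-smoothing sweeps
    if level > 0:
        coarse = restrict(level, u, problem)         # move the problem down
        corr = v_cycle(level - 1, coarse, problem,
                       smooth, restrict, prolong, nu1, nu2)
        u = u + prolong(level, corr)                 # coarse-grid correction
    return smooth(level, u, problem, nu2)            # post-smoothing sweeps

def fmg(levels, problem, smooth, restrict, prolong, solve_coarse, interp):
    """Full multigrid: solve the coarsest grid, then work upward, V cycling."""
    u = solve_coarse(problem)
    for level in range(1, levels):
        u = interp(level, u)                         # initial guess on next grid
        u = v_cycle(level, u, problem, smooth, restrict, prolong)
    return u

# Structural smoke test with trivial stand-in operators.
smooth = lambda level, u, problem, sweeps: u
restrict = lambda level, u, problem: u
prolong = lambda level, correction: 0.0
print(v_cycle(3, 1.0, None, smooth, restrict, prolong))  # 1.0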

The pressure-based methods mentioned previously involve the solution of three

systems of linear algebraic equations, one each for the two velocity components

and one for the pressure, by standard iterative methods such as successive line-

underrelaxation (SLUR). Hence they inherit the convergence rate properties of these

solvers, i.e. as the problem size grows the convergence rate deteriorates. With the

single-grid techniques, therefore, it will be difficult to obtain reasonable turnaround

times when the problem size is increased into the target range for parallel com-

puters. Multigrid techniques for accelerating the convergence of pressure-correction

methods should be pursued, and in fact they have been within the last five or so

years [70, 74, 80].

However, there are still many unsettled issues. The complexities affecting the

convergence rate of single-grid calculations carry over to the multigrid framework

and are compounded there by the coupling between the evolving solutions on multiple

grid levels, and by the particular "grid-scheduling" used.

Linear multigrid methods have been applied to accelerate the convergence rate for

the solution of the system of pressure or pressure-correction equations [4, 22, 42, 64,

94]. However, the overall convergence rate does not significantly improve because the

velocity-pressure coupling is not addressed [4, 22]. Therefore the multigrid strategy

should be applied on the "outer loop," with the role of the iterative relaxation method

played by the numerical methods described above, e.g. the projection method or the

pressure-correction method. Thus, the generic term "smoother" is used because

it reflects the purpose of the solution of the coupled system of equations going on

inside the multigrid cycle-to smooth the residual so that an accurate coarse-grid

approximation of the fine-grid problem is possible. It is not true that a good solver,

one with a fast convergence rate on single-grid computations, is necessarily a good

smoother of the residual. It is therefore of interest to assess pressure-correction meth-

ods as potential multigrid smoothers. See Shyy and Sun [74] for more information

on the staggered-grid implementation of multigrid methods, and some encouraging

results.

Staggered grids require special techniques [21, 74] for the transfer of solutions and

residuals between grid levels, since the positions of the variables on different levels

do not correspond. However, they alleviate the "checkerboard" pressure stability

problem [50], and since techniques have already been established [74], there is no

reason not to go this route, especially when cartesian grids are used as in the present

work.

Vanka [89] has proposed a new numerical method as a smoother for multigrid

computations, one which has inferior convergence properties as a single-grid method

but apparently yields an effective multigrid method. A staggered-grid finite-volume

discretization is employed. In Vanka's smoother, the velocity components and pres-

sure of each control volume are updated simultaneously, so it is a coupled approach,

but the coupling between control volumes is not taken into account, so the calcu-

lation of new velocities and pressures is explicit. This method is sometimes called

the "locally-coupled explicit" or "block-explicit" pressure-based method. The control

volumes are visited in lexicographic order in the original method which is therefore

aptly called BGS (block Gauss-Seidel). Line-variants have been developed to couple

the flow variables in neighboring control volumes along lines (see [80, 87]).

Linden et al.[50] gave a brief survey of multigrid methods for the steady-state in-

compressible Navier-Stokes equations. They argue without analysis that BGS should

be preferred over the pressure-correction type methods since the strong local cou-

pling is likely to have better success smoothing the residual locally. On the other

hand, Sivaloganathan and Shaw [71, 70] have found good smoothing properties for

the pressure-correction approach, although the analysis was simplified considerably.

Sockol [80] has compared the point and line-variants of BGS with the pressure-

correction methods on serial computers, using model problems with different physical

characteristics. SIMPLE and BGS emerge as favorites in terms of robustness with

BGS preferred due to a lower cost per iteration. This preference may or may not

carry over to SIMD parallel computers (see Chapter 4 for comparison). Interesting

applications of multigrid methods to incompressible Navier-Stokes flow problems can

be found in [12, 28, 48, 54].

In terms of parallel implementations there are far fewer results although this

field is rapidly growing. Simon [77] gives a recent cross-section of parallel CFD

results. Parallel multigrid methods, not only in CFD but as a general technique

for partial differential equations, have received much attention due to their desirable

O(N) operation count on Poisson equations. However, it is apparently difficult to find

or design parallel computers with ideal communication networks for multigrid [13].

Consequently implementations have been pursued on a variety of machines to see

what performance can be obtained with the present generation of parallel machines,

and to identify and understand the basic issues. Dendy et al.[18] have recently

described a multigrid method on the CM-2. However, to accommodate the data-

parallel programming model they had to dimension their array data on every grid level

to the dimension extents of the finest grid array data. This approach is very wasteful

of storage. Consequently the size of problems which can be solved is greatly reduced.

Recently an improved release of the compiler has enabled the storage problem to be

circumvented with some programming diligence (see Chapter 5). The implementation

developed in this work is one of the first to take advantage of the new compiler feature.

In addition to parallel implementations of serial multigrid algorithms, several

novel multigrid methods have been proposed for SIMD computers [25, 26, 33]. Some

of the algorithms are intrinsically parallel [25, 26] or have increased parallelism

because they use multiple coarse grids, for example [33]. These efforts and others

have been recently reviewed [14, 53, 92]. Most of the new ideas have not been

developed yet for solving the incompressible Navier-Stokes equations.

One of the most prominent concerns addressed in the literature regarding parallel

implementations of serial multigrid methods is the coarse grids. When the number

of grid points is smaller than the number of processors the parallelism is reduced

to the number of grid points. This loss of parallelism may significantly affect the

parallel efficiency. One of the routes around the problem is to use multiple coarse

grids [59, 33, 79]. Another is to alter the grid-scheduling to avoid coarse grids. This

approach can lead to computationally scalable implementations [34, 49] but may

sacrifice the convergence rate. "Agglomeration" is an efficiency-increasing technique

used in MIMD multigrid programs which refers to the technique of duplicating the

coarse grid problem in each processor so that computation proceeds independently

(and redundantly). Such an approach can also be scalable [51]. However, most atten-

tion so far has focused on parallel implementations of serial multigrid algorithms, in

particular on assessing the importance of the coarse-grid smoothing problem for dif-

ferent machines and on developing techniques to minimize the impact on the parallel

efficiency.

1.6 Description of the Research

The dissertation is organized as follows. Chapter 2 discusses the role of the mass

conservation in the numerical consistency of the single-grid SIMPLE method for open

boundary problems, and explains the relevance of this issue to the convergence rate.

In Chapter 3 the single-grid pressure-correction method is implemented on the MP-1,

CM-2, and CM-5 computers and its performance is analyzed. High parallel efficien-

cies are obtained at speeds and problem sizes well beyond the current performance of

such algorithms on traditional vector supercomputers. Chapter 4 develops a multigrid

numerical method for the purpose of accelerating the single-grid pressure-correction

method and maintaining the accelerated convergence property independent of the

problem size. The multigrid smoother, the intergrid transfer operators, and the sta-

bilization strategy for Navier-Stokes computations are discussed. Chapter 5 describes

the actual implementation of the multigrid algorithm on the CM-5, its convergence

rate, and its parallel run time and scalability. The convergence rate depends on the

flow problem and the coarse-grid discretization, among other factors. These factors

are considered in the context of the "full-multigrid" (FMG) starting procedure by

which the initial guess on the fine grid is obtained. The cost of the FMG proce-

dure is a concern for parallel computation [88], and this issue is also addressed. The

results indicate that the FMG procedure may influence the asymptotic convergence

rate and the stability of the multigrid iterations. Concluding remarks in each chapter

summarize the progress made and suggest avenues for further study.

Figure 1.1. Staggered-grid layout of dependent variables, for a small but complete
domain. Boundary values involved in the computation are shown. Representative u,
v, and pressure boundary control volumes are shaded.

[Figure 1.2. Layout of the MP-1, CM-2, and CM-5 SIMD computers. The schematic shows the front end (CM-2, MP-1) or partition manager (CM-5), which runs the serial code, control code, and scalar data; the sequencer (CM-2), array control unit (MP-1), or SPARC nodes (CM-5), which receive short blocks of parallel code from the front end and broadcast individual instructions to the processing elements (P.E.s); the array data partitioned among the processor memories; and the interprocessor communication network: hypercube plus "NEWS" (CM-2), 3-stage crossbar plus "X-Net" (MP-1), and fat tree (CM-5).]

CHAPTER 2
PRESSURE-CORRECTION METHODS

2.1 Finite-Volume Discretization on Staggered Grids

The formulation of the numerical method used in this work begins with the inte-
gration of the governing equations Eq 1.1-1.3 over each of the control volumes in the
computational domain. Figure 1.1 shows a model computational domain with u, v,
and p (cell-centered) control volumes shaded. The continuity equation is integrated
over the p control volumes.
Consider the discretization of the u-momentum equation for the control volume
shown in Figure 2.1 whose dimensions are Δx and Δy. The v control volumes are
done exactly the same except rotated 90°. Integration of Eq. 1.2 over the shaded
region is interpreted as follows for each of the terms:

\iint \frac{\partial(\rho u)}{\partial t}\,dx\,dy \approx \frac{\partial(\rho u_P)}{\partial t}\,\Delta x\,\Delta y, \qquad (2.1)

\iint \frac{\partial(\rho u^2)}{\partial x}\,dx\,dy = \left(\rho u_e^2 - \rho u_w^2\right)\Delta y, \qquad (2.2)

\iint \frac{\partial(\rho u v)}{\partial y}\,dx\,dy = \left(\rho u_n v_n - \rho u_s v_s\right)\Delta x, \qquad (2.3)

\iint -\frac{\partial p}{\partial x}\,dx\,dy = -\left(p_e - p_w\right)\Delta y, \qquad (2.4)

\iint \mu\frac{\partial^2 u}{\partial x^2}\,dx\,dy = \mu\left(\left.\frac{\partial u}{\partial x}\right|_e - \left.\frac{\partial u}{\partial x}\right|_w\right)\Delta y, \qquad (2.5)

\iint \mu\frac{\partial^2 u}{\partial y^2}\,dx\,dy = \mu\left(\left.\frac{\partial u}{\partial y}\right|_n - \left.\frac{\partial u}{\partial y}\right|_s\right)\Delta x. \qquad (2.6)

The lowercase subscripts e, w, n, s indicate evaluation on the control volume faces.
By convention and the mean-value theorem, these are at the midpoint of the faces.
The subscript P in Eq. 2.1 indicates evaluation at the center of the control volume.

Because of the staggered grid, the required pressure values in Eq. 2.4 are already

located on the u control volume faces. The pressure-gradient term is effectively a

second-order central-difference approximation. With colocated grids, however, the

control-volume face pressures are obtained by averaging the nearby pressures. This

averaging results in the pressure at the cell center dropping out of the expression

for the pressure gradient. The central-difference in Eq. 2.4 is effectively taken over

a distance 2Δx on colocated grids. Thus staggered cartesian grids provide a more

accurate approximation of the pressure-gradient term since the difference stencil is

smaller.

The next step is to approximate the terms which involve values at the control

volume faces. In Eq. 2.2, one of the u_e and one of the u_w are replaced by an average

of neighboring values,

\left(\rho u_e^2 - \rho u_w^2\right)\Delta y = \rho\left(\frac{u_E + u_P}{2}\,u_e - \frac{u_P + u_W}{2}\,u_w\right)\Delta y \qquad (2.7)

and in Eq. 2.3, v_n and v_s are obtained by averaging nearby values,

\left(\rho u_n v_n - \rho u_s v_s\right)\Delta x = \rho\left(\frac{v_{ne} + v_{nw}}{2}\,u_n - \frac{v_{se} + v_{sw}}{2}\,u_s\right)\Delta x \qquad (2.8)

The remaining face velocities in the convection terms, u_n, u_s, u_e, and u_w, are ex-

pressed as a certain combination of the nearby u values-which u values are involved

and what weighting they receive is prescribed by the convection scheme. Some pop-

ular recirculating flow convection schemes are described in [73, 75].
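
As one concrete instance of such a prescription, the sketch below evaluates a face value with the classical first-order upwind scheme; this particular scheme is chosen here purely for illustration, while the schemes actually used are those cited above:

def upwind_face_value(phi_upstream, phi_downstream, mass_flux):
    """First-order upwind: take the value from the upstream side of the face."""
    return phi_upstream if mass_flux >= 0.0 else phi_downstream

# Example: east face of a u control volume, flow in the +x direction, so u_P
# is the upstream neighbor and u_E the downstream one.
u_P, u_E = 1.0, 0.4
mdot_e = 0.3                                  # rho * u_e * dy > 0
print(upwind_face_value(u_P, u_E, mdot_e))    # 1.0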

The control-volume face derivatives in the diffusion terms are evaluated by central

differences,

\mu\left(\left.\frac{\partial u}{\partial x}\right|_e - \left.\frac{\partial u}{\partial x}\right|_w\right)\Delta y = \mu\left(\frac{u_E - u_P}{\Delta x} - \frac{u_P - u_W}{\Delta x}\right)\Delta y \qquad (2.9)

\mu\left(\left.\frac{\partial u}{\partial y}\right|_n - \left.\frac{\partial u}{\partial y}\right|_s\right)\Delta x = \mu\left(\frac{u_N - u_P}{\Delta y} - \frac{u_P - u_S}{\Delta y}\right)\Delta x \qquad (2.10)

The unsteady term in Eq. 2.1 is approximated by a backward Euler scheme. All the

terms are evaluated at the "new" time level, i.e. implicitly.
Thus, the discretized momentum equations for each control volume can be put

into the following general form,

a_P u_P = a_E u_E + a_W u_W + a_N u_N + a_S u_S + b, \qquad (2.11)

where b = (p_w - p_e)\Delta y + \rho u_P^n \Delta x \Delta y / \Delta t, the superscript n indicating the

previous time-step. The coefficients a_N, a_S, etc. are comprised of the terms which

multiply u_N, u_S, etc. in the discretized convection and diffusion terms.
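
To make the general form concrete, the sketch below assembles Eq. 2.11 for a single interior u control volume using first-order upwind convection and central-difference diffusion, in the standard Patankar-style finite-volume manner. The scheme choice is an illustrative assumption; the convection schemes actually employed are those referenced above:

def u_coefficients(F_e, F_w, F_n, F_s,    # face mass fluxes, rho*velocity*area
                   D_e, D_w, D_n, D_s,    # diffusion conductances, mu*area/dist
                   rho, dx, dy, dt, u_old, p_w, p_e):
    """Coefficients of aP*uP = aE*uE + aW*uW + aN*uN + aS*uS + b (Eq. 2.11)."""
    aE = D_e + max(-F_e, 0.0)              # upwind convection + diffusion
    aW = D_w + max(F_w, 0.0)
    aN = D_n + max(-F_n, 0.0)
    aS = D_s + max(F_s, 0.0)
    ap0 = rho * dx * dy / dt               # backward-Euler unsteady term
    aP = aE + aW + aN + aS + ap0 + (F_e - F_w) + (F_n - F_s)
    b = (p_w - p_e) * dy + ap0 * u_old     # pressure gradient + old-time source
    return aP, aE, aW, aN, aS, b

print(u_coefficients(0.3, 0.3, 0.1, 0.1, 0.2, 0.2, 0.2, 0.2,
                     1.0, 0.1, 0.1, 0.01, 0.5, 1.0, 0.9))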

The continuity equation is integrated over a pressure control volume,

\iint \left[\frac{\partial(\rho u)}{\partial x} + \frac{\partial(\rho v)}{\partial y}\right] dx\,dy = \rho\left(u_e - u_w\right)\Delta y + \rho\left(v_n - v_s\right)\Delta x = 0. \qquad (2.12)

Again the staggered grid is an advantage because the normal velocity components on

each control volume face are already in position-there is no need for interpolation.
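
With staggered arrays stored as in the illustrative layout sketch of Chapter 1, the integrated continuity equation for every pressure control volume reduces to a one-line array expression (again a sketch under assumed index conventions):

import numpy as np

def mass_residual(u, v, rho, dx, dy):
    """Net mass flux out of every pressure control volume, Eq. 2.12."""
    return (rho * (u[1:, :] - u[:-1, :]) * dy
            + rho * (v[:, 1:] - v[:, :-1]) * dx)

u = np.random.rand(9, 6)    # (nx+1, ny) face velocities for an 8x6 grid
v = np.random.rand(8, 7)    # (nx, ny+1)
print(mass_residual(u, v, rho=1.0, dx=0.1, dy=0.1).shape)   # (8, 6)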

2.2 The SIMPLE Method

One SIMPLE iteration takes initial velocity and pressure fields (u*, v*, p*) and
computes new guesses (u, v, p). The intermediate values are denoted with a tilde,
(ũ, ṽ). In the algorithm below, a_N^u(u*, v*), for example, means that the a_N coeffi-
cient in the u-momentum equation depends on u* and v*. The parameters ν_u, ν_v, and
ν_c are the numbers of "inner" iterations to be taken for the u, v, and continuity equa-
tions, respectively. This notation will be clarified by the following discussion. The
inner iteration count is indicated by the superscript enclosed in parentheses. Finally,
ω_uv and ω_c are the relaxation factors for the momentum and continuity equations.

SIMPLE(u*, v*, p*; ν_u, ν_v, ν_c, ω_uv, ω_c)

Compute u coefficients a_k^u(u*, v*) (k = P, E, W, N, S) and source term b^u(u*, p*)
for each discrete u-momentum equation:

    (a_P^u / ω_uv) u_P = a_N^u u_N + a_S^u u_S + a_E^u u_E + a_W^u u_W + b^u + (1 - ω_uv)(a_P^u / ω_uv) u_P^*

Do ν_u iterations to obtain an approximate solution for ũ,
starting with u* as the initial guess:

    u^(n) = G u^(n-1) + f^u
    ũ = u^(n = ν_u)

Compute v coefficients a_k^v(ũ, v*) (k = P, E, W, N, S) and source term b^v(v*, p*)
for each discrete v-momentum equation:

    (a_P^v / ω_uv) v_P = a_N^v v_N + a_S^v v_S + a_E^v v_E + a_W^v v_W + b^v + (1 - ω_uv)(a_P^v / ω_uv) v_P^*

Do ν_v iterations to obtain an approximate solution for ṽ,
starting with v* as the initial guess:

    v^(n) = G v^(n-1) + f^v
    ṽ = v^(n = ν_v)

Compute p' coefficients a_k^c (k = P, E, W, N, S) and source term b^c(ũ, ṽ)
for each discrete p' equation:

    a_P^c p'_P = a_N^c p'_N + a_S^c p'_S + a_E^c p'_E + a_W^c p'_W + b^c

Do ν_c iterations to obtain an approximate solution for p',
starting with zero as the initial guess:

    p'^(n) = G p'^(n-1) + f^c

Correct ũ, ṽ, and p* at every interior grid point:

    u_P = ũ_P + (Δy / (a_P^u)_P)(p'_w - p'_e)
    v_P = ṽ_P + (Δx / (a_P^v)_P)(p'_s - p'_n)
    p_P = p*_P + ω_c p'_P

The algorithm is not as complicated as it looks. The important point to note is

that the major tasks to be done are the computing of coefficients and the solving of

the systems of equations. The symbol G indicates the iteration matrix of whatever

type relaxation is used on these inner iterations (SLUR in this case), and f is the

corresponding source term.
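
The overall structure of one outer iteration can be sketched as follows, with the coefficient assembly and the inner relaxation sweeps as placeholder callables; every name and signature here is an illustrative assumption rather than the actual implementation:

def simple_outer(u, v, p, assemble_u, assemble_v, assemble_pc,
                 relax, correct, tol=1e-6, max_outer=500,
                 nu_u=3, nu_v=3, nu_c=10):
    """Skeleton of SIMPLE outer iterations around partial inner solves."""
    for it in range(max_outer):
        A_u, b_u = assemble_u(u, v, p)           # linearized u-momentum system
        u_t = relax(A_u, b_u, u, nu_u)           # nu_u inner sweeps -> u-tilde
        A_v, b_v = assemble_v(u_t, v, p)         # v-momentum system
        v_t = relax(A_v, b_v, v, nu_v)
        A_c, b_c = assemble_pc(u_t, v_t)         # pressure-correction system
        pc = relax(A_c, b_c, 0.0 * b_c, nu_c)    # start from a zero guess
        u, v, p, res = correct(u_t, v_t, p, pc)  # apply additive corrections
        if res < tol:                            # all three equations satisfied
            return u, v, p
    return u, v, p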

In the SIMPLE pressure-correction method [61], the averages in Eq. 2.7 and 2.8

are lagged in order to linearize the resulting algebraic equations. The governing

equations are solved sequentially. First, the u momentum equation coefficients are

computed and an updated u field is computed by solving the system of linear alge-

braic equations. The pressures in Eq. 2.4 are lagged. The v momentum equation is

solved next to update v. The continuity equation, recast in terms of pressure correc-

tions, is then set up and solved. These pressure corrections are coupled to velocity

corrections. Together they are designed to correct the velocity field so that it satisfies

the continuity constraint, while simultaneously correcting the pressure field so that

momentum conservation is maintained.

The relationship between the velocity and pressure corrections is derived from

the momentum equation, as described in the next section. The resulting system

of equations is fully coupled, as one might expect knowing the elliptic nature of

pressure in incompressible fluids, and is therefore expensive to solve. However, if the

resulting system of pressure-correction equations were solved exactly, the divergence-

free constraint and the momentum equations (with old values of u and v present in

the nonlinear convection terms) would be satisfied. This approach would constitute

an implicit method of time integration for the linearized equations. The time-step

size would have to be limited to avoid stability problems caused by the linearization.

To reduce the computational cost, the SIMPLE prescription is to use an approx-

imate relationship between the velocity and pressure corrections (hence the label

"semi-implicit"). Variations on the original SIMPLE approximation have shown bet-

ter convergence rates for simple flow problems, but in discretizations on curvilinear

grids and other problems with significant contributions from source terms, the per-

formance is no better than the original SIMPLE method (see the results in [4]).

The goal of satisfying the divergence-free constraint can still be attained, if the

system of pressure-correction equations is converged to strict tolerances, because the

discrete continuity equations are still being solved. But satisfaction of the momentum

equations cannot be maintained with the approximate relationship. Consequently it

is no longer desirable to solve the p'-system of equations to strict tolerances. It-

erations are necessary to find the right velocities and pressures which satisfy all

three equations. Furthermore, since the equation coefficients are changing from one

iteration to the next, it is pointless to solve the momentum equations to strict tol-

erances. In practice, only a few iterations of a standard scheme such as successive

line-underrelaxation (SLUR) are performed.

The single "outer" iteration outlined above is repeated many times, with under-

relaxation to prevent the iterations from diverging. In this sense a two-level iterative

procedure is being employed. In the outer iterations, the momentum and pressure-

correction equations are iteratively updated based on the linearized coefficients and

sources, and inner iterations are applied to partially solve the systems of linear alge-

braic equations.

The fact that only a few inner iterations are taken on each system of equations sug-

gests that the asymptotic convergence rate of the iterative solver, which is the usual

means of comparison between solvers, does not necessarily dictate the convergence

rate of the outer iterative process. Braaten and Shyy [4] have found that the con-

vergence rate of the outer iterations actually decreases when the pressure-correction

equation is solved to a much stricter tolerance than the momentum equations. They

concluded that the balance between the equations is important. Because u, v, and

p' are segregated, the overall convergence rate is strongly dependent on the partic-

ular flow problem, the grid distribution and quality, and the choice of relaxation

parameters.

In contrast to projection methods, which are two-step but treat the convection

terms explicitly (or more recently by solving a Riemann problem [2]) and are therefore

restricted from taking too large a time-step, the pressure-correction approach is fully

implicit with no time-step limitation, but many iterations may be necessary. The

projection methods are formalized as time-integration techniques for semi-discrete

equations. SIMPLE is an iterative method for solving the discretized Navier-Stokes

system of coupled nonlinear algebraic equations. But the details given above should

make it clear that these techniques bear strong similarities-specifically, a single

SIMPLE iteration would be a projection method if the system of pressure-correction

equations were solved to strict tolerances at each iteration. It would be interesting to

do some numerical comparisons between projection methods and pressure-correction

methods to further clarify the similarity.

2.3 Discrete Formulation of the Pressure-Correction Equation

The discrete pressure-correction equation is obtained from the discrete momentum

and continuity equations as follows. The velocity field which has been newly obtained

by solving the momentum equations was denoted by (ũ, ṽ) earlier. The pressure field

after the momentum equations are solved still has the initial value p*. So ũ, ṽ, and

p* satisfy the u-momentum equation

a_P \tilde{u}_P = a_E \tilde{u}_E + a_W \tilde{u}_W + a_N \tilde{u}_N + a_S \tilde{u}_S + (p^*_w - p^*_e)\Delta y, \qquad (2.13)

and the corresponding v-momentum equation. The corrected (continuity-satisfying)

velocity field (u, v) satisfies the u-momentum equation with the corrected pressure

field p,

a_P u_P = a_E u_E + a_W u_W + a_N u_N + a_S u_S + (p_w - p_e)\Delta y, \qquad (2.14)

and likewise for the v-momentum equation. Additive corrections are assumed, i.e.


u = \tilde{u} + u' \qquad (2.15)

v = \tilde{v} + v' \qquad (2.16)

p = p^* + p'. \qquad (2.17)

Subtracting Eq. 2.13 from Eq. 2.14 gives the desired relationship between pressure

and the u corrections,


a_P u'_P = \sum_{k=E,W,N,S} a_k u'_k + (p'_w - p'_e)\Delta y, \qquad (2.18)

with a similar expression for the v corrections.

If Eq. 2.18 is used as is, then the nearby velocity corrections in the summation need

to be replaced by similar expressions involving pressure-corrections. This requirement

brings in more velocity corrections and more pressure corrections, and so on, leading

to an equation which involves the pressure corrections at every grid point. The

resulting system of equations would be expensive to solve. Thus, the summation

term is dropped in order to obtain a compact expression for the velocity correction in

terms of pressure corrections. At convergence, the pressure corrections (and therefore

the velocity corrections) go to zero, so the precise form of the approximate pressure-

velocity correction relationship does not figure in the final converged solution.

The discrete form of the pressure-correction equation follows by first substituting

the simplified version of Eq. 2.18 into Eq. 2.15,


u_P = ũ_P + u'_P = ũ_P + (p'_w - p'_e) Δy / a_P, (2.19)











and then substituting this into the continuity equation Eq. 2.12 (with an analogous

formula for v_P). The result is

ρΔy²/a_P(u_e) (p'_P - p'_E) + ρΔy²/a_P(u_w) (p'_P - p'_W)
    + ρΔx²/a_P(v_n) (p'_P - p'_N) + ρΔx²/a_P(v_s) (p'_P - p'_S) = b, (2.20)

where the source term b is

b = ρũ_w Δy - ρũ_e Δy + ρṽ_s Δx - ρṽ_n Δx (2.21)


Recall that Eq. 2.20 and Eq. 2.21 are written for the pressure control volumes, so that

there is some interpretation required. The term a_P(u_e) in Eq. 2.20 is the appropriate

a_P for the discretized u-momentum equation, Eq. 2.13. In other words, u_P in Eq. 2.13

is actually u_e, u_w, v_n, or v_s in Eq. 2.20 and 2.21, relative to the pressure control

volumes on the staggered grid. Eq. 2.20 can be rearranged into the same general

form as Eq. 2.11. From Eq. 2.21, it is apparent that the right-hand side term is the

net mass flux entering the control volume, which should be zero in incompressible

flow.
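To make the bookkeeping concrete, the following minimal NumPy sketch assembles
the interior coefficients of Eq. 2.20 and the source term of Eq. 2.21 on a uniform
staggered grid. The face-based array names (ut, vt, apu, apv) are assumptions made
for this illustration, not the variable names of the code used in this work.

import numpy as np

def pprime_coefficients(ut, vt, apu, apv, rho, dx, dy):
    """Eq. 2.20/2.21 for an ni x nj array of pressure control volumes.

    ut[i, j] is the tilde u on the west face of pressure CV (i, j), so
    ut has shape (ni+1, nj); vt, shaped (ni, nj+1), holds the south/north
    face tilde v.  apu and apv are the momentum-equation a_P values at
    the same staggered locations.
    """
    aE = rho * dy**2 / apu[1:, :]   # multiplies (p'_P - p'_E), from u_e
    aW = rho * dy**2 / apu[:-1, :]  # from u_w
    aN = rho * dx**2 / apv[:, 1:]   # from v_n
    aS = rho * dx**2 / apv[:, :-1]  # from v_s
    aP = aE + aW + aN + aS          # diagonal: the sum of its neighbors
    # Eq. 2.21: net mass flux entering each control volume
    b = rho * dy * (ut[:-1, :] - ut[1:, :]) + rho * dx * (vt[:, :-1] - vt[:, 1:])
    return aP, aE, aW, aN, aS, b

The fact that a_P is assembled as the sum of the neighbor coefficients is exactly the
property on which the well-posedness discussion of the next section turns.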

In the formulation of the pressure-correction equation for boundary control vol-

umes, one makes use of the fact that the normal velocity components on the bound-

aries are known from either Dirichlet or Neumann boundary conditions, so no velocity

correction is required there. Consequently, the formulation of Eq. 2.20 for boundary

control volumes does not require any prescription of boundary p' values [60] when

velocity boundary conditions are prescribed. Without the summation from Eq. 2.18,

it is apparent that a zero velocity correction for the outflow boundary u-velocity

component is obtained when p'_w = p'_e; in effect, a Neumann boundary condition on

pressure is implied. This boundary condition is appropriate for an incompressible

fluid because it is physically consistent with the governing equations in which only

the pressure gradient appears. There is a unique pressure gradient but the level is











adjustable by any constant amount. If it happens that there is a pressure specified

on the boundary, for example by Eq. 1.4, then the correction there will be zero, pro-

viding a boundary condition for Eq. 2.20. Thus, it seems that there are no concerns

over the specification of boundary conditions for the p' equations.

2.4 Well-Posedness of the Pressure-Correction Equation

2.4.1 Analysis

To better understand the characteristics of the pressure-correction step in the

SIMPLE procedure, consider a model 3 x 3 computational domain, so that 9 algebraic

equations for the pressure corrections are obtained. Number the control volumes as

shown in Figure 2.3. Then the system of p' equations can be written

| a_P^1  -a_E^1    0     -a_N^1    0       0       0       0       0    | |p'_1|   |b_1|
|-a_W^2   a_P^2  -a_E^2    0     -a_N^2    0       0       0       0    | |p'_2|   |b_2|
|   0    -a_W^3   a_P^3    0       0     -a_N^3    0       0       0    | |p'_3|   |b_3|
|-a_S^4    0       0      a_P^4  -a_E^4    0     -a_N^4    0       0    | |p'_4|   |b_4|
|   0    -a_S^5    0     -a_W^5   a_P^5  -a_E^5    0     -a_N^5    0    | |p'_5| = |b_5|   (2.22)
|   0      0     -a_S^6    0     -a_W^6   a_P^6    0       0     -a_N^6 | |p'_6|   |b_6|
|   0      0       0     -a_S^7    0       0      a_P^7  -a_E^7    0    | |p'_7|   |b_7|
|   0      0       0       0     -a_S^8    0     -a_W^8   a_P^8  -a_E^8 | |p'_8|   |b_8|
|   0      0       0       0       0     -a_S^9    0     -a_W^9   a_P^9 | |p'_9|   |b_9|

with b_i = ρ(u_w^i - u_e^i)Δy + ρ(v_s^i - v_n^i)Δx,


where the superscript designates the cell location and the subscript designates the

coefficient linking the point in question, P, and the neighboring node. The right-hand

side velocities are understood to be tilde quantities as in Eq. 2.21.

In finite-volume discretizations, fluxes are estimated at the control volume faces

which are common to adjacent control volumes, so if the governing equations are

cast in conservation law form, as they are here, the discrete efflux of any quantity

out of one control volume is guaranteed to be identical to the influx into its neighbor.

There is no possibility of internal sources or sinks. In fact this is what makes finite-

volume discretizations preferable to finite-difference discretizations. The following











relationships, using control volume 5 in Figure 2.3 as an example, follow from Eq. 2.20

and the internal consistency of finite-volume discretizations:


a_P^5 = a_E^5 + a_W^5 + a_N^5 + a_S^5 (2.23)

a_E^5 = a_W^6,  a_W^5 = a_E^4,  a_N^5 = a_S^8,  a_S^5 = a_N^2 (2.24)

u_e^5 = u_w^6,  u_w^5 = u_e^4,  v_n^5 = v_s^8,  v_s^5 = v_n^2 (2.25)

Eq. 2.23 states that the coefficient matrix is pentadiagonal and diagonally dominant

for the interior control volumes. Furthermore, when the natural boundary condition

(zero velocity correction) is applied, the appropriate term in Eq. 2.20 for the boundary

under consideration does not appear, and therefore the pressure-correction equations

for the boundary control volumes also satisfy Eq. 2.23. If a pressure boundary condi-

tion is applied so that the corresponding pressure correction is zero, then one would

set p'_E = 0 in Eq. 2.20, for example, which would give a_W + a_N + a_S < a_P. Thus,

either way, the entire coefficient matrix in Eq. 2.22 is diagonally dominant. However,

with the natural prescription for boundary treatment, no diagonal term exceeds the

sum of its off-diagonal terms.

Thus, the system of equations Eq. 2.22 is linearly dependent with the natural

(velocity) boundary conditions, which can be verified by adding the 9 equations

above. Because of Eq. 2.23 and Eq. 2.24 all terms on the left-hand side of Eq. 2.22

identically cancel one another. At all interior control volume interfaces, the right-

hand side terms identically cancel due to Eq. 2.25, and the remaining source terms

are simply the boundary mass fluxes. This cancellation is equivalent to a discrete

statement of the divergence theorem


∫_Ω ∇·V dΩ = ∫_∂Ω V·n d(∂Ω) (2.26)











where Ω is the domain under consideration, V is the velocity vector, and n is the unit

vector normal to its boundary ∂Ω.

Due to the linear dependence of the left-hand side of Eq. 2.22, the boundary mass

fluxes must also sum to zero in order for the system of equations to be consistent.

No solution exists if the linearly dependent system of equations is inconsistent. The

situation can be likened to a steady-state heat conduction problem with source terms

and adiabatic boundaries. Clearly, a steady-state solution only exists if the sum of

the source terms is zero. If there is a net heat source, then the temperature inside

the domain will simply rise without bound if an iterative solution strategy (quasi

time-marching) is used. Likewise, the net mass source in flow problems with open

boundaries must sum to zero for the pressure-correction equation to have a solution.

In other words, global mass conservation is required in discrete form in order for a

solution to exist. The interesting point to note is that during the course of SIMPLE

iterations, when the pressure-correction equation is executed, the velocity field does

not usually conserve mass globally in flow problems with open boundaries, unless

explicit measures are taken to enforce global mass conservation. The purpose of solving

the pressure-correction equations is to drive the local mass sources to zero by suitable

velocity corrections. But the pressure-correction equations which are supposed to

accomplish this purpose do not have a solution unless the net mass source is already

zero. For domains with closed boundaries, global mass conservation is obviously not

an issue.
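The solvability argument is easy to check numerically. The following toy sketch, with
every face coefficient set to 1 purely for illustration, builds the coefficient matrix of
Eq. 2.22 for the 3 x 3 model domain, confirms that its rank is 8 (one linear
dependence), and shows that a solution exists precisely when the source terms sum
to zero:

import numpy as np

n = 3
N = n * n
A = np.zeros((N, N))
for i in range(n):
    for j in range(n):
        k = j * n + i                        # numbering of Figure 2.3
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ii, jj = i + di, j + dj
            if 0 <= ii < n and 0 <= jj < n:  # neighbor exists (natural BCs)
                A[k, jj * n + ii] = -1.0
                A[k, k] += 1.0               # Eq. 2.23: a_P = sum of neighbors

print(np.linalg.matrix_rank(A))              # 8, not 9: linearly dependent

b = np.ones(N)                               # nonzero net mass source
x = np.linalg.lstsq(A, b, rcond=None)[0]
print(np.linalg.norm(A @ x - b))             # nonzero: no solution exists

b -= b.mean()                                # enforce global mass conservation
x = np.linalg.lstsq(A, b, rcond=None)[0]
print(np.linalg.norm(A @ x - b))             # ~0: the system is consistent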

Furthermore, this problem does not only show up when the initial guess is bad.

In the backward-facing step flow discussed below, the initial guess is zero everywhere

except for inflow, which obviously is the worst case as far as a net mass source is

concerned (all inflow and no outflow). But even if one starts with a mass-conserving

initial guess, during the course of iterations the outflow velocity boundary condition











which is necessary to solve the momentum equations will reset the outflow so that

the global mass-conservation constraint is violated.
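One remedy, examined in the experiments below, is to restore the constraint explicitly
before the p'-equations are solved. A minimal sketch, assuming a single inflow plane
and a single outflow plane with uniform spacing (the names are illustrative):

import numpy as np

def correct_outflow(u_in, u_out, rho, dy):
    """Shift the outflow u-velocities by a constant so that the discrete
    global mass-conservation constraint holds; the right-hand sides of
    the p'-equations then sum to zero and the system is consistent.
    """
    deficit = rho * dy * (np.sum(u_in) - np.sum(u_out))  # net mass source
    return u_out + deficit / (rho * dy * u_out.size)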

2.4.2 Verification by Numerical Experiments

Support for the preceding discussion is provided by numerical simulation of two

model problems, a lid-driven cavity flow and a backward-facing step flow. The con-

figurations are shown along with other relevant data in Figure 2.2.

Figure 2.4 shows the outer-loop convergence paths for the lid-driven cavity flow

and the backward-facing step flow, both at Re = 100. The quantities plotted in

Figure 2.4 are the log10 of the global residuals for each governing equation obtained

by summing up the local residuals, each of which is obtained by subtracting the

left-hand side of the discretized equations from the right-hand side. For the cavity

flow there are no mass fluxes across the boundary so, as mentioned earlier, the global

mass conservation condition is always satisfied when the algorithm reaches the point

of solving the system of p'-equations. The residuals have dropped to 10^-7 after 150

iterations, which is very rapid convergence, indicating that good pressure and velocity

corrections are being obtained.

In the backward-facing step flow, however, the flowfield is very slow to develop

because no global mass conservation measure is enforced. During the course of iter-

ations, the mass flux into the domain from the left is not matched by an equal flux

through the outflow boundary, and consequently the system of pressure-correction

equations which is supposed to produce a continuity-satisfying velocity field does not

have a solution. Correspondingly one observes that the outer-loop convergence rate

is about 10 times worse than for cavity flow.

Also, note that the momentum convergence path of the backward-facing step flow

in Figure 2.4 tends to follow the continuity equation, indicating that the pressure and











velocity fields are strongly coupled. The present flow problem bears some similarity to

a fully-developed channel flow, in which the streamwise pressure-gradient and cross-

stream viscous diffusion are balanced, so the observation that pressure and velocity

are strongly coupled is intuitively correct. Thus, the convergence path is controlled

by the development of the pressure field. The slow convergence rate problem is due

to the inconsistency of the system of pressure-correction equations.

The inner-loop convergence path (the SLUR iterations) for the p'-system of equa-

tions must be examined to determine the manner in which the inner-loop inconsis-

tency leads to poor outer-loop convergence rates. Table 2.1 shows leading eigenvalues

for successive line-underrelaxation iteration matrices of the p'-system of equations at

an intermediate iteration for which the outer-loop residuals had dropped to approx-

imately 10^-2.

Largest 3 eigenvalues    Cavity Flow    Back-Step Flow
λ1                       1.0            1.0
λ2                       0.956          0.996
λ3                       0.951          0.984

Table 2.1. Largest eigenvalues of iteration matrices during an intermediate itera-
tion, applying the successive line-underrelaxation iteration scheme to the p'-system of
equations.


In both model problems the spectral radius is 1.0 because the p'-system of equa-

tions is linearly dependent. The next largest eigenvalue is smaller in the cavity flow

computation than in the step flow computation, which means a faster asymptotic con-

vergence rate. However, the difference between 0.996 and 0.956 is not large enough

to produce the significant difference observed in the outer convergence path.

Figure 2.5 shows the inner-loop residuals of the SLUR procedure during an inter-

mediate iteration. The two momentum equations are well-conditioned and converge

to a solution within 4 iterations. In Figure 2.5 for the cavity flow case, the p'-equation











converges to zero, although this happens at a slower rate than the two momentum

equations because of the diffusive nature of the equation. In Figure 2.5 for the back-

step flow, the inner-loop residual stalls at a nonzero value, which is in fact the

initial level of inconsistency in the system of equations, i.e. the global mass deficit.

Given that the system of p'-equations which is being solved does not satisfy the

global continuity constraint, however, the significance or utility of the p'-field that

has been obtained is unknown.

In practice, the overall procedure may still be able to lead to a converged solu-

tion, as in the present case. It appears that the outflow extrapolation procedure,

a zero-gradient treatment utilized here, can help induce the overall computation to

converge to the right solution [72]. Obviously, such a lack of satisfaction of global

mass conservation is not desirable in view of the slow convergence rate.

Further study suggests that the iterative solution to the inconsistent system of

p'-equations converges on a unique pressure gradient, i.e. the difference between p'

values at any two points tends to a constant value, even though the p'-field does not

in general satisfy any of the equations in the system. This relationship is shown in

Figure 2.6, in which the convergence of the difference in p' between the lower-left and

upper-right locations in the domain of the cavity and backward-facing step flows is

plotted. Also shown is the value of p' at the lower-left corner of the domain. For the

cavity flow, there is a solution to the system of p'-equations, and it is obtained by

the SLUR technique in about 10 iterations. Thus all the pressure corrections and the

differences between them tend towards constant values. In the backward-facing step

flow, however, the individual pressure corrections increase linearly with the number

of iterations, symptomatic of the inconsistency in the system of equations. The

differences between p' values approach a constant, however. The rate at which this











unique pressure-gradient field is obtained depends on the eigenvalues of the iteration

matrix.

To resolve the inconsistency problem in the p'-system of equations and thereby

improve the outer-loop convergence rate in the backward-facing step flow, global mass

conservation has been explicitly enforced during the sequential solution procedure.

The procedure used is to compute the global mass deficit and then add a constant

value to the outflow boundary u-velocities to restore global mass conservation. Al-

ternatively, corrections can be applied at every streamwise location by considering

control volumes whose boundaries are the inflow plane, the top and bottom walls

of the channel, and the i=constant line at the specified streamwise location. The

artificially-imposed convection has the effect of speeding up the development of the

pressure field, whose normal development is diffusion-dominated. It is interesting to

note that this physically-motivated approach is in essence an acceleration of conver-

gence of the line-iterative method via the technique called additive correction [45, 69].

The strategy is to adjust the residual on the current line to zero by adding a con-

stant to all the unknowns in the line. This procedure is done for every line, for every

iteration, and generally produces improvement in the SLUR solution of a system of

equations. Kelkar and Patankar [45] have gone one step further by applying additive

corrections like an injection step of a multigrid scheme, a so-called block correction

technique. This technique is exploited to its fullest by Hutchinson and Raithby [42].

Given a fine-grid solution and a coarse grid, discretized equations for the correction

quantities on the coarse grid are obtained by summing the equations for each of the

fine-grid cells within a given coarse grid cell. A solution is then obtained (by direct

methods in [45]) which satisfies conservation of mass and momentum. The corrections

are then distributed uniformly to the fine grid cells which make up the coarse grid











cell, and the iterative solution on the fine grid is resumed. However, experiences have

shown that the net effect of such a treatment for complex flow problems is limited.
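A sketch of the basic line-by-line variant described above is given below for the
five-point p'-system, with arrays indexed [i, j], i being the (vertical) line index, and
with boundary coefficients assumed already zeroed by the natural boundary
treatment. It illustrates the technique; it is not code from [45] or [42].

import numpy as np

def additive_correction_sweep(aP, aE, aW, aN, aS, b, p):
    """One sweep: each vertical line receives a single constant chosen
    so that the line's summed residual is driven to zero."""
    ni, nj = p.shape
    for i in range(ni):
        pE = p[i + 1] if i + 1 < ni else 0.0
        pW = p[i - 1] if i > 0 else 0.0
        pN = np.append(p[i, 1:], 0.0)    # in-line north neighbor
        pS = np.append(0.0, p[i, :-1])   # in-line south neighbor
        r = aE[i]*pE + aW[i]*pW + aN[i]*pN + aS[i]*pS - aP[i]*p[i] + b[i]
        # adding delta to the whole line changes its summed residual by
        # -delta * sum(aP - aN - aS); choose delta to zero that sum
        p[i] += r.sum() / (aP[i] - aN[i] - aS[i]).sum()
    return p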

Figure 2.7 illustrates the improved convergence rate of the continuity equation for

the inner and outer loops, in the backward-facing step flow, when conservation of mass

is explicitly enforced. The inner-loop data is from the 10th outer-loop iteration. In

Figure 2.7, the cavity flow convergence path is also shown to facilitate the comparison.

For the back-step, the overall convergence rate is improved by an order of magnitude,

becoming slightly faster than the cavity flow case. This result reflects the improved

inner-loop performance, also shown in Figure 2.7. The improved performance for the

pressure-correction equation comes at the expense of a slightly slower convergence

rate for the momentum equations, because of the nonlinear convection term.

In short, it has been shown that a consistency condition, which is physically the re-

quirement of global mass conservation, is critical for meaningful pressure-corrections

to be guaranteed. Given natural (velocity) boundary conditions, which lead to a

linearly dependent system of pressure-correction equations, satisfaction of the global

continuity constraint is the only way that a solution can exist, and therefore the only

way that the inner-loop residuals can be driven to zero. For the model backward-

facing step flow in a channel with length L = 4 and a 21 x 9 mesh, the mass-

conservation constraint is enforced globally or at every streamwise location by an

additive-correction technique. This technique produces a 10-fold increase in the con-

vergence rate. Physically, modifying the u velocities has the same effect as adding

a convection term to the Poisson equation for the p'-field, which otherwise develops

very slowly. A coarse grid was used to demonstrate the need for enforcing global

mass conservation. On a finer grid, this issue becomes more critical. In the next

section, the solution accuracy aspects related to mass conservation will be addressed,

and the computations will be conducted with more adequate grid resolution.











2.5 Numerical Treatment of Outflow Boundaries

Continuing with the theme of well-posedness, the next numerical issue to be dis-

cussed is the choice of outflow boundary location. If fluid flows into the domain at

a boundary where extrapolation is applied, then, traditionally, the problem is not

considered to be well-posed, because the information which is being transported into

the domain does not participate in the solution to the problem [60]. Numerically,

however, accurate solutions can be obtained using first-order extrapolation for the ve-

locity components on a boundary where inflow is occurring [72]. Here open boundary

treatment for both steady and time-dependent flow problems is investigated further.

Figures 2.9 and 2.8 present streamfunction contours for a time-dependent flow

problem, impulsively started backward-facing step flow, using central-differencing

for the convection terms and first-order backward-differencing in time. A parabolic

inflow velocity profile is specified, while outflow boundary velocities are obtained by

first-order extrapolation. The Reynolds number based on the average inflow velocity

Uavg and the channel height H is 800. The expansion ratio H/h is 2 as in the model
problem described in Figure 2.2. Time-accurate simulations were performed for two

channel configurations, one with length L = 8 (81 x 41 mesh) and the other with

length L = 16 (161 x 41 mesh). This flow problem has been the subject of some

recent investigations focusing on open boundary conditions [30, 31].

For each time step, the SIMPLE algorithm is used to iteratively converge on a

solution to the unsteady form of the governing equations, explicitly enforcing global

conservation of mass during the course of iterations. In the present study, convergence

was declared for a given time step when the global residuals had been reduced below

10-4. The time-step size was twice the viscous time scale in the y-direction, i.e.











Δt = 2Δy²/ν. Thus a fluid particle entering the domain at the average velocity u =

1 travels 2 units downstream during a time-step.

Figure 2.8 shows the formation of alternate bottom/top wall recirculation regions

during startup which gradually become thinner and elongated as they drift down-

stream. For the L = 16 simulation (Figure 2.8), the transient flowfield has as many

as four separation bubbles at T = 32, the latter two of which are eventually washed

out of the domain. In the L = 8 simulation (Figure 2.9) the streamfunction plots are

at times corresponding to those shown in Figure 2.8. Note that between T = 11 and

T = 32, a secondary bottom wall recirculation zone forms and drifts downstream,

exiting without reflection through the downstream boundary. The time evolution of

the flowfield for the L = 8 and L = 16 simulations is virtually identical.

As can be observed, the facts that a shorter channel length was used in Figure 2.9

and that a recirculating cell may go through the open boundary do not affect the

solutions. Figure 2.10 compares the computed time histories of the bottom wall

reattachment and top wall separation points between the two computations. The

L = 8 and L = 16 curves are perfectly overlapped. The steady-state solutions for

both the L = 8 and L = 16 channel configurations are also shown in Figure 2.9

and 2.8, respectively. Although the outflow boundary cuts the top wall separation

bubble approximately in half, there is no apparent difference between the computed

streamfunction contours for 0 < x < 8. Furthermore, the convergence rate is not

affected by the choice of outflow boundary location.

Figure 2.11 compares the steady-state u and v velocity profiles at x = 7 be-

tween the two computations. The accuracy of the computed results is assessed by

comparison with an FEM numerical solution reported by Gartling [27]. Figure 2.11

establishes quantitatively that the two simulations differ negligibly over 0 < x < 8

(the v profile differs on the order of 10^-3). The velocity scale for the problem is 1.











Neither v profile agrees perfectly with the solution obtained by Gartling, which may

be attributed to the need for conducting further grid refinement studies in the present

work and/or Gartling's work.

Evidently the location of the open boundary is not critical to obtaining a con-

verged solution. This observation indicates that the downstream information is com-

pletely accounted for by the continuity equation. The correct pressure field can de-

velop because the system of p'-equations requires only the boundary mass flux specifi-

cation. If the global continuity constraint is satisfied, the pressure-correction equation

is consistent regardless of whether there is inflow or outflow at the boundary where

extrapolation is applied. The numerical well-posedness of the open boundary com-

putation results in virtually identical flowfield development for the time-dependent

L = 8 and L = 16 simulations as well as steady-state solutions which agree with each

other and follow closely Gartling's benchmark data [27].

2.6 Concluding Remarks

In order for the SIMPLE pressure-correction method to be a well-posed numer-

ical procedure for open boundary problems, explicit steps must be taken to ensure

the numerical consistency of the pressure-correction system of equations during the

course of iterations. For the discrete problem with the natural boundary treatment

for pressure, i.e. normal velocity specified at all boundaries, global mass conserva-

tion is the solvability constraint which must be satisfied in order that the system of

p'-equations is consistent. Without a globally mass-conserving procedure enforced

during each iterative step, the utility of the pressure-corrections obtained at each it-

eration cannot be guaranteed. Overall convergence may still occur, albeit very slowly.

In this regard, the poor outer-loop convergence behavior simply reflects the (poor)

convergence rate of the inner-loop iterations of the SLUR technique. In general, the











inner-loop residual is fixed on the value of the initial level of inconsistency of the

system of p'-equations which physically is the global mass deficit. The convergence

rate can be improved dramatically by explicitly enforcing mass conservation using

an additive-correction technique. The results of numerical simulations of backward-

facing step flow illustrate and support these conclusions.

The mass-conservation constraint also has implications for the issue of proper

numerical treatment of open boundaries where inflow is occurring. Specifically, the

conventional viewpoint that inflow cannot occur at open boundaries without Dirich-

let prescription of the inflow variables can be rebutted, based on the grounds that

the numerical problem is well-posed if the normal velocity components satisfy the

continuity constraint.


















































Figure 2.1. Staggered grid u control volume and the nearby variables which are
involved in the discretization of the u-momentum equation.













Figure 2.2. Description of two model problems. Both are at Re = 100. The cavity
is a square with a top wall sliding to the left, while the backward-facing step is a
4 x 1 rectangular domain with an expansion ratio H/h = 2, and a parabolic inflow
(average inflow velocity = 1). The cavity flow grid is 9 x 9 and the step flow grid is
21 x 9. The meshes and the velocity vectors are shown.









































Figure 2.3. Model 3 x 3 computational domain with numbered control volumes, for
discussion of Eq. 2.22. The staggered velocity components which refer to control
volume 5 are also indicated.


[Sketch: control volumes numbered 1, 2, 3 (bottom row), 4, 5, 6 (middle row), and
7, 8, 9 (top row); the face velocities u_w, u_e, v_s, v_n of control volume 5 are marked.]
























[Two panels: "Re = 100 Cavity Flow" and "Re = 100 Back-Step Flow"; vertical axis:
log10 of global residual; horizontal axes: # of iterations.]

Figure 2.4. Outer-loop convergence paths for the Re = 100 lid-driven cavity and
backward-facing step flows, using central-differencing for the convection terms. Leg-
end: solid line: p' equation; ---: u momentum equation; -.-.-.-: v momentum equation.
























[Two panels: "Re = 100 Cavity Flow" and "Re = 100 Back-Step Flow"; horizontal
axes: # of iterations, 0 to 30.]

Figure 2.5. Inner-loop convergence paths for the Re = 100 lid-driven cavity and
backward-facing step flows. The vertical axis is the log10 of the ratio of the current
residual to the initial residual. Legend: solid line: p' equation; ---: u momentum
equation; -.-.-.-: v momentum equation.
























[Two panels: "Inner Loop for Cavity Flow" and "Inner Loop for Back-Step Flow";
horizontal axes: # of iterations.]

Figure 2.6. Variation of p' with inner-loop iterations. The dashed line is the value
of p' at the lower-left control volume, while the solid line is the difference between
p'_lowerleft and p'_upperright.



























[Two panels: "Outer Loop Convergence Path" and "Inner-Loop Convergence Path";
horizontal axes: # of iterations.]

Figure 2.7. Outer-loop and inner-loop convergence paths of the p' equation for the
backward-facing step model problem, with and without enforcing the continuity con-
straint. (1) conservation of mass not enforced; (2) continuity enforced globally; (3)
cavity flow.

















[Figure 2.8. Time-dependent flowfield for impulsively started backward-facing step
flow, Re = 800; the domain has length L = 16. Streamfunction contours are plotted
at several instants during the evolution to the steady state.]














[Panels at T = 11, T = 15, T = 20, T = 32, and T = ∞ (steady state).]








Figure 2.9. Time-dependent flowfield for impulsively started backward-facing step
flow, Re = 800. The domain has length L = 8. Streamfunction contours are plotted
at several instants during the evolution to the steady-state, which is the last figure.















[Plot: "Time-Evolution of Reattachment/Separation Locations"; horizontal axis:
time, 0 to 50.]



Figure 2.10. Time-dependent location of bottom wall reattachment point and top wall
separation point for Re = 800 impulsively started backward-facing step flow. The
curves for both L = 8 and L = 16 computations are shown; they overlap identically.












[Two panels: "U Velocity Profile at X = 7 For Re = 800 Back-Step Flow" (abscissa
U(Y)) and "V Velocity Profile at X = 7 For Re = 800 Back-Step Flow" (abscissa
V(Y), -0.02 to 0.01).]


Figure 2.11. Comparison of u and v-component of velocity profiles at x = 7.0 for
the L = 16 and L = 8 backward-facing step simulations at Re = 800, with central-
differencing. (o) indicates the grid-independent FEM solution obtained by Gartling.
The v profile is scaled up by 10^3.
















CHAPTER 3
EFFICIENCY AND SCALABILITY ON SIMD COMPUTERS

The previous chapter considered an issue which was important because of its im-

plications for the convergence rate in open boundary problems. The present chapter

shifts gears to focus on the cost and efficiency of pressure-correction methods on

SIMD computers.

As discussed in Chapter 1, the eventual goal is to understand the indirect cost [23],

i.e. the parallel run time, of such methods on SIMD computers, and how this cost

scales with the problem size and the number of processors. The run time is just the

number of iterations multiplied by the cost per iteration. This chapter considers the

cost per iteration.

3.1 Background

The discussion of SIMD computers in Chapter 1 indicated similarities in the

general layout of such machines and in the factors which affect program performance.

More detail is given in this section to better support the discussion of results.

3.1.1 Speedup and Efficiency

Speedup S is defined as

S = T1/Tp (3.1)

where Tp is the measured run time using np processors. In the present work T1 is

the run time of the parallel algorithm on one processor, including both serial and

parallel computational work, but excluding the front-end-to-processor and interpro-

cessor communication. On a MIMD machine it is sometimes possible to actually time












the program on one processor, but each SIMD processor is not usually a capable

serial computer by itself, so T1 must be estimated. The timing tools on the CM-2

and CM-5 are very sophisticated, and can separately measure the time elapsed by

the processors doing computation, doing various kinds of communication, and doing

nothing (waiting for an instruction from the front-end, which might be finishing up

some serial work before it can send another code block). Thus, it is possible to make

a reasonable estimate for T1.

Parallel efficiency is the ratio of the actual speedup to the ideal (np), which reflects

the overhead costs of doing the computation in parallel:

E = S_actual / S_ideal = (T1/Tp) / np (3.2)

If Tcomp is the time in seconds spent by each of the np processors doing useful work

(computation), Tinter-proc is the time spent by the processors doing interprocessor

communication, and Tfe-to-proc is the time elapsed through front-end-to-processor

communication, then each of the processors is busy a total of Tcomp + Tinter-proc

seconds and the total run time on multiple processors is Tcomp + Tinter-proc + Tfe-to-proc

seconds. Assuming that the parallelism is high, i.e. a high percentage of the virtual

processors are not idle, a single processor would need npTcomp time to do the same

work. Thus, T1 = npTcomp, and from Eq. 3.2 E can be expressed as

E = 1 / (1 + (Tinter-proc + Tfe-to-proc)/Tcomp) = 1 / (1 + Tcomm/Tcomp) (3.3)

Since time is work divided by speed, E depends on both machine-related factors and

the implementational factors through Eq. 3.3. High parallel efficiency is not neces-

sarily a product of fast processors or fast communications considered alone; rather, it

is the relative speeds that are important, together with the relative amounts of communication

and computation in the program. Consider the machine-related factors first.
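A small worked example of Eqs. 3.1-3.3 (sketched in Python purely for illustration):
a processor that computes for 3 s and spends 1 s in communication of either kind
runs at E = 1/(1 + 1/3) = 0.75, independent of the absolute speeds and of np.

def parallel_efficiency(t_comp, t_inter_proc, t_fe_to_proc, n_p):
    """Eqs. 3.1-3.3 from measured per-processor timings."""
    t1 = n_p * t_comp                            # estimated one-processor time
    tp = t_comp + t_inter_proc + t_fe_to_proc    # parallel run time
    return (t1 / tp) / n_p                       # speedup over ideal speedup

print(parallel_efficiency(3.0, 0.5, 0.5, 128))   # 0.75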











3.1.2 Comparison Between CM-2, CM-5, and MP-1

A 32-node CM-5 with vector units, a 16k processor CM-2, and a 1k processor

MP-1 were used in the present study. The CM-5 has 4 GBytes total memory, while

the CM-2 has 512 Mbytes, and the MP-1 has 64 MBytes. The peak speeds of these

computers are 4, 3.5, and 0.034 Gflops, respectively, in double precision. Per proces-

sor, the peak speeds are 32, 7, and 0.033 Mflops, with memory bandwidths of 128,

25, and 0.67 Mbytes/s [67, 83]. Clearly these are computers with very different capa-

bilities, even taking into account the fact that peak speeds, which are based only on

the processor speed under ideal conditions, are not an accurate basis for comparison.

In the CM-2 and CM-5 the front-end computers are Sun-4 workstations, while

in the MP-1 the front-end is a Decstation 5000. From Eq. 3.3, it is clear that the

relative speeds of the front-end computer and the processors are important. Their

ratio determines the importance of the front-end-to-processor type of communication.

On the CM-2 and MP-1, there is just one of these intermediate processors, called

either a sequencer or an array control unit, respectively, while on the 32-node CM-5

the 32 SPARC microprocessors have the role of sequencers.

Each SPARC node broadcasts to four vector units (VUs) which actually do the

work. Thus a 32-node CM-5 has 128 independent processors. In the CM-2 the "pro-

cessors" are more often called processing elements (PEs), because each one consists of

a floating-point unit coupled with 32 bit-serial processors. Each bit-serial processor

is the memory manager for a single bit of a 32-bit word. Thus, the 16k-processor

CM-2 actually has only 512 independent processing elements. This strange CM-2

processor design came about basically as a workaround which was introduced to im-

prove the memory bandwidth for floating-point calculations [66]. Compared to the

CM-5 VUs, the CM-2 processors are about one-fourth as fast, with larger overhead











costs associated with memory access and computation. The MP-1 has 1024 4-bit

processors-compared to either the CM-5 or CM-2 processors, the MP-1 processors

are very slow. The generic term "processing element" (PE), which is used occasion-

ally in the discussion below, refers to either one of the VUs, one of the 512 CM-2

processors, or one of the MP-1 processors, whichever is appropriate.

For the present study, the processors are either physically or logically imagined

to be arranged as a 2-d mesh, which is a layout that is well-supported by the data

networks of each of the computers. The data network of the 32-node CM-5 is a

fat tree of height 3, which is similar to a binary tree except the bandwidth stays

constant upwards from height 2 at 160 MBytes/s (details in [83]). One can expect

approximately 480 MBytes/s for regular grid communication patterns (i.e. between

nearest-neighbor SPARC nodes) and 128 MBytes/s for random (global) communica-

tions. The randomly-directed messages have to go farther up the tree, so they are

slower. The CM-2 network (a hypercube) is completely different from the fat-tree net-

work and its performance for regular grid communication between nearest-neighbor

processors is roughly 350 MBytes/s [67]. The grid network on the CM-2 is called

NEWS (North-East-West-South). It is a subset of the hypercube connections se-

lected at run time. The MP-1 has two networks: regular communications use X-Net

(1.25 GBytes/s, peak) which connects each processor to its eight nearest neighbors,

and random communications use a 3-stage crossbar (80 MBytes/s, peak).

To summarize the relative speeds of these three SIMD computers it is sufficient

for the present study to observe that the MP-1 has very fast nearest-neighbor com-

munication compared to its computational speed, while the exact opposite is true for

the CM-2. The ratio of nearest-neighbor communication speed to computation speed

is smaller still for the CM-5 than the CM-2. Again, from Eq. 3.3, one expects that

these differences will be an important factor influencing the parallel efficiency.











3.1.3 Hierarchical and Cut-and-Stack Data Mappings

When there are more array elements (grid points) than processors, each processor

handles multiple grid points. Which grid points are assigned to which processors is

determined by the "data-mapping," also called the data layout. The processors repeat

any instructions the appropriate number of times to handle all the array elements

which have been assigned to it. A useful idealization for SIMD machines, however,

is to pretend there are always as many processors as grid points. Then one speaks of

the "virtual processor" ratio (VP) which is the number of array elements assigned to

each physical processor. The way the data arrays are partitioned and mapped to the

processors is a main concern for developing a parallel implementation. The layout of

the data determines the amount of communication in a given program.

When the virtual processor ratio is 1, there are an equal number of processors

and array elements and the mapping is just one-to-one. When VP > 1 the mapping

of data to processors is either "hierarchical," in CM-Fortran, or "cut-and-stack" in

MP-Fortran. These mappings are also termed "block" and "cyclic" [85], respectively,

in the emerging High-Performance Fortran standard. The relative merits of these

different approaches have not been completely explored yet.

In cut-and-stack mapping, nearest-neighbor array elements are mapped to nearest-

neighbor physical processors. When the number of array elements exceeds the num-

ber of processors, additional memory layers are created. VP is just the number of

memory layers. In the general case, nearest-neighbor virtual processors (i.e. array

elements) will not be mapped to the same physical processor. Thus, the cost of a

nearest-neighbor communication of distance one will be proportional to VP, since the

nearest-neighbors of each virtual processor will be on a different physical processor.

In the hierarchical mapping, contiguous pieces of an array ("virtual subgrids") are











mapped to each processor. The "subgrid size" for the hierarchical mapping is syn-

onymous with VP. The distinction between hierarchical and cut-and-stack mapping

is clarified by Figure 3.1.
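The distinction is easiest to state in one dimension. The sketch below (Python, with
illustrative names) gives the physical processor that owns element i of an n_elems
array under each scheme:

def owner(i, n_elems, n_procs, mapping):
    """'block' is the hierarchical mapping (a contiguous subgrid per
    processor); 'cyclic' is cut-and-stack (elements dealt out round-robin,
    one per memory layer)."""
    vp = n_elems // n_procs          # virtual processor ratio
    return i // vp if mapping == "block" else i % n_procs

# 8 elements on 4 processors (VP = 2): element 2 and its east neighbor
# (element 3) share a processor under block mapping but not under cyclic.
print([owner(i, 8, 4, "block") for i in range(8)])   # [0, 0, 1, 1, 2, 2, 3, 3]
print([owner(i, 8, 4, "cyclic") for i in range(8)])  # [0, 1, 2, 3, 0, 1, 2, 3]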

In hierarchical mapping, for VP > 1, each virtual processor has nearest-neighbors

in the same virtual subgrid, that is, on the same physical processor. Thus, for hier-

archical mapping on the CM-2, interprocessor communication breaks down into two

types (with different speeds)-on-processor and off-processor. Off-processor commu-

nication on the CM-2 has the NEWS speed given above, while on-processor communi-

cation is somewhat faster, because it is essentially just a memory operation. A more

detailed presentation and modelling of nearest-neighbor communication costs for the

hierarchical mapping on the CM-2 is given in [3]. The key idea is that with hierar-

chical mapping on the CM-2 the relative amount of on-processor and off-processor

communication is the area to perimeter ratio of the virtual subgrid.

For the CM-5, there are three types of interprocessor communication: (1) between

virtual processors on the same processor (that is, the same VU), (2) between virtual

processors on different VUs but on the same SPARC node, and (3) between virtual

processors on different SPARC nodes. Between different SPARC nodes (number 3),

the speed is 480 MBytes/s as mentioned above. On the same VU the speed is 16

GBytes/s. (The latter number is just the aggregate memory bandwidth of the 32-

node CM-5.) Thus, although off-processor NEWS communication is slow compared

to computation on the CM-2 and CM-5, good efficiencies can still be achieved as a

consequence of the data mapping which allows the majority of communication to be

of the on-processor type.











3.2 Implementional Considerations

The cost per SIMPLE iteration depends on the choice of relaxation method

(solver) for the systems of equations, the number of inner iterations (ν_u, ν_v, and ν_c),
the computation of coefficients for each system of equations, the correction step, and

the convergence checking and serial work done in program control. The pressure-

correction equation, since it is not underrelaxed, typically needs to be given more

iterations than the momentum equations, and consequently most of the effort is ex-

pended during this step of the SIMPLE method. This is another reason why the

convergence rate of the p'-equations discussed in Chapter 2 is important. Typically

ν_u and ν_v are the same and are ≤ 3, and ν_c ≤ 5ν_u.

In developing a parallel implementation of the SIMPLE algorithm, the first con-

sideration is the method of solving the u, v, and p' systems of equations. For serial

computations, successive line-underrelaxation using the tridiagonal matrix algorithm

(TDMA, whose operation count is O(N)) is a good choice because the cost per it-

eration is optimal and there is long-distance coupling between flow variables (along

lines), which is effective in promoting convergence in the outer iterations. The TDMA

is intrinsically serial. For parallel computations, a parallel tridiagonal solver must be

used (parallel cyclic reduction in the present work). In this case the cost per it-

eration depends not only on the computational workload (O(Nlog2N)) but also on

the amount of communication generated by the implementation on a particular ma-

chine. For these reasons, timing comparisons are made for several implementations

of both point- and line-Jacobi solvers used during the inner iterations of the SIMPLE

algorithm.











Generally, point-Jacobi iteration is not sufficiently effective for complex flow prob-

lems. However, as part of a multigrid strategy, good convergence rates can be ob-

tained (see Chapters 4 and 5). Furthermore, because it only involves the fastest type

of interprocessor communication, that which occurs between nearest-neighbor pro-

cessors, point-Jacobi iteration provides an upper bound for parallel efficiency, against

which other solvers can be compared.
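For reference, a serial NumPy sketch of one point-Jacobi sweep for the five-point
systems follows; in the data-parallel CM-Fortran each shifted product corresponds to
one nearest-neighbor communication, and every unknown is updated simultaneously.
Boundary coefficients are assumed zero, so the zero-padded shifts are harmless.

import numpy as np

def point_jacobi(aP, aE, aW, aN, aS, b, phi, sweeps):
    for _ in range(sweeps):
        nb = np.zeros_like(phi)
        nb[:-1, :] += aE[:-1, :] * phi[1:, :]    # east-neighbor shift
        nb[1:, :]  += aW[1:, :]  * phi[:-1, :]   # west-neighbor shift
        nb[:, :-1] += aN[:, :-1] * phi[:, 1:]    # north-neighbor shift
        nb[:, 1:]  += aS[:, 1:]  * phi[:, :-1]   # south-neighbor shift
        phi = (nb + b) / aP                      # simultaneous update
    return phi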

The second consideration is the treatment of boundary computations. In the

present implementation, the coefficients and source terms for the boundary control

volumes are computed using the interior control volume formula and mask arrays.

Oran et al. [57] have called this trick the uniform boundary condition approach.

All coefficients can be computed simultaneously. The problem with computing the

boundary coefficients separately is that some of the processors are idle, which de-

creases E. For the CM-5, which is "synchronized MIMD" instead of strictly SIMD,

there exists limited capability to handle both boundary and interior coefficients si-

multaneously without formulating a single all-inclusive expression. However, this

capability cannot be utilized if either the boundary or interior formulas involve in-

terprocessor communication, which is the case here. As an example of the uniform

approach, consider the source terms for the north boundary u control volumes, which

are computed by the formula

b = aN UN + (pw - pe) Δy (3.4)


Recall that aN represents the discretized convective and diffusive flux terms, and UN

is the boundary value, and in the pressure gradient term, Δy is the vertical dimension

of the u control volume and pw/pe are the west/east u-control-volume face pressures

on the staggered grid. Similar modifications show up in the south, east, and west

boundary u control volume source terms. To compute the boundary and interior











source terms simultaneously, the following implementation is used:


b = aboundary Uboundary + (pw - pe) Δy (3.5)


where

Uboundary = UN IN + US IS + UE IE + UW IW (3.6)

and

aboundary = aN IN + aS IS + aE IE + aW IW (3.7)

IN, IS, IE, and IW are the mask arrays, which have the value 1 for the respective
boundary control volumes and 0 everywhere else. They are initialized once, at the

beginning of the program. Then, every iteration, there are four extra nearest-neighbor

communications. A comparison of the uniform approach with an implementation that

treats each boundary separately is discussed in the results.
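In array form, the uniform treatment reduces to a handful of masked multiply-adds,
as in this NumPy sketch (the names mirror Eqs. 3.5-3.7; the actual implementation
is CM-Fortran):

import numpy as np

def uniform_source(aN, aS, aE, aW, uN, uS, uE, uW, pw, pe, dy,
                   IN, IS, IE, IW):
    u_bnd = uN*IN + uS*IS + uE*IE + uW*IW    # Eq. 3.6
    a_bnd = aN*IN + aS*IS + aE*IE + aW*IW    # Eq. 3.7
    # one expression for every control volume; the 0/1 masks switch the
    # boundary contribution on only where it applies (Eq. 3.5)
    return a_bnd * u_bnd + (pw - pe) * dy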

3.3 Numerical Experiments

The SIMPLE algorithm for two-dimensional laminar flow has been timed on a

range of problem sizes from 8 x 8 to 1024 x 1024 which, on the CM-5, covers up

to VP = 8192. The convection terms are central-differenced. A fixed number (100)

of outer iterations are timed using as a model flow problem the lid-driven cavity

flow at Re = 1000. The timings were made with the "Prism" timing utility on

the CM-2 and CM-5, and the "dpuTimer" routines on the MP-1 [52, 86]. These

utilities can be inaccurate if the front-end machine is heavily loaded, which was the

case with the CM-2. Thus, on the CM-2 all cases were timed three times and the

fastest times were used, as recommended by Thinking Machines [82]. Prism times

every code block and accumulates totals in several categories, including computation

time for the nodes (Tcomp), "NEWS" communication (Tnews), and irregular-pattern

"SEND" communication. Also it is possible to infer Tfe-to-proc from the difference











between the processor busy time and the elapsed time. In the results Tcomm is the

sum of the "NEWS" and "SEND" interprocessor times. The front-end-to-processor

communication is separate. Additionally, the component tasks of the algorithm have

been timed, namely the coefficient computations (Tcoe/,), the solver (Tsoi,,), and the

velocity-correction and convergence-checking parts.

3.3.1 Efficiency of Point and Line Solvers for the Inner Iterations

Figure 3.2, based on timings made on the CM-5, illustrates the difference in

parallel efficiency for SIMPLE using point-Jacobi and line-Jacobi iterative solvers. E

is computed from Eq. 3.3 by timing Tcomm and Tcomp introduced above. Problem size

is given in terms of the virtual processor ratio VP previously defined.

There are two implementations each with different data layouts, for point-Jacobi

iteration. One ignores the distinction between virtual processors which are on the

same physical processor and those which are on different physical processors. Each

array element is treated as if it is a processor. Thus, interprocessor communication

is generated whenever data is to be moved, even if the two virtual processors do-

ing the communication happen to be on the same physical processor. To be more

precise, a call to the run-time communication library is generated for every array el-

ement. Then, those array elements (virtual processors) which actually reside on the

same physical processor are identified and the communication is done as a memory

operation-but the unnecessary overhead of calling the library is incurred. Obviously

there is an inefficiency associated with pretending that there are as many processors

as array elements, but the tradeoff is that this is the most straightforward, and indeed

the intended, way to do the programming. In Figure 3.2, this approach is labelled

"NEWS," with the symbol "o." The other implementation is labelled "on-VU," with











the symbol "+," to indicate that interprocessor communication between virtual pro-

cessors on the same physical processor is being eliminated-the programming is in a

sense being done "on-VU."

To indicate to the compiler the different layouts of the data which are needed,

the programmer inserts compiler directives. For the "NEWS" version, the arrays are

laid out as shown in this example for a 1k x 1k grid and an 8 x 16 processor layout

on the CM-5:

REAL*8 A(1024,1024)
$CMF LAYOUT A(:BLOCK=128 :PROCS=8, :BLOCK=64 :PROCS=16)


Thus, the subgrid shape is 128 x 64, with a subgrid size (VP) of 8192 (this hap-

pens to be the biggest problem size for my program on a 32-node CM-5 with 4GBytes

of memory). When shifting all the data to their east nearest-neighbor, for example,

by far the large majority of transfers are on-VU and could be done without real inter-

processor communication. But there are only 2 dimensions in A, so that data-parallel

program statements cannot specifically access certain array elements, i.e. the ones on

the perimeter of the subgrid. Thus it is not possible with the "NEWS" layout to

treat interior virtual processors differently from those on the perimeter, and conse-

quently data shifts between the interior virtual processors generate interprocessor

communication even though it is unnecessary.

In the "on-VU" version, a different data layout is used which makes explicit to the

compiler the boundary between physical processors. The arrays are laid out without

virtual processors:

$CMF LAYOUT A(:SERIAL,:SERIAL,:BLOCK=1 :PROCS=8,:BLOCK=1 :PROCS=16)


The declaration must be changed accordingly, to A(128,64,8,16). Normally it is

inconvenient to work with the arrays in this manner. Thus the approach taken here











is to use an "array alias" of A [84]. In other words, this is an EQUIVALENCE func-

tion for the data-parallel arrays (similar to the Fortran77 EQUIVALENCE concept),

which equates A(1024,1024) with A(128,64,8,16), with the different LAYOUTs given

above. It is the alias instead of the original A which is used in the on-VU point-

Jacobi implementation. In the solver, the "on-VU" layout is used; everywhere else,

the more convenient "NEWS" layout is used. The actual mechanism by which the

equivalencing of distributed arrays can be accomplished is not too difficult to under-

stand. The front-end computer stores "array descriptors," which contain the array

layout, the starting address in processor memory, and other information. The actual

layout in each processors' memory is linear and doesn't change, but multiple array

descriptors can be generated for the same data. This descriptor multiplicity is what

array aliasing accomplishes. With the "on-VU" programming style, the compiler

does not generate communication when the shift of data is along a SERIAL axis.

Thus, interprocessor communication is generated only when the virtual processors

involved are on different physical processors, i.e. only when it is truly necessary. The

difference in the amount of communication is substantial for large subgrid sizes.

For both the "NEWS" and the "on-VU" curves in Figure 3.2, E is initially very

low, but as VP increases, E rises until it reaches a peak value of about 0.8 for the

"NEWS" version and 0.85 for the "on-VU" version. The trend is due to the amor-

tization of the front-end-to-processor and off-VU (between VUs which are physically

under control of different SPARC nodes) communication. The former contributes a

constant overhead cost per Jacobi iteration to Tcomm, while the latter has a VP^(1/2)

dependency [3]. However, it does not appear from Figure 3.2 that these two terms'

effects can be distinguished from one another.

For VP > 2k, the CM-5 is computing roughly 3/4 of the time for the implementa-

tion which uses the "NEWS" version of point-Jacobi, with the remainder split evenly











between front-end-to-processor communication and on-VU interprocessor communi-

cation. It appears that the "on-VU" version has more front-end-to-processor com-

munication per iteration, so there is, in effect, a price of more front-end-to-processor

communication to pay in exchange for less interprocessor communication. Conse-

quently it takes VP > 4k to reach peak efficiency instead of 2k with the "NEWS"

version. For VP > 4k, however, E is about 5% 10% higher than for the "NEWS"

version because the on-VU communication has been replaced by straight memory

operations.

The observed difference would be even greater if a larger part of the total parallel

run time was spent in the solver. For the large VP cases in Figure 3.2, approximately

equal time was spent computing coefficients and solving the systems of equations.

"Typical" numbers of inner iterations were used, 3 each for the u and v momentum

equations, and 9 for the p' equation. From Figure 3.2, then, it appears that the ad-

vantage of the "on-VU" version over the "NEWS" version of point-Jacobi relaxation

within the SIMPLE algorithm is around 0.1 in E, for large problem sizes.

Red/black analogues to the "NEWS" and "on-VU" versions of point-Jacobi iter-

ation have also been tested. Red/black point iteration done in the "on-VU" manner

does not generate any additional front-end-to-processor communication, and there-

fore takes almost an identical amount of time as point-Jacobi. Thus red/black point

iterations are recommended when the "on-VU" layout is used due to their improved

convergence rate. However, with the "NEWS" layout, red/black point iteration gen-

erates two code blocks instead of one, and reduces by 2 the amount of computation

per code block. This results in a substantial (~35% for the VP = 8k case) in-

crease in run time. Thus, if using "NEWS" layouts, red/black point iteration is not

cost-effective.











There are also two implementations of line-Jacobi iteration. In both, one inner

iteration consists of forming a tridiagonal system of equations for the unknowns in

each vertical line by moving the east/west terms to the right-hand side, solving the

multiple systems of equations simultaneously, and repeating the procedure for the

horizontal lines.

In the first version, parallel cyclic reduction is used to solve the multiple tridiag-

onal systems of equations (see [44] for a clear presentation). This involves combining

equations to decouple the system into even and odd equations. The result is two

tridiagonal systems of equations each half the size of the original. The reduction step

is repeated log2 N times, where N is the number of unknowns in each line. Thus, the

computational operation count is O(Nlog2N). Interprocessor communication occurs

for every unknown for every step, thus the communication operation count is also

O(Nlog2N). However, the distance for communication increases every step of the re-

duction by a factor of 2. For the first step, nearest-neighbor communication occurs,

while for the second step, the distance is 2, then 4, etc. Thus, the net communi-

cation speed is slower than the nearest-neighbor type of communication. Figure 3.2

confirms this argument-E peaks at about 0.5 compared to 0.8 for point-Jacobi it-

eration. In other words, for VP > 4k, interprocessor communication takes as much

time as computation with the line-Jacobi solver using cyclic reduction.
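A serial NumPy sketch of parallel cyclic reduction for one tridiagonal system follows;
on the CM-5 the identical arithmetic is performed for all lines at once, and each
array shift of stride s becomes an interprocessor communication of distance s. The
index clamping at the ends is safe because the corresponding coupling coefficients
are already zero there.

import numpy as np

def pcr_solve(a, b, c, d):
    """a, b, c: sub-, main-, super-diagonals (a[0] = c[-1] = 0); d: RHS."""
    a, b, c, d = (np.array(x, dtype=float) for x in (a, b, c, d))
    n = b.size
    idx = np.arange(n)
    s = 1
    while s < n:                          # log2(N) reduction steps
        lo = np.maximum(idx - s, 0)       # equation at distance s below
        hi = np.minimum(idx + s, n - 1)   # equation at distance s above
        alpha = -a / b[lo]
        gamma = -c / b[hi]
        b, d = (b + alpha * c[lo] + gamma * a[hi],
                d + alpha * d[lo] + gamma * d[hi])
        a, c = alpha * a[lo], gamma * c[hi]
        s *= 2                            # communication distance doubles
    return d / b                          # fully decoupled 1 x 1 systems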

In the second version, the multiple systems of tridiagonal equations are solved

using the standard TDMA algorithm along the lines. To implement this version,

one must remap the arrays from (:NEWS,:NEWS) to (:NEWS,:SERIAL), for the

vertical lines, and to (:SERIAL,:NEWS) for the horizontal lines. This change from

rectangular subgrids to 1-d slices is the most time-consuming step, involving a global

communication of data ("SEND" instead of "NEWS"). Applied along the serial di-

mension, the TDMA does not generate any interprocessor communication. Some











front-end-to-processor communication is generated by the incrementing of the DO-

loop index, but unrolling the DO-loop helps to amortize this overhead cost to some

extent. Thus, in Figure 3.2 E is approximately constant at 0.14, except for very small

VP. The global communication is much slower than computation and consequently

there is not enough computation to amortize the communication. Furthermore, the

constant E implies from Eq. 3.3 that Tcomm and Tcomp both scale in the same way

with problem size. It is evident that Tcomp ~ VP because the TDMA is O(N). Thus

constant E implies Tcomm ~ VP. This means doubling VP doubles Tcomm, indicating

the communication speed has reached its peak, which further indicates that the full

bandwidth of the fat-tree is being utilized.
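For contrast, a sketch of the TDMA itself (the Thomas algorithm): the forward
elimination is a serial recurrence in which each step depends on the previous
unknown, which is why this version must run along a :SERIAL axis and why it
generates no interprocessor communication.

import numpy as np

def tdma_solve(a, b, c, d):
    """a, b, c: sub-, main-, super-diagonals (a[0] = c[-1] = 0); d: RHS."""
    n = b.size
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                 # serial forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):        # serial back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x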

The disappointing performance of the standard line-iterative approach using the

TDMA points out the important fact that, for the CM-5, global communication

within inner iterations is intolerable. There is not enough computation to amortize

slow communication in the solver for any problem size. With parallel cyclic reduction,

where the regularity of the data movement allows faster communication, the efficiency

is much higher, although still significantly lower than for point-iterations. Additional

improvement can be sought by using the "on-VU" data layout to implement the

line-iterative solver within each processor's subgrid. This implementation essentially

trades interprocessor communication for the front-end-to-PE type of communication,

and in practice a front-end bottleneck develops. For the remainder of the discussion,

all line-Jacobi results refer to the parallel cyclic reduction implementation.

On the MP-1, the front-end-to-processor communication is not a major concern,

as can be inferred from Figure 3.3. The efficiency of the SIMPLE algorithm using

the point-Jacobi solver is plotted for each machine for the range of problem sizes

corresponding to the cases solved on the MP-1. The CM-2 and CM-5 can solve

much larger problems, so for comparison purposes only part of their data is shown.











Also, because the computers have different numbers of processors, the number of grid

points is used instead of VP to define the problem size.

As in Figure 3.2, each curve exhibits an initial rise corresponding to the amortiza-

tion of the front-end-to-processor communication and, for the CM-2 and CM-5, the

off-processor "NEWS" communication. On the MP-1, peak E is reached for small

problems (VP > 32). Due to the MP-1's relatively slow processors, the computa-

tion time quickly amortizes the front-end-to-processor communication time as VP

increases. Furthermore, because the relative speed of X-Net communication is fast,

the peak E is high, 0.85. On the CM-2, the peak E is 0.4, and this efficiency is

reached for approximately VP > 128. On the CM-5, the peak E is 0.8, but this

efficiency is not reached until VP > 2k. If computation is fast, then the rate of in-

crease of E with VP depends on the relative cost of on-processor, off-processor, and

front-end-to-processor communication. If the on-processor communication is fast,

larger VP is required to reach peak E. Thus, on the CM-5, the relatively fast on-VU

communication is simultaneously responsible for the good (0.8) peak E, and the fact

that very large problem sizes, (VP > 2k, 64 times larger than on the MP-1), are

needed to reach this peak E.

The aspect ratio of the virtual subgrid constitutes a secondary effect of the data

layout on the efficiency for hierarchical mapping. The major influence on E depends

on VP, i.e. the subgrid size, but the subgrid shape matters, too. This dependence

comes into play due to the different speeds of the on-processor and off-processor types

of communication. For a fixed number of points, higher-aspect-ratio subgrids have higher perimeter-to-area ratios, and thus relatively more off-processor communication than square subgrids.
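
The trend is easy to quantify with a perimeter-to-area proxy (a rough model, not measured data): for a subgrid of VP points and aspect ratio AR, the sides are (VP*AR)^1/2 by (VP/AR)^1/2, and the boundary traffic per unit of interior work grows with the perimeter.

    # Perimeter-to-area ratio of a VP-point subgrid with aspect ratio AR,
    # a crude proxy for off-processor communication per unit computation.
    def perim_over_area(vp, ar):
        nx = (vp * ar) ** 0.5
        ny = (vp / ar) ** 0.5
        return 2.0 * (nx + ny) / vp

    for ar in (1, 4, 7, 16):
        print(ar, perim_over_area(1024, ar))
    # For VP = 1024: 0.125 at AR = 1 versus about 0.27 at AR = 16,
    # roughly twice the boundary traffic for the same interior work.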

Figure 3.4 gives some idea of the relative importance of the subgrid aspect ratio

effect. Along each curve the number of grid points is fixed, but the grid dimensions

vary, which, for a given processor layout, causes the subgrid shape (aspect ratio), to











vary. For example, on the CM-5 with an 8 x 16 processor layout, the following grids

were used corresponding to the VP = 1024 CM-5 curve: 256 x 512, 512 x 256, 680 x

192, and 1024 x 128. These cases give subgrid aspect ratios of 1, 4, 7, and 16. Tnews

is the time spent in the "NEWS" type of interprocessor communication and Tcomp is the

time spent doing computation during 100 SIMPLE iterations. The solver for these

results is point-Jacobi relaxation.

For the VP = 1024 CM-5 case, increasing the aspect ratio from 1 to 16 causes

Tnews/Tcomp to increase from 0.3 to 0.5. This increase in Tnews/Tcomp increases the
run time for 100 iterations from 15s to 20s, and decreases the efficiency from 0.61 to

0.54. For the VP = 8192 CM-5 case, increasing the aspect ratio from 1 to 16 causes

Tnews/Tcomp to increase from 0.19 to 0.27. This increase in Tnews/Tcomp increases the
run time for 100 iterations from 118s to 126s, and decreases the efficiency from 0.74

to 0.72. Thus, the aspect ratio effect diminishes as VP increases due to the increasing

area of the subgrid. In other words the variation in the perimeter length matters less,

percentage-wise, as the area increases. The CM-2 results are similar. However, on

the CM-2 the on-PE type of communication is slower than on the CM-5, relative to

the computational speed. Thus, Tnews/Tcomp ratios are higher on the CM-2.

3.3.2 Effect of Uniform Boundary Condition Implementation

In addition to the choice of solver, the treatment of boundary coefficient computa-

tions was discussed earlier as an important consideration affecting parallel efficiency.

Figure 3.5 compares the implementation described in the introductory section of this

chapter, to an implementation which treats the boundary control volumes separately

from the interior control volumes. The latter approach involves some 1-d operations

which leave some processors idle.











The results indicated in Figure 3.5 were obtained on the CM-2, using point-

Jacobi relaxation as the solver. With the uniform approach, the ratio of the time

spent computing coefficients, Tcoeff, to the time spent solving the equations, Tsolve,

remains constant at 0.6 for VP > 256. Both Tcoeff and Tsolve scale with VP in this case, so doubling VP doubles both Tcoeff and Tsolve, leaving their ratio unchanged. The value

0.6 reflects the relative cost of coefficient computations compared to point-Jacobi

iteration. There are three equations for which coefficients are computed and 15 total

inner iterations, 3 each for the u and v equations, and 9 for the p' equation. Thus if

more inner iterations are taken, the ratio of Tcoeff to Tsolve will decrease, and vice-versa. With the 1-d implementation, Tcoeff/Tsolve increases until VP > 1024. Both Tcoeff and Tsolve scale with VP asymptotically, but Figure 3.5 shows that Tcoeff has an apparently very significant square-root component due to the boundary operations. If N is the number of grid points and np is the number of processors, then VP = N/np. For boundary operations, N^1/2 control volumes are computed in parallel with only np^1/2 processors, hence the VP^1/2 contribution to Tcoeff. From Figure 3.5, it appears

that very large problems are required to reach the point where the interior coefficient

computations amortize the boundary coefficient computations. Even for large VP

when Tcoeff/Tsolve is approaching a constant, this constant is larger, approximately

0.8 compared to 0.6 for the uniform approach, due to the additional front-end-to-

processor communication which is intrinsic to the 1-d formulation.
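
The shape of the 1-d curve in Figure 3.5 is consistent with a simple two-term cost model, Tcoeff(VP) ~ a*VP + b*VP^1/2, in which the interior work scales with the subgrid area and the boundary work keeps only a square root of the processors busy. The coefficients below are made up for illustration; only the trend matters.

    # Illustrative two-term model: interior work ~ VP, boundary work ~ sqrt(VP).
    a, b = 1.0, 20.0          # assumed relative costs, not measured values
    for vp in (64, 256, 1024, 4096):
        boundary_share = b * vp**0.5 / (a * vp + b * vp**0.5)
        print(vp, round(boundary_share, 2))
    # The boundary share decays only like VP**-0.5, which is why very
    # large problems are needed before the interior coefficient work
    # amortizes the 1-d boundary operations.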

3.3.3 Overall Performance

Table 3.1 summarizes the relative performance of SIMPLE on the CM-2, CM-5,

and MP-1 computers, using point and line-iterative solvers and the uniform boundary











condition treatment. In the first three cases the "NEWS" implementation of point-

Jacobi relaxation is the solver, while the last two cases are for the line-Jacobi solver

using cyclic reduction.

Machine        Solver         Problem Size   VP     Tp      Time/Iter./Pt.   Speed (MFlops)   % Peak Speed
512 PE CM-2    Point-Jacobi   512 x 1024     1024   188 s   2.6 x 10^-6 s    147               4
128 VU CM-5    Point-Jacobi   736 x 1472     8192   137 s   1.3 x 10^-6 s    417              10
1024 PE MP-1   Point-Jacobi   512 x 512       256   316 s   1.2 x 10^-5 s     44*             59
512 PE CM-2    Line-Jacobi    512 x 1024     1024   409 s   7.8 x 10^-6 s    133               3
128 VU CM-5    Line-Jacobi    736 x 1472     8192   453 s   4.2 x 10^-6 s    247               6

Table 3.1. Performance results for the SIMPLE algorithm for 100 iterations of the
model problem. The solvers are the point-Jacobi ("NEWS") and line-Jacobi (cyclic
reduction) implementations. 3, 3, and 9 inner iterations are used for the u, v, and p'
equations, respectively. *The speeds are for double-precision calculations, except on
the MP-1.


In Table 3.1, the speeds reported are obtained by comparing the timings with

the identical code timed on a Cray C90, using the Cray hardware performance mon-

itor to determine Mflops. In terms of Mflops, the CM-2 version of the SIMPLE

algorithm's performance appears to be consistent with other CFD algorithms on the

CM-2. Jesperson and Levit [44] report 117 Mflops for a scalar implicit version of an

approximate factorization Navier-Stokes algorithm using parallel cyclic reduction to

solve the tridiagonal systems of equations. This result was obtained for a 512 x 512

simulation of 2-d flow over a cylinder using a 16k CM-2, as in the present study, although a different execution model was used (see [3, 47] for details). The measured time per time-step per grid point was 1.6 x 10^-5 seconds. By comparison, the performance of

the SIMPLE algorithm for the 512 x 1024 problem size using the line-Jacobi solver is











133 Mflops and 7.8 x 10^-6 seconds per iteration per grid point. Egolf [20] reports that the

TEACH Navier-Stokes combustor code based on a sequential pressure-based method

with a solver that is comparable to point-Jacobi relaxation, obtains a performance

which is 3.67 times better than a vectorized Cray X-MP version of the code, for a

model problem with 3.2 x 10^4 nodes. The present program runs 1.6 times faster than

a single Cray C90 processor for a 128 x 256 problem (32k grid points). One Cray

C-90 processor is about 2-4 times faster than a Cray X-MP. Thus, the present code

runs comparably fast.

3.3.4 Isoefficiency Plot

Figures 3.2-3.5 addressed the effects of the inner-iterative solver, the boundary

treatment, the data layout, and the variation of parallel efficiency with problem size

for a fixed number of processors. Varying the number of processors is also of interest

and, as discussed in Chapter 1, an even more practical numerical experiment is to

vary np in proportion with the problem size, i.e. the scaled-size model.

Figure 3.6, which is based on the point-Jacobi MP-1 timings, incorporates the

above information into one plot, which has been called an isoefficiency plot by Kumar

and Singh [46]. The lines are paths along which the parallel efficiency E remains

constant as the problem size and the number of processors np vary. Using the point-

Jacobi solver and the uniform boundary coefficient implementation, each SIMPLE

iteration has no substantial contribution from operations which are less than fully

parallel or from operations whose time depends on the number of processors. The

efficiency is only a function of the virtual processor ratio, thus the lines are straight.

Much of the parameter space is covered by efficiencies between 0.6 and 0.8.
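
This behavior follows directly from Eq. 3.3, the parallel efficiency E = T1/(np*Tp): if Tp is a function of VP alone, then holding VP = N/np fixed holds E fixed, and the constant-E paths in the (np, N) plane are the straight lines N = VP*np. The sketch below makes the bookkeeping explicit; the saturating curve E(VP) is a made-up model used only to illustrate the construction, not a fit to the MP-1 data.

    # Constant-efficiency paths when E depends only on VP = N / np.
    def efficiency(vp, vp_half=200.0, e_peak=0.85):
        return e_peak * vp / (vp + vp_half)    # illustrative model only

    vp_target = 800.0                          # pick a point on the E(VP) curve
    for np_ in (1024, 2048, 4096, 8192):
        n = vp_target * np_                    # grid points needed at this np
        print(np_, int(n), round(efficiency(vp_target), 2))
    # Doubling np at fixed VP doubles N and leaves E (about 0.68 here)
    # unchanged: a straight line in Figure 3.6.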

The reason that the present implementation is linearly scalable is that the oper-

ations are all scalable: each SIMPLE iteration has predominantly nearest-neighbor











communication and computation and full parallelism. Thus, Tp depends on VP.

Local communication speed does not depend on np.

T1 depends on the problem size N. Thus, as N and np are increased in proportion,

starting from some initial ratio, the efficiency from Eq. 3.3 stays constant. If the initial

problem size is large and the corresponding parallel run time is acceptable, then one

can quickly get to very large problem sizes while still maintaining Tp constant by

increasing n, a relatively small amount (along the E = 0.85 curve). If the desired

run time is smaller, then initially (i.e. starting from small np) the efficiency will be

lower. Then the scaled-size experiment requires relatively more processors to get

to a large problem size along the constant efficiency (constant Tp for point-Jacobi

iterations) curve. Thus, the most desirable situation occurs when the efficiency is

high for an initially small problem size.

For this case the fixed-time and scaled-size methods are equivalent, because the per-iteration cost T1 depends linearly on the problem size N. However, this is not the case when

the SIMPLE inner iterations are done with the line-Jacobi solver using parallel cyclic

reduction. Cyclic reduction requires (13 log2 N+1)N operations to solve a tridiagonal

system of N equations [44]. Thus, T1 ~ (13 log2 N + 1)N and, on np = N processors, Tp ~ 13 log2 N + 1, because every processor is active during every step of the reduction and there are log2 N steps. Since VP = 1, every processor's time is proportional

to the number of steps, assuming each step costs about the same.

In the scaled-size approach, one doubles np and N together, which therefore gives

T1 ~ (26 log2 2N + 2)N and Tp ~ 13 log2 2N + 1. The efficiency is 1, but Tp is increased

and T1 is more than doubled. In the fixed-time approach, then, one concludes that

N must be increased by a factor which is less than two, and np must be doubled, in

order to maintain constant Tp. If a plot like Figure 3.6 is constructed, it should be

done with T1 instead of N as the measure of problem size. In that case, the lines











of constant efficiency would be described as T1 ~ np^a, with a > 1. The ideal case is a = 1. In addition to the operation count, there is another factor which reduces the scalability of cyclic reduction, namely that the time per step is not actually uniform, as was assumed above; later steps require communication over longer distances, which is slower. In practice, however, no more than a few steps are necessary, because the

coupling between widely-separated equations becomes very weak. As the system is

reduced the diagonal becomes much larger than the off-diagonal terms which can

then be neglected and the reduction process abbreviated.
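
The operation-count bookkeeping for the scaled-size experiment can be checked in a few lines (a sketch using the (13 log2 N + 1)N count quoted above, and assuming a uniform cost per step):

    from math import log2

    def t1(n): return (13 * log2(n) + 1) * n   # serial cyclic-reduction work
    def tp(n): return 13 * log2(n) + 1         # parallel time on np = n processors

    for n in (2**10, 2**11, 2**12):
        e = t1(n) / (n * tp(n))                # Eq. 3.3 with np = n
        print(n, round(tp(n), 1), round(e, 3))
    # E stays at 1, but Tp grows by 13 with each doubling of N: constant
    # efficiency no longer implies constant run time, so the isoefficiency
    # lines T1 ~ np**a steepen to a > 1 when measured at fixed Tp.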

In short, the basic prerequisite for scaled-size constant efficiency is that the

amount of work per SIMPLE iteration varies with VP and that the overheads and

inefficiencies, specifically the time spent in communication and the fraction of idle

processors, do not grow relative to the useful computational work as np and N are

increased proportionally. The SIMPLE implementation developed here using the point-iterative solvers, Jacobi and red/black, has this linear computational scalability property.

On the other hand, the number of iterations required by point-iterative methods increases at a rate greater than the problem size, so although Tp can be maintained constant while the problem size and np are scaled up, the convergence rate deteriorates. Hence the

total run time (cost per iteration multiplied by the number of iterations) increases.

This lack of numerical scalability of standard iterative methods like point-Jacobi

relaxation is the motivation for the development of multigrid strategies.

3.4 Concluding Remarks

The SIMPLE algorithm, especially using point-iterative methods, is efficient on

SIMD machines and can maintain a relatively high efficiency as the problem size and

the number of processors are scaled up. However, boundary coefficient computations











need to be folded in with interior coefficient computations to achieve good efficiencies

at smaller problem sizes. For the CM-5, the inefficiency caused by idle processors

in a 1-d boundary treatment was significant over the entire range of problem sizes

tested. The line-Jacobi solver based on parallel cyclic reduction leads to a lower

peak E (0.5 on the CM-5) than the point-Jacobi solver (0.8), because there is more

communication and on average this communication is less localized. On the other

hand, the asymptotic convergence rates of the two methods are also different and need

to be considered on a problem-by-problem basis. The speeds which are obtained with

the line-iterative method are consistent and comparable with other CFD algorithms

on SIMD computers.

The key factor in obtaining high parallel efficiency for the SIMPLE algorithm

on the computers used is fast nearest-neighbor communication relative to the speed

of computation. On the CM-2 and CM-5, hierarchical mapping allows on-processor

communication to dominate the slower off-processor form(s) of communication for

large VP. The efficiency is low for small problems because of the relatively large

contribution to the run time from the front-end-to-processor type of communication,

but this type of communication is constant and becomes less important as the problem

size increases.

Once the peak E is reached, the efficiency is determined by the balance of compu-

tation and on-processor communication speeds. For the CM-5, using a point-Jacobi solver, E approaches approximately 0.8, while on the CM-2 the peak efficiency is 0.4,

which reflects the fact that the CM-5 vector units have a better balance, at least for

the operations in this algorithm, than the CM-2 processors.

The rate at which E approaches the peak value depends on the relative contribu-

tions of on- and off-processor communication and front-end-to-processor communica-

tion to the total run time. On the CM-5, VP > 2k is required to reach peak E. This











problem size is about one-fourth the maximum size which can be accommodated,

and yet still larger than many computations on traditional vector supercomputers.

Clearly a gap is developing between the size of problems which can be solved effi-

ciently in parallel and the size of problems which are small enough to be solved on

serial computers.

For parallel computations of all but the largest problems, then, the data layout

issue is very important: in going from a square subgrid to one with an aspect ratio of 16, for a VP = 1k case on the CM-5, the run time increased by about a third (from 15 s to 20 s). On the MP-1,

hierarchical mapping is not needed, because the processors are slow compared to the

X-Net communication speed. The peak E is 0.85 with the point-Jacobi solver, and

this performance is obtained for VP > 32, which is about one-eighth the size of

the largest case possible for this machine. Thus, with regards to achieving efficient

performance in the teraflops range, the comparison given here suggests a preference

for numerous slow processors instead of fewer fast ones, but such a computer may be

difficult and expensive to build.



















[Figure 3.1 appears here: an 8-element array A(8) on a 4 x 1 layout of processors PE 0 to PE 3, contrasting the cut-and-stack mapping (MP-Fortran), which distributes the elements across memory layers, with the hierarchical mapping (CM-Fortran), which assigns a 2 x 1 virtual subgrid to each processor.]

Figure 3.1. Mapping an 8 element array A onto 4 processors. For the cut-and-stack mapping, nearest-neighbor array elements are mapped to nearest-neighbor physical processors. For the hierarchical mapping, nearest-neighbor array elements are mapped to nearest-neighbor virtual processors, which may be on the same physical processor.












[Figure 3.2 appears here: "Efficiency vs. VP", with E (0 to 1) plotted against VP (0 to 10000) for four solvers: point-Jacobi (on-VU), point-Jacobi (NEWS), line-Jacobi (cyclic reduction), and line-Jacobi (TDMA).]

Figure 3.2. Parallel efficiency, E, as a function of problem size and solver, for the CM-5 cases. The number of grid points is the virtual processor ratio, VP, multiplied by the number of processors, 128. E is computed from Eq. 3.3. It reflects the relative amount of communication, compared to computation, in the algorithm.











[Figure 3.3 appears here: "E vs. Problem Size", with E (0 to 1) plotted against the number of grid points (up to 2 x 10^5) for the MP-1, CM-5, and CM-2.]

Figure 3.3. Comparison between the CM-2, CM-5 and MP-1. The variation of parallel efficiency with problem size is shown for the model problem, using point-Jacobi relaxation as the solver. E is calculated from Eq. 3.3, and T1 = np*Tcomp for the CM-2 and CM-5, where Tcomp is measured. For the MP-1 cases, T1 is the front-end time, scaled down to the estimated speed of the MP-1 processors (0.05 Mflops).






[Figure 3.4 appears here: "Aspect Ratio Effect", with Tnews/Tcomp (0 to 2) plotted against subgrid aspect ratio (0 to 20) for VP = 256 and VP = 1024 on the CM-2 and VP = 1024 and VP = 8192 on the CM-5.]

Figure 3.4. Effect of subgrid aspect ratio on interprocessor communication time, Tnews, for the hierarchical data-mapping (CM-2 and CM-5). Tnews is normalized by Tcomp in order to show how the aspect ratio effect varies with problem size, without the complication of the fact that Tcomp varies also.











[Figure 3.5 appears here: "Effect of Implementation", with Tcoeff/Tsolve (0.55 to 0.8) plotted against VP (0 to 1500) for the 1-d and 2-d boundary coefficient implementations.]

Figure 3.5. Normalized coefficient computation time as a function of problem size, for two implementations (on the CM-2). In the 1-d case the boundary coefficients are handled by 1-d array operations. In the 2-d case the uniform implementation computes both boundary and interior coefficients simultaneously. Tcoeff is the time spent computing coefficients in a SIMPLE iteration; Tsolve is the time spent in point-Jacobi iterations. There are 15 point-Jacobi iterations (νu = νv = 3 and νp' = 9).











[Figure 3.6 appears here: "Isoefficiency Curves", plotted in the problem-size versus number-of-processors plane; the horizontal axis is # Processors (MP-1), 2000 to 8000, and the vertical axis ticks run from 0.5 to 2.5.]

Figure 3.6. Isoefficiency curves based on the MP-1 cases and SIMPLE method with the point-Jacobi solver. Efficiency E is computed from Eq. 3.3. Along lines of constant E the cost per SIMPLE iteration is constant with the point-Jacobi solver and the uniform boundary condition implementation.
















CHAPTER 4
A NONLINEAR PRESSURE-CORRECTION MULTIGRID METHOD

The single-grid timing results focused on the cost per iteration in order to elucidate

the computational issues which influence the parallel run time and the scalability. But

the parallel run time is the cost per iteration multiplied by the number of iterations.

For scaling to large problem sizes and numbers of processors, the numerical method

must scale well with respect to convergence rate, also.

The convergence rate of the single-grid pressure-correction method deteriorates

with increasing problem size. This trait is inherited from the smoothing property of

the stationary linear iterative method, point or line-Jacobi relaxation, used to solve

the systems of u, v, and p' equations during the course of SIMPLE iterations. Point-

Jacobi relaxation requires O(N^2) iterations, where N is the number of grid points,

to decrease the solution error by a specified amount [1]. In other words, the number

of iterations increases faster than the problem size.

At best the cost per iteration stays constant as the number of processors np

increases proportional to the problem size. Thus, the total run time increases in

the scaled-size experiment using single-grid pressure-correction methods, due to the

increased number of iterations required. This lack of numerical scalability is a serious

disadvantage for parallel implementations, since the target problem size for parallel

computation is very large.

Multigrid methods can maintain good convergence rates as the problem size in-

creases. For Poisson equations, problem-size independent convergence rates can be

obtained [36, 55]. The recent book by Briggs [10] introduces the major concepts in












the context of Poisson equations. See also [11, 37, 90] for surveys and analyses of

multigrid convergence properties for more general linear equations. For a description

of practical techniques and special considerations for fluid dynamics, see the impor-

tant early papers by Brandt [5, 6]. However, there are many unresolved issues for

application to the incompressible Navier-Stokes equations, especially with regards to

their implementation and performance on parallel computers. The purpose of this

chapter is to describe the relevant convergence rate and stability issues for multigrid

methods in the context of application to the incompressible Navier-Stokes equations,

with numerical experiments used to illustrate the points made, in particular, regard-

ing the role of the restriction and prolongation procedures.

4.1 Background

The basic concept is the use of coarse grids to accelerate the asymptotic con-

vergence rate of an inner iterative scheme. The inner iterative method is called the

"smoother" for reasons to be made clear shortly. In the context of the present applica-

tion to the incompressible Navier-Stokes equations, the single-grid pressure-correction

method is the inner iterative scheme. Because the pressure-correction algorithm also

uses inner iterations-to solve the systems of u, v, and p' equations-the multigrid

method developed here actually has three nested levels of iterations.

A multigrid V cycle begins with a certain number of smoothing iterations on the

fine grid, where the solution is desired. Figure 4.1 shows a schematic of a V(3,2) cycle.

In this case three pressure-correction iterations are done first. Then residuals and

variables are restricted (averaged) to obtain coarse-grid values for these quantities.

The solution to the coarse-grid discretized equation provides a correction to the fine-

grid solution. Once the solution on the coarse grid is obtained, the correction is

interpolated (prolongated) to the fine grid and added back into the solution there.











Some post-smoothing iterations, two in this case, are needed to eliminate errors

introduced by the interpolation. Since it is usually too costly to attempt a direct

solution on the coarse grid, this smoothing-correction cycle is applied recursively,

leading to the V cycle shown.
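
The control flow of the cycle is compactly expressed by recursion. The following is a minimal, self-contained sketch for the linear 1-d model problem -u'' = f with damped point-Jacobi smoothing (standard model-problem operators in the style of Briggs [10], written here for illustration; it is not the pressure-correction smoother, and the residual is taken as f - Av so the correction is added directly):

    import numpy as np

    def jacobi(v, f, h, sweeps, w=2.0/3.0):
        # damped point-Jacobi smoothing for -u'' = f, Dirichlet ends
        for _ in range(sweeps):
            v[1:-1] = ((1.0 - w) * v[1:-1]
                       + w * 0.5 * (v[:-2] + v[2:] + h * h * f[1:-1]))
        return v

    def v_cycle(v, f, h, pre=3, post=2):
        v = jacobi(v, f, h, pre)                   # pre-smoothing
        if len(v) == 3:                            # coarsest grid: one unknown
            v[1] = 0.5 * (v[0] + v[2] + h * h * f[1])
            return v
        r = np.zeros_like(v)                       # residual r = f - A v
        r[1:-1] = f[1:-1] - (2.0 * v[1:-1] - v[:-2] - v[2:]) / (h * h)
        rc = np.zeros((len(v) - 1) // 2 + 1)       # full-weighting restriction
        rc[1:-1] = 0.25 * (r[1:-2:2] + 2.0 * r[2:-1:2] + r[3::2])
        ec = v_cycle(np.zeros_like(rc), rc, 2.0 * h, pre, post)  # recurse
        e = np.zeros_like(v)                       # linear interpolation
        e[::2] = ec
        e[1::2] = 0.5 * (ec[:-1] + ec[1:])
        return jacobi(v + e, f, h, post)           # post-smoothing

    # -u'' = pi**2 sin(pi x) on (0,1): a handful of V(3,2) cycles drive
    # the algebraic error down to the level of the discretization error.
    N = 127; h = 1.0 / (N + 1)
    x = np.linspace(0.0, 1.0, N + 2)
    f = np.pi**2 * np.sin(np.pi * x)
    v = np.zeros(N + 2)
    for _ in range(8):
        v = v_cycle(v, f, h)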

The next section describes how such a procedure can accelerate the convergence

rate of an iterative method, in the context of linear equations. The multigrid scheme

for nonlinear scalar equations and the Navier-Stokes system of equations is then

described. Brandt [5] was the first to formalize the manner in which coarse grids

could be used as a convergence-acceleration technique for a given smoother. The

idea of using coarse grids to generate initial guesses for fine-grid solutions was around

much earlier.

The cost of the multigrid algorithm, per cycle, is dominated by the smoothing cost,

as will be shown in Chapter 5. Thus, with regard to the parallel run time per multigrid

iteration, the smoother is the primary concern. Also, with regard to the convergence

rate, the smoother is important. The single-grid convergence rate characteristics

of pressure-correction methods, the dependence on Reynolds number, flow problem,

and the convection scheme, carry over to the multigrid context. However, in the

multigrid method the smoother's role is, as the name implies, to smooth the fine-grid

residual, which is a different objective than to solve the equations quickly. A smooth

fine-grid residual equation can be approximated accurately on a coarser grid. The

next section describes an alternate pressure-based smoother, and compares its cost

against the pressure-correction method on the CM-5.

Stability of multigrid iterations is also an important unresolved issue. There are

two ways in which multigrid iterations can be caused to diverge. First, the single-grid

smoothing iterations can diverge; for example, if central differencing is used, stability problems are possible at high Reynolds number. Second, poor coarse-grid











corrections can cause divergence if the smoothing is insufficient. In a sense this latter

issue, the scheme and intergrid transfer operators which prescribe the coordination

between coarse and fine grids in the multigrid procedure, is the key issue. In the next

section two "stabilization strategies" are described. Then, the impact of different

restriction and prolongation procedures on the convergence rate is studied in the

context of two model problems, lid-driven cavity flow and flow past a symmetric

backward-facing step. These two particular flow problems have different physical

characteristics, and therefore the numerical experiments should give insight into the

problem-dependence of the results.

4.1.1 Terminology and Scheme for Linear Equations

The discrete problem to be solved can be written $A^h u^h = S^h$, corresponding to some differential equation $L[u] = S$. The set of values $u^h$ is defined by

$$\{u^h\}_{ij} = u(ih, jh), \quad (i,j) \in ([0:N], [0:N]) \qquad (4.1)$$

Similarly, $u^{2h}$ is defined on the coarser grid $\Omega^{2h}$ with grid spacing 2h. The variable u

can be a scalar or a vector, and the operator A can be linear or nonlinear.

For linear equations, the "correction scheme" (CS) is frequently used. A two-

level multigrid cycle using CS accelerates the convergence of an iterative method

(with iteration matrix P) by the following procedure:

Do $\nu$ fine-grid iterations:   $v^h \leftarrow P^{\nu} v^h$
Compute residual on $\Omega^h$:  $r^h = A^h v^h - S^h$
Restrict $r^h$ to $\Omega^{2h}$: $r^{2h} = I_h^{2h} r^h$
Solve exactly for $e^{2h}$:      $e^{2h} = -(A^{2h})^{-1} r^{2h}$
Correct $v^h$ on $\Omega^h$:     $(v^h)^{new} = (v^h)^{old} + I_{2h}^h e^{2h}$











$I_h^{2h}$ and $I_{2h}^h$ symbolize the restriction and prolongation procedures. The quantity $v^h$ is the current approximation to the discrete solution $u^h$. The algebraic error is the difference between them, $e^h = u^h - v^h$. The discretization error is the difference between the exact solutions of the continuous and discrete problems, $e_{discr} = u - u^h$. The truncation error is obtained by substituting the exact solution into the discrete equation,

$$\tau^h = A^h u - S^h = A^h u - A^h u^h. \qquad (4.2)$$

The notation above follows Briggs [10].

The two-level multigrid cycle begins on the fine grid with v iterations of the

smoother. Standard iterative methods all have the "smoothing property," which is

that the various eigenvector-decomposed components of the solution error are damped

at a rate proportional to their corresponding eigenvalues, i.e. the high frequency

errors are damped faster than the low frequency (smooth) errors. Thus, the conver-

gence rate of the smoothing iterations is initially rapid, but deteriorates as smooth

error components, those with large eigenvalues, dominate the remaining error. The

purpose of transferring the problem to a coarser grid is to make these smooth error

components appear more oscillatory with respect to the grid spacing, so that the

initial rapid convergence rate is obtained for the elimination of these smooth errors

by coarse-grid iterations. Since the coarse grid Q2h has only 1/4 as many grid points

as Qh (in 2-d), the smoothing iterations on the coarse grid are cheaper as well as

more effective in reducing the smooth error components than on the fine grid.
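
For the 1-d model problem this can be made quantitative: the damping factor of weighted Jacobi for Fourier mode k is 1 - 2w sin^2(k pi h / 2), a standard model-problem result (see Briggs [10]). A few lines of arithmetic show the separation between smooth and oscillatory modes:

    from math import pi, sin

    N, w = 64, 2.0 / 3.0
    h = 1.0 / N
    for k in (1, 2, N // 4, N // 2, N - 1):
        lam = 1.0 - 2.0 * w * sin(k * pi * h / 2.0) ** 2
        print(k, round(lam, 3))
    # k = 1 gives a factor of about 0.999 per sweep (smooth error barely
    # decays), while every mode with k >= N/2 has |factor| <= 1/3: a few
    # sweeps wipe out the oscillatory error and leave a smooth remainder
    # for the coarse grid.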

In the correction scheme, the coarse-grid problem is an equation for the algebraic error,

$$A^{2h} e^{2h} = -r^{2h}, \qquad (4.3)$$

approximating the fine-grid residual equation for the algebraic error. To obtain the coarse-grid source term, $r^{2h}$, the restriction procedure $I_h^{2h}$ is applied to the fine-grid residual $r^h$,

$$r^{2h} = I_h^{2h} r^h. \qquad (4.4)$$

Eq. 4.4 is an averaging type of operation. Two common restriction procedures are

straight injection of fine-grid values to their corresponding coarse-grid grid points,

and averaging rh over a few fine-grid grid points which are near the corresponding

coarse-grid grid point. The initial error on the coarse grid is taken as zero.

After the solution for $e^{2h}$ is obtained, this coarse-grid quantity is interpolated to the fine grid and used to correct the fine-grid solution,

$$v^h \leftarrow v^h + I_{2h}^h e^{2h}. \qquad (4.5)$$

For $I_{2h}^h$, common choices are bilinear or biquadratic interpolation.
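
On a 2-d vertex-centered grid the standard pairing is full-weighting restriction, with stencil [1 2 1; 2 4 2; 1 2 1]/16, and bilinear prolongation. A sketch of both operators is given below (illustrative only; it is not the staggered-grid transfer used later for the Navier-Stokes variables, where u, v, and p live at different locations):

    import numpy as np

    def restrict_fw(rf):
        # Full weighting: coarse (I, J) is the weighted average of fine
        # (2I, 2J) and its eight neighbors; boundary values are injected.
        n = (rf.shape[0] + 1) // 2
        rc = rf[::2, ::2].copy()
        for I in range(1, n - 1):
            for J in range(1, n - 1):
                i, j = 2 * I, 2 * J
                rc[I, J] = (4.0 * rf[i, j]
                            + 2.0 * (rf[i-1, j] + rf[i+1, j]
                                     + rf[i, j-1] + rf[i, j+1])
                            + rf[i-1, j-1] + rf[i-1, j+1]
                            + rf[i+1, j-1] + rf[i+1, j+1]) / 16.0
        return rc

    def prolong_bilinear(ec):
        # Bilinear interpolation: copy at coincident points, average
        # between them.
        n = ec.shape[0]
        ef = np.zeros((2 * n - 1, 2 * n - 1))
        ef[::2, ::2] = ec
        ef[1::2, ::2] = 0.5 * (ec[:-1, :] + ec[1:, :])
        ef[::2, 1::2] = 0.5 * (ec[:, :-1] + ec[:, 1:])
        ef[1::2, 1::2] = 0.25 * (ec[:-1, :-1] + ec[:-1, 1:]
                                 + ec[1:, :-1] + ec[1:, 1:])
        return ef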

In practice the solution for e2h is obtained by recursion on the two-level cycle-

(A2h)-1 is not explicitly computed. On the coarsest grid, direct solution may be

feasible if the equation is simple enough. Otherwise a few smoothing iterations can

be applied.

Recursion on the two-level algorithm leads to a "V cycle," as shown in Figure 4.1.

A simple V(3,2) cycle is shown. Three smoothing iterations are taken before re-

stricting to the next coarser grid, and two iterations are taken after the solution has

been corrected. The purpose of the latter smoothing iterations is to smooth out

any high-frequency noise introduced by the prolongation. Other cycles can be envi-

sioned. In particular the W cycle is popular [6]. The cycling strategy is called the

"grid-schedule," since it is the order in which the various grid levels are visited.

The most important consideration for the correction scheme has been saved for

last, namely the definition of the coarse-grid discrete equation A2h. One possibility is











to discretize the original differential equation directly on the coarse grid. However this

choice is not always the best one. The convergence-rate benefit from the multigrid

strategy is derived from the particular coarse-grid approximation to the fine-grid

discrete problem, not the continuous problem. Because the coarse-grid solutions

and residuals are obtained by particular averaging procedures, there is an implied

averaging procedure for the fine-grid discrete operator Ah which should be honored

to ensure a useful homogenization of the fine-grid residual equation. This issue is

critical when the coefficients and/or dependent variables of the governing equations

are not smooth [17].

For the Poisson equation, the Galerkin approximation $A^{2h} = I_h^{2h} A^h I_{2h}^h$ is the

right choice. The discretized equation coefficients on the coarse grid are obtained

by applying suitable averaging and interpolation operations to the fine-grid coeffi-

cients, instead of by discretizing the governing equation on a grid with a coarser

mesh spacing. Briggs has shown, by exploiting the algebraic relationship between

bilinear interpolation and full-weighting restriction operators, that initially smooth

errors begin in the range of interpolation and finish, after the smoothing-correction

cycle is applied, in the null space of the restriction operator [10]. Thus, if the fine-grid

smoothing eliminates all the high-frequency error components in the solution, one V

cycle using the correction-scheme is a direct solver for the Poisson equation. The con-

vergence rate of multigrid methods using the Galerkin approximation is more difficult

to analyze if the governing equations are more complicated than Poisson equations,

but significant theoretical advantages for application to general linear problems have

been indicated [90].











4.1.2 Full-Approximation Storage Scheme for Nonlinear Equations

The brief description given above does not bring out the complexities inherent in

the application to nonlinear problems. There is only experience, derived mostly from

numerical experiments, to guide the choice of the restriction/prolongation procedures

and the smoother. Furthermore, the linkage between the grid levels requires special

considerations because of the nonlinearity.

The correction scheme using the Galerkin approximation can be applied to the

nonlinear Navier-Stokes system of equations [94]. However, in order to use CS for

nonlinear equations, linearization is required. The best coarse-grid correction only

improves the fine-grid solution to the linearized equation. Also, for complex equa-

tions, considerable expense is incurred in computing A2h by the Galerkin approxi-

mation. The commonly adopted alternative is the intuitive one, to let A2h be the

differential operator L discretized on the grid with spacing 2h instead of h. In ex-

change for a straightforward problem definition on the coarse grid though, special

restriction and prolongation procedures may be necessary to ensure the usefulness of

the resulting corrections. Numerical experiments on a problem-by-problem basis are

necessary to determine good choices for the restriction and prolongation procedures

for Navier-Stokes multigrid methods.

The full-approximation storage (FAS) scheme [5] is preferred over the correction

scheme for nonlinear problems. The coarse-grid corrections generated by FAS improve

the solution to the full nonlinear problem instead of just the linearized one. The

discretized equation on the fine grid is, again,

$$A^h u^h = S^h. \qquad (4.6)$$











The approximate solution $v^h$ after a few fine-grid iterations defines the residual on the fine grid,

$$A^h v^h = S^h + r^h. \qquad (4.7)$$

A correction, the algebraic error $e_{alg}^h = u^h - v^h$, is sought which satisfies

$$A^h(v^h + e_{alg}^h) = S^h. \qquad (4.8)$$

The residual equation is formed by subtracting Eq. 4.7 from Eq. 4.8, and cancelling $S^h$,

$$A^h(v^h + e^h) - A^h(v^h) = -r^h, \qquad (4.9)$$

where the subscript "alg" is dropped for convenience. For linear equations the $A^h v^h$ terms cancel, leaving Eq. 4.3. Eq. 4.9 does not simplify for nonlinear equations. Assuming that the smoother has done its job, $r^h$ is smooth and Eq. 4.9 is the same as the coarse-grid residual equation

$$A^{2h}(v^{2h} + e^{2h}) - A^{2h}(v^{2h}) = -r^{2h}, \qquad (4.10)$$

at coarse-grid grid points.

The error $e^{2h}$ is to be found, interpolated back to $\Omega^h$ according to $e^h = I_{2h}^h e^{2h}$, and added to $v^h$ so that Eq. 4.8 is satisfied. The known quantities are $v^{2h}$, which is a "suitable" restriction of $v^h$, and $r^{2h}$, likewise a restriction of $r^h$. Different restrictions can be used for residuals and solutions. Thus, Eq. 4.10 can be written

$$A^{2h}(I_h^{2h} v^h + e^{2h}) = A^{2h}(I_h^{2h} v^h) - I_h^{2h} r^h. \qquad (4.11)$$

Since Eq. 4.11 is not an equation for $e^{2h}$, one solves instead for the sum $u^{2h} = I_h^{2h} v^h + e^{2h}$. Expanding $r^h$ and regrouping terms, Eq. 4.11 can be written

$$A^{2h}(u^{2h}) = A^{2h}(I_h^{2h} v^h) - I_h^{2h} r^h \qquad (4.12)$$
$$= \left[ A^{2h}(I_h^{2h} v^h) - I_h^{2h}(A^h v^h) + I_h^{2h} S^h - S^{2h} \right] + S^{2h} \qquad (4.13)$$
$$= S_{numerical}^{2h} + S^{2h}. \qquad (4.14)$$

Eq. 4.14 is similar to Eq. 4.6 except for the extra numerically-derived source term. Once $I_h^{2h} v^h + e^{2h}$ is obtained, the coarse-grid approximation to the fine-grid error, $e^{2h}$, is computed by first subtracting the initial coarse-grid solution $I_h^{2h} v^h$,

$$e^{2h} = u^{2h} - I_h^{2h} v^h, \qquad (4.15)$$

then interpolating back to the fine grid and combining with the current solution,

$$v^h \leftarrow v^h + I_{2h}^h e^{2h}. \qquad (4.16)$$
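
The complete FAS two-grid cycle, Eqs. 4.7-4.16, is summarized by the sketch below for a scalar nonlinear 1-d model problem A(u) = -u'' + u^2 = S (a minimal illustration with made-up operators and relaxation factor, not the Navier-Stokes implementation; note that the residual convention here is r = S - A(v), so the restricted residual is added rather than subtracted):

    import numpy as np

    def A(v, h):                      # nonlinear discrete operator
        out = np.zeros_like(v)
        out[1:-1] = (2*v[1:-1] - v[:-2] - v[2:]) / h**2 + v[1:-1]**2
        return out

    def smooth(v, s, h, sweeps, w=0.8):
        for _ in range(sweeps):       # damped nonlinear Jacobi (u**2 lagged)
            vi = (h * h * (s[1:-1] - v[1:-1]**2) + v[:-2] + v[2:]) / 2.0
            v[1:-1] = (1.0 - w) * v[1:-1] + w * vi
        return v

    def restrict(v):                  # 1-d full weighting, ends injected
        vc = v[::2].copy()
        vc[1:-1] = 0.25 * (v[1:-2:2] + 2.0 * v[2:-1:2] + v[3::2])
        return vc

    def prolong(vc):                  # linear interpolation
        v = np.zeros(2 * len(vc) - 1)
        v[::2] = vc
        v[1::2] = 0.5 * (vc[:-1] + vc[1:])
        return v

    def fas_two_grid(v, s, h, pre=3, post=2):
        v = smooth(v, s, h, pre)                  # pre-smoothing
        r = s - A(v, h)                           # fine-grid residual
        vc = restrict(v)                          # restricted solution
        sc = A(vc, 2*h) + restrict(r)             # coarse source, cf. Eq. 4.14
        uc = smooth(vc.copy(), sc, 2*h, 50)       # approximate coarse solve
        v = v + prolong(uc - vc)                  # cf. Eqs. 4.15 and 4.16
        return smooth(v, s, h, post)              # post-smoothing

    # manufactured problem: the discrete solution is exactly x*(1 - x)
    N = 63; h = 1.0 / (N + 1)
    x = np.linspace(0.0, 1.0, N + 2)
    u = x * (1.0 - x)
    s = A(u, h)
    v = np.zeros(N + 2)
    for cycle in range(10):
        v = fas_two_grid(v, s, h)
        print(cycle, float(np.max(np.abs(v - u))))   # error falls each cycle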


4.1.3 Extension to the Navier-Stokes Equations

The incompressible Navier-Stokes equations are a system of coupled, nonlinear

equations. Consequently the FAS scheme given above for single nonlinear equations

needs to be modified.

The variables $u_1^h$, $u_2^h$, and $u_3^h$ represent the cartesian velocity components and the pressure, respectively. Corresponding subscripts are used to identify each equation's source term, residual and discrete operator in the formulation below. The three equations for momentum and mass conservation are treated as if part of the following matrix equation,

$$\begin{bmatrix} A_1^h & 0 & G_x^h \\ 0 & A_2^h & G_y^h \\ G_x^h & G_y^h & 0 \end{bmatrix} \begin{bmatrix} u_1^h \\ u_2^h \\ u_3^h \end{bmatrix} = \begin{bmatrix} S_1^h \\ S_2^h \\ S_3^h \end{bmatrix} \qquad (4.17)$$

The continuity equation source term is zero on the finest grid, $\Omega^h$, but for coarser grid

levels it may not be zero. Thus, for the sake of generality it is included in Eq. 4.17.

For the $u_1$-momentum equation, then, Eq. 4.8 is modified to account for the pressure gradient, $G_x^h u_3^h$, which is also an unknown. The approximate solutions are $v_1^h$, $v_2^h$, and $v_3^h$, corresponding to $u_1^h$, $u_2^h$, and $u_3^h$. For the $u_1$-momentum equation, the approximate solution satisfies

$$A_1^h v_1^h + G_x^h v_3^h = S_1^h + r_1^h. \qquad (4.18)$$

The fine-grid residual equation corresponding to Eq. 4.9 is modified to

$$A_1^h(v_1^h + e_1^h) - A_1^h(v_1^h) + G_x^h(v_3^h + e_3^h) - G_x^h(v_3^h) = -r_1^h, \qquad (4.19)$$

which is approximated on the coarse grid by the corresponding coarse-grid residual equation,

$$A_1^{2h}(v_1^{2h} + e_1^{2h}) - A_1^{2h}(v_1^{2h}) + G_x^{2h}(v_3^{2h} + e_3^{2h}) - G_x^{2h}(v_3^{2h}) = -r_1^{2h}. \qquad (4.20)$$

The known terms are $v_1^{2h} = I_h^{2h} v_1^h$, $v_3^{2h} = I_h^{2h} v_3^h$, and $r_1^{2h} = I_h^{2h} r_1^h$. Expanding $r_1^h$ and regrouping terms, Eq. 4.20 can be written

$$A_1^{2h}(u_1^{2h}) + G_x^{2h}(u_3^{2h}) = A_1^{2h}(I_h^{2h} v_1^h) + G_x^{2h}(I_h^{2h} v_3^h) - I_h^{2h}(A_1^h v_1^h + G_x^h v_3^h) + I_h^{2h} S_1^h \qquad (4.21)$$
$$= \left[ A_1^{2h}(I_h^{2h} v_1^h) + G_x^{2h}(I_h^{2h} v_3^h) - I_h^{2h}(A_1^h v_1^h + G_x^h v_3^h) + I_h^{2h} S_1^h - S_1^{2h} \right] + S_1^{2h}$$
$$= S_{1,numerical}^{2h} + S_1^{2h}. \qquad (4.22)$$


Since Eq. 4.22 includes numerically derived source terms in addition to the physical

ones, the coarse-grid variables are not in general the same as would be obtained from

a discretization of the original continuous governing equations on the coarse grid.

The u2-momentum equation is treated similarly, and the coarse-grid continuity

equation is


$$G_x^{2h} u_1^{2h} + G_y^{2h} u_2^{2h} = G_x^{2h}(I_h^{2h} v_1^h) + G_y^{2h}(I_h^{2h} v_2^h) - I_h^{2h} r_3^h.$$


Full Text
xml version 1.0 encoding UTF-8
REPORT xmlns http:www.fcla.edudlsmddaitss xmlns:xsi http:www.w3.org2001XMLSchema-instance xsi:schemaLocation http:www.fcla.edudlsmddaitssdaitssReport.xsd
INGEST IEID EUKXELU77_JOJJ0X INGEST_TIME 2017-07-13T15:40:37Z PACKAGE AA00003617_00001
AGREEMENT_INFO ACCOUNT UF PROJECT UFDC
FILES



PAGE 1

35(6685(%$6(' 0(7+2'6 21 6,1*/(,16758&7,21 675($008/7,3/('$7$ 675($0 &20387(56 %\ (':,1 / %/26&+ $ ',66(57$7,21 35(6(17(' 72 7+( *5$'8$7( 6&+22/ 2) 7+( 81,9(56,7< 2) )/25,'$ ,1 3$57,$/ )8/),//0(17 2) 7+( 5(48,5(0(176 )25 7+( '(*5(( 2) '2&725 2) 3+,/2623+< 81,9(56,7< 2) )/25,'$

PAGE 2

$&.12:/('*(0(176 ZRXOG OLNH WR H[SUHVV P\ WKDQNV WR P\ DGYLVRU 'U :HL 6K\\ IRU UHIOHFWLQJ FDUHIXOO\ RQ P\ UHVXOWV DQG IRU GLUHFWLQJ P\ UHVHDUFK WRZDUG LQWHUHVWLQJ LVVXHV ZRXOG DOVR OLNH WR WKDQN KLP IRU WKH H[FHSWLRQDO SHUVRQDO VXSSRUW DQG IOH[LELOLW\ KH RIIHUHG PH GXULQJ P\ ODVW \HDU RI VWXG\ ZKLFK ZDV GRQH RIIFDPSXV ZRXOG DOVR OLNH WR DFNQRZOHGJH WKH FRQWULEXWLRQV RI WKH RWKHU PHPEHUV RI P\ 3K' FRPPLWWHH 'U &KHQ&KL +VX 'U %UXFH &DUUROO 'U 'DYLG 0LNRODLWLV DQG 'U 6DUWDM 6DKQL 'U +VX DQG 'U &DUUROO VXSHUYLVHG P\ %6 DQG 06 GHJUHH UHVHDUFK VWXGLHV UHVSHFWLYHO\ DQG 'U 0LNRODLWLV LQ WKH UROH RI JUDGXDWH FRRUGLQDWRU HQDEOHG PH WR REWDLQ ILQDQFLDO VXSSRUW IURP WKH 'HSDUWPHQW RI (QHUJ\ $OVR ZRXOG OLNH WR WKDQN 0DGKXNDU 5DR 5LFN 6PLWK DQG +6 8GD\NXPDU IRU SD\LQJ IHHV RQ PY EHKDOI DQG IRU UHJLVWHULQJ PH IRU FODVVHV ZKLOH ZDV LQ &DOLIRUQLD -HII :ULJKW 6 7KDNXU 6KLQ-\H /LDQJ *XREDR *XR DQG 3HGUR /RSH])HUQDQGH] KDYH DOVR PDGH GLUHFW DQG LQGLUHFW FRQWULEXWLRQV IRU ZKLFK DP JUDWHIXO 6SHFLDO WKDQNV JR WR 'U -DPLH 6HWKLDQ 'U $OH[DQGUH &KRULQ DQG 'U 3DXO &RQ FXV RI /DZUHQFH %HUNHOH\ /DERUDWRU\ IRU DOORZLQJ PH WR YLVLW /%/ DQG XVH WKHLU UHVRXUFHV IRU JLYLQJ SHUVRQDO ZRUGV RI VXSSRUW DQG FRQVWUXFWLYH DGYLFH DQG IRU WKH SULYLOHJH RI LQWHUDFWLQJ ZLWK WKHP DQG WKHLU JUDGXDWH VWXGHQWV LQ WKH DSSOLHG PDWKHn PDWLFV EUDQFK /DVW EXW QRW OHDVW ZRXOG OLNH WR WKDQN P\ ZLIH /DXUD IRU KHU SDWLHQFH KHU H[DPSOH DQG KHU IUDQN WKRXJKWV RQ fFXSV ZLWK VOLGLQJ OLGVf fIORZ WKURXJK VWUDZVf DQG QXPHULFDO VLPXODWLRQV LQ JHQHUDO Q

PAGE 3

0\ UHVHDUFK ZDV VXSSRUWHG LQ SDUW E\ WKH &RPSXWDWLRQDO 6FLHQFH *UDGXDWH )HOn ORZVKLS 3URJUDP RI WKH 2IILFH RI 6FLHQWLILF &RPSXWLQJ LQ WKH 'HSDUWPHQW RI (QHUJ\ 7KH &0V XVHG LQ WKLV VWXG\ ZHUH SDUWLDOO\ IXQGHG E\ 1DWLRQDO 6FLHQFH )RXQGDWLRQ ,QIUDVWUXFWXUH *UDQW &'$ LQ WKH FRPSXWHU VFLHQFH GHSDUWPHQW RI WKH 8QLn YHUVLW\ RI &DOLIRUQLD%HUNHOH\f DQG D JUDQW RI +3& WLPH IURP WKH 'R' +3& 6KDUHG 5HVRXUFH &HQWHU $UP\ +LJK3HUIRUPDQFH &RPSXWLQJ 5HVHDUFK &HQWHU 0LQQHDSROLV 0LQQHVRWD P

PAGE 4

7$%/( 2) &217(176 $&.12:/('*(0(176 LL $%675$&7 YL &+$37(56 ,1752'8&7,21 0RWLYDWLRQV *RYHUQLQJ (TXDWLRQV 1XPHULFDO 0HWKRGV IRU 9LVFRXV ,QFRPSUHVVLEOH )ORZ 3DUDOOHO &RPSXWLQJ 'DWD3DUDOOHOLVP DQG 6,0' &RPSXWHUV $OJRULWKPV DQG 3HUIRUPDQFH 3UHV VXUH%DVHG 0XOWLJULG 0HWKRGV 'HVFULSWLRQ RI WKH 5HVHDUFK 35(6685(&255(&7,21 0(7+2'6 )LQLWH9ROXPH 'LVFUHWL]DWLRQ RQ 6WDJJHUHG *ULGV 7KH 6,03/( 0HWKRG B 'LVFUHWH )RUPXODWLRQ RI WKH 3UHVVXUH&RUUHFWLRQ (TXDWLRQ :HOO3RVHGQHVV RI WKH 3UHVVXUH&RUUHFWLRQ (TXDWLRQ $QDO\VLV 9HULILFDWLRQ E\ 1XPHULFDO ([SHULPHQWV 1XPHULFDO 7UHDWPHQW RI 2XWIORZ %RXQGDULHV &RQFOXGLQJ 5HPDUNV ()),&,(1&< $1' 6&$/$%,/,7< 21 6,0' &20387(56 %DFNJURXQG 6SHHGXS DQG (IILFLHQF\ &RPSDULVRQ %HWZHHQ &0 &0 DQG 03 +LHUDUFKLFDO DQG &XWDQG6WDFN 'DWD 0DSSLQJV ,PSOHPHQWLRQDO &RQVLGHUDWLRQV 1XPHULFDO ([SHULPHQWV (IILFLHQF\ RI 3RLQW DQG /LQH 6ROYHUV IRU WKH ,QQHU ,WHUDWLRQV (IIHFW RI 8QLIRUP %RXQGDU\ &RQGLWLRQ ,PSOHPHQWDWLRQ 2YHUDOO 3HUIRUPDQFH ,VRHIILFLHQF\ 3ORW ,9

PAGE 5

&RQFOXGLQJ 5HPDUNV $ 121/,1($5 35(6685(&255(&7,21 08/7,*5,' 0(7+2' %DFNJURXQG 7HUPLQRORJ\ DQG 6FKHPH IRU /LQHDU (TXDWLRQV )XOO$SSUR[LPDWLRQ 6WRUDJH 6FKHPH IRU 1RQOLQHDU (TXDWLRQV ([WHQVLRQ WR WKH 1DYLHU6WRNHV (TXDWLRQV &RPSDULVRQ RI 3UHVVXUH%DVHG 6PRRWKHUV 6WDELOLW\ RI 0XOWLJULG ,WHUDWLRQV 'HIHFW&RUUHFWLRQ 0HWKRG &RVW RI 'LIIHUHQW &RQYHFWLRQ 6FKHPHV 5HVWULFWLRQ DQG 3URORQJDWLRQ 3URFHGXUHV &RQFOXGLQJ 5HPDUNV ,03/(0(17$7,21 $1' 3(5)250$1&( 21 7+( &0 6WRUDJH 3UREOHP 0XOWLJULG &RQYHUJHQFH 5DWH DQG 6WDELOLW\ 7UXQFDWLRQ (UURU &RQYHUJHQFH &ULWHULRQ IRU &RDUVH *ULGV 1XPHULFDO &KDUDFWHULVWLFV RI WKH )0* 3URFHGXUH ,QIOXHQFH RI ,QLWLDO *XHVV RQ &RQYHUJHQFH 5DWH 5HPDUNV 3HUIRUPDQFH RQ WKH &0 &RQFOXGLQJ 5HPDUNV 5()(5(1&(6 %,2*5$3+,&$/ 6.(7&+ Y

PAGE 6

$EVWUDFW RI 'LVVHUWDWLRQ 3UHVHQWHG WR WKH *UDGXDWH 6FKRRO RI WKH 8QLYHUVLW\ RI )ORULGD LQ 3DUWLDO )XOILOOPHQW RI WKH 5HTXLUHPHQWV IRU WKH 'HJUHH RI 'RFWRU RI 3KLORVRSK\ 35(6685(%$6(' 0(7+2'6 21 6,1*/(,16758&7,21 675($008/7,3/('$7$ 675($0 &20387(56 %\ (GZLQ / %ORVFK &KDLUPDQ 'U :HL 6K\\ 0DMRU 'HSDUWPHQW $HURVSDFH (QJLQHHULQJ 0HFKDQLFV DQG (QJLQHHULQJ 6FLHQFH &RPSXWDWLRQDOO\ DQG QXPHULFDOO\ VFDODEOH DOJRULWKPV DUH QHHGHG WR H[SORLW HPHUJn LQJ SDUDOOHOFRPSXWLQJ FDSDELOLWLHV ,Q WKLV ZRUN SUHVVXUHEDVHG DOJRULWKPV ZKLFK VROYH WKH WZRGLPHQVLRQDO LQFRPSUHVVLEOH 1DYLHU6WRNHV HTXDWLRQV DUH GHYHORSHG IRU VLQJOHLQVWUXFWLRQ VWUHDPPXOWLSOHGDWD VWUHDP 6,0'f FRPSXWHUV 7KH LPSOLFDWLRQV RI WKH FRQWLQXLW\ FRQVWUDLQW IRU WKH SURSHU QXPHULFDO WUHDWPHQW RI RSHQ ERXQGDU\ SUREOHPV DUH LQYHVWLJDWHG 0DVV PXVW EH FRQVHUYHG JOREDOO\ VR WKDW WKH V\VWHP RI OLQHDU DOJHEUDLF SUHVVXUHFRUUHFWLRQ HTXDWLRQV LV QXPHULFDOO\ FRQVLVWHQW 7KH FRQYHUJHQFH UDWH LV SRRU XQOHVV JOREDO PDVV FRQVHUYDWLRQ LV HQIRUFHG H[SOLFLWO\ 8VLQJ DQ DGGLWLYHFRUUHFWLRQ WHFKQLTXH WR UHVWRUH JOREDO PDVV FRQVHUYDWLRQ IORZV ZKLFK KDYH UHFLUFXODWLQJ ]RQHV DFURVV WKH RSHQ ERXQGDU\ FDQ EH VLPXODWHG 7KH SHUIRUPDQFH RI WKH VLQJOHJULG DOJRULWKP LV DVVHVVHG RQ WKUHH PDVVLYHO\ SDUDOOHO FRPSXWHUV 0DV3DUfV 03 DQG 7KLQNLQJ 0DFKLQHVf &0 DQG &0 3DUDOn OHO HIILFLHQFLHV DSSURDFKLQJ DUH SRVVLEOH ZLWK VSHHGV H[FHHGLQJ WKDW RI WUDGLWLRQDO YHFWRU VXSHUFRPSXWHUV 7KH IROORZLQJ LVVXHV UHOHYDQW WR WKH YDULDWLRQ RI SDUDOOHO HIn ILFLHQF\ ZLWK SUREOHP VL]H DUH VWXGLHG WKH VXLWDELOLW\ RI WKH DOJRULWKP IRU 6,0' FRPSXWDWLRQ WKH LPSOHPHQWDWLRQ RI ERXQGDU\ FRQGLWLRQV WR DYRLG LGOH SURFHVVRUV YL

PAGE 7

WKH FKRLFH RI SRLQW YHUVXV OLQHLWHUDWLYH UHOD[DWLRQ VFKHPHV WKH UHODWLYH FRVWV RI WKH FRHIILFLHQW FRPSXWDWLRQV DQG VROYLQJ RSHUDWLRQV DQG WKH YDULDWLRQ RI WKHVH FRVWV ZLWK SUREOHP VL]H WKH HIIHFW RI WKH GDWDDUUD\WRSURFHVVRU PDSSLQJ DQG WKH UHODWLYH VSHHGV RI FRPSXWDWLRQ DQG FRPPXQLFDWLRQ RI WKH FRPSXWHU $ QRQOLQHDU SUHVVXUHFRUUHFWLRQ PXOWLJULG DOJRULWKP ZKLFK KDV EHWWHU FRQYHUJHQFH UDWH FKDUDFWHULVWLFV WKDQ WKH VLQJOHJULG PHWKRG LV IRUPXODWHG DQG LPSOHPHQWHG RQ WKH &0 2Q WKH &0 WKH FRPSRQHQWV RI WKH PXOWLJULG DOJRULWKP DUH WHVWHG RYHU D UDQJH RI SUREOHP VL]HV 7KH VPRRWKLQJ VWHS LV WKH GRPLQDQW FRVW 3UHVVXUHFRUUHFWLRQ PHWKRGV DQG WKH ORFDOO\FRXSOHG H[SOLFLW PHWKRG DUH HTXDOO\ HIILFLHQW RQ WKH &0 9 F\FOLQJ LV IRXQG WR EH PXFK FKHDSHU WKDQ : F\FOLQJ DQG D WUXQFDWLRQHUURU EDVHG fIXOOPXOWLJULGf SURFHGXUH LV IRXQG WR EH D FRPSXWDWLRQDOO\ HIILFLHQW DQG FRQYHQLHQW PHWKRG IRU REWDLQLQJ WKH LQLWLDO ILQHJULG JXHVV 7KH ILQGLQJV SUHVHQWHG HQDEOH IXUWKHU GHYHORSPHQW RI HIILFLHQW VFDODEOH SUHVVXUHEDVHG SDUDOOHO FRPSXWLQJ DOJRULWKPV 9OO

PAGE 8

&+$37(5 ,1752'8&7,21 0RWLYDWLRQV &RPSXWDWLRQDO IOXLG G\QDPLFV &)'f LV D JURZLQJ ILHOG ZKLFK EULQJV WRJHWKHU KLJKSHUIRUPDQFH FRPSXWLQJ SK\VLFDO VFLHQFH DQG HQJLQHHULQJ WHFKQRORJ\ 7KH GLVn WLQFWLRQV EHWZHHQ &)' DQG RWKHU ILHOGV VXFK DV FRPSXWDWLRQDO SK\VLFV DQG FRPSXWDn WLRQDO FKHPLVWU\ DUH ODUJHO\ VHPDQWLF QRZ EHFDXVH LQFUHDVLQJO\ PRUH LQWHUGLVSOLQDU\ DSSOLFDWLRQV DUH FRPLQJ ZLWKLQ UDQJH RI WKH FRPSXWDWLRQDO FDSDELOLWLHV &)' DOJRn ULWKPV DQG WHFKQLTXHV DUH PDWXUH HQRXJK WKDW WKH IRFXV RI UHVHDUFK LV H[SHFWHG WR VKLIW LQ WKH QH[W GHFDGH WRZDUG WKH GHYHORSPHQW RI UREXVW IORZ FRGHV DQG WRZDUG WKH DSSOLFDWLRQ RI WKHVH FRGHV WR QXPHULFDO VLPXODWLRQV ZKLFK GR QRW LGHDOL]H HLWKHU WKH SK\VLFV RU WKH JHRPHWU\ DQG ZKLFK WDNH IXOO DFFRXQW RI WKH FRXSOLQJ EHWZHHQ IOXLG G\QDPLFV DQG RWKHU DUHDV RI SK\VLFV >@ 7KHVH DSSOLFDWLRQV ZLOO UHTXLUH IRUPLGDEOH UHVRXUFHV SDUWLFXODUO\ LQ WKH DUHDV RI FRPSXWLQJ VSHHG PHPRU\ VWRUDJH DQG LQn SXWRXWSXW EDQGZLGWK >@ $W WKH SUHVHQW WLPH WKH FRPSXWDWLRQDO GHPDQGV RI WKH DSSOLFDWLRQV DUH VWLOO DW OHDVW WZR RUGHUVRIPDJQLWXGH EH\RQG WKH FRPSXWLQJ WHFKQRORJ\ )RU H[DPSOH 1$6$fV JUDQG FKDOOHQJHV IRU WKH V DUH WR DFKLHYH WKH FDSDELOLW\ WR VLPXODWH YLVn FRXV FRPSUHVVLEOH IORZV ZLWK WZRHTXDWLRQ WXUEXOHQFH PRGHOOLQJ RYHU HQWLUH DLUFUDIW FRQILJXUDWLRQV DQG WR FRXSOH WKH IOXLG G\QDPLFV VLPXODWLRQ ZLWK WKH SURSXOVLRQ DQG DLUFUDIW FRQWURO V\VWHPV PRGHOOLQJ 7R PHHW WKLV FKDOOHQJH LW LV HVWLPDWHG WKDW WHU DIORSV FRPSXWLQJ VSHHG DQG JLJDZRUGV RI PHPRU\ ZLOO EH UHTXLUHG >@ &XUUHQW

PAGE 9

PDVVLYHO\SDUDOOHO VXSHUFRPSXWHUV IRU H[DPSOH WKH &0 PDQXIDFWXUHG E\ 7KLQNLQJ 0DFKLQHV KDYH SHDN VSHHGV RI JLJDIORSVf DQG PHPRULHV RI JLJDZRUGf 2SWLPLVP LV VRPHWLPHV FLUFXODWHG WKDW WHUDIORS FRPSXWHUV PD\ EH H[SHFWHG E\ >@ ,Q YLHZ RI WKH WZR RUGHUVRIPDJQLWXGH GLVSDULW\ EHWZHHQ WKH VSHHG RI SUHVHQWJHQHUDWLRQ SDUDOOHO FRPSXWHUV DQG WHUDIORSV VXFK RSWLPLVP VKRXOG EH GLPPHG VRPHZKDW ([SHFWDWLRQV DUH QRW EHLQJ PHW LQ SDUW EHFDXVH WKH DSSOLFDWLRQV ZKLFK DUH WKH GULYLQJ IRUFH EHKLQG WKH SURJUHVV LQ KDUGZDUH KDYH EHHQ VORZ WR GHYHORS 7KH QXPHULFDO DOJRULWKPV ZKLFK KDYH VHHQ WZR GHFDGHV RI GHYHORSPHQW RQ WUDGLWLRQDO YHFn WRU VXSHUFRPSXWHUV DUH QRW DOZD\V HDV\ WDUJHWV IRU HIILFLHQW SDUDOOHO LPSOHPHQWDWLRQ %HWWHU XQGHUVWDQGLQJ RI WKH EDVLF FRQFHSWV DQG PRUH H[SHULHQFH ZLWK WKH SUHVHQW JHQHUDWLRQ RI SDUDOOHO FRPSXWHUV LV D SUHUHTXLVLWH IRU LPSURYHG DOJRULWKPV DQG LPSOHn PHQWDWLRQV 7KH PRWLYDWLRQ RI WKH SUHVHQW ZRUN KDV EHHQ WKH RSSRUWXQLW\ WR LQYHVWLJDWH LVVXHV UHODWHG WR WKH XVH RI SDUDOOHO FRPSXWHUV LQ &)' ZLWK WKH KRSH WKDW WKH NQRZOHGJH JDLQHG FDQ DVVLVW WKH WUDQVLWLRQ WR WKH QHZ FRPSXWLQJ WHFKQRORJ\ 7KH FRQWH[W RI WKH UHVHDUFK LV WKH QXPHULFDO VROXWLRQ RI WKH G LQFRPSUHVVLEOH 1DYLHU6WRNHV HTXDWLRQV E\ D SRSXODU DQG SURYHQ QXPHULFDO PHWKRG NQRZQ DV WKH SUHVVXUHFRUUHFWLRQ WHFKn QLTXH $ VSHFLILF REMHFWLYH HPHUJHG DV WKH UHVHDUFK SURJUHVVHG QDPHO\ WR GHYHORS DQG DQDO\]H WKH SHUIRUPDQFH RI SUHVVXUHFRUUHFWLRQ PHWKRGV RQ WKH VLQJOHLQVWUXFWLRQ VWUHDPPXOWLSOHGDWD VWUHDP 6,0'f W\SH RI SDUDOOHO FRPSXWHU 6LQJOHJULG FRPSXn WDWLRQV ZHUH VWXGLHG ILUVW WKHQ D PXOWLJULG PHWKRG ZDV GHYHORSHG DQG WHVWHG 6W0' FRPSXWHUV ZHUH FKRVHQ EHFDXVH WKH\ DUH HDVLHU WR SURJUDP WKDQ PXOWLSOH LQVWUXFWLRQ VWUHDPPXOWLSOHGDWD VWUHDP 0,0'f FRPSXWHUV H[SOLFW PHVVDJHSDVVLQJ LV QRW UHTXLUHGf EHFDXVH V\QFKURQL]DWLRQ RI WKH SURFHVVRUV LV QRW DQ LVVXH DQG EHn FDXVH WKH IDFWRUV DIIHFWLQJ WKH SDUDOOHO UXQ WLPH DQG FRPSXWDWLRQDO HIILFLHQF\ DUH HDVLHU WR LGHQWLI\ DQG TXDQWLI\ $OVR WKHVH DUH DUJXDEO\ WKH PRVW SRZHUIXO PDFKLQHV

PAGE 10

DYDLODEOH ULJKW QRZf§/RV $ODPRV 1DWLRQDO /DERUDWRU\ KDV D QRGH &0 ZLWK *E\WHV RI SURFHVVRU PHPRU\ DQG LV FDSDEOH RI *IORSV SHDN VSHHG 7KXV WKH FRGH WKH QXPHULFDO WHFKQLTXHV DQG WKH XQGHUVWDQGLQJ ZKLFK DUH WKH FRQWULEXWLRQ RI WKLV UHVHDUFK FDQ EH LPPHGLDWHO\ XVHIXO IRU DSSOLFDWLRQV RQ PDVVLYHO\ SDUDOOHO FRPSXWHUV *RYHUQLQJ (TXDWLRQV 7KH JRYHUQLQJ HTXDWLRQV IRU G FRQVWDQW SURSHUW\ WLPHGHSHQGHQW YLVFRXV LQn FRPSUHVVLEOH IORZ DUH WKH 1DYLHU6WRNHV HTXDWLRQV 7KH\ H[SUHVV WKH SULQFLSOHV RI FRQVHUYDWLRQ RI PDVV DQG PRPHQWXP ,Q SULPLWLYH YDULDEOHV DQG FDUWHVLDQ FRRUGLn QDWHV WKH\ PD\ EH ZULWWHQ GSX GSY G[ G\ GSX GSX GSXY GS GX GX aGI aGA aGAa a7[ AGA GSY GSXY GSY GS GY GY AZ aGA aGAa aGA IOGA AGA f f f ZKHUH X DQG Y DUH FDUWHVLDQ YHORFLW\ FRPSRQHQWV S LV WKH GHQVLW\ S LV WKH IOXLGfV PROHFXODU YLVFRVLW\ DQG S LV WKH SUHVVXUH (T LV WKH PDVV FRQWLQXLW\ HTXDWLRQ DOVR NQRZQ DV WKH GLYHUJHQFHIUHH FRQVWUDLQW VLQFH LWV FRRUGLQDWHIUHH IRUP LV GLY X 7KH 1DYLHU6WRNHV HTXDWLRQV DUH D FRXSOHG VHW RI QRQOLQHDU SDUWLDO GLIIHUn HQWLDO HTXDWLRQV RI PL[HG HOOLSWLFSDUDEROLF W\SH 0DWKHPDWLFDOO\ WKH\ GLIIHU IURP WKH FRPSUHVVLEOH 1DYLHU6WRNHV HTXDWLRQV LQ WZR LPSRUWDQW UHVSHFWV WKDW OHDG WR GLIn ILFXOWLHV IRU GHYLVLQJ QXPHULFDO VROXWLRQ WHFKQLTXHV )LUVW WKH UROH RI WKH FRQWLQXLW\ HTXDWLRQ LV GLIIHUHQW LQ LQFRPSUHVVLEOH IORZ ,Qn VWHDG RI D WLPHGHSHQGHQW HTXDWLRQ IRU WKH GHQVLW\ LQ LQFRPSUHVVLEOH IOXLGV WKH FRQWLn QXLW\ HTXDWLRQ LV D FRQVWUDLQW RQ WKH DGPLVVLEOH YHORFLW\ VROXWLRQV 1XPHULFDO PHWKn RGV PXVW EH DEOH WR LQWHJUDWH WKH PRPHQWXP HTXDWLRQV IRUZDUG LQ WLPH ZKLOH VLPXOn WDQHRXVO\ PDLQWDLQLQJ VDWLVIDFWLRQ RI WKH FRQWLQXLW\ FRQVWUDLQW 2Q WKH RWKHU KDQG

PAGE 11

QXPHULFDO PHWKRGV IRU FRPSUHVVLEOH IORZV FDQ WDNH DGYDQWDJH RI WKH IDFW WKDW LQ WKH XQVWHDG\ IRUP HDFK HTXDWLRQ KDV D WLPHGHSHQGHQW WHUP 7KH HTXDWLRQV DUH FDVW LQ YHFWRU IRUPf§DQ\ VXLWDEOH PHWKRG IRU WLPHLQWHJUDWLRQ FDQ EH HPSOR\HG RQ WKH V\VWHP RI HTXDWLRQV DV D ZKROH 7KH VHFRQG SUREOHP DVVXPLQJ WKDW D SULPLWLYHYDULDEOH IRUPXODWLRQ LV GHVLUHG LV WKDW WKHUH LV QR HTXDWLRQ IRU SUHVVXUH )RU FRPSUHVVLEOH IORZV WKH SUHVVXUH FDQ EH GHn WHUPLQHG IURP WKH HTXDWLRQ RI VWDWH RI WKH IOXLG )RU LQFRPSUHVVLEOH IORZ DQ DX[LOLDU\ fSUHVVXUH3RLVVRQf HTXDWLRQ FDQ EH GHULYHG E\ WDNLQJ WKH GLYHUJHQFH RI WKH YHFWRU IRUP RI WKH PRPHQWXP HTXDWLRQV WKH FRQWLQXLW\ HTXDWLRQ LV LQYRNHG WR HOLPLQDWH WKH XQVWHDG\ WHUP LQ WKH UHVXOW 7KH IRUPXODWLRQ RI WKH SUHVVXUH3RLVVRQ HTXDWLRQ UHTXLUHV PDQLSXODWLQJ WKH GLVFUHWH IRUPV RI WKH PRPHQWXP DQG FRQWLQXLW\ HTXDWLRQV $ SDUWLFXODU GLVFUHWL]DWLRQ RI WKH /DSODFLDQ RSHUDWRU LV WKHUHIRUH LPSOLHG LQ SUHVVXUH 3RLVVRQ HTXDWLRQ GHSHQGLQJ RQ WKH GLVFUHWH JUDGLHQW DQG GLYHUJHQFH RSHUDWRUV 7KLV RSHUDWRU PD\ QRW EH LPSOHPHQWDEOH DW ERXQGDULHV DQG VROYDELOLW\ FRQVWUDLQWV FDQ EH YLRODWHG >@ $OVR WKH GLIIHUHQWLDWLRQ RI WKH JRYHUQLQJ HTXDWLRQV LQWURGXFHV WKH QHHG IRU DGGLWLRQDO XQSK\VLFDO ERXQGDU\ FRQGLWLRQV RQ WKH SUHVVXUH 3K\VLFDOO\ WKH SUHVVXUH LQ LQFRPSUHVVLEOH IORZ LV RQO\ GHILQHG UHODWLYH WR DQ DUELWUDU\f FRQVWDQW 7KXV WKH FRUUHFW ERXQGDU\ FRQGLWLRQV DUH 1HXPDQQ +RZHYHU LI WKH SUREOHP KDV DQ RSHQ ERXQGDU\ WKH JRYHUQLQJ HTXDWLRQV VKRXOG EH VXSSOHPHQWHG ZLWK D ERXQGDU\ FRQGLWLRQ RQ WKH QRUPDO WUDFWLRQ > @ )Q f§S GXQ 5H GQ f ZKHUH ) LV WKH IRUFH 5H LV WKH 5H\QROGV QXPEHU DQG WKH VXEVFULSW Q LQGLFDWHV WKH QRUPDO GLUHFWLRQ +RZHYHU )Q PD\ EH GLIILFXOW WR SUHVFULEH

PAGE 12

,Q SUDFWLFH D ]HURJUDGLHQW RU OLQHDU H[WUDSRODWLRQ IRU WKH QRUPDO YHORFLW\ FRPn SRQHQW LV D PRUH SRSXODU RXWIORZ ERXQGDU\ FRQGLWLRQ 0DQ\ RXWIORZ ERXQGDU\ FRQn GLWLRQV KDYH EHHQ DQDO\]HG WKHRUHWLFDOO\ IRU LQFRPSUHVVLEOH IORZ VHH > @f 7KHUH DUH HYHQ PRUH ERXQGDU\ FRQGLWLRQ SURFHGXUHV LQ XVH 7KH PHWKRG XVHG DQG LWV LPSDFW RQ WKH fVROYDELOLW\fn RI WKH UHVXOWLQJ QXPHULFDO V\VWHPV RI HTXDWLRQV GHSHQGV RQ WKH GLVFUHWL]DWLRQ DQG WKH QXPHULFDO PHWKRG 7KLV LVVXH LV WUHDWHG LQ &KDSWHU 1XPHULFDO 0HWKRGV IRU 9LVFRXV ,QFRPSUHVVLEOH )ORZ 1XPHULFDO DOJRULWKPV IRU VROYLQJ WKH LQFRPSUHVVLEOH 1DYLHU6WRNHV V\VWHP RI HTXDn WLRQV ZHUH ILUVW GHYHORSHG E\ +DUORZ DQG :HOFK >@ DQG &KRULQ > @ 'HVFHQGDQWV RI WKHVH DSSURDFKHV DUH SRSXODU WRGD\ +DUORZ DQG :HOFK LQWURGXFHG WKH LPSRUWDQW FRQWULEXWLRQ RI WKH VWDJJHUHGJULG ORFDWLRQ RI WKH GHSHQGHQW YDULDEOHV 2Q D VWDJn JHUHG JULG WKH GLVFUHWH /DSODFLDQ DSSHDULQJ LQ WKH GHULYDWLRQ RI WKH SUHVVXUH3RLVVRQ HTXDWLRQ KDV WKH VWDQGDUG ILYHSRLQW VWHQFLO 2Q FRORFDWHG JULGV LW VWLOO KDV D ILYH SRLQW IRUP EXW LI WKH FHQWUDO SRLQW LV ORFDWHG DW LMf WKH RWKHU SRLQWV ZKLFK DUH LQYROYHG DUH ORFDWHG DW LMf LMf LMf DQG LMf :LWKRXW QHDUHVWQHLJKERU OLQNDJHV WZR XQFRXSOHG fFKHFNHUERDUGff SUHVVXUH ILHOGV FDQ GHYHORS LQGHSHQGHQWO\ 7KLV SUHVVXUHGHFRXSOLQJ FDQ FDXVH VWDELOLW\ SUREOHPV VLQFH QRQSK\VLFDO GLVFRQWLQXn LWLHV LQ WKH SUHVVXUH PD\ GHYHORS >@ ,Q WKH SUHVHQW ZRUN WKH YHORFLW\ FRPSRQHQWV DUH VWDJJHUHG RQHKDOI RI D FRQWURO YROXPH WR WKH ZHVW DQG VRXWK RI WKH SUHVVXUH ZKLFK LV GHILQHG DW WKH FHQWHU RI WKH FRQWURO YROXPH DV VKRZQ LQ )LJXUH )LJXUH DOVR VKRZV WKH ORFDWLRQV RI DOO ERXQGDU\ YHORFLW\ FRPSRQHQWV LQYROYHG LQ WKH GLVFUHWL]DWLRQ DQG QXPHULFDO VROXWLRQ DQG UHSUHVHQWDWLYH ERXQGDU\ FRQWURO YROXPHV IRU X Y DQG S ,Q &KRULQfV DUWLILFLDO FRPSUHVVLELOLW\ DSSURDFK >@ D WLPHGHULYDWLYH RI SUHVVXUH LV DGGHG WR WKH FRQWLQXLW\ HTXDWLRQ ,Q WKLV PDQQHU WKH FRQWLQXLW\ HTXDWLRQ EHFRPHV DQ HTXDWLRQ IRU WKH SUHVVXUH DQG DOO WKH HTXDWLRQV FDQ EH LQWHJUDWHG IRUZDUG LQ WLPH

PAGE 13

In Chorin's artificial compressibility approach [ ], a time-derivative of pressure is added to the continuity equation. In this manner the continuity equation becomes an equation for the pressure, and all the equations can be integrated forward in time, either as a system or one at a time. The artificial compressibility method is closely related to the penalty formulation used in finite-element methods [ ]. The equations are solved simultaneously in finite-element formulations. Penalty methods and the artificial compressibility approach suffer from ill-conditioning when the equations have strong nonlinearities or source terms. Because the pressure term is artificial, they are not time-accurate either.

Projection methods [ ] are two-step procedures which first obtain a velocity field by integrating the momentum equations, and then project this vector field into a divergence-free space by subtracting the gradient of the pressure. The pressure-Poisson equation is solved to obtain the pressure. The solution must be obtained to a high degree of accuracy in unsteady calculations in order to obtain the correct long-term behavior [ ]; every step may therefore be fairly expensive. Furthermore, the time-step size is limited by stability considerations, depending on the implicitness of the treatment used for the convection terms.

"Pressure-based" methods for the incompressible Navier-Stokes equations include SIMPLE [ ] and its variants, SIMPLEC [ ], SIMPLER [ ], and PISO [ ]. These methods are similar to projection methods in the sense that a non-mass-conserving velocity field is computed first and then corrected to satisfy continuity. However, they are not implicit in two steps, because the nonlinear convection terms are linearized explicitly. Instead of a pressure-Poisson equation, an approximate equation for the pressure or pressure-correction is derived by manipulating the discrete forms of the momentum and continuity equations. A few iterations of a suitable relaxation method are used to obtain a partial solution to the system of correction equations, and then new guesses for pressure and velocity are obtained by adding the corrections to the old values. This process is iterated until all three equations are satisfied. The iterations require under-relaxation because of the sequential coupling between
variables. Compared to projection methods, pressure-based methods are less implicit when used for time-dependent problems. However, they can be used to seek the steady state directly if desired.

Compared to a fully coupled strategy, the sequential pressure-based approach typically has slower convergence and less robustness with respect to Reynolds number. However, the sequential approach has the important advantage that additional complexities, for example chemical reaction, can be easily accommodated by simply adding species-balance equations to the stack. The overall run time increases, since each governing equation is solved independently, and the total storage requirements scale linearly with the number of equations solved. On the other hand, the computer time and storage requirements escalate faster in a fully coupled solution strategy. The typical way around this problem is to solve the continuity and momentum equations simultaneously, then solve any additional equations in a sequential fashion. Without knowing beforehand that the pressure-velocity coupling is the strongest among all the various flow variables, however, the extra computational effort spent in simultaneous solution of these equations is unwarranted.

There are other approaches for solving the incompressible Navier-Stokes equations, notably methods based on vorticity-streamfunction or velocity-vorticity formulations, but pressure-based methods are easier, especially with regard to boundary conditions and possible extension to 3-d domains. Furthermore, they have demonstrated considerable robustness in computing incompressible flows. A broad range of applications of pressure-based methods is demonstrated in [ ].

Parallel Computing

General background of parallel computers and their application to the numerical solution of partial differential equations is given in Hockney and Jesshope [ ]
and Ortega and Voigt [ ]. Fischer and Patera [ ] gave a recent review of parallel computing from the perspective of the fluid dynamics community. Their "indirect cost," the parallel run time, is of primary interest here. The "direct cost" of parallel computers and their components is another matter entirely. For the iteration-based numerical methods developed here, the parallel run time is the cost per iteration multiplied by the number of iterations. The latter is affected by the characteristics of the particular parallel computer used and the algorithms and implementations employed. Parallel computers come in all shapes and sizes, and it is becoming virtually impossible to give a thorough taxonomy. The background given here is limited to a description of the type of computer used in this work.

Data-Parallelism and SIMD Computers

Single-instruction stream/multiple-data stream (SIMD) computers include the connection machines manufactured by the Thinking Machines Corporation, the CM-1 and CM-2, and the MP-1 and MP-2 computers produced by the MasPar Corporation. These are massively-parallel machines consisting of a front-end computer and many processor-memory pairs, figuratively the "back end." The back-end processors are connected to each other by a "data network." The topology of the data network is a major feature of distributed-memory parallel computers.

The schematic in Figure [ ] gives the general idea of the SIMD layout. The program executes on the serial front-end computer. The front end triggers the synchronous execution of the "back-end" processors by sending "code blocks" simultaneously to all processors. Actually, the code blocks are sent to an intermediate "control processor." The control processor broadcasts the instructions contained
in the code block, one at a time, to the computing processors. These "front-end-to-processor" communications take time. This time is an overhead cost, not present when the program runs on a serial computer.

The operands of the instructions, the data, are distributed among the processors' memories. Each processor operates on its own locally-stored data. The "data" in grid-based numerical methods are the arrays (2-d in this case) of dependent variables, geometric quantities, and equation coefficients. Because there are usually plenty of grid points, and the same governing equations apply at each point, most CFD algorithms contain many operations to be performed at every grid point. Thus, this "data-parallel" approach is very natural to most CFD algorithms.

Many operations may be done independently on each grid point, but there is coupling between grid points in physically-derived problems. The data network enters the picture when an instruction involves another processor's data. Such "interprocessor" communication is another overhead cost of solving the problem on a parallel computer. For a given algorithm, the amount of interprocessor communication depends on the "data mapping," which refers to the partitioning of the arrays and the assignment of these "subgrids" to processors. For a given machine, the speed of the interprocessor communication depends on the pattern of communication (random or regular) and the distance between the processors (far away or nearest-neighbor).

The run time of a parallel program depends first on the amount of front-end and parallel computation in the algorithm, and the speeds of the front end and back end for doing these computations. In the programs developed here, the front-end computations are mainly the program control statements (IF blocks, DO loops, etc.). The front-end work is not sped up by parallel processing. The parallel computations are the useful work, and by design one hopes to have enough parallel computation to amortize both the front-end computation and the interprocessor and front-end-to-processor communication, which are the other factors that contribute to the parallel run time. A simple cost model along these lines is sketched below.
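The following Python sketch illustrates how these contributions might combine for one iteration on a square subgrid. It is a minimal model, not a measurement; every constant in it is a hypothetical placeholder, since the real values must be timed on a particular machine:

    # Rough per-iteration cost model for a data-parallel method. The data
    # mapping assigns each processor a square subgrid; interprocessor
    # communication scales with the subgrid perimeter (halo exchange).
    def time_per_iteration(n_grid, n_proc,
                           t_flop=1e-7,       # s per floating-point op
                           flops_per_point=50,
                           t_frontend=5e-4,   # control statements, code blocks
                           t_comm=2e-5):      # s per halo point exchanged
        subgrid = n_grid / n_proc             # local problem size
        compute = subgrid * flops_per_point * t_flop
        halo = 4 * subgrid ** 0.5 * t_comm    # perimeter of a square subgrid
        return t_frontend + compute + halo

    for n_proc in (16, 64, 256, 1024):
        print(n_proc, time_per_iteration(256 * 256, n_proc))

The fixed front-end term is why speedup saturates in a fixed-size experiment: as the processor count grows, the compute term shrinks but the front-end overhead does not.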


From this brief description it should be clear that SIMD computers have four characteristic speeds: the computation speed of the processors, the communication speed between processors, the speed of the front-end-to-processor communication (i.e., the speed at which code blocks are transferred), and the speed of the front end. These machine characteristics are not under the control of the programmer. However, the amount of computation and communication a program contains is determined by the programmer, because it depends on the algorithm selected and the algorithm's implementation (the choice of the data mapping, for example). Thus, the key to obtaining good performance from SIMD computers is to pick a suitable algorithm, "matched" in a sense to the architecture, and to develop an implementation which minimizes and localizes the interprocessor communication. Then, if there is enough parallel computation to amortize the serial content of the program and the communication overheads, the speedup obtained will be nearly the number of processors. The actual performance, because it depends on the computer, the algorithm, and the implementation, must be determined by numerical experiment on a program-by-program basis.

SIMD computers are restricted to exploiting data-parallelism, as opposed to the parallelism of the tasks in an algorithm. The task-parallel approach is more commonly used, for example, on the Cray C-90 supercomputer. Multiple-instruction stream/multiple-data stream (MIMD) computers, on the other hand, are composed of more-or-less autonomous processor-memory pairs. Examples include the Intel iPSC series and Paragon machines, workstation clusters, and the connection machine CM-5. However, in CFD the data-parallel approach is the prevalent
one, even on MIMD computers. The front-end/back-end programming paradigm is implemented by selecting one processor to initiate programs on the other processors, accumulate global results, and enforce synchronization when necessary, a strategy called single-program/multiple-data (SPMD) [ ]. The CM-5 has a special "control network" to provide automatic synchronization of the processors' execution, so a SIMD programming model can be supported as well as MIMD. SIMD is the manner in which the CM-5 has been used in the present work. The advantage to using the CM-5 in the SIMD mode is that the programmer does not have to explicitly specify message-passing. This simplification saves effort and increases the effective speed of communication, because certain time-consuming protocols for the data transfer can be eliminated.

Algorithms and Performance

The previous subsection discussed data-parallelism and SIMD computers, i.e., what parallel computing means in the present context and how it is carried out by SIMD-type computers. To develop programs for SIMD computers requires one to recognize that, unlike serial computers, parallel computers are not black boxes. In addition to the selection of an algorithm with ample data-parallelism, consideration must be given to the implementation of the algorithm in specific ways in order to achieve the desired benefits (speedups over serial computations).

The success of the choice of algorithm and the implementation on a particular computer is judged by the "speedup" (S) and "efficiency" (E) of the program. The communications mentioned above, front-end-to-processor and interprocessor, are essentially overhead costs associated with the SIMD computational model. They would not be present if the algorithm were implemented on a serial computer, or if such communications were infinitely fast. If the overhead cost were zero, a parallel program
executing on n_p processors would run n_p times faster than on a single processor, a speedup of n_p. This idealized case would also have a parallel efficiency of 1. The parallel efficiency E measures the actual speedup in comparison with the ideal. One is also interested in how speedup, efficiency, and the parallel run time (T_p) scale with problem size and with the number of processors used. The objective in using parallel computers is more than just obtaining a good speedup on a particular problem size and a particular number of processors. For parallel CFD, the goals are to either (1) reduce the time (the indirect cost [ ]) to solve problems of a given complexity, to satisfy the need for rapid turnaround times in design work, or (2) increase the complexity of problems which can be solved in a fixed amount of time. For the iteration-based numerical methods studied here there are two considerations, the cost per iteration and the number of iterations, respectively computational and numerical factors. The total run time is the product of the two.

Gustafson [ ] has presented fixed-size and scaled-size experiments whose results describe how the cost per iteration scales on a particular machine. In the fixed-size experiment, the efficiency is measured for a fixed problem size as processors are added. The hope is that the run time is halved when the number of processors is doubled. However, the run time obviously cannot be reduced indefinitely by adding more processors, because at some point the parallelism runs out; the limit to the attainable speedup is the number of grid points. In the scaled-size experiment, the problem size is increased along with the number of processors, to maintain a constant local problem size for each of the parallel processors. Care must be taken to make timings on a per-iteration basis if the number of iterations to reach the end of the computation increases with the problem size. The hope in such an experiment is that the program will maintain a certain high level of parallel efficiency E. The ability to maintain E in the scaled-size experiment indicates that the additional processors increased the speedup in a one-for-one trade. These definitions are summarized in the short sketch below.
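A minimal Python sketch of the performance measures just defined (the numbers in the example are hypothetical):

    # Speedup and parallel efficiency from measured run times.
    def speedup(t_serial, t_parallel):
        return t_serial / t_parallel          # S = T1 / Tp

    def parallel_efficiency(t_serial, t_parallel, n_proc):
        return speedup(t_serial, t_parallel) / n_proc   # E = S / n_proc

    # Fixed-size experiment: time the same problem as n_proc grows.
    # Scaled-size experiment: grow the problem with n_proc so the local
    # size is constant, and compare per-iteration times.
    print(parallel_efficiency(100.0, 1.0, 128))   # 0.78 on 128 processors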


Pressure-Based Multigrid Methods

Multigrid methods are a potential route to both computationally and numerically scalable programs. Their cost per iteration on parallel computers and their convergence rate are the subject of Chapters [ ]. For sufficiently smooth elliptic problems, the convergence rate of multigrid methods is independent of the problem size; their operation count is O(N). In practice, good convergence rates are maintained as the problem size increases for Navier-Stokes problems also, provided suitable multigrid components (the smoother, and the restriction and prolongation procedures) and multigrid techniques are employed. The standard V-cycle full-multigrid (FMG) algorithm has an almost optimal operation count, O(log N), for Poisson equations on parallel computers. Provided the multigrid algorithm is implemented efficiently, and that the cost per iteration scales well with the problem size and the number of processors, the multigrid approach seems to be a promising way to exploit the increased computational capabilities that parallel computers offer. The essential structure of one V-cycle is sketched below.
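Although the pressure-based smoothers developed in later chapters are far more elaborate, the control flow of a V-cycle itself is compact. The following self-contained 1-d Poisson sketch in Python (assuming numpy) uses damped Jacobi as a stand-in smoother; it is a model-problem illustration only:

    import numpy as np

    # Minimal 1-d Poisson V-cycle: -u'' = f on [0, 1], u(0) = u(1) = 0.
    def smooth(u, f, h, sweeps, w=0.67):
        for _ in range(sweeps):        # damped Jacobi
            u[1:-1] += w * 0.5 * (u[:-2] + u[2:] - 2.0 * u[1:-1]
                                  + h * h * f[1:-1])

    def residual(u, f, h):
        r = np.zeros_like(u)
        r[1:-1] = f[1:-1] + (u[:-2] - 2.0 * u[1:-1] + u[2:]) / (h * h)
        return r

    def v_cycle(u, f, h, nu1=2, nu2=2):
        if len(u) <= 3:                            # coarsest grid: exact solve
            u[1] = 0.5 * h * h * f[1]
            return u
        smooth(u, f, h, nu1)                       # pre-smoothing
        r = residual(u, f, h)
        rc = np.zeros((len(u) + 1) // 2)           # full-weighting restriction
        rc[1:-1] = 0.25 * r[1:-3:2] + 0.5 * r[2:-2:2] + 0.25 * r[3:-1:2]
        e = np.zeros_like(rc)
        v_cycle(e, rc, 2.0 * h, nu1, nu2)          # recurse on the error
        fine_e = np.zeros(len(u))                  # linear prolongation
        fine_e[::2] = e
        fine_e[1::2] = 0.5 * (e[:-1] + e[1:])
        u += fine_e                                # coarse-grid correction
        smooth(u, f, h, nu2)                       # post-smoothing
        return u

    N = 129
    x = np.linspace(0.0, 1.0, N)
    f = np.pi ** 2 * np.sin(np.pi * x)             # exact solution sin(pi*x)
    u = np.zeros(N)
    for _ in range(10):
        v_cycle(u, f, 1.0 / (N - 1))
    print(np.abs(u - np.sin(np.pi * x)).max())     # near the O(h^2) truncation error

The per-cycle error reduction is independent of N, which is the property the text refers to as numerical scalability.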


The pressure-based methods mentioned previously involve the solution of three systems of linear algebraic equations, one each for the two velocity components and one for the pressure, by standard iterative methods such as successive line under-relaxation (SLUR). Hence they inherit the convergence rate properties of these solvers, i.e., as the problem size grows the convergence rate deteriorates. With the single-grid techniques, therefore, it will be difficult to obtain reasonable turnaround times when the problem size is increased into the target range for parallel computers. Multigrid techniques for accelerating the convergence of pressure-correction methods should be pursued, and in fact they have been within the last five or so years [ ]. However, there are still many unsettled issues. The complexities affecting the convergence rate of single-grid calculations carry over to the multigrid framework, and are compounded there by the coupling between the evolving solutions on multiple grid levels and by the particular "grid-scheduling" used.

Linear multigrid methods have been applied to accelerate the convergence rate for the solution of the system of pressure or pressure-correction equations [ ]. However, the overall convergence rate does not significantly improve, because the velocity-pressure coupling is not addressed [ ]. Therefore the multigrid strategy should be applied on the "outer loop," with the role of the iterative relaxation method played by the numerical methods described above, e.g., the projection method or the pressure-correction method. Thus, the generic term "smoother" is prescribed, because it reflects the purpose of the solution of the coupled system of equations going on inside the multigrid cycle: to smooth the residual so that an accurate coarse-grid approximation of the fine-grid problem is possible. It is not true that a good solver, one with a fast convergence rate in single-grid computations, is necessarily a good smoother of the residual. It is therefore of interest to assess pressure-correction methods as potential multigrid smoothers. See Shyy and Sun [ ] for more information on the staggered-grid implementation of multigrid methods and some encouraging results.

Staggered grids require special techniques [ ] for the transfer of solutions and residuals between grid levels, since the positions of the variables on different levels do not correspond. However, they alleviate the "checkerboard" pressure stability problem [ ], and since techniques have already been established [ ], there is no
reason not to go this route, especially when cartesian grids are used, as in the present work.

Vanka [ ] has proposed a new numerical method as a smoother for multigrid computations, one which has inferior convergence properties as a single-grid method but apparently yields an effective multigrid method. A staggered-grid finite-volume discretization is employed. In Vanka's smoother, the velocity components and pressure of each control volume are updated simultaneously, so it is a coupled approach; but the coupling between control volumes is not taken into account, so the calculation of new velocities and pressures is explicit. This method is sometimes called the "locally-coupled explicit" or "block-explicit" pressure-based method. The control volumes are visited in lexicographic order in the original method, which is therefore aptly called BGS (block Gauss-Seidel). Line-variants have been developed to couple the flow variables in neighboring control volumes along lines (see [ ]).

Linden et al. [ ] gave a brief survey of multigrid methods for the steady-state incompressible Navier-Stokes equations. They argue, without analysis, that BGS should be preferred over the pressure-correction type methods, since the strong local coupling is likely to have better success smoothing the residual locally. On the other hand, Sivaloganathan and Shaw [ ] have found good smoothing properties for the pressure-correction approach, although the analysis was simplified considerably. Sockol [ ] has compared the point and line-variants of BGS with the pressure-correction methods on serial computers, using model problems with different physical characteristics. SIMPLE and BGS emerge as favorites in terms of robustness, with BGS preferred due to a lower cost per iteration. This preference may or may not carry over to SIMD parallel computers (see Chapter [ ] for comparison). Interesting applications of multigrid methods to incompressible Navier-Stokes flow problems can be found in [ ].
In terms of parallel implementations there are far fewer results, although this field is rapidly growing. Simon [ ] gives a recent cross-section of parallel CFD results. Parallel multigrid methods, not only in CFD but as a general technique for partial differential equations, have received much attention due to their desirable O(N) operation count on Poisson equations. However, it is apparently difficult to find or design parallel computers with ideal communication networks for multigrid [ ]. Consequently, implementations have been pursued on a variety of machines, to see what performance can be obtained with the present generation of parallel machines and to identify and understand the basic issues. Dendy et al. [ ] have recently described a multigrid method on the CM-2. However, to accommodate the data-parallel programming model, they had to dimension their array data on every grid level to the dimension extents of the finest-grid array data. This approach is very wasteful of storage. Consequently, the size of problems which can be solved is greatly reduced. Recently, an improved release of the compiler has enabled the storage problem to be circumvented, with some programming diligence (see Chapter [ ]). The implementation developed in this work is one of the first to take advantage of the new compiler feature.

In addition to parallel implementations of serial multigrid algorithms, several novel multigrid methods have been proposed for SIMD computers [ ]. Some of the algorithms are intrinsically parallel [ ], or have increased parallelism because they use multiple coarse grids, for example [ ]. These efforts and others have been recently reviewed [ ]. Most of the new ideas have not yet been developed for solving the incompressible Navier-Stokes equations.

One of the most prominent concerns addressed in the literature regarding parallel implementations of serial multigrid methods is the coarse grids. When the number of grid points is smaller than the number of processors, the parallelism is reduced to the number of grid points. This loss of parallelism may significantly affect the
parallel efficiency. One of the routes around the problem is to use multiple coarse grids [ ]. Another is to alter the grid-scheduling to avoid coarse grids. This approach can lead to computationally scalable implementations [ ], but may sacrifice the convergence rate. "Agglomeration" is an efficiency-increasing technique used in MIMD multigrid programs, which refers to the technique of duplicating the coarse-grid problem in each processor, so that computation proceeds independently (and redundantly). Such an approach can also be scalable [ ]. However, most attention so far has focused on parallel implementations of serial multigrid algorithms, in particular on assessing the importance of the coarse-grid smoothing problem for different machines, and on developing techniques to minimize the impact on the parallel efficiency.

Description of the Research

The dissertation is organized as follows. Chapter [ ] discusses the role of mass conservation in the numerical consistency of the single-grid SIMPLE method for open boundary problems, and explains the relevance of this issue to the convergence rate. In Chapter [ ], the single-grid pressure-correction method is implemented on the MP-1, CM-2, and CM-5 computers and its performance is analyzed. High parallel efficiencies are obtained at speeds and problem sizes well beyond the current performance of such algorithms on traditional vector supercomputers. Chapter [ ] develops a multigrid numerical method, for the purpose of accelerating the single-grid pressure-correction method and maintaining the accelerated convergence property independent of the problem size. The multigrid smoother, the intergrid transfer operators, and the stabilization strategy for Navier-Stokes computations are discussed. Chapter [ ] describes the actual implementation of the multigrid algorithm on the CM-5, its convergence rate, and its parallel run time and scalability. The convergence rate depends on the
flow problem and the coarse-grid discretization, among other factors. These factors are considered in the context of the "full-multigrid" (FMG) starting procedure, by which the initial guess on the fine grid is obtained. The cost of the FMG procedure is a concern for parallel computation [ ], and this issue is also addressed. The results indicate that the FMG procedure may influence the asymptotic convergence rate and the stability of the multigrid iterations. Concluding remarks in each chapter summarize the progress made and suggest avenues for further study.
Figure [ ]. Staggered-grid layout of dependent variables for a small but complete domain. Boundary values involved in the computation are shown. Representative u, v, and pressure boundary control volumes are shaded.
[Figure [ ]. Layout of the MP-1, CM-2, and CM-5 SIMD computers. The schematic shows the front end (CM-2 and MP-1) or partition manager (CM-5), which holds the serial code, control code, and scalar data and issues short blocks of parallel code; the sequencer (CM-2), array control unit (MP-1), or multiple SPARC nodes (CM-5); the processing elements, with array data partitioned among the processor memories; and the interprocessor communication networks: hypercube/"NEWS" (CM-2), 3-stage crossbar/"XNet" (MP-1), and fat tree (CM-5).]
CHAPTER [ ]
PRESSURE-CORRECTION METHODS

Finite-Volume Discretization on Staggered Grids

The formulation of the numerical method used in this work begins with the integration of the governing equations, Eq. [ ], over each of the control volumes in the computational domain. Figure [ ] shows a model computational domain with u, v, and p (cell-centered) control volumes shaded. The continuity equation is integrated over the p control volumes. Consider the discretization of the u-momentum equation for the control volume shown in Figure [ ], whose dimensions are Δx and Δy. The v control volumes are done exactly the same, except rotated 90°. Integration of Eq. [ ] over the shaded region is interpreted as follows for each of the terms:

    ∫∫ ∂(ρu)/∂t dx dy  ≈  (∂(ρu)/∂t)_P Δx Δy
    ∫∫ ∂(ρuu)/∂x dx dy  ≈  (ρu_e u_e − ρu_w u_w) Δy
    ∫∫ ∂(ρuv)/∂y dx dy  ≈  (ρu_n v_n − ρu_s v_s) Δx
    −∫∫ ∂p/∂x dx dy  ≈  −(p_e − p_w) Δy
    ∫∫ ∂/∂x (μ ∂u/∂x) dx dy  ≈  (μ ∂u/∂x|_e − μ ∂u/∂x|_w) Δy
    ∫∫ ∂/∂y (μ ∂u/∂y) dx dy  ≈  (μ ∂u/∂y|_n − μ ∂u/∂y|_s) Δx

The lowercase subscripts e, w, n, s indicate evaluation on the control-volume faces. By convention and the mean-value theorem, these are at the midpoint of the faces. The subscript P in Eq. [ ] indicates evaluation at the center of the control volume.
Because of the staggered grid, the required pressure values in Eq. [ ] are already located on the u control-volume faces. The pressure-gradient term is effectively a second-order central-difference approximation. With colocated grids, however, the control-volume face pressures are obtained by averaging the nearby pressures. This averaging results in the pressure at the cell center dropping out of the expression for the pressure gradient. The central difference in Eq. [ ] is effectively taken over a distance 2Δx on colocated grids. Thus staggered cartesian grids provide a more accurate approximation of the pressure-gradient term, since the difference stencil is smaller.

The next step is to approximate the terms which involve values at the control-volume faces. In Eq. [ ], one of the u_e and one of the u_w are replaced by an average of neighboring values,

    (ρu_e u_e − ρu_w u_w) Δy  ≈  [ρu_e (u_E + u_P)/2 − ρu_w (u_P + u_W)/2] Δy,

and in Eq. [ ], v_n and v_s are obtained by averaging nearby values,

    (ρu_n v_n − ρu_s v_s) Δx  ≈  [ρu_n (v_ne + v_nw)/2 − ρu_s (v_se + v_sw)/2] Δx.

The remaining face velocities in the convection terms, u_n, u_s, u_e, and u_w, are expressed as a certain combination of the nearby u values; which u values are involved, and what weighting they receive, is prescribed by the convection scheme. Some popular recirculating-flow convection schemes are described in [ ]. The control-volume face derivatives in the diffusion terms are evaluated by central differences:

    (μ ∂u/∂x|_e − μ ∂u/∂x|_w) Δy  ≈  [μ (u_E − u_P)/Δx − μ (u_P − u_W)/Δx] Δy
    (μ ∂u/∂y|_n − μ ∂u/∂y|_s) Δx  ≈  [μ (u_N − u_P)/Δy − μ (u_P − u_S)/Δy] Δx.
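Collecting these face contributions gives the neighbor coefficients directly. A minimal Python sketch, assuming central differencing, a uniform grid, constant ρ and μ, and with the unsteady and source contributions omitted:

    # Five-point u-momentum coefficients from face mass fluxes F* and
    # diffusion conductances D*, following the face averaging above.
    def u_momentum_coefficients(Fe, Fw, Fn, Fs, mu, dx, dy):
        De = Dw = mu * dy / dx          # diffusion conductances, e/w faces
        Dn = Ds = mu * dx / dy          # diffusion conductances, n/s faces
        aE = De - 0.5 * Fe              # central-difference convection
        aW = Dw + 0.5 * Fw
        aN = Dn - 0.5 * Fn
        aS = Ds + 0.5 * Fs
        aP = aE + aW + aN + aS + (Fe - Fw) + (Fn - Fs)
        return aP, aE, aW, aN, aS

    # Example: face fluxes F = rho * u_face * face_area.
    print(u_momentum_coefficients(0.2, 0.2, 0.1, 0.1, mu=0.01, dx=0.1, dy=0.1))

When discrete continuity holds, (Fe − Fw) + (Fn − Fs) vanishes and a_P reduces to the sum of the neighbor coefficients.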


The unsteady term in Eq. [ ] is approximated by a backward Euler scheme. All the terms are evaluated at the "new" time level, i.e., implicitly. Thus, the discretized momentum equations for each control volume can be put into the following general form:

    a_P u_P = a_E u_E + a_W u_W + a_N u_N + a_S u_S + b,

where b = (p_w − p_e)Δy + ρ u_P^n ΔxΔy/Δt, the superscript n indicating the previous time step. The coefficients a_P, a_S, etc. are comprised of the terms which multiply u_P, u_S, etc. in the discretized convection and diffusion terms.

The continuity equation is integrated over a pressure control volume:

    ∫∫ (∂(ρu)/∂x + ∂(ρv)/∂y) dx dy  ≈  (ρu_e − ρu_w) Δy + (ρv_n − ρv_s) Δx = 0.

Again the staggered grid is an advantage, because the normal velocity components on each control-volume face are already in position; there is no need for interpolation.

The SIMPLE Method

One SIMPLE iteration takes initial velocity and pressure fields (u*, v*, p*) and computes new guesses (u, v, p). The intermediate values are denoted with a tilde (ũ, ṽ). In the algorithm below, a_P^u(u*, v*), for example, means that the a_P coefficient in the u-momentum equation depends on u* and v*. The parameters ν_u, ν_v, and ν_c are the numbers of "inner" iterations to be taken for the u, v, and continuity equations, respectively. This notation will be clarified by the following discussion. The inner iteration count is indicated by the superscript enclosed in parentheses. Finally, ω_u, ω_v, and ω_c are the relaxation factors for the momentum and continuity equations.

SIMPLE(u*, v*, p*, ν_u, ν_v, ν_c, ω_u, ω_v, ω_c):

1. Compute the u coefficients a_k^u(u*, v*), k = P, E, W, N, S, and the source term b^u(u^n, p*)
for each discrete u-momentum equation,

    (a_P^u/ω_u) ũ_P = a_E^u ũ_E + a_W^u ũ_W + a_N^u ũ_N + a_S^u ũ_S + b^u + (1 − ω_u)(a_P^u/ω_u) u*_P.

Do ν_u iterations to obtain an approximate solution for ũ, starting with u* as the initial guess:

    ũ^(n) = G ũ^(n−1) + f^u,   n = 1, ..., ν_u.

2. Compute the v coefficients a_k^v(ũ, v*), k = P, E, W, N, S, and the source term b^v(u*, p*) for each discrete v-momentum equation,

    (a_P^v/ω_v) ṽ_P = a_E^v ṽ_E + a_W^v ṽ_W + a_N^v ṽ_N + a_S^v ṽ_S + b^v + (1 − ω_v)(a_P^v/ω_v) v*_P.

Do ν_v iterations to obtain an approximate solution for ṽ, starting with v* as the initial guess:

    ṽ^(n) = G ṽ^(n−1) + f^v,   n = 1, ..., ν_v.

3. Compute the p′ coefficients a_k^c, k = P, E, W, N, S, and the source term b^c for each discrete p′ equation,

    a_P^c p′_P = a_E^c p′_E + a_W^c p′_W + a_N^c p′_N + a_S^c p′_S + b^c.

Do ν_c iterations to obtain an approximate solution for p′, starting with zero as the initial guess:

    p′^(n) = G p′^(n−1) + f^c,   n = 1, ..., ν_c.

4. Correct ũ, ṽ, and p* at every interior grid point:

    u_P = ũ_P + u′_P,   v_P = ṽ_P + v′_P,   p_P = p*_P + ω_c p′_P.
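In outline, one outer iteration could be driven as in the following Python sketch. It is schematic only: the coefficient assembly, the SLUR inner relaxation, and the velocity-correction formulas (derived in the next section) are placeholders for the operations described in the text, not a working solver.

    # One SIMPLE outer iteration, schematically. `ops` bundles the
    # assembly routines, the inner SLUR relaxation, and the velocity
    # corrections; all are assumed helpers.
    def simple_outer_iteration(u, v, p, nu_u, nu_v, nu_c, w_c, ops):
        A, b = ops.assemble_u(u, v, p)            # lagged, linearized
        u_t = ops.slur(A, b, x0=u, sweeps=nu_u)   # u-tilde
        A, b = ops.assemble_v(u_t, v, p)
        v_t = ops.slur(A, b, x0=v, sweeps=nu_v)   # v-tilde
        A, b = ops.assemble_pc(u_t, v_t)          # b = net mass fluxes
        pc = ops.slur(A, b, x0=None, sweeps=nu_c) # p', starting from zero
        u = u_t + ops.du(pc)                      # velocity corrections
        v = v_t + ops.dv(pc)
        p = p + w_c * pc                          # under-relaxed pressure
        return u, v, p

Each outer iteration therefore costs three coefficient assemblies plus ν_u + ν_v + ν_c inner relaxation sweeps, which is the "cost per iteration" examined on SIMD machines in the next chapter.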


The algorithm is not as complicated as it looks. The important point to note is that the major tasks to be done are the computing of coefficients and the solving of the systems of equations. The symbol G indicates the iteration matrix of whatever type of relaxation is used on these inner iterations (SLUR in this case), and f is the corresponding source term.

In the SIMPLE pressure-correction method [ ], the averages in Eq. [ ] and Eq. [ ] are lagged, in order to linearize the resulting algebraic equations. The governing equations are solved sequentially. First the u-momentum equation coefficients are computed, and an updated u field is computed by solving the system of linear algebraic equations. The pressures in Eq. [ ] are lagged. The v-momentum equation is solved next, to update v. The continuity equation, recast in terms of pressure corrections, is then set up and solved. These pressure corrections are coupled to velocity corrections. Together they are designed to correct the velocity field so that it satisfies the continuity constraint, while simultaneously correcting the pressure field so that momentum conservation is maintained. The relationship between the velocity and pressure corrections is derived from the momentum equation, as described in the next section. The resulting system of equations is fully coupled, as one might expect knowing the elliptic nature of pressure in incompressible fluids, and is therefore expensive to solve. However, if the resulting system of pressure-correction equations were solved exactly, the divergence-free constraint and the momentum equations (with old values of u and v present in the nonlinear convection terms) would be satisfied. This approach would constitute an implicit method of time integration for the linearized equations. The time-step size would have to be limited to avoid stability problems caused by the linearization.

To reduce the computational cost, the SIMPLE prescription is to use an approximate relationship between the velocity and pressure corrections, hence the label
"semi-implicit." Variations on the original SIMPLE approximation have shown better convergence rates for simple flow problems, but in discretizations on curvilinear grids, and in other problems with significant contributions from source terms, the performance is no better than the original SIMPLE method (see the results in [ ]). The goal of satisfying the divergence-free constraint can still be attained if the system of pressure-correction equations is converged to strict tolerances, because the discrete continuity equations are still being solved. But satisfaction of the momentum equations cannot be maintained with the approximate relationship. Consequently, it is no longer desirable to solve the p′-system of equations to strict tolerances. Iterations are necessary to find the right velocities and pressures which satisfy all three equations. Furthermore, since the equation coefficients are changing from one iteration to the next, it is pointless to solve the momentum equations to strict tolerances. In practice, only a few iterations of a standard scheme such as successive line-underrelaxation (SLUR) are performed.

The single "outer" iteration outlined above is repeated many times, with under-relaxation to prevent the iterations from diverging. In this sense, a two-level iterative procedure is being employed. In the outer iterations, the momentum and pressure-correction equations are iteratively updated, based on the linearized coefficients and sources, and inner iterations are applied to partially solve the systems of linear algebraic equations.

The fact that only a few inner iterations are taken on each system of equations suggests that the asymptotic convergence rate of the iterative solver, which is the usual means of comparison between solvers, does not necessarily dictate the convergence rate of the outer iterative process. Braaten and Shyy [ ] have found that the convergence rate of the outer iterations actually decreases when the pressure-correction equation is solved to a much stricter tolerance than the momentum equations. They
concluded that the balance between the equations is important. Because u, v, and p′ are segregated, the overall convergence rate is strongly dependent on the particular flow problem, the grid distribution and quality, and the choice of relaxation parameters.

In contrast to projection methods, which are two-step but treat the convection terms explicitly (or, more recently, by solving a Riemann problem [ ]) and are therefore restricted from taking too large a time step, the pressure-correction approach is fully implicit, with no time-step limitation, but many iterations may be necessary. The projection methods are formalized as time-integration techniques for semi-discrete equations. SIMPLE is an iterative method for solving the discretized Navier-Stokes system of coupled nonlinear algebraic equations. But the details given above should make it clear that these techniques bear strong similarities; specifically, a single SIMPLE iteration would be a projection method if the system of pressure-correction equations were solved to strict tolerances at each iteration. It would be interesting to do some numerical comparisons between projection methods and pressure-correction methods to further clarify the similarity.

Discrete Formulation of the Pressure-Correction Equation

The discrete pressure-correction equation is obtained from the discrete momentum and continuity equations as follows. The velocity field which has been newly obtained by solving the momentum equations was denoted by (ũ, ṽ) earlier. The pressure field after the momentum equations are solved still has the initial value p*. So ũ and p* satisfy the u-momentum equation,

    a_P ũ_P = a_E ũ_E + a_W ũ_W + a_N ũ_N + a_S ũ_S + (p*_w − p*_e)Δy,

and correspondingly for the v-momentum equation. The corrected (continuity-satisfying) velocity field (u, v) satisfies the u-momentum equation with the corrected pressure
field p,

    a_P u_P = a_E u_E + a_W u_W + a_N u_N + a_S u_S + (p_w − p_e)Δy,

and likewise for the v-momentum equation. Additive corrections are assumed, i.e.,

    u = ũ + u′,   v = ṽ + v′,   p = p* + p′.

Subtracting Eq. [ ] from Eq. [ ] gives the desired relationship between pressure and the u corrections,

    a_P u′_P = Σ_k a_k u′_k + (p′_w − p′_e)Δy,   k = E, W, N, S,

with a similar expression for the v corrections. If Eq. [ ] is used as is, then the nearby velocity corrections in the summation need to be replaced by similar expressions involving pressure corrections. This requirement brings in more velocity corrections and more pressure corrections, and so on, leading to an equation which involves the pressure corrections at every grid point. The resulting system of equations would be expensive to solve. Thus, the summation term is dropped, in order to obtain a compact expression for the velocity correction in terms of pressure corrections. At convergence the pressure corrections (and therefore the velocity corrections) go to zero, so the precise form of the approximate pressure-velocity correction relationship does not figure in the final converged solution.

The discrete form of the pressure-correction equation follows by first substituting the simplified version of Eq. [ ] into Eq. [ ],

    u_P = ũ_P + u′_P = ũ_P + (Δy/a_P)(p′_w − p′_e),
and then substituting this into the continuity equation, Eq. [ ], with an analogous formula for v_P. The result is

    (ρΔy²/a_P(u_e)) (p′_P − p′_E) + (ρΔy²/a_P(u_w)) (p′_P − p′_W)
        + (ρΔx²/a_P(v_n)) (p′_P − p′_N) + (ρΔx²/a_P(v_s)) (p′_P − p′_S) = b,

where the source term b is

    b = ρũ_w Δy − ρũ_e Δy + ρṽ_s Δx − ρṽ_n Δx.

Recall that Eq. [ ] and Eq. [ ] are written for the pressure control volumes, so that there is some interpretation required. The term a_P(u_e) in Eq. [ ] is the appropriate a_P for the discretized u-momentum equation, Eq. [ ]. In other words, "u_P" in Eq. [ ] is actually u_e, u_w, v_n, or v_s in Eq. [ ], relative to the pressure control volumes on the staggered grid. Eq. [ ] can be rearranged into the same general form as Eq. [ ]. From Eq. [ ] it is apparent that the right-hand-side term is the net mass flux entering the control volume, which should be zero in incompressible flow.

In the formulation of the pressure-correction equation for boundary control volumes, one makes use of the fact that the normal velocity components on the boundaries are known, from either Dirichlet or Neumann boundary conditions, so no velocity correction is required there. Consequently, the formulation of Eq. [ ] for boundary control volumes does not require any prescription of boundary p′ values [ ] when velocity boundary conditions are prescribed. Without the summation from Eq. [ ], it is apparent that a zero velocity correction for the outflow-boundary u-velocity component is obtained when p′_w = p′_e; in effect, a Neumann boundary condition on pressure is implied. This boundary condition is appropriate for an incompressible fluid, because it is physically consistent with the governing equations, in which only the pressure gradient appears.
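Putting the interior formula above into code, a minimal Python sketch (uniform staggered grid and constant ρ assumed; the a_P values and tilde velocities are inputs from the momentum step):

    # Interior pressure-correction coefficients. aP_ue, aP_uw, aP_vn, aP_vs
    # are the momentum-equation a_P values at the four staggered face
    # velocities; ue, uw, vn, vs are the tilde velocities on those faces.
    def pprime_coefficients(rho, dx, dy, aP_ue, aP_uw, aP_vn, aP_vs,
                            ue, uw, vn, vs):
        aE = rho * dy * dy / aP_ue
        aW = rho * dy * dy / aP_uw
        aN = rho * dx * dx / aP_vn
        aS = rho * dx * dx / aP_vs
        aP = aE + aW + aN + aS                            # no extra diagonal term
        b = rho * (uw - ue) * dy + rho * (vs - vn) * dx   # net mass influx
        return aP, aE, aW, aN, aS, b

At a boundary where the normal velocity is specified, the corresponding neighbor coefficient is simply dropped, which is the "natural" treatment analyzed next.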


There is a unique pressure gradient, but the level is adjustable by any constant amount. If it happens that there is a pressure specified on the boundary, for example by Eq. [ ], then the correction there will be zero, providing a boundary condition for Eq. [ ]. Thus, it seems that there are no concerns over the specification of boundary conditions for the p′ equations.

Well-Posedness of the Pressure-Correction Equation

Analysis

To better understand the characteristics of the pressure-correction step in the SIMPLE procedure, consider a model 3x3 computational domain, so that nine algebraic equations for the pressure corrections are obtained. Number the control volumes as shown in Figure [ ]. Then the system of p′ equations can be written

    a_P^i p′_i − a_E^i p′_E − a_W^i p′_W − a_N^i p′_N − a_S^i p′_S = b^i,   i = 1, ..., 9,

with the neighbor terms absent where the corresponding face lies on the boundary, and with each b^i the net mass flux into control volume i, as in Eq. [ ]. The superscript designates the cell location, and the subscript designates the coefficient linking the point in question, P, and the neighboring node. The right-hand-side velocities are understood to be tilde quantities, as in Eq. [ ].

In finite-volume discretizations, fluxes are estimated at the control-volume faces, which are common to adjacent control volumes; so if the governing equations are cast in conservation-law form, as they are here, the discrete efflux of any quantity out of one control volume is guaranteed to be identical to the influx into its neighbor. There is no possibility of internal sources or sinks. In fact, this is what makes finite-volume discretizations preferable to finite-difference discretizations. The following
relationships, using the center control volume 5 in Figure [ ] as an example (numbering the cells row by row), follow from Eq. [ ] and the internal consistency of finite-volume discretizations:

    a_P = a_E + a_W + a_N + a_S,
    a_W^5 = a_E^4,   a_E^5 = a_W^6,   a_N^5 = a_S^8,   a_S^5 = a_N^2.

Eq. [ ] states that the coefficient matrix is pentadiagonal and diagonally dominant for the interior control volumes. Furthermore, when the natural boundary condition (zero velocity correction) is applied, the appropriate term in Eq. [ ] for the boundary under consideration does not appear, and therefore the pressure-correction equations for the boundary control volumes also satisfy Eq. [ ]. If a pressure boundary condition is applied, so that the corresponding pressure correction is zero, then one would set p′_E = 0 in Eq. [ ], for example, which would give a_W + a_N + a_S < a_P. Thus, either way, the entire coefficient matrix in Eq. [ ] is diagonally dominant. However, with the natural prescription for boundary treatment, no diagonal term exceeds the sum of its off-diagonal terms. Thus the system of equations, Eq. [ ], is linearly dependent with the natural (velocity) boundary conditions, which can be verified by adding the equations above. Because of Eq. [ ] and Eq. [ ], all terms on the left-hand side of Eq. [ ] identically cancel one another. At all interior control-volume interfaces the right-hand-side terms identically cancel, due to Eq. [ ], and the remaining source terms are simply the boundary mass fluxes. This cancellation is equivalent to a discrete statement of the divergence theorem,

    ∫_Ω ∇·u dΩ = ∫_∂Ω u·n d(∂Ω),
where Ω is the domain under consideration and n is the unit vector in the direction normal to its boundary ∂Ω. Due to the linear dependence of the left-hand side of Eq. [ ], the boundary mass fluxes must also sum to zero in order for the system of equations to be consistent. No solution exists if the linearly dependent system of equations is inconsistent. The situation can be likened to a steady-state heat conduction problem with source terms and adiabatic boundaries. Clearly, a steady-state solution only exists if the sum of the source terms is zero. If there is a net heat source, then the temperature inside the domain will simply rise without bound if an iterative solution strategy (quasi time-marching) is used. Likewise, the net mass source in flow problems with open boundaries must sum to zero for the pressure-correction equation to have a solution. In other words, global mass conservation is required, in discrete form, in order for a solution to exist.

The interesting point to note is that during the course of SIMPLE iterations, when the pressure-correction equation is executed, the velocity field does not usually conserve mass globally in flow problems with open boundaries, unless explicit measure is taken to enforce global mass conservation. The purpose of solving the pressure-correction equations is to drive the local mass sources to zero by suitable velocity corrections. But the pressure-correction equations, which are supposed to accomplish this purpose, do not have a solution unless the net mass source is already zero. For domains with closed boundaries, global mass conservation is obviously not an issue. Furthermore, this problem does not only show up when the initial guess is bad. In the backward-facing step flow discussed below, the initial guess is zero everywhere except for inflow, which obviously is the worst case as far as a net mass source is concerned (all inflow and no outflow).
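The solvability constraint can be seen concretely in a one-dimensional analogue (a minimal Python sketch, assuming numpy): a pure-Neumann Poisson matrix has rows that sum to zero, so it is singular, and the discrete system admits a solution only when the right-hand side also sums to zero.

    import numpy as np

    # 1-d analogue of the p'-system with natural (zero-correction)
    # boundaries: rows sum to zero, so the rank is n - 1, and A x = b is
    # solvable only if sum(b) = 0, the global mass-conservation constraint.
    n = 5
    A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    A[0, 0] -= 1.0        # drop the boundary link: natural treatment
    A[-1, -1] -= 1.0
    print(np.linalg.matrix_rank(A))      # n - 1: linearly dependent

    b = np.random.rand(n)
    b -= b.mean()                        # enforce the solvability constraint
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    print(np.allclose(A @ x, b))         # True once sum(b) = 0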


But even if one starts with a mass-conserving initial guess, during the course of iterations the outflow velocity boundary condition, which is necessary to solve the momentum equations, will reset the outflow so that the global mass-conservation constraint is violated.

Verification by Numerical Experiments

Support for the preceding discussion is provided by numerical simulation of two model problems, a lid-driven cavity flow and a backward-facing step flow. The configurations are shown, along with other relevant data, in Figure [ ]. Figure [ ] shows the outer-loop convergence paths for the lid-driven cavity flow and the backward-facing step flow, both at Re = [ ]. The quantities plotted in Figure [ ] are the log10 of the global residuals for each governing equation, obtained by summing up the local residuals, each of which is obtained by subtracting the left-hand side of the discretized equations from the right-hand side.

For the cavity flow there are no mass fluxes across the boundary, so, as mentioned earlier, the global mass-conservation condition is always satisfied when the algorithm reaches the point of solving the p′ system of equations. The residuals have dropped to 10^-[ ] after [ ] iterations, which is very rapid convergence, indicating that good pressure and velocity corrections are being obtained.

In the backward-facing step flow, however, the flowfield is very slow to develop, because no global mass-conservation measure is enforced. During the course of iterations, the mass flux into the domain from the left is not matched by an equal flux through the outflow boundary, and consequently the system of pressure-correction equations, which is supposed to produce a continuity-satisfying velocity field, does not have a solution. Correspondingly, one observes that the outer-loop convergence rate is about [ ] times worse than for cavity flow.

Also note that the momentum convergence path of the backward-facing step flow in Figure [ ] tends to follow the continuity equation, indicating that the pressure and
velocity fields are strongly coupled. The present flow problem bears some similarity to a fully-developed channel flow, in which the streamwise pressure gradient and cross-stream viscous diffusion are balanced, so the observation that pressure and velocity are strongly coupled is intuitively correct. Thus, the convergence path is controlled by the development of the pressure field. The slow convergence-rate problem is due to the inconsistency of the system of pressure-correction equations.

The inner-loop convergence path (the SLUR iterations) for the p′-system of equations must be examined to determine the manner in which the inner-loop inconsistency leads to poor outer-loop convergence rates. Table [ ] shows leading eigenvalues for successive line-underrelaxation iteration matrices of the p′-system of equations, at an intermediate iteration for which the outer-loop residuals had dropped to approximately 10^-[ ].

    Largest eigenvalues    Cavity Flow    Back-Step Flow
    lambda_1               [ ]            [ ]
    lambda_2               [ ]            [ ]

    Table [ ]. Largest eigenvalues of iteration matrices during an intermediate iteration, applying the successive line-underrelaxation iteration scheme to the p′-system of equations.

In both model problems the spectral radius is 1, because the p′-system of equations is linearly dependent. The next-largest eigenvalue is smaller in the cavity-flow computation than in the step-flow computation, which means a faster asymptotic convergence rate. However, the difference between the two is not large enough to produce the significant difference observed in the outer convergence path.

Figure [ ] shows the inner-loop residuals of the SLUR procedure during an intermediate iteration. The two momentum equations are well-conditioned and converge to a solution within [ ] iterations. In Figure [ ], for the cavity-flow case, the p′-equation
converges to zero, although this happens at a slower rate than the two momentum equations, because of the diffusive nature of the equation. In Figure [ ], for the back-step flow, the inner-loop residual is fixed on a nonzero residual, which is in fact the initial level of inconsistency in the system of equations, i.e., the global mass deficit. Given that the system of p′ equations which is being solved does not satisfy the global continuity constraint, however, the significance or utility of the p′-field that has been obtained is unknown.

In practice, the overall procedure may still be able to lead to a converged solution, as in the present case. It appears that the outflow extrapolation procedure, a zero-gradient treatment utilized here, can help induce the overall computation to converge to the right solution [ ]. Obviously, such a lack of satisfaction of global mass conservation is not desirable, in view of the slow convergence rate.

Further study suggests that the iterative solution to the inconsistent system of p′-equations converges on a unique pressure gradient, i.e., the difference between p′ values at any two points tends to a constant value, even though the p′-field does not in general satisfy any of the equations in the system. This relationship is shown in Figure [ ], in which the convergence of the difference in p′ between the lower-left and upper-right locations in the domain of the cavity and backward-facing step flows is plotted. Also shown is the value of p′ at the lower-left corner of the domain. For the cavity flow there is a solution to the system of p′-equations, and it is obtained by the SLUR technique in about [ ] iterations. Thus, all the pressure corrections, and the differences between them, tend towards constant values. In the backward-facing step flow, however, the individual pressure corrections increase linearly with the number of iterations, symptomatic of the inconsistency in the system of equations. The differences between p′ values approach a constant, however. The rate at which this
unique pressure-gradient field is obtained depends on the eigenvalues of the iteration matrix.

To resolve the inconsistency problem in the p′-system of equations, and thereby improve the outer-loop convergence rate in the backward-facing step flow, global mass conservation has been explicitly enforced during the sequential solution procedure. The procedure used is to compute the global mass deficit and then add a constant value to the outflow-boundary u-velocities to restore global mass conservation; a short sketch of this correction appears at the end of this subsection. Alternatively, corrections can be applied at every streamwise location, by considering control volumes whose boundaries are the inflow plane, the top and bottom walls of the channel, and the i = constant line at the specified streamwise location. The artificially-imposed convection has the effect of speeding up the development of the pressure field, whose normal development is diffusion-dominated. It is interesting to note that this physically-motivated approach is in essence an acceleration of convergence of the line-iterative method via the technique called additive correction [ ]. The strategy is to adjust the residual on the current line to zero by adding a constant to all the unknowns in the line. This procedure is done for every line, for every iteration, and generally produces improvement in the SLUR solution of a system of equations. Kelkar and Patankar [ ] have gone one step further, by applying additive corrections like an injection step of a multigrid scheme, a so-called block-correction technique. This technique is exploited to its fullest by Hutchinson and Raithby [ ]. Given a fine-grid solution and a coarse grid, discretized equations for the correction quantities on the coarse grid are obtained by summing the equations for each of the fine-grid cells within a given coarse-grid cell. A solution is then obtained (by direct methods in [ ]) which satisfies conservation of mass and momentum. The corrections are then distributed uniformly to the fine-grid cells which make up the coarse-grid
cell, and the iterative solution on the fine grid is resumed. However, experience has shown that the net effect of such a treatment for complex flow problems is limited.

Figure [ ] illustrates the improved convergence rate of the continuity equation for the inner and outer loops in the backward-facing step flow when conservation of mass is explicitly enforced. The inner-loop data is from the [ ]th outer-loop iteration. In Figure [ ], the cavity-flow convergence path is also shown, to facilitate the comparison. For the back-step, the overall convergence rate is improved by an order of magnitude, becoming slightly faster than the cavity-flow case. This result reflects the improved inner-loop performance, also shown in Figure [ ]. The improved performance for the pressure-correction equation comes at the expense of a slightly slower convergence rate for the momentum equations, because of the nonlinear convection term.

In short, it has been shown that a consistency condition, which is physically the requirement of global mass conservation, is critical for meaningful pressure corrections to be guaranteed. Given natural (velocity) boundary conditions, which lead to a linearly dependent system of pressure-correction equations, satisfaction of the global continuity constraint is the only way that a solution can exist, and therefore the only way that the inner-loop residuals can be driven to zero. For the model backward-facing step flow in a channel with length L = [ ] and a [ ]x[ ] mesh, the mass-conservation constraint is enforced globally, or at every streamwise location, by an additive-correction technique. This technique produces a [ ]-fold increase in the convergence rate. Physically, modifying the u velocities has the same effect as adding a convection term to the Poisson equation for the pressure field, which otherwise develops very slowly. A coarse grid size was used to demonstrate the need of enforcing global mass conservation. On a finer grid this issue becomes more critical. In the next section the solution-accuracy aspects related to mass conservation will be addressed, and the computations will be conducted with more adequate grid resolution.
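The global correction described above amounts to a one-line adjustment of the outflow profile. A minimal Python sketch, assuming numpy, uniform grid spacing, and unit density:

    import numpy as np

    # Restore global mass conservation by adding a constant to the
    # outflow u-velocities so the outflux matches the influx.
    def enforce_global_mass(u_in, u_out, dy):
        deficit = (np.sum(u_in) - np.sum(u_out)) * dy   # net mass source
        return u_out + deficit / (len(u_out) * dy)

    u_in = 1.5 * (1.0 - np.linspace(-1.0, 1.0, 20) ** 2)  # parabolic inflow
    u_out = np.zeros(20)                # worst case: all inflow, no outflow
    u_out = enforce_global_mass(u_in, u_out, dy=0.05)
    print((np.sum(u_in) - np.sum(u_out)) * 0.05)          # ~0 afterwards

The streamwise variant applies the same idea line by line, which is exactly the additive-correction acceleration mentioned above.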


Numerical Treatment of Outflow Boundaries

Continuing with the theme of well-posedness, the next numerical issue to be discussed is the choice of outflow boundary location. If fluid flows into the domain at a boundary where extrapolation is applied, then traditionally the problem is not considered to be well-posed, because the information which is being transported into the domain does not participate in the solution to the problem [ ]. Numerically, however, accurate solutions can be obtained using first-order extrapolation for the velocity components on a boundary where inflow is occurring [ ]. Here, open-boundary treatment for both steady and time-dependent flow problems is investigated further.

Figures [ ] and [ ] present streamfunction contours for a time-dependent flow problem, impulsively-started backward-facing step flow, using central differencing for the convection terms and first-order backward differencing in time. A parabolic inflow velocity profile is specified, while outflow-boundary velocities are obtained by first-order extrapolation. The Reynolds number, based on the average inflow velocity u_avg and the channel height H, is [ ]. The expansion ratio H/h is [ ], as in the model problem described in Figure [ ]. Time-accurate simulations were performed for two channel configurations, one with length L = [ ] ([ ]x[ ] mesh) and the other with length L = [ ] ([ ]x[ ] mesh). This flow problem has been the subject of some recent investigations focusing on open boundary conditions [ ].

For each time step, the SIMPLE algorithm is used to iteratively converge on a solution to the unsteady form of the governing equations, explicitly enforcing global conservation of mass during the course of iterations. In the present study, convergence was declared for a given time step when the global residuals had been reduced below 10^-[ ]. The time-step size was twice the viscous time scale in the y-direction, i.e.,
Δt = 2Δy²/ν. Thus, a fluid particle entering the domain at the average velocity travels [ ] units downstream during a time step.

Figure [ ] shows the formation of alternate bottom-top wall recirculation regions during startup, which gradually become thinner and elongated as they drift downstream. For the L = [ ] simulation (Figure [ ]), the transient flowfield has as many as four separation bubbles at T = [ ], the latter two of which are eventually washed out of the domain. In the L = [ ] simulation (Figure [ ]), the streamfunction plots are at times corresponding to those shown in Figure [ ]. Note that between T = [ ] and T = [ ], a secondary bottom-wall recirculation zone forms and drifts downstream, exiting without reflection through the downstream boundary. The time evolution of the flowfield for the two simulations is virtually identical. As can be observed, the facts that a shorter channel length was used in Figure [ ] and that a recirculating cell may go through the open boundary do not affect the solutions. Figure [ ] compares the computed time histories of the bottom-wall reattachment and top-wall separation points between the two computations. The two curves are perfectly overlapped.

The steady-state solutions for both channel configurations are also shown in Figures [ ] and [ ], respectively. Although the outflow boundary cuts the top-wall separation bubble approximately in half, there is no apparent difference between the computed streamfunction contours for x < [ ]. Furthermore, the convergence rate is not affected by the choice of outflow-boundary location. Figure [ ] compares the steady-state u and v velocity profiles at x = [ ] between the two computations. The accuracy of the computed results is assessed by comparison with an FEM numerical solution reported by Gartling [ ]. Figure [ ] establishes quantitatively that the two simulations differ negligibly over x < [ ] (the v profile differs on the order of 10^-[ ]). The velocity scale for the problem is [ ].
Neither v profile agrees perfectly with the solution obtained by Gartling, which may be attributed to the need for conducting further grid-refinement studies in the present work and/or Gartling's work.

Evidently, the location of the open boundary is not critical to obtaining a converged solution. This observation indicates that the downstream information is completely accounted for by the continuity equation. The correct pressure field can develop because the system of equations requires only the boundary mass-flux specification. If the global continuity constraint is satisfied, the pressure-correction equation is consistent, regardless of whether there is inflow or outflow at the boundary where extrapolation is applied. The numerical well-posedness of the open-boundary computation results in virtually identical flowfield development for the two time-dependent simulations, as well as steady-state solutions which agree with each other and follow closely Gartling's benchmark data [ ].

Concluding Remarks

In order for the SIMPLE pressure-correction method to be a well-posed numerical procedure for open-boundary problems, explicit steps must be taken to ensure the numerical consistency of the pressure-correction system of equations during the course of iterations. For the discrete problem with the natural boundary treatment for pressure, i.e., normal velocity specified at all boundaries, global mass conservation is the solvability constraint which must be satisfied in order that the system of p′-equations is consistent. Without a globally mass-conserving procedure enforced during each iterative step, the utility of the pressure corrections obtained at each iteration cannot be guaranteed. Overall convergence may still occur, albeit very slowly. In this regard, the poor outer-loop convergence behavior simply reflects the (poor) convergence rate of the inner-loop iterations of the SLUR technique. In general, the
inner-loop residual is fixed on the value of the initial level of inconsistency of the system of p′-equations, which physically is the global mass deficit. The convergence rate can be improved dramatically by explicitly enforcing mass conservation using an additive-correction technique. The results of numerical simulations of backward-facing step flow illustrate and support these conclusions.

The mass-conservation constraint also has implications for the issue of proper numerical treatment of open boundaries where inflow is occurring. Specifically, the conventional viewpoint, that inflow cannot occur at open boundaries without Dirichlet prescription of the inflow variables, can be rebutted on the grounds that the numerical problem is well-posed if the normal velocity components satisfy the continuity constraint.
Figure: Staggered-grid u control volume and the nearby variables which are involved in the discretization of the u-momentum equation.

Figure: Description of the two model problems, both at the same Reynolds number. The cavity is a square with the top wall sliding to the left, while the backward-facing step is a rectangular domain with an expansion ratio H/h and a parabolic inflow. The meshes and the velocity vectors are shown.

Figure: Model computational domain with numbered control volumes, for discussion of the pressure-correction equation. The staggered velocity components which refer to one of the control volumes are also indicated.

Figure: Outer-loop convergence paths for the lid-driven cavity and backward-facing step flows, using central differencing for the convection terms. Legend: p' equation, u-momentum equation, v-momentum equation.

Figure: Inner-loop convergence paths for the lid-driven cavity and backward-facing step flows. The vertical axis is the log10 of the ratio of the current residual to the initial residual. Legend: p' equation, u-momentum equation, v-momentum equation.

Figure: Variation of p' with inner-loop iterations, for the cavity flow and for the back-step flow. The dashed line is the value of p' at the lower-left control volume, while the solid line is the difference between p' at the lower-left and upper-right control volumes.

Figure: Outer-loop and inner-loop convergence paths of the p' equation for the backward-facing step model problem, with and without enforcing the continuity constraint: conservation of mass not enforced; continuity enforced globally; cavity flow.

Figure: Time-dependent flowfield for impulsively started backward-facing step flow on the shorter domain. Streamfunction contours are plotted at several instants during the evolution to the steady state, which is the last frame.

Figure: Time-dependent flowfield for impulsively started backward-facing step flow on the longer domain. Streamfunction contours are plotted at several instants during the evolution to the steady state, which is the last frame.

Figure: Time-dependent location of the bottom-wall reattachment point and the top-wall separation point for impulsively started backward-facing step flow. The curves for both channel-length computations are shown; they overlap identically.

Figure: Comparison of u- and v-component velocity profiles for the two backward-facing step simulations with central differencing. (o) indicates the grid-independent FEM solution obtained by Gartling. The v profile is scaled up for visibility.

CHAPTER
EFFICIENCY AND SCALABILITY ON SIMD COMPUTERS

The previous chapter considered an issue which was important because of its implications for the convergence rate in open-boundary problems. The present chapter shifts gears to focus on the cost and efficiency of pressure-correction methods on SIMD computers. As discussed earlier, the eventual goal is to understand the indirect cost, i.e. the parallel run time, of such methods on SIMD computers, and how this cost scales with the problem size and the number of processors. The run time is just the number of iterations multiplied by the cost per iteration. This chapter considers the cost per iteration.

Background

The discussion of SIMD computers given earlier indicated similarities in the general layout of such machines and in the factors which affect program performance. More detail is given in this section to better support the discussion of results.

Speedup and Efficiency

Speedup S is defined as

    S = T1 / Tp

where Tp is the measured run time using np processors. In the present work T1 is the run time of the parallel algorithm on one processor, including both serial and parallel computational work but excluding the front-end-to-processor and interprocessor communication. On a MIMD machine it is sometimes possible to actually time the program on one processor, but each SIMD processor is not usually a capable serial computer by itself, so T1 must be estimated.

The timing tools on the CM-2 and CM-5 are very sophisticated and can separately measure the time elapsed by the processors doing computation, doing various kinds of communication, and doing nothing (waiting for an instruction from the front end, which might be finishing up some serial work before it can send another code block). Thus it is possible to make a reasonable estimate for T1.

Parallel efficiency E is the ratio of the actual speedup to the ideal speedup np, and it reflects the overhead costs of doing the computation in parallel:

    E = S_actual / S_ideal = T1 / (np Tp)

If Tcomp is the time in seconds spent by each of the np processors doing useful work (computation), T_inter-proc is the time spent by the processors doing interprocessor communication, and T_fe-to-proc is the time elapsed in front-end-to-processor communication, then each of the processors is busy a total of Tcomp + T_inter-proc seconds, and the total run time on multiple processors is Tp = Tcomp + T_inter-proc + T_fe-to-proc seconds. Assuming that the parallelism is high, i.e. a high percentage of the virtual processors are not idle, a single processor would need np Tcomp time to do the same work. Thus T1 = np Tcomp, and E can be expressed as

    E = Tcomp / (Tcomp + (T_inter-proc + T_fe-to-proc)) = Tcomp / (Tcomp + Tcomm)

Since time is work divided by speed, E depends on both machine-related factors and implementational factors. High parallel efficiency is not necessarily a product of fast processors or fast communications considered alone; instead, what matter are the relative speeds, together with the relative amounts of communication and computation in the program. Consider the machine-related factors first.
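
A minimal sketch of this bookkeeping is given below; the timing values are invented placeholders, not measurements from this study.

    # Speedup and efficiency from the timing components defined above.
    def parallel_metrics(t_comp, t_inter_proc, t_fe_to_proc, n_p):
        t_p = t_comp + t_inter_proc + t_fe_to_proc  # parallel run time
        t_1 = n_p * t_comp                          # estimated one-processor time
        speedup = t_1 / t_p
        efficiency = speedup / n_p                  # equals t_comp / t_p
        return speedup, efficiency

    s, e = parallel_metrics(t_comp=10.0, t_inter_proc=2.0,
                            t_fe_to_proc=1.0, n_p=128)
    print(s, e)   # 98.46..., 0.769...: communication overhead caps E below 1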

Comparison Between the CM-2, CM-5, and MP-1

A CM-5 with vector units, a CM-2, and a MasPar MP-1 were used in the present study. The three machines differ widely in total memory, in aggregate peak speed in double precision, and in per-processor peak speed and memory bandwidth. Clearly these are computers with very different capabilities, even taking into account the fact that peak speeds, which are based only on the processor speed under ideal conditions, are not an accurate basis for comparison.

In the CM-2 and CM-5 the front-end computers are Sun workstations, while in the MP-1 the front end is a Decstation. From the efficiency expression above it is clear that the relative speeds of the front-end computer and the processors are important: their ratio determines the importance of the front-end-to-processor type of communication. On the CM-2 and MP-1 there is just one intermediate processor between the front end and the processors, called a sequencer or an array control unit, respectively, while on the CM-5 the SPARC microprocessors at the nodes have the role of sequencers. Each SPARC node broadcasts to four vector units (VUs), which actually do the work; the CM-5 therefore has four times as many independent processors as it has nodes. In the CM-2 the "processors" are more often called processing elements (PEs), because each one consists of a floating-point unit coupled with 32 bit-serial processors; each bit-serial processor is the memory manager for a single bit of a 32-bit word. The CM-2 therefore has far fewer independent processing elements than its nominal processor count suggests. This strange CM-2 processor design came about basically as a workaround introduced to improve the memory bandwidth for floating-point calculations. Compared to the CM-5 VUs, the CM-2 processors are about one-fourth as fast, with larger overhead costs associated with memory access and computation.

The MP-1 has 4-bit processors; compared to either the CM-2 or CM-5 processors, the MP-1 processors are very slow. The generic term "processing element" (PE), which is used occasionally in the discussion below, refers to one of the VUs, one of the CM-2 processing elements, or one of the MP-1 processors, whichever is appropriate.

For the present study the processors are either physically or logically imagined to be arranged as a 2d mesh, a layout that is well supported by the data networks of each of the computers. The data network of the CM-5 is a fat tree, which is similar to a binary tree except that the bandwidth stays constant above a certain height. One can expect considerably higher bandwidth for regular grid communication patterns, i.e. between nearest-neighbor SPARC nodes, than for random (global) communications; the randomly directed messages have to go farther up the tree, so they are slower. The CM-2 network, a hypercube, is completely different from the fat-tree network. The grid network on the CM-2 is called NEWS (North-East-West-South); it is a subset of the hypercube connections, selected at run time. The MP-1 has two networks: regular communications use XNet, which connects each processor to its eight nearest neighbors, and random communications use a multi-stage crossbar.

To summarize the relative speeds of these three SIMD computers, it is sufficient for the present study to observe that the MP-1 has very fast nearest-neighbor communication compared to its computational speed, while the exact opposite is true for the CM-5. The ratio of nearest-neighbor communication speed to computation speed is smaller still for the CM-2 than for the CM-5. Again from the efficiency expression, one expects that these differences will be an important factor influencing the parallel efficiency.

Hierarchical and Cut-and-Stack Data Mappings

When there are more array elements (grid points) than processors, each processor handles multiple grid points. Which grid points are assigned to which processors is determined by the "data mapping," also called the data layout. Each processor repeats any instruction the appropriate number of times to handle all the array elements which have been assigned to it. A useful idealization for SIMD machines, however, is to pretend there are always as many processors as grid points. Then one speaks of the "virtual processor" ratio (VP), which is the number of array elements assigned to each physical processor.

The way the data arrays are partitioned and mapped to the processors is a main concern in developing a parallel implementation, because the layout of the data determines the amount of communication in a given program. When the virtual processor ratio is 1 there are an equal number of processors and array elements, and the mapping is just one-to-one. When VP exceeds 1, the mapping of data to processors is either "hierarchical" in CM Fortran or "cut-and-stack" in MP Fortran. These mappings are also termed "block" and "cyclic," respectively, in the emerging High Performance Fortran standard. The relative merits of these different approaches have not been completely explored yet.

In cut-and-stack mapping, nearest-neighbor array elements are mapped to nearest-neighbor physical processors. When the number of array elements exceeds the number of processors, additional memory layers are created; VP is just the number of memory layers. In the general case, nearest-neighbor virtual processors (i.e. array elements) will not be mapped to the same physical processor. Thus the cost of a nearest-neighbor communication of distance one will be proportional to VP, since the nearest neighbors of each virtual processor will be on a different physical processor. In the hierarchical mapping, contiguous pieces of an array ("virtual subgrids") are mapped to each processor.

PDSSHG WR HDFK SURFHVVRU 7KH fVXEJULG VL]Hf IRU WKH KLHUDUFKLFDO PDSSLQJ LV V\Qn RQ\PRXV ZLWK 93 7KH GLVWLQFWLRQ EHWZHHQ KLHUDUFKLFDO DQG FXWDQGVWDFN PDSSLQJ LV FODULILHG E\ )LJXUH ,Q KLHUDUFKLFDO PDSSLQJ IRU 9 3 HDFK YLUWXDO SURFHVVRU KDV QHDUHVWQHLJKERUV LQ WKH VDPH YLUWXDO VXEJULG WKDW LV RQ WKH VDPH SK\VLFDO SURFHVVRU 7KXV IRU KLHUn DUFKLFDO PDSSLQJ RQ WKH &0 LQWHUSURFHVVRU FRPPXQLFDWLRQ EUHDNV GRZQ LQWR WZR W\SHV ZLWK GLIIHUHQW VSHHGVff§RQSURFHVVRU DQG RIISURFHVVRU 2IISURFHVVRU FRPPXn QLFDWLRQ RQ WKH &0 KDV WKH 1(:6 VSHHG JLYHQ DERYH ZKLOH RQSURFHVVRU FRPPXQLn FDWLRQ LV VRPHZKDW IDVWHU EHFDXVH LW LV HVVHQWLDOO\ MXVW D PHPRU\ RSHUDWLRQ $ PRUH GHWDLOHG SUHVHQWDWLRQ DQG PRGHOOLQJ RI QHDUHVWQHLJKERU FRPPXQLFDWLRQ FRVWV IRU WKH KLHUDUFKLFDO PDSSLQJ RQ WKH &0 LV JLYHQ LQ >@ 7KH NH\ LGHD LV WKDW ZLWK KLHUDUn FKLFDO PDSSLQJ RQ WKH &0 WKH UHODWLYH DPRXQW RI RQSURFHVVRU DQG RIISURFHVVRU FRPPXQLFDWLRQ LV WKH DUHD WR SHULPHWHU UDWLR RI WKH YLUWXDO VXEJULG )RU WKH &0 WKHUH DUH WKUHH W\SHV RI LQWHUSURFHVVRU FRPPXQLFDWLRQ f EHWZHHQ YLUWXDO SURFHVVRUV RQ WKH VDPH SURFHVVRU WKDW LV WKH VDPH 98f f EHWZHHQ YLUWXDO SURFHVVRUV RQ GLIIHUHQW 98V EXW RQ WKH VDPH 63$5& QRGH DQG f EHWZHHQ YLUWXDO SURFHVVRUV RQ GLIIHUHQW 63$5& QRGHV %HWZHHQ GLIIHUHQW 63$5& QRGHV QXPEHU f WKH VSHHG LV 0%\WHVV DV PHQWLRQHG DERYH 2Q WKH VDPH 98 WKH VSHHG LV *%\WHVV 7KH ODWWHU QXPEHU LV MXVW WKH DJJUHJDWH PHPRU\ EDQGZLGWK RI WKH QRGH &0f 7KXV DOWKRXJK RIISURFHVVRU 1(:6 FRPPXQLFDWLRQ LV VORZ FRPSDUHG WR FRPSXWDWLRQ RQ WKH &0 DQG &0 JRRG HIILFLHQFLHV FDQ VWLOO EH DFKLHYHG DV D FRQVHTXHQFH RI WKH GDWD PDSSLQJ ZKLFK DOORZV WKH PDMRULW\ RI FRPPXQLFDWLRQ WR EH RI WKH RQSURFHVVRU W\SH
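
The distinction can be made concrete with a small sketch. The code below is illustrative only (a 1-d array and invented helper names); it computes which physical processor owns each array element under each mapping:

    # Owner of 1-d array element i under the two data mappings, for
    # n_procs processors and n_elements = VP * n_procs array elements.
    def owner_hierarchical(i, n_elements, n_procs):
        vp = n_elements // n_procs        # subgrid size
        return i // vp                    # contiguous "block" pieces

    def owner_cut_and_stack(i, n_elements, n_procs):
        return i % n_procs                # "cyclic" memory layers

    N, P = 16, 4
    print([owner_hierarchical(i, N, P) for i in range(N)])
    # [0,0,0,0,1,1,1,1,2,2,2,2,3,3,3,3]: most neighbors are on-processor
    print([owner_cut_and_stack(i, N, P) for i in range(N)])
    # [0,1,2,3,0,1,2,3,...]: every nearest neighbor is off-processor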

Implementational Considerations

The cost per SIMPLE iteration depends on the choice of relaxation method (solver) for the systems of equations, the numbers of inner iterations (nu_u, nu_v, and nu_c), the computation of coefficients for each system of equations, the correction step, and the convergence checking and serial work done in program control. The pressure-correction equation, since it is not under-relaxed, typically needs to be given more iterations than the momentum equations, and consequently most of the effort is expended during this step of the SIMPLE method. This is another reason why the convergence rate of the p' equations, discussed in the previous chapter, is important. Typically nu_u and nu_v are the same, and nu_c is several times larger.

In developing a parallel implementation of the SIMPLE algorithm, the first consideration is the method of solving the u, v, and p' systems of equations. For serial computations, successive line under-relaxation using the tridiagonal matrix algorithm (TDMA, whose operation count is O(N)) is a good choice, because the cost per iteration is optimal and there is long-distance coupling between flow variables along lines, which is effective in promoting convergence of the outer iterations. The TDMA, however, is intrinsically serial. For parallel computations a parallel tridiagonal solver must be used (parallel cyclic reduction in the present work). In this case the cost per iteration depends not only on the computational workload (O(N log N)) but also on the amount of communication generated by the implementation on a particular machine. For these reasons, timing comparisons are made for several implementations of both point- and line-Jacobi solvers used during the inner iterations of the SIMPLE algorithm.
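
For reference, a minimal sketch of the serial TDMA (Thomas algorithm) follows. The recurrences make the data dependence explicit: each forward-elimination step needs the result of the previous one, which is why the algorithm does not parallelize along a line and a method such as cyclic reduction is used instead.

    # Serial TDMA (Thomas algorithm): O(N) work, intrinsically sequential.
    import numpy as np

    def tdma(a, b, c, d):
        # a: sub-diagonal (a[0] unused), b: diagonal,
        # c: super-diagonal (c[-1] unused), d: right-hand side.
        n = len(d)
        cp, dp = np.zeros(n), np.zeros(n)
        cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
        for i in range(1, n):                 # forward elimination
            m = b[i] - a[i] * cp[i - 1]
            cp[i] = c[i] / m
            dp[i] = (d[i] - a[i] * dp[i - 1]) / m
        x = np.zeros(n)
        x[-1] = dp[-1]
        for i in range(n - 2, -1, -1):        # back substitution
            x[i] = dp[i] - cp[i] * x[i + 1]
        return x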

Generally, point-Jacobi iteration is not sufficiently effective for complex flow problems. However, as part of a multigrid strategy, good convergence rates can be obtained (see the following chapters). Furthermore, because it involves only the fastest type of interprocessor communication, that which occurs between nearest-neighbor processors, point-Jacobi iteration provides an upper bound for parallel efficiency against which other solvers can be compared.

The second consideration is the treatment of boundary computations. In the present implementation, the coefficients and source terms for the boundary control volumes are computed using the interior control volume formula and mask arrays. Oran et al. have called this trick the uniform boundary condition approach. All coefficients can be computed simultaneously. The problem with computing the boundary coefficients separately is that some of the processors are idle, which decreases E. For the CM-5, which is "synchronized MIMD" instead of strictly SIMD, there exists limited capability to handle both boundary and interior coefficients simultaneously without formulating a single all-inclusive expression. However, this capability cannot be utilized if either the boundary or interior formulas involve interprocessor communication, which is the case here. As an example of the uniform approach, consider the source terms for the north-boundary u control volumes, which are computed by the formula

    b = a_N u_N + (p_w - p_e) dy

Recall that a_N represents the discretized convective and diffusive flux terms and u_N is the boundary value; in the pressure-gradient term, dy is the vertical dimension of the u control volume and p_w, p_e are the west and east u-control-volume face pressures on the staggered grid. Similar modifications show up in the south, east, and west boundary u control volume source terms.

To compute the boundary and interior source terms simultaneously, the following implementation is used:

    b = a_boundary u_boundary + (p_w - p_e) dy

where

    u_boundary = u_N I_N + u_S I_S + u_E I_E + u_W I_W

and

    a_boundary = a_N I_N + a_S I_S + a_E I_E + a_W I_W

I_N, I_S, I_E, and I_W are the mask arrays, which have the value 1 for the respective boundary control volumes and 0 everywhere else. They are initialized once at the beginning of the program. Then, every iteration, there are four extra nearest-neighbor communications. A comparison of the uniform approach with an implementation that treats each boundary separately is discussed in the results.
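
A data-parallel sketch of this masking idea is given below in NumPy form; the array names and random placeholder coefficients are illustrative stand-ins for the discretization. Every control volume is updated by one uniform expression, so no processors sit idle.

    # Uniform boundary treatment sketch: one expression covers interior
    # and boundary control volumes. Coefficients are random placeholders.
    import numpy as np

    ni, nj = 8, 8
    I_N = np.zeros((ni, nj)); I_N[0, :]  = 1.0   # north-boundary mask
    I_S = np.zeros((ni, nj)); I_S[-1, :] = 1.0   # south-boundary mask
    I_E = np.zeros((ni, nj)); I_E[:, -1] = 1.0   # east-boundary mask
    I_W = np.zeros((ni, nj)); I_W[:, 0]  = 1.0   # west-boundary mask

    rng = np.random.default_rng(0)
    a_N, a_S, a_E, a_W = (rng.random((ni, nj)) for _ in range(4))
    u_N, u_S, u_E, u_W = (rng.random((ni, nj)) for _ in range(4))
    p_w, p_e = rng.random((ni, nj)), rng.random((ni, nj))
    dy = 1.0 / nj

    a_b = a_N * I_N + a_S * I_S + a_E * I_E + a_W * I_W
    u_b = u_N * I_N + u_S * I_S + u_E * I_E + u_W * I_W
    b = a_b * u_b + (p_w - p_e) * dy     # computed for all CVs at once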

Numerical Experiments

The SIMPLE algorithm for two-dimensional laminar flow has been timed on a range of problem sizes, which on the CM-5 covers a wide range of virtual processor ratios. The convection terms are central-differenced. A fixed number of outer iterations is timed, using the lid-driven cavity flow as the model flow problem. The timings were made with the "Prism" timing utility on the CM-2 and CM-5 and the "dpuTimer" routines on the MP-1. These utilities can be inaccurate if the front-end machine is heavily loaded, which was the case with the CM-2; thus on the CM-2 all cases were timed three times and the fastest times were used, as recommended by Thinking Machines. Prism times every code block and accumulates totals in several categories, including computation time for the nodes (Tcomp), "NEWS" communication (Tnews), and irregular-pattern "SEND" communication. Also, it is possible to infer T_fe-to-proc from the difference between the processor busy time and the elapsed time. In the results, Tcomm is the sum of the "NEWS" and "SEND" interprocessor times; the front-end-to-processor communication is kept separate. Additionally, the component tasks of the algorithm have been timed, namely the coefficient computations (Tcoeff), the solver (Tsolve), and the velocity-correction and convergence-checking parts.

Efficiency of Point and Line Solvers for the Inner Iterations

The first timing figure, based on timings made on the CM-5, illustrates the difference in parallel efficiency for SIMPLE using point-Jacobi and line-Jacobi iterative solvers. E is computed from the efficiency expression by timing Tcomm and Tcomp, introduced above. Problem size is given in terms of the virtual processor ratio VP, previously defined. There are two implementations, each with a different data layout, for point-Jacobi iteration. One ignores the distinction between virtual processors which are on the same physical processor and those which are on different physical processors: each array element is treated as if it were a processor. Thus interprocessor communication is generated whenever data is to be moved, even if the two virtual processors doing the communication happen to be on the same physical processor. To be more precise, a call to the run-time communication library is generated for every array element. Then those array elements (virtual processors) which actually reside on the same physical processor are identified and the communication is done as a memory operation, but the unnecessary overhead of calling the library is incurred. Obviously there is an inefficiency associated with pretending that there are as many processors as array elements, but the tradeoff is that this is the most straightforward, and indeed the intended, way to do the programming. This approach is labelled "NEWS" in the figure.

The other implementation is labelled "on-VU," to indicate that interprocessor communication between virtual processors on the same physical processor is eliminated; the programming is, in a sense, done "on-VU." To indicate to the compiler the different layouts of the data which are needed, the programmer inserts compiler directives. For the "NEWS" version the arrays are laid out as in this example for a square grid on a rectangular processor layout on the CM-5 (the block and processor extents are fixed at declaration time):

      REAL A(NX, NY)
CMF$  LAYOUT A(:BLOCK :PROCS, :BLOCK :PROCS)

The subgrid is then a rectangular block whose size is the virtual processor ratio VP. When shifting all the data to their east nearest neighbor, for example, by far the large majority of transfers are on-VU and could be done without real interprocessor communication. But there are only two dimensions in A, so data-parallel program statements cannot specifically access certain array elements, i.e. the ones on the perimeter of the subgrid. Thus it is not possible with the "NEWS" layout to treat interior virtual processors differently from those on the perimeter, and consequently data shifts between the interior virtual processors generate interprocessor communication even though it is unnecessary.

In the "on-VU" version a different data layout is used, which makes the boundary between physical processors explicit to the compiler. The arrays are laid out without virtual processors:

CMF$  LAYOUT A(:SERIAL, :SERIAL, :BLOCK=1 :PROCS, :BLOCK=1 :PROCS)

The declaration must be changed accordingly to a four-dimensional A. Normally it is inconvenient to work with the arrays in this manner.

Thus the approach taken here is to use an "array alias" of A. In other words, this is an equivalence mechanism for the data-parallel arrays (similar to the Fortran EQUIVALENCE concept) which equates the two-dimensional A with the four-dimensional A under the different LAYOUTs given above. It is the alias, instead of the original A, which is used in the on-VU point-Jacobi implementation. Only in the solver is the "on-VU" layout used; everywhere else the more convenient "NEWS" layout is used. The actual mechanism by which the equivalencing of distributed arrays can be accomplished is not too difficult to understand. The front-end computer stores "array descriptors" which contain the array layout, the starting address in processor memory, and other information. The actual layout in each processor's memory is linear and doesn't change, but multiple array descriptors can be generated for the same data. This descriptor multiplicity is what array aliasing accomplishes. With the "on-VU" programming style the compiler does not generate communication when the shift of data is along a SERIAL axis. Thus interprocessor communication is generated only when the virtual processors involved are on different physical processors, i.e. only when it is truly necessary. The difference in the amount of communication is substantial for large subgrid sizes.

For both the "NEWS" and the "on-VU" curves, E is initially very low, but as VP increases E rises until it reaches a peak value, higher for the "on-VU" version than for the "NEWS" version. The trend is due to the amortization of the front-end-to-processor and off-VU (between VUs which are physically under the control of different SPARC nodes) communication. The former contributes a constant overhead cost per Jacobi iteration to Tcomm, while the latter has a square-root dependence on VP. However, it does not appear from the timings that these two terms' effects can be distinguished from one another.

For large VP, the CM-5 spends only part of the time computing with the implementation which uses the "NEWS" version of point-Jacobi, with the remainder split evenly between front-end-to-processor communication and on-VU interprocessor communication.

It appears that the "on-VU" version has more front-end-to-processor communication per iteration, so there is in effect a price of additional front-end-to-processor communication to pay in exchange for less interprocessor communication. Consequently it takes a larger VP to reach peak efficiency than with the "NEWS" version. Beyond that point, however, E is noticeably higher than for the "NEWS" version, because the on-VU communication has been replaced by straight memory operations. The observed difference would be even greater if a larger part of the total parallel run time were spent in the solver. For the large VP cases, approximately equal time was spent computing coefficients and solving the systems of equations, with "typical" numbers of inner iterations for the momentum and p' equations. It appears, then, that the advantage of the "on-VU" version over the "NEWS" version of point-Jacobi relaxation within the SIMPLE algorithm is a modest but worthwhile gain in E for large problem sizes.

Red-black analogues to the "NEWS" and "on-VU" versions of point-Jacobi iteration have also been tested. Red-black point iteration done in the "on-VU" manner does not generate any additional front-end-to-processor communication and therefore takes almost an identical amount of time as point-Jacobi. Thus red-black point iterations are recommended when the "on-VU" layout is used, due to their improved convergence rate. However, with the "NEWS" layout, red-black point iteration generates two code blocks instead of one and halves the amount of computation per code block. This results in a substantial increase in run time. Thus, if using "NEWS" layouts, red-black point iteration is not cost-effective.
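
To make the distinction concrete, here is a small NumPy sketch, for a model 5-point Poisson problem rather than the actual momentum or p' systems, of one point-Jacobi sweep and one red-black sweep. The red-black update consists of two dependent half-sweeps, which is why it compiles to two code blocks on a SIMD machine:

    # Point-Jacobi and red-black sweeps for a 2-d 5-point Laplacian
    # model problem (Dirichlet values held in the border cells).
    import numpy as np

    def jacobi_sweep(phi, f, h):
        new = phi.copy()
        new[1:-1, 1:-1] = 0.25 * (phi[2:, 1:-1] + phi[:-2, 1:-1]
                                  + phi[1:-1, 2:] + phi[1:-1, :-2]
                                  - h * h * f[1:-1, 1:-1])
        return new                       # one fully parallel update

    def red_black_sweep(phi, f, h):
        i, j = np.indices(phi.shape)
        interior = np.zeros(phi.shape, dtype=bool)
        interior[1:-1, 1:-1] = True
        for color in (0, 1):             # two half-sweeps = two code blocks
            mask = interior & ((i + j) % 2 == color)
            upd = 0.25 * (np.roll(phi, 1, 0) + np.roll(phi, -1, 0)
                          + np.roll(phi, 1, 1) + np.roll(phi, -1, 1)
                          - h * h * f)
            phi = np.where(mask, upd, phi)
        return phi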

There are also two implementations of line-Jacobi iteration. In both, one inner iteration consists of forming a tridiagonal system of equations for the unknowns in each vertical line by moving the east and west terms to the right-hand side, solving the multiple systems of equations simultaneously, and then repeating the procedure for the horizontal lines.

In the first version, parallel cyclic reduction is used to solve the multiple tridiagonal systems of equations. This involves combining equations to decouple the system into even and odd equations. The result is two tridiagonal systems of equations, each half the size of the original. The reduction step is repeated log2 N times, where N is the number of unknowns in each line; thus the computational operation count is O(N log N). Interprocessor communication occurs for every unknown at every step, so the communication operation count is also O(N log N). However, the distance for communication increases by a factor of 2 at every step of the reduction: for the first step nearest-neighbor communication occurs, while for the second step the distance is 2, then 4, and so on. Thus the net communication speed is slower than for the nearest-neighbor type of communication. The timings confirm this argument: E peaks well below the point-Jacobi value, meaning that at large VP interprocessor communication takes as much time as computation with the line-Jacobi solver using cyclic reduction.
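
The sketch below shows the cyclic-reduction idea for a single tridiagonal system. It is a serial NumPy stand-in for the data-parallel version: in the actual implementation the loop over i is a fully parallel array operation and every line of the grid is reduced simultaneously. At each step, equation i is combined with equations i-s and i+s, the stride s doubling each time, until the system is diagonal:

    # Parallel cyclic reduction (PCR) sketch for one tridiagonal system.
    # a: sub-diagonal (a[0] = 0), b: diagonal, c: super-diagonal
    # (c[-1] = 0), d: right-hand side; N a power of two.
    import numpy as np

    def pcr(a, b, c, d):
        a, b, c, d = (np.array(v, dtype=float) for v in (a, b, c, d))
        n, s = len(d), 1
        while s < n:
            an, bn, cn, dn = map(np.copy, (a, b, c, d))
            for i in range(n):           # data-parallel in the real code
                lo, hi = i - s, i + s
                al = -a[i] / b[lo] if lo >= 0 else 0.0
                be = -c[i] / b[hi] if hi < n else 0.0
                bn[i] = b[i] + (al * c[lo] if lo >= 0 else 0.0) \
                             + (be * a[hi] if hi < n else 0.0)
                dn[i] = d[i] + (al * d[lo] if lo >= 0 else 0.0) \
                             + (be * d[hi] if hi < n else 0.0)
                an[i] = al * a[lo] if lo >= 0 else 0.0
                cn[i] = be * c[hi] if hi < n else 0.0
            a, b, c, d = an, bn, cn, dn
            s *= 2                       # communication distance doubles
        return d / b                     # equations are now decoupled

    n = 8
    a = np.r_[0.0, -np.ones(n - 1)]
    c = np.r_[-np.ones(n - 1), 0.0]
    x = pcr(a, 2.0 * np.ones(n), c, np.ones(n))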

In the second version, the multiple systems of tridiagonal equations are solved using the standard TDMA along the lines. To implement this version one must remap the arrays from (NEWS, NEWS) to (NEWS, SERIAL) for the vertical lines and to (SERIAL, NEWS) for the horizontal lines. This change from rectangular subgrids to 1d slices is the most time-consuming step, involving a global communication of data ("SEND" instead of "NEWS"). Applied along the serial dimension, the TDMA does not generate any interprocessor communication. Some front-end-to-processor communication is generated by the incrementing of the loop index, but unrolling the DO loop helps to amortize this overhead cost to some extent. In the timings, E for this version is approximately constant, and low, except for very small VP. The global communication is much slower than computation, and consequently there is not enough computation to amortize the communication. Furthermore, the constant E implies that Tcomm and Tcomp both scale in the same way with problem size. It is evident that Tcomp scales with VP because the TDMA is O(N); thus constant E implies that Tcomm scales with VP as well. This means doubling VP doubles Tcomm, indicating that the communication speed has reached its peak, which further indicates that the full bandwidth of the fat tree is being utilized.

The disappointing performance of the standard line-iterative approach using the TDMA points out the important fact that, for the CM-5, global communication within inner iterations is intolerable. There is not enough computation to amortize slow communication in the solver for any problem size. With parallel cyclic reduction, where the regularity of the data movement allows faster communication, the efficiency is much higher, although still significantly lower than for point iterations. Additional improvement can be sought by using the "on-VU" data layout to implement the line-iterative solver within each processor's subgrid. This implementation essentially trades interprocessor communication for the front-end-to-PE type of communication, and in practice a front-end bottleneck develops. For the remainder of the discussion, all line-Jacobi results refer to the parallel cyclic reduction implementation.

On the MP-1 the front-end-to-processor communication is not a major concern, as can be inferred from the machine-comparison figure, in which the efficiency of the SIMPLE algorithm using the point-Jacobi solver is plotted for each machine over the range of problem sizes corresponding to the cases solved on the MP-1. The CM-2 and CM-5 can solve much larger problems, so for comparison purposes only part of their data is shown.

Also, because the computers have different numbers of processors, the number of grid points is used instead of VP to define the problem size.

Each curve exhibits an initial rise corresponding to the amortization of the front-end-to-processor communication and, for the CM-2 and CM-5, the off-processor "NEWS" communication. On the MP-1, peak E is reached for small problems. Due to the MP-1's relatively slow processors, the computation time quickly amortizes the front-end-to-processor communication time as VP increases; furthermore, because the relative speed of XNet communication is fast, the peak E is high. On the CM-5 the peak E is reached at moderate VP, while on the CM-2 the peak E is not reached until very large VP. If computation is fast, then the rate of increase of E with VP depends on the relative costs of on-processor, off-processor, and front-end-to-processor communication. If the on-processor communication is fast, larger VP is required to reach peak E. Thus, on the CM-5, the relatively fast on-VU communication is simultaneously responsible for the good peak E and for the fact that very large problem sizes, far larger than on the MP-1, are needed to reach this peak.

The aspect ratio of the virtual subgrid constitutes a secondary effect of the data layout on the efficiency for hierarchical mapping. The major influence on E comes from VP, i.e. the subgrid size, but the subgrid shape matters too. This dependence comes into play due to the different speeds of the on-processor and off-processor types of communication. Higher-aspect-ratio subgrids have lower area-to-perimeter ratios, and thus relatively more off-processor communication, than square subgrids. The aspect-ratio figure gives some idea of the relative importance of this effect. Along each curve the number of grid points is fixed, but the grid dimensions vary, which for a given processor layout causes the subgrid shape (aspect ratio) to vary.

For example, on the CM-5 a series of grids with the same number of points but increasingly disparate dimensions was run on a fixed processor layout, giving subgrid aspect ratios from one (square) upward. Tnews is the time spent in the "NEWS" type of interprocessor communication and Tcomp is the time spent doing computation during the SIMPLE iterations; the solver for these results is point-Jacobi relaxation. For the smaller-VP CM-5 case, increasing the aspect ratio causes Tnews/Tcomp to grow appreciably, which increases the run time and decreases the efficiency accordingly. For the larger-VP CM-5 case the same increase in aspect ratio produces a much weaker effect. Thus the aspect-ratio effect diminishes as VP increases, due to the increasing area of the subgrid: the variation in the perimeter length matters less, percentage-wise, as the area increases. The CM-2 results are similar. However, on the CM-2 the on-PE type of communication is slower, relative to the computational speed, than on the CM-5; thus the Tnews/Tcomp ratios are higher on the CM-2.
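
The geometric origin of the effect is easy to check: off-processor traffic scales with the subgrid perimeter, while computation scales with its area. A tiny sketch (with an arbitrary fixed VP of 1024) shows how the perimeter-to-area ratio grows with aspect ratio:

    # Perimeter-to-area ratio of a VP-element subgrid of shape (sx, sy):
    # off-processor ("NEWS") traffic ~ perimeter, computation ~ area.
    def perim_over_area(sx, sy):
        return 2.0 * (sx + sy) / (sx * sy)

    for sx, sy in [(32, 32), (64, 16), (128, 8), (256, 4)]:  # VP = 1024
        print(sx, "x", sy, round(perim_over_area(sx, sy), 3))
    # 0.125, 0.156, 0.266, 0.508: more NEWS traffic per flop as the
    # subgrid departs from square, consistent with the timings above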

Effect of the Uniform Boundary Condition Implementation

In addition to the choice of solver, the treatment of boundary coefficient computations was discussed earlier as an important consideration affecting parallel efficiency. The corresponding figure compares the implementation described in the introductory section of this chapter to an implementation which treats the boundary control volumes separately from the interior control volumes. The latter approach involves some 1d operations which leave some processors idle.

The results were obtained on the CM-5 using point-Jacobi relaxation as the solver. With the uniform approach, the ratio of the time spent computing coefficients, Tcoeff, to the time spent solving the equations, Tsolve, remains constant beyond moderate VP. Both Tcoeff and Tsolve scale with VP in this case, so doubling VP doubles both Tcoeff and Tsolve, leaving their ratio unchanged. The value of the ratio reflects the relative cost of coefficient computations compared to point-Jacobi iteration: there are three equations for which coefficients are computed, and a fixed total number of inner iterations for the u, v, and p' equations. Thus, if more inner iterations are taken, the ratio of Tcoeff to Tsolve will decrease, and vice versa.

With the 1d implementation, Tcoeff/Tsolve increases with VP over most of the range tested. Both Tcoeff and Tsolve scale with VP asymptotically, but the timings show that Tcoeff has a very significant square-root component due to the boundary operations. If N is the number of grid points and np is the number of processors, then VP = N/np. For boundary operations, O(N^(1/2)) control volumes are computed in parallel on only O(np^(1/2)) processors; hence the VP^(1/2) contribution to Tcoeff. It appears that very large problems are required to reach the point where the interior coefficient computations amortize the boundary coefficient computations. Even for large VP, when Tcoeff/Tsolve is approaching a constant, this constant is larger than for the uniform approach, due to the additional front-end-to-processor communication which is intrinsic to the 1d formulation.

Overall Performance

The performance table summarizes the relative performance of SIMPLE on the CM-5, CM-2, and MP-1 computers, using point- and line-iterative solvers and the uniform boundary condition treatment.

In the first three cases of the table the "NEWS" implementation of point-Jacobi relaxation is the solver, while the last two cases are for the line-Jacobi solver using cyclic reduction.

Table: Performance results for the SIMPLE algorithm over a fixed number of iterations of the model problem. For each machine and solver combination (CM-5 point-Jacobi, CM-2 point-Jacobi, MP-1 point-Jacobi, CM-5 line-Jacobi, CM-2 line-Jacobi), the table lists the problem size, the virtual processor ratio VP, the run time per iteration per grid point, the sustained speed in Mflops, and the fraction of peak speed per processing element. Fixed numbers of inner iterations are used for the u, v, and p' equations, and the speeds are for double-precision calculations, except on the MP-1.

The speeds reported in the table were obtained by comparing the timings with the identical code timed on a Cray C90, using the Cray hardware performance monitor to determine Mflops. In terms of Mflops, the CM-5 version of the SIMPLE algorithm's performance appears to be consistent with other CFD algorithms on the CM-5. Jesperson and Levit report results for a scalar implicit version of an approximate-factorization Navier-Stokes algorithm using parallel cyclic reduction to solve the tridiagonal systems of equations; their result was obtained for a 2d simulation of flow over a cylinder on a CM-2 and, as in the present study, a different execution model was used. Their measured time per timestep per grid point and Mflops rate are comparable to those of the present SIMPLE algorithm using the line-Jacobi solver.

Egolf reports that the TEACH Navier-Stokes combustor code, based on a sequential pressure-based method with a solver comparable to point-Jacobi relaxation, obtains performance several times better than a vectorized Cray X-MP version of the code for a model problem. The present program runs several times faster than a single Cray C90 processor for a large problem, and one Cray C90 processor is itself several times faster than a Cray X-MP. Thus the present code runs comparably fast.

Isoefficiency Plot

The figures discussed so far addressed the effects of the inner-iterative solver, the boundary treatment, and the data layout, and the variation of parallel efficiency with problem size for a fixed number of processors. Varying the number of processors is also of interest, and, as discussed earlier, an even more practical numerical experiment is to vary np in proportion with the problem size, i.e. the scaled-size model. The isoefficiency figure, based on the point-Jacobi MP-1 timings, incorporates the above information into one plot of the kind called an isoefficiency plot by Kumar and Singh. The lines are paths along which the parallel efficiency E remains constant as the problem size and the number of processors np vary. Using the point-Jacobi solver and the uniform boundary coefficient implementation, each SIMPLE iteration has no substantial contribution from operations which are less than fully parallel or from operations whose time depends on the number of processors. The efficiency is then a function of the virtual processor ratio only, and thus the lines are straight. Much of the parameter space is covered by moderate-to-high efficiencies.

The reason the present implementation is linearly scalable is that the operations are all scalable: each SIMPLE iteration has predominantly nearest-neighbor communication and computation, and full parallelism.

Thus Tp depends on VP, local communication speed does not depend on np, and T1 depends on the problem size N. As N and np are increased in proportion, starting from some initial ratio, the efficiency therefore stays constant. If the initial problem size is large and the corresponding parallel run time is acceptable, then one can quickly get to very large problem sizes while still maintaining Tp constant, by increasing np a relatively small amount, moving along a constant-E curve. If the desired run time is smaller, then initially, i.e. starting from small np, the efficiency will be lower; the scaled-size experiment then requires relatively more processors to get to a large problem size along the constant-efficiency (constant-Tp, for point-Jacobi iterations) curve. Thus the most desirable situation occurs when the efficiency is high for an initially small problem size. For this case the fixed-time and scaled-size methods are equivalent, because T1 per iteration depends linearly on N.

However, this is not the case when the SIMPLE inner iterations are done with the line-Jacobi solver using parallel cyclic reduction. Cyclic reduction requires on the order of N log2 N operations to solve a tridiagonal system of N equations. Thus T1 scales as N log2 N, and on np = N processors, Tp scales as log2 N, because every processor is active during every step of the reduction and there are log2 N steps; since VP = 1, every processor's time is proportional to the number of steps, assuming each step costs about the same. In the scaled-size approach one doubles np and N together, which gives T1 proportional to 2N log2(2N) and Tp proportional to log2(2N). The efficiency is unchanged, but Tp has increased and T1 has more than doubled. In the fixed-time approach, then, one concludes that N must be increased by a factor less than two when np is doubled in order to maintain constant Tp. If an isoefficiency plot is constructed for this solver, it should be done with T1 instead of N as the measure of problem size.

In that case the lines of constant efficiency would be described by T1 proportional to np^a with a > 1; the ideal case is a = 1. In addition to the operation count, there is another factor which reduces the scalability of cyclic reduction, namely that the time per step is not actually the same, as was assumed above: later steps require communication over longer distances, which is slower. In practice, however, no more than a few steps are necessary, because the coupling between widely separated equations becomes very weak. As the system is reduced, the diagonal becomes much larger than the off-diagonal terms, which can then be neglected and the reduction process abbreviated.

In short, the basic prerequisite for scaled-size constant efficiency is that the amount of work per SIMPLE iteration varies with VP, and that the overheads and inefficiencies, specifically the time spent in communication and the fraction of idle processors, do not grow relative to the useful computational work as np and N are increased proportionally. The SIMPLE implementation developed here using the point-iterative solvers (Jacobi and red-black) has this linear computational scalability property. On the other hand, the number of iterations required by point-iterative methods grows faster than the problem size, so although Tp can be maintained constant while the problem size and np are scaled up, the convergence rate deteriorates. Hence the total run time (cost per iteration multiplied by the number of iterations) increases. This lack of numerical scalability of standard iterative methods like point-Jacobi relaxation is the motivation for the development of multigrid strategies.
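
The scalability argument can be summarized in a toy per-iteration cost model; the constants below are invented for illustration, not measured machine parameters. Because the per-iteration times depend only on VP, E is the same at every point of a path that doubles N and np together:

    # Toy cost model for the scaled-size experiment with point-Jacobi.
    T_FLOP, T_NEWS, T_FE = 1.0, 4.0, 20.0   # assumed relative costs

    def efficiency(n_points, n_procs):
        vp = n_points / n_procs
        t_comp = T_FLOP * vp                 # fully parallel work ~ VP
        t_comm = T_NEWS * vp ** 0.5 + T_FE   # perimeter term + fixed overhead
        return t_comp / (t_comp + t_comm)

    for k in range(5):                       # scaled-size path: N ~ np
        n, p = 65536 * 2 ** k, 1024 * 2 ** k
        print(n, p, round(efficiency(n, p), 3))
    # VP stays at 64, so E is identical at every size: a straight
    # isoefficiency line, as in the MP-1 plot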

Concluding Remarks

The SIMPLE algorithm, especially using point-iterative methods, is efficient on SIMD machines, and it can maintain a relatively high efficiency as the problem size and the number of processors are scaled up. However, boundary coefficient computations need to be folded in with the interior coefficient computations to achieve good efficiencies at smaller problem sizes. For the CM-5 the inefficiency caused by idle processors in a 1d boundary treatment was significant over the entire range of problem sizes tested.

The line-Jacobi solver based on parallel cyclic reduction leads to a lower peak E on the CM-5 than the point-Jacobi solver, because there is more communication and, on average, this communication is less localized. On the other hand, the asymptotic convergence rates of the two methods are also different and need to be considered on a problem-by-problem basis. The speeds obtained with the line-iterative method are consistent and comparable with other CFD algorithms on SIMD computers.

The key factor in obtaining high parallel efficiency for the SIMPLE algorithm on the computers used is fast nearest-neighbor communication relative to the speed of computation. On the CM-2 and CM-5, hierarchical mapping allows on-processor communication to dominate the slower off-processor forms of communication for large VP. The efficiency is low for small problems because of the relatively large contribution to the run time from the front-end-to-processor type of communication, but this contribution is constant and becomes less important as the problem size increases. Once the peak E is reached, the efficiency is determined by the balance of computation and on-processor communication speeds; the peak efficiency is higher on the CM-5 than on the CM-2, which reflects the fact that the CM-5 vector units have a better balance, at least for the operations in this algorithm, than the CM-2 processors.

The rate at which E approaches the peak value depends on the relative contributions of on- and off-processor communication and front-end-to-processor communication to the total run time. On the CM-5 a very large VP is required to reach peak E.

This problem size is about one-fourth the maximum size which can be accommodated, and yet it is still larger than many computations on traditional vector supercomputers. Clearly, a gap is developing between the size of problems which can be solved efficiently in parallel and the size of problems which are small enough to be solved on serial computers. For parallel computations of all but the largest problems, then, the data layout issue is very important: in going from a square subgrid to a high-aspect-ratio one at moderate VP on the CM-5, the run time increased substantially.

On the MP-1, hierarchical mapping is not needed, because the processors are slow compared to the XNet communication speed. The peak E with the point-Jacobi solver is higher than on either Connection Machine, and this performance is obtained at a VP about one-eighth the size of the largest case possible for this machine. Thus, with regard to achieving efficient performance in the teraflops range, the comparison given here suggests a preference for numerous slow processors instead of fewer fast ones, but such a computer may be difficult and expensive to build.

Figure: Mapping an element array A onto four processors (PE0 through PE3). For the cut-and-stack mapping (MP-Fortran), nearest-neighbor array elements are mapped to nearest-neighbor physical processors, creating VP memory layers. For the hierarchical mapping (CM-Fortran), nearest-neighbor array elements are mapped to nearest-neighbor virtual processors within virtual subgrids, which may be on the same physical processor.

Figure: Parallel efficiency E as a function of problem size and solver for the CM-5 cases: point-Jacobi (on-VU), point-Jacobi (NEWS), line-Jacobi (cyclic reduction), and line-Jacobi (TDMA). The number of grid points is the virtual processor ratio VP multiplied by the number of processors. E is computed from the efficiency expression and reflects the relative amount of communication compared to computation in the algorithm.

Figure: E versus problem size: comparison between the CM-2, CM-5, and MP-1. The variation of parallel efficiency with problem size is shown for the model problem using point-Jacobi relaxation as the solver. E is calculated from the efficiency expression, with T1 = np Tcomp for the CM-2 and CM-5, where Tcomp is measured. For the MP-1 cases, T1 is the front-end time scaled down to the estimated speed of the MP-1 processors.

Figure: Effect of subgrid aspect ratio on interprocessor communication time Tnews for the hierarchical data mapping (CM-2 and CM-5). Tnews is normalized by Tcomp in order to show how the aspect-ratio effect varies with problem size, without the complication of the fact that Tcomp varies also.

Figure: Normalized coefficient-computation time as a function of problem size for two implementations on the CM-5. In the 1d case the boundary coefficients are handled by 1d array operations; in the 2d case the uniform implementation computes both boundary and interior coefficients simultaneously. Tcoeff is the time spent computing coefficients in a SIMPLE iteration, and Tsolve is the time spent in point-Jacobi inner iterations for the u, v, and p' equations.

Figure: Isoefficiency curves based on the MP-1 cases and the SIMPLE method with the point-Jacobi solver. Efficiency E is computed from the efficiency expression. Along lines of constant E, the cost per SIMPLE iteration is constant with the point-Jacobi solver and the uniform boundary condition implementation.

CHAPTER
A NONLINEAR PRESSURE-CORRECTION MULTIGRID METHOD

The single-grid timing results focused on the cost per iteration, in order to elucidate the computational issues which influence the parallel run time and the scalability. But the parallel run time is the cost per iteration multiplied by the number of iterations. For scaling to large problem sizes and numbers of processors, the numerical method must scale well with respect to the convergence rate also.

The convergence rate of the single-grid pressure-correction method deteriorates with increasing problem size. This trait is inherited from the smoothing property of the stationary linear iterative method, point- or line-Jacobi relaxation, used to solve the systems of u, v, and p' equations during the course of SIMPLE iterations. Point-Jacobi relaxation requires a number of iterations that grows faster than the number of grid points N to decrease the solution error by a specified amount. In other words, the number of iterations increases faster than the problem size. At best, the cost per iteration stays constant as the number of processors np increases in proportion to the problem size. Thus the total run time increases in the scaled-size experiment using single-grid pressure-correction methods, due to the increased number of iterations required. This lack of numerical scalability is a serious disadvantage for parallel implementations, since the target problem size for parallel computation is very large.

Multigrid methods can maintain good convergence rates as the problem size increases. For Poisson equations, problem-size-independent convergence rates can be obtained. The recent book by Briggs introduces the major concepts in the context of Poisson equations.

Surveys and analyses of multigrid convergence properties for more general linear equations are also available, and the important early papers by Brandt describe practical techniques and special considerations for fluid dynamics. However, there are many unresolved issues in the application to the incompressible Navier-Stokes equations, especially with regard to implementation and performance on parallel computers. The purpose of this chapter is to describe the relevant convergence-rate and stability issues for multigrid methods in the context of application to the incompressible Navier-Stokes equations, with numerical experiments used to illustrate the points made, in particular regarding the role of the restriction and prolongation procedures.

Background

The basic concept is the use of coarse grids to accelerate the asymptotic convergence rate of an inner iterative scheme. The inner iterative method is called the "smoother," for reasons to be made clear shortly. In the context of the present application to the incompressible Navier-Stokes equations, the single-grid pressure-correction method is the inner iterative scheme. Because the pressure-correction algorithm also uses inner iterations (to solve the systems of u, v, and p' equations), the multigrid method developed here actually has three nested levels of iterations.

A multigrid V cycle begins with a certain number of smoothing iterations on the fine grid, where the solution is desired. The V-cycle schematic shows a V(3,2) cycle: three pressure-correction iterations are done first. Then residuals and variables are restricted (averaged) to obtain coarse-grid values for these quantities. The solution to the coarse-grid discretized equation provides a correction to the fine-grid solution. Once the solution on the coarse grid is obtained, the correction is interpolated (prolongated) to the fine grid and added back into the solution there.

Some post-smoothing iterations, two in this case, are needed to eliminate errors introduced by the interpolation. Since it is usually too costly to attempt a direct solution on the coarse grid, this smoothing-correction cycle is applied recursively, leading to the V cycle shown. The next section describes how such a procedure can accelerate the convergence rate of an iterative method, in the context of linear equations. The multigrid scheme for nonlinear scalar equations and for the Navier-Stokes system of equations is then described.

Brandt was the first to formalize the manner in which coarse grids could be used as a convergence-acceleration technique for a given smoother; the idea of using coarse grids to generate initial guesses for fine-grid solutions was around much earlier. The cost of the multigrid algorithm per cycle is dominated by the smoothing cost, as will be shown in the next chapter. Thus, with regard to the parallel run time per multigrid iteration, the smoother is the primary concern. The smoother is also important with regard to the convergence rate. The single-grid convergence-rate characteristics of pressure-correction methods, namely the dependence on Reynolds number, flow problem, and convection scheme, carry over to the multigrid context. However, in the multigrid method the smoother's role is, as the name implies, to smooth the fine-grid residual, which is a different objective than to solve the equations quickly: a smooth fine-grid residual equation can be approximated accurately on a coarser grid. A later section describes an alternate pressure-based smoother and compares its cost against the pressure-correction method on the CM-5.

Stability of the multigrid iterations is also an important unresolved issue. There are two ways in which multigrid iterations can be caused to diverge. First, the single-grid smoothing iterations can diverge; for example, if central differencing is used, there are possibly stability problems if the Reynolds number is high. Second, poor coarse-grid corrections can cause divergence if the smoothing is insufficient.

In a sense this latter issue, the scheme and intergrid transfer operators which prescribe the coordination between coarse and fine grids in the multigrid procedure, is the key issue. In the next section two "stabilization strategies" are described. Then the impact of different restriction and prolongation procedures on the convergence rate is studied in the context of two model problems, lid-driven cavity flow and flow past a symmetric backward-facing step. These two particular flow problems have different physical characteristics, and therefore the numerical experiments should give insight into the problem-dependence of the results.

Terminology and Scheme for Linear Equations

The discrete problem to be solved can be written A^h u^h = S^h, corresponding to some differential equation L[u] = S. The set of values u^h is defined on the grid Omega^h by

    u^h_ij = u(ih, jh),   (i,j) in [0,N] x [0,N]

with h the grid spacing. Similarly, u^2h is defined on the coarser grid Omega^2h with grid spacing 2h. The variable u can be a scalar or a vector, and the operator A can be linear or nonlinear.

For linear equations the "correction scheme" (CS) is frequently used. A two-level multigrid cycle using CS accelerates the convergence of an iterative method with iteration matrix P by the following procedure (transcribed in the sketch after this list):

1. Do nu fine-grid iterations: v^h <- P^nu v^h.
2. Compute the residual on Omega^h: r^h = S^h - A^h v^h.
3. Restrict r^h to Omega^2h: r^2h = I_h^2h r^h.
4. Solve exactly for e^2h: e^2h = (A^2h)^(-1) r^2h.
5. Correct v^h on Omega^h: (v^h)_new = (v^h)_old + I_2h^h e^2h.
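
The following NumPy sketch transcribes this two-level procedure for a 1-d Poisson model problem, with weighted-Jacobi smoothing, full-weighting restriction, and linear interpolation; all of these choices are illustrative, not the flow-solver implementation.

    # Two-level correction scheme for -u'' = S on (0,1), zero end values.
    import numpy as np

    def laplacian(n):                        # n interior points, h = 1/(n+1)
        h = 1.0 / (n + 1)
        A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
        return A / h ** 2

    def smooth(A, v, s, nu, w=2.0 / 3.0):    # weighted Jacobi iterations
        d_inv = 1.0 / np.diag(A)
        for _ in range(nu):
            v = v + w * d_inv * (s - A @ v)
        return v

    def two_level_cs(v, s, n, nu1=3, nu2=2):
        A, nc = laplacian(n), (n - 1) // 2   # n odd; nc coarse points
        v = smooth(A, v, s, nu1)             # 1. pre-smoothing
        r = s - A @ v                        # 2. fine-grid residual
        r2 = 0.25 * (r[0:-2:2] + 2.0 * r[1:-1:2] + r[2::2])  # 3. restrict
        e2 = np.linalg.solve(laplacian(nc), r2)              # 4. solve
        e = np.zeros(n)                      # 5. interpolate and correct
        e[1::2] = e2
        e[2:-1:2] = 0.5 * (e2[:-1] + e2[1:])
        e[0], e[-1] = 0.5 * e2[0], 0.5 * e2[-1]
        return smooth(A, v + e, s, nu2)      # post-smoothing

    n = 63
    x = np.linspace(0.0, 1.0, n + 2)[1:-1]
    s = np.pi ** 2 * np.sin(np.pi * x)       # exact solution: sin(pi x)
    v = np.zeros(n)
    for _ in range(8):
        v = two_level_cs(v, s, n)
    print(np.max(np.abs(v - np.sin(np.pi * x))))  # small discretization error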

I_h^2h and I_2h^h symbolize the restriction and prolongation procedures. The quantity v^h is the current approximation to the discrete solution u^h; the algebraic error is the difference between them, e^h_alg = u^h - v^h. The discretization error is the difference between the exact solutions of the continuous and discrete problems, e^h_discr = u - u^h. The truncation error is obtained by substituting the exact solution into the discrete equation:

    tau^h = A^h u - S^h = A^h u - A^h u^h

The notation above follows Briggs.

The two-level multigrid cycle begins on the fine grid with nu iterations of the smoother. Standard iterative methods all have the "smoothing property": the eigenvector-decomposed components of the solution error are damped at rates set by the corresponding eigenvalues of the iteration matrix, such that the high-frequency errors are damped much faster than the low-frequency (smooth) errors. Thus the convergence rate of the smoothing iterations is initially rapid but deteriorates as the smooth error components, those whose eigenvalues are close to unity, come to dominate the remaining error.

The purpose of transferring the problem to a coarser grid is to make these smooth error components appear more oscillatory with respect to the grid spacing, so that the initial rapid convergence rate is recovered for the elimination of these smooth errors by coarse-grid iterations. Since the coarse grid Omega^2h has only one-fourth as many grid points as Omega^h (in 2d), the smoothing iterations on the coarse grid are cheaper, as well as more effective in reducing the smooth error components, than on the fine grid.

In the correction scheme the coarse-grid problem is an equation for the algebraic error,

    A^2h e^2h = r^2h

approximating the fine-grid residual equation for the algebraic error. To obtain the coarse-grid source term r^2h, the restriction procedure I_h^2h is applied to the fine-grid residual r^h:

    r^2h = I_h^2h r^h

The restriction is an averaging type of operation. Two common restriction procedures are straight injection of fine-grid values to their corresponding coarse-grid points, and averaging of r^h over a few fine-grid points near the corresponding coarse-grid point. The initial error on the coarse grid is taken as zero. After the solution for e^2h is obtained, this coarse-grid quantity is interpolated to the fine grid and used to correct the fine-grid solution:

    v^h <- v^h + I_2h^h e^2h

For I_2h^h, common choices are bilinear or biquadratic interpolation.

In practice the solution for e^2h is obtained by recursion on the two-level cycle; (A^2h)^(-1) is not explicitly computed. On the coarsest grid, direct solution may be feasible if the equation is simple enough; otherwise a few smoothing iterations can be applied. Recursion on the two-level algorithm leads to a "V cycle." In a simple V(3,2) cycle, three smoothing iterations are taken before restricting to the next coarser grid, and two iterations are taken after the solution has been corrected; the purpose of the latter smoothing iterations is to smooth out any high-frequency noise introduced by the prolongation. Other cycles can be envisioned, and in particular the W cycle is popular. The cycling strategy is called the "grid schedule," since it is the order in which the various grid levels are visited.
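
In 1-d, the full-weighting restriction and linear-interpolation prolongation have a simple matrix form and are transposes of one another up to a scale factor. This is the algebraic relationship behind the Galerkin coarse-grid operator discussed next; a small illustrative sketch:

    # Full-weighting restriction R, linear interpolation P, and the
    # Galerkin coarse-grid operator R A P in 1-d (sketch).
    import numpy as np

    def interpolation(nc):              # coarse (nc pts) -> fine (2*nc+1 pts)
        P = np.zeros((2 * nc + 1, nc))
        for j in range(nc):
            P[2 * j + 1, j] = 1.0       # coincident fine point
            P[2 * j, j] = 0.5           # left fine neighbor
            P[2 * j + 2, j] = 0.5       # right fine neighbor
        return P

    nc = 3
    n = 2 * nc + 1
    h = 1.0 / (n + 1)
    P = interpolation(nc)
    R = 0.5 * P.T                       # full weighting: rows [1/4, 1/2, 1/4]
    A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h ** 2
    A2h = R @ A @ P                     # Galerkin coarse-grid operator
    print(np.round(A2h * (2 * h) ** 2, 2))
    # recovers the [-1, 2, -1] stencil on the 2h grid exactly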


to discretize the original differential equation directly on the coarse grid. However, this choice is not always the best one. The convergence-rate benefit from the multigrid strategy derives from the particular coarse-grid approximation to the fine-grid discrete problem, not the continuous problem. Because the coarse-grid solutions and residuals are obtained by particular averaging procedures, there is an implied averaging procedure for the fine-grid discrete operator A^h which should be honored to ensure a useful homogenization of the fine-grid residual equation. This issue is critical when the coefficients and/or dependent variables of the governing equations are not smooth [ref].

For the Poisson equation, the Galerkin approximation A^2h = I_h^2h A^h I_2h^h is the right choice. The discretized equation coefficients on the coarse grid are obtained by applying suitable averaging and interpolation operations to the fine-grid coefficients, instead of by discretizing the governing equation on a grid with a coarser mesh spacing. Briggs has shown, by exploiting the algebraic relationship between the bilinear interpolation and full-weighting restriction operators, that initially smooth errors begin in the range of interpolation and finish, after the smoothing-correction cycle is applied, in the null space of the restriction operator [ref]. Thus, if the fine-grid smoothing eliminates all the high-frequency error components in the solution, one V cycle using the correction scheme is a direct solver for the Poisson equation. The convergence rate of multigrid methods using the Galerkin approximation is more difficult to analyze if the governing equations are more complicated than Poisson equations, but significant theoretical advantages for application to general linear problems have been indicated [ref].
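For reference, the full-weighting restriction just mentioned can be sketched as follows on a vertex-centered grid with factor-2 coarsening. The 1/4, 1/8, 1/16 weights are the classical ones; the index mapping and routine name are illustrative, not the present code.

      SUBROUTINE FULLWT (RF, RC, NIC, NJC)
C     Full-weighting restriction: each interior coarse-grid value
C     is a weighted average of the nine nearest fine-grid values.
      INTEGER NIC, NJC, I, J, IF0, JF0
      REAL*8 RF(2*NIC-1,2*NJC-1), RC(NIC,NJC)
      DO 10 J = 2, NJC - 1
         DO 10 I = 2, NIC - 1
            IF0 = 2*I - 1
            JF0 = 2*J - 1
            RC(I,J) = 0.25D0*RF(IF0,JF0)
     &              + 0.125D0*( RF(IF0-1,JF0) + RF(IF0+1,JF0)
     &                        + RF(IF0,JF0-1) + RF(IF0,JF0+1) )
     &              + 0.0625D0*( RF(IF0-1,JF0-1) + RF(IF0+1,JF0-1)
     &                         + RF(IF0-1,JF0+1) + RF(IF0+1,JF0+1) )
   10 CONTINUE
      RETURN
      END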


Full-Approximation Storage Scheme for Nonlinear Equations

The brief description given above does not bring out the complexities inherent in the application to nonlinear problems. There is only experience, derived mostly from numerical experiments, to guide the choice of the restriction/prolongation procedures and the smoother. Furthermore, the linkage between the grid levels requires special consideration because of the nonlinearity.

The correction scheme using the Galerkin approximation can be applied to the nonlinear Navier-Stokes system of equations [ref]. However, in order to use CS for nonlinear equations, linearization is required; the best coarse-grid correction only improves the fine-grid solution to the linearized equation. Also, for complex equations, considerable expense is incurred in computing A^2h by the Galerkin approximation. The commonly adopted alternative is the intuitive one, to let A^2h be the differential operator L discretized on the grid with spacing 2h instead of h. In exchange for a straightforward problem definition on the coarse grid, though, special restriction and prolongation procedures may be necessary to ensure the usefulness of the resulting corrections. Numerical experiments on a problem-by-problem basis are necessary to determine good choices for the restriction and prolongation procedures for Navier-Stokes multigrid methods.

The full-approximation storage (FAS) scheme [ref] is preferred over the correction scheme for nonlinear problems. The coarse-grid corrections generated by FAS improve the solution to the full nonlinear problem instead of just the linearized one. The discretized equation on the fine grid is again

A^h u^h = S^h.


The approximate solution v^h after a few fine-grid iterations defines the residual on the fine grid,

S^h - A^h v^h = r^h.

A correction, the algebraic error e^h_alg = u^h - v^h, is sought which satisfies

A^h(v^h + e^h_alg) = S^h.

The residual equation is formed by subtracting one equation from the other and cancelling S^h,

A^h(v^h + e^h) - A^h(v^h) = r^h,

where the subscript "alg" is dropped for convenience. For linear equations the A^h v^h terms cancel, recovering the correction-scheme equation; for nonlinear equations no such simplification occurs. Assuming that the smoother has done its job, r^h is smooth, and the residual equation is well approximated at coarse-grid points by the corresponding coarse-grid residual equation

A^2h(v^2h + e^2h) - A^2h(v^2h) = r^2h.

The error e^2h is to be found, interpolated back to Ω^h according to e^h = I_2h^h e^2h, and added to v^h so that the fine-grid equation is satisfied. The known quantities are v^2h, which is a "suitable" restriction of v^h, and r^2h, likewise a restriction of r^h; different restrictions can be used for residuals and solutions. Thus the coarse-grid residual equation can be written

A^2h(I_h^2h v^h + e^2h) - A^2h(I_h^2h v^h) = I_h^2h r^h.

Since this is not an equation for e^2h alone, one solves instead for the sum u^2h = I_h^2h v^h + e^2h. Expanding and regrouping terms, the coarse-grid problem can be written

A^2h(u^2h) = A^2h(I_h^2h v^h) + I_h^2h r^h = S^2h +


[A^2h(I_h^2h v^h) - I_h^2h A^h(v^h) + I_h^2h S^h - S^2h],

where the bracketed term is the numerically derived source, S^2h_numerical. This equation is similar to the original coarse-grid discretization except for the extra numerically derived source term. Once I_h^2h v^h + e^2h is obtained, the coarse-grid approximation to the fine-grid error is computed by first subtracting the initial coarse-grid solution,

e^2h = u^2h - I_h^2h v^h,

then interpolating back to the fine grid and combining with the current solution,

v^h <- v^h + I_2h^h e^2h.

Extension to the Navier-Stokes Equations

The incompressible Navier-Stokes equations are a system of coupled nonlinear equations. Consequently, the FAS scheme given above for single nonlinear equations needs to be modified. The variables u1, u2, and u3 represent the Cartesian velocity components and the pressure, respectively. Corresponding subscripts are used to identify each equation's source term, residual, and discrete operator in the formulation below. The three equations for momentum and mass conservation are treated as if part of the following matrix equation,

    | A1^h    0     Gx^h |   | u1^h |   | S1^h |
    |  0     A2^h   Gy^h | * | u2^h | = | S2^h |
    | Gx^h   Gy^h    0   |   | u3^h |   |  0   |

The continuity equation source term is zero on the finest grid Ω^h, but for coarser grid levels it may not be zero; thus, for the sake of generality, it is included in the matrix equation. For the u1-momentum equation the FAS formulation is modified to account for the pressure-gradient term Gx^h u3^h, which is also an unknown. The approximate solutions are


v1^h, v2^h, and v3^h, corresponding to u1^h, u2^h, and u3^h. For the u1-momentum equation the approximate solution satisfies

A1^h v1^h + Gx^h v3^h - S1^h = -r1^h.

The fine-grid residual equation is modified correspondingly,

A1^h(v1^h + e1^h) - A1^h(v1^h) + Gx^h(v3^h + e3^h) - Gx^h(v3^h) = r1^h,

which is approximated on the coarse grid by the corresponding coarse-grid residual equation

A1^2h(v1^2h + e1^2h) - A1^2h(v1^2h) + Gx^2h(v3^2h + e3^2h) - Gx^2h(v3^2h) = r1^2h.

The known terms are v1^2h = I_h^2h v1^h, v3^2h = I_h^2h v3^h, and r1^2h = I_h^2h r1^h. Expanding and regrouping terms, the coarse-grid u1-momentum equation can be written

A1^2h(u1^2h) + Gx^2h(u3^2h) = S1^2h + [A1^2h(I_h^2h v1^h) - I_h^2h A1^h(v1^h) + Gx^2h(I_h^2h v3^h) - I_h^2h Gx^h(v3^h) + I_h^2h S1^h - S1^2h],

where the bracketed term is again a numerically derived source term. Since the equation includes numerically derived source terms in addition to the physical ones, the coarse-grid variables are not, in general, the same as would be obtained from a discretization of the original continuous governing equations on the coarse grid. The u2-momentum equation is treated similarly, and the coarse-grid continuity equation is

Gx^2h(u1^2h) + Gy^2h(u2^2h) = S3^2h + [Gx^2h(I_h^2h v1^h) - I_h^2h Gx^h(v1^h) + Gy^2h(I_h^2h v2^h) - I_h^2h Gy^h(v2^h)].
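The mechanics of the FAS coarse-grid problem can be sketched for a single nonlinear equation as follows. The coarse-grid operator routine A2H and the restricted inputs are assumed to be supplied elsewhere; all names are illustrative.

      SUBROUTINE FASRHS (VC, RC, WRK, SEFF, NIC, NJC)
C     Assemble the effective FAS coarse-grid source term,
C        A2h(u2h) = A2h(I vh) + I rh  =  SEFF,
C     where VC = I vh is the restricted solution and RC = I rh
C     is the restricted residual.  After the coarse problem is
C     solved for u2h, the correction is e2h = u2h - VC.
      INTEGER NIC, NJC, I, J
      REAL*8 VC(NIC,NJC), RC(NIC,NJC), WRK(NIC,NJC), SEFF(NIC,NJC)
C     apply the (nonlinear) coarse-grid operator to VC
      CALL A2H (VC, WRK, NIC, NJC)
      DO 10 J = 1, NJC
         DO 10 I = 1, NIC
            SEFF(I,J) = WRK(I,J) + RC(I,J)
   10 CONTINUE
      RETURN
      END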


The system of equations above is solved by either the pressure-correction method (sequential) or the locally-coupled explicit method described in the next section. In addition to the choice of the smoother, the specification of the coarse-grid discrete problem (A^2h) is critical to the convergence rate, and to the stability of the multigrid iterations as well. In the description of the FAS scheme for the 2d incompressible Navier-Stokes equations presented above, no mention was made of the coarse-grid discretization. Intuitively, one would use the same discretization for each of the terms as on the fine grid. For example, if the convection terms are central-differenced on the fine grid, then central-differencing should be used on the coarse grid also. However, with such an approach numerical stability frequently becomes a problem, particularly in high Reynolds number flows.

Comparison of Pressure-Based Smoothers

The single-grid convergence rate of pressure-correction methods for the incompressible Navier-Stokes equations depends strongly on the discretization of the nonlinear convection terms, the Reynolds number, and the importance of the pressure-velocity coupling in the fluid dynamics. The grid size and quality can also affect the convergence rate in curvilinear formulations. These issues carry over to the multigrid context, and are complicated by the interplay between the evolving solutions on the multiple grid levels.

Two pressure-based methods are popular smoothers. The first is the pressure-correction method studied in the preceding chapters, and the other is Vanka's locally-coupled explicit method [ref], briefly introduced earlier. Much attention has been focused on comparing the performance of these two methods in the multigrid


context, i.e. as smoothers. The semi-implicit pressure-correction methods, due to their implicitness, are better single-grid solvers.

In the locally-coupled explicit method, pressure and velocity are updated in a coupled manner instead of sequentially. A finite-volume implementation on a staggered grid is employed. The pressure and the velocities on the faces of each p control volume are updated simultaneously. However, the simultaneous update of pressure and velocity is only for one control volume at a time. Underrelaxation is again necessary, due to the decoupling between control volumes. The control volumes are traversed in lexicographical ordering, with the most recently updated u and v values used when available; thus the original method is called BGS, for "block Gauss-Seidel." After one sweep of the grid, each u and v has been updated twice and each pressure once. A red-black ordering suitable for parallel computation has been developed in this research; by analogy, this algorithm is called BRB (block red-black).

For the (i,j)th pressure control volume, the continuity equation is written in terms of the velocity corrections needed to restore mass conservation,

(u'_{i+1,j} - u'_{i,j}) Δy + (v'_{i,j+1} - v'_{i,j}) Δx = -R_c,ij,

where R_c,ij is the mass residual in the (i,j)th control volume. The notation follows the earlier development, except that, now that pressure and velocity are coupled, it is necessary to refer to the (i,j) indices on occasion; in the control-volume sketch, u_w is u_{i,j}, u_e is u_{i+1,j}, v_s is v_{i,j}, and v_n is v_{i,j+1}. The discrete u-momentum equation for the west face of the (i,j)th p control volume is written

a_P^u u'_{i,j} = sum over k in {E,W,N,S} of a_k^u u'_k + (p'_{i-1,j} - p'_{i,j}) Δy + R^u_{i,j},

where R^u_{i,j} is the u-momentum residual.


The discretized momentum equations for the three other faces of the pressure control volume are written analogously, giving a system of five equations in five unknowns (u'_w, u'_e, v'_s, v'_n, p'_{i,j}): four momentum equations, each coupling one face velocity correction to the pressure correction through a ±Δy or ±Δx entry, plus the continuity equation above. The solution of this matrix equation is done by hand for p'_{i,j}: with the neighboring-cell corrections dropped, the elimination gives p'_{i,j} as the mass residual, augmented by the momentum-residual contributions, divided by

Δy^2 (1/a_P,w + 1/a_P,e) + Δx^2 (1/a_P,s + 1/a_P,n).

The velocity corrections are then found by back-substitution. The entire procedure is summarized in the following algorithm:

BRB(u*, v*, p*; ω_u, ω_v, ω_c):
1. Compute the u' coefficients (a_P; u*, v*) and residuals R^u_{i,j}, for (i + j) odd.
2. Compute the v' coefficients (a_P; u*, v*) and residuals R^v_{i,j}, for (i + j) odd.
3. Compute p'_{i,j}; back-substitute for u'_{i,j}, u'_{i+1,j}, v'_{i,j}, v'_{i,j+1}, for (i + j) odd.
4. Correct all u, v, and "odd" p: p_{i,j} <- p_{i,j} + ω p'_{i,j}, with analogous corrections for u_{i,j}, u_{i+1,j}, v_{i,j}, and v_{i,j+1}.
5.-8. Repeat steps 1-4 for the control volumes with (i + j) even.
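A sketch of the local elimination follows. It assumes the simplest form of the update, in which the neighbor velocity corrections and the neighboring pressure corrections are dropped from the four momentum equations, so that each face velocity correction depends only on its own residual and on p'_{i,j}. The sign conventions follow the continuity equation above; all names are illustrative.

      SUBROUTINE BRBUPD (APW, APE, APS, APN, RUW, RUE, RVS, RVN,
     &                   RC, DX, DY, PP, UPW, UPE, VPS, VPN)
C     Local coupled update for one pressure control volume:
C     solve the 5x5 system by hand for the pressure correction PP,
C     then back-substitute for the face velocity corrections.
C     APW..APN : momentum center coefficients on the w,e,s,n faces
C     RUW..RVN : momentum residuals, RC : mass residual
      REAL*8 APW, APE, APS, APN, RUW, RUE, RVS, RVN, RC, DX, DY
      REAL*8 PP, UPW, UPE, VPS, VPN, DEN, B
      DEN = DY*DY*(1.D0/APW + 1.D0/APE)
     &    + DX*DX*(1.D0/APS + 1.D0/APN)
      B   = DY*(RUE/APE - RUW/APW) + DX*(RVN/APN - RVS/APS)
      PP  = -(RC + B)/DEN
C     back-substitution from the four momentum equations
      UPW = (RUW - DY*PP)/APW
      UPE = (RUE + DY*PP)/APE
      VPS = (RVS - DX*PP)/APS
      VPN = (RVN + DX*PP)/APN
C     with these corrections (u'e-u'w)*DY + (v'n-v's)*DX = -RC
      RETURN
      END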


In general, the convergence rate in the multigrid context is different between SIMPLE and BRB. Linden et al. [ref] stated a preference for the locally-coupled explicit smoother rather than pressure-correction methods. The argument the authors gave was that the local coupling of variables is better suited to produce local smoothing of residuals, i.e. faster resolution of the local variations in the solution. This is believed to allow a more accurate coarse-grid approximation of the fine-grid problem. Similar reasoning appears to have been applied in the original development [ref], by Ferziger and Peric [ref], and by Ghia et al. [ref]. Linden et al. [ref] did a simplified Fourier analysis of locally-coupled smoothing for the Stokes equations and confirmed good smoothing properties of the locally-coupled explicit method. Shaw and Sivaloganathan [ref] have found that SIMPLE (with the SLUR solver) also has good smoothing properties for the Stokes equations, assuming that the pressure-correction equation is solved completely during each iteration. Thus there is some analytical evidence that both pressure-correction methods and the locally-coupled explicit technique are suitable as multigrid smoothers. However, the analytical work is oversimplified; numerical comparisons are needed on a problem-by-problem basis.

Sockol [ref] has compared the performance of BGS, two line-updating variations on BGS, and the SIMPLE method with successive line-underrelaxation for the inner iterations. Three model flow problems were tested, with different physical characteristics and varying grid aspect ratios: lid-driven cavity flow, channel flow, and a combined channel/cavity flow ("open cavity"). In terms of work units, Sockol found that all four smoothers were competitive for lid-driven cavity flow over a range of Reynolds numbers. For the developing channel flow, BGS and its line-updating variants converged faster than SIMPLE on square grids, but as the grid aspect ratio increased SIMPLE became competitive.


Brandt and co-workers [ref] have developed a line-relaxation-based multigrid method which handles pressure and velocity sequentially. Good convergence rates were observed for "entering-type" flow problems, in which the flow has a dominant direction and is aligned with grid lines. Line-relaxation has the effect of providing nonisotropic error-smoothing properties to match the physics of the problem. Wesseling [ref] analyzed several line-relaxation methods and concluded that alternating line-Jacobi relaxation had robust smoothing properties and, somewhat unexpectedly, that it was a better choice than SLUR.

For pressure-based smoothers, numerical experimentation apparently has created some intuition regarding the relative performance of sequential and locally-coupled smoothers in model flow problems, but many of the issues have not been investigated systematically. Further research perhaps should not be directed toward the goal of picking one method over the other. General conclusions are unlikely, because the convergence rate is dependent on the particular flow problem. Instead, both types of smoothers should continue to be implemented and tested in the multigrid context, not to determine a preference, but rather to build understanding for their application to complex flow problems.

The costs per iteration of BRB and SIMPLE are comparable on serial computers. If successive line-underrelaxation inner iterations are used, with inner-iteration counts ν_u, ν_v, and ν_c, SIMPLE costs moderately more per iteration than BGS [ref]. BGS and BRB are identical in terms of run time on a serial computer. The relative cost is different on parallel computers, though. The accompanying figures compare the parallel run time per iteration of BRB with SIMPLE on a vector-unit CM-5 (SPARC nodes, each controlling vector units) for a fixed number of iterations of the single-grid BRB and SIMPLE solvers. The convection terms are central-differenced, and for SIMPLE point-Jacobi inner iterations are used with


the usual settings of ν_u, ν_v, and ν_c. The problem size is given in terms of the virtual processor ratio VP. The first of these figures indicates that SIMPLE and BRB have virtually the same cost per iteration, and that this cost scales linearly with the problem size on a fixed number of processors. The second shows that BRB requires almost twice as much time on coefficient computations, but only about half as much on solving for the pressure changes and back-substituting. The coefficient computation cost would be exactly twice that of SIMPLE except for the small contribution from the computation of the p'-equation coefficients in the SIMPLE procedure. The third figure shows the amount of time spent on computation and interprocessor communication. The interprocessor communication cost is relatively small compared to the computation cost. Also, the sum of the two is less than the total elapsed time, due to front-end-to-processor communication. The relative time spent overall and in computation is essentially the efficiency; thus these results are summarized by the point-Jacobi curve in the parallel-efficiency figure. Furthermore, the breakdown into communication and computation is approximately the same for both SIMPLE and BRB, so in terms of efficiency similar characteristics are expected for BRB as were observed earlier for SIMPLE.

The SIMPLE timings would be different if line-Jacobi inner iterations were used instead of point-Jacobi inner iterations: the parallel efficiency is reduced and the actual parallel run time is greater. One line-Jacobi inner iteration (consisting of two tridiagonal solves, one treating the unknowns implicitly along horizontal lines and the other along vertical lines) using the cyclic reduction method introduced earlier takes several times as long as one point-Jacobi iteration


on the CM-5. Line-Jacobi inner iterations are therefore not preferred over point-Jacobi inner iterations for use in the SIMPLE algorithm, unless the benefit to the convergence rate is substantial.

The line-updating variants of BRB (see [refs]) compare even less favorably with BRB than the line-Jacobi SIMPLE method does with the point-Jacobi SIMPLE method; they are not suitable for SIMD computation. The line-updating variations on BGS couple pressures and velocities between control volumes along a line, as well as within each control volume. By contrast, in sequential pressure-based methods, line-iterative methods are used within the context of solving the individual systems of equations, so only a single variable is involved. On the staggered grid, the unknowns which are to be updated simultaneously in the line-variant of BRB are, for a constant-j line, the interleaved set {p_{1,j}, u_{2,j}, p_{2,j}, ..., u_{ni,j}, p_{ni,j}}. To set up the tridiagonal system of equations for solving these unknowns simultaneously requires coefficient and source-term data to be moved from arrays which have the same layout as the u and p arrays. But these data must be moved to an array (or arrays) with a longer dimension in the i-direction: instead of having dimension m, the array which contains the unknowns, diagonals, and right-hand sides has roughly dimension 2m. The elements 1, ..., m for the constant-j line of u and of the u coefficient arrays (u, a_P, a_E, a_W, a_N, a_S, b, ...) must be moved into alternating positions among 1, ..., 2m, and similar data movement is required for the p coefficients and data. Thus "SEND"-type communication will be generated during each iteration to set up the tridiagonal system of equations along the lines. This type of communication is prohibitively expensive in an algorithm where all the other operations are relatively fast and efficient. Thus, if line-relaxation smoothing is required to solve a particular flow problem, for either a single-grid or a multigrid computation on the CM-5, the pressure-correction methods should be used. Otherwise, either BRB or SIMPLE-type methods can be


used, if time per iteration is the only consideration. With point-Jacobi inner iterations and comparable settings of ν_u, ν_v, and ν_c, SIMPLE and BRB have essentially the same parallel cost and efficiency.

Stability of Multigrid Iterations

It is well known that central-difference discretizations of the convection terms in the Navier-Stokes equations may be unstable if cell Peclet numbers are greater than two, depending on the boundary conditions [ref]. The coarse-grid levels have higher cell Peclet numbers. Consequently, multigrid iterations may diverge, driven by the divergence of smoothing iterations on coarse grids, if central-differencing is used. The convection terms on coarse grids may need to be upwinded for stability; however, second-order accuracy is usually desired on the finest grid. The "stabilization strategy" is the approach used to provide stability of the coarse-grid discretizations while simultaneously providing second-order accuracy for the fine-grid solution.

The naive stabilization strategy is simply to discretize the convection terms with first-order upwinding on the coarse-grid levels and with second-order central-differencing on the finest grid. Unfortunately, the naive approach does not work: there is a "mismatch" between the solutions on neighboring levels if different convection schemes are employed, resulting in poor coarse-grid corrections. In practice, divergence usually results. The coarse-grid discretization needs to be consistent with the fine-grid discretization in order that an accurate approximation of the fine-grid residual equation is generally possible.

In the present work a "defect-correction" stabilization strategy is employed, as in [refs]. The convection terms on all coarse grids are discretized by first-order upwinding. The convection terms on the finest grid are also upwinded, but a


source-term correction is applied which allows second-order central-difference accuracy to be obtained when the multigrid iterations have converged.

Another approach is to use a stable second-order accurate convection scheme, e.g. second-order upwinding, on all grid levels [ref]. Shyy and Sun [ref] have used different convection schemes on all grid levels and compared the convergence rates. Central-differencing, first-order upwinding, and second-order upwinding were tested for lid-driven cavity flow problems at moderate and high Reynolds numbers. Comparable convergence rates were obtained for all three convection schemes, whereas for single-grid computations there are relatively large differences in the convergence rates. Central-differencing was unstable for the higher Reynolds number case, but a hybrid strategy, with second-order upwinding on the coarsest three grid levels and central-differencing on the finer grid levels, remedied the problem without deteriorating the convergence rate. Further study of this issue is conducted in the next chapter, in which the convergence rate and stability characteristics of second-order upwinding on all grid levels are contrasted with the defect-correction strategy.

A third possibility is simply to add extra numerical viscosity to the physical viscosity on coarse grids. This technique has been investigated by Fourier analysis for a model linear convection-diffusion equation in [ref]. The authors' best strategy was the one in which the amount of numerical viscosity was taken to be proportional to the grid spacing on the next (finer) multigrid level. For the Navier-Stokes equations this brute-force approach is not expected to perform very well, because the solutions on the fine grids are frequently not just a smooth continuation of the lower Reynolds number flow problems being solved on the coarse grid levels. Rather, fundamental changes in the fluid dynamics occur as the Reynolds number increases.


Defect-Correction Method

In the defect-correction approach, the discretized equations for a variable φ are derived as follows. In general, the equations have the form

a_P^ce φ_P = a_E^ce φ_E + a_W^ce φ_W + a_N^ce φ_N + a_S^ce φ_S + b_P,

where the superscript "ce" denotes central-differencing of the convection terms. To form the discrete defect-correction equation, the corresponding first-order upwinded discrete equation is added to and subtracted from this equation and rearranged to give

a_P^u1 φ_P = a_E^u1 φ_E + a_W^u1 φ_W + a_N^u1 φ_N + a_S^u1 φ_S + b_P
           + [(a_P^u1 - a_P^ce) φ_P - (a_E^u1 - a_E^ce) φ_E - (a_W^u1 - a_W^ce) φ_W - (a_N^u1 - a_N^ce) φ_N - (a_S^u1 - a_S^ce) φ_S],

where the superscript "u1" denotes first-order upwinding of the convection terms. The term in brackets is equal to the difference in residuals (with the residual of each discretization defined as the left-hand side minus the right-hand side), so the equation can be written

a_P^u1 φ_P = a_E^u1 φ_E + a_W^u1 φ_W + a_N^u1 φ_N + a_S^u1 φ_S + b_P + [r^u1 - r^ce].

To obtain the updated solution, the difference in residuals is lagged. Thus the equation for the solution at iteration counter n, with the residuals evaluated at iteration counter n-1, is written

a_P^u1 φ_P^n = a_E^u1 φ_E^n + a_W^u1 φ_W^n + a_N^u1 φ_N^n + a_S^u1 φ_S^n + b_P + [r^u1 - r^ce]^(n-1).

Moving the first five terms on the right-hand side to the left-hand side, this can be rewritten concisely as

[r^u1]^n = [r^u1 - r^ce]^(n-1).
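The lagged source-term assembly can be sketched as follows. RESU1 and RESCE stand for point evaluations of the first-order upwind and central-difference residuals, and are assumed to exist only for the purpose of this illustration.

      SUBROUTINE DCSRC (PHI, B0, BDC, NI, NJ)
C     Defect-correction source term:  b_dc = b + (r_u1 - r_ce),
C     with the residual difference evaluated at the lagged
C     iterate PHI from the previous outer iteration.
      INTEGER NI, NJ, I, J
      REAL*8 PHI(NI,NJ), B0(NI,NJ), BDC(NI,NJ), RU1, RCE
      DO 10 J = 2, NJ - 1
         DO 10 I = 2, NI - 1
            CALL RESU1 (PHI, I, J, NI, NJ, RU1)
            CALL RESCE (PHI, I, J, NI, NJ, RCE)
            BDC(I,J) = B0(I,J) + (RU1 - RCE)
   10 CONTINUE
      RETURN
      END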


In this form it is easily seen that satisfaction of the second-order central-difference discretized equations, r^ce -> 0, is recovered when [r^u1]^n is approximately equal to [r^u1]^(n-1).

The table below compares the convergence rates for single-grid SIMPLE computations using four popular convection schemes for a lid-driven cavity flow problem. The purpose is to gain some intuition regarding the convergence properties of the defect-correction scheme. For all the cases presented in the table the same fine grid was used. The table gives the number of iterations required to converge both of the momentum equations to a prescribed level of ||r^u||, where the L1 norm, divided by the number of grid points, is used. The inner iterative procedure for computing an approximate solution to the u, v, and p' systems of equations during the course of the outer iterations of the SIMPLE algorithm is listed in the second column. In the line-Jacobi method, all the horizontal lines are solved simultaneously, followed by the vertical lines, during a single inner iteration. The SLUR procedure (the same technique as in the earlier chapters) also alternates between horizontal and vertical lines; in addition, the grid lines are swept one at a time in the direction of increasing i or j, in the Gauss-Seidel fashion, instead of all at once as in the line-Jacobi method. The numbers of inner iterations ν_u, ν_v, and ν_c for the governing equations were held fixed in the lower Reynolds number problem and increased for the higher Reynolds number flow, and a fixed damping factor was used for the line-Jacobi inner iterations. For the lower Reynolds number cases, the SIMPLE relaxation factors are held fixed for the momentum equations and for the pressure.


                                             Convection scheme
    Flow problem      Inner method     First-order  Defect-     Central       Second-order
                                       upwinding    correction  differencing  upwinding
    Lower-Re cavity   Point-Jacobi       [..]         [..]        [..]          [..]
    Lower-Re cavity   Line-Jacobi        [..]         [..]        [..]          [..]
    Lower-Re cavity   SLUR               [..]         [..]        [..]          [..]
    Higher-Re cavity  Point-Jacobi       [..]         [..]        [..]          [..]
    Higher-Re cavity  Line-Jacobi        [..]         [..]        [..]          [..]
    Higher-Re cavity  SLUR               [..]         [..]        [..]          [..]

Table: Number of single-grid SIMPLE iterations to converge to the prescribed ||r^u|| tolerance for the lid-driven cavity flow. The L1 norm is used, normalized by the number of grid points.

The convergence rate of defect-correction iterations is not quite as good as central-differencing or first-order upwinding, but it is slightly better than second-order upwinding. This result is anticipated for cases where central-differencing does not have stability problems, since the defect-correction discretization is a less-implicit version of central-differencing. Likewise, one should expect the convergence rate of SIMPLE with the defect-correction convection scheme to be slightly slower than with the first-order upwind scheme, due to the presence of source terms which vary with the iterations. The method (point-Jacobi, line-Jacobi, or SLUR) used for the inner iterations has no influence on the convergence rate for either Reynolds number tested. From experience it appears that the lid-driven cavity flow is unusual in this regard; for most problems the inner iterative procedure makes a significant difference in the convergence rate.

For the higher Reynolds number cases, the relaxation factors were reduced until a converged solution was possible using central-differencing. Then these relaxation factors, for the momentum equations and for the pressure, were used in conjunction with the other convection schemes. Actually, in the lid-driven cavity flows the pressure plays a minor role in comparison with the balance between convection and diffusion. Consequently, the pressure relaxation factor can be varied over a wide range with negligible impact on the convergence rate. The convergence rate is very sensitive to the momentum relaxation factor, however. The higher Reynolds number cavity flow is hard to


converge, and neither the defect-correction nor the second-order upwind scheme succeeds for these relaxation factors. Second-order central-differencing does not normally look this good either; the lid-driven cavity flow is a special case for which central-difference solutions can be obtained at relatively high Reynolds numbers, due to the shear-driven nature of the flow and the relative unimportance of the pressure gradient. For the higher Reynolds number case, the convergence paths of the four convection schemes tested are shown in the corresponding figure. None of the convection schemes is diverging, but the amount of smoothing appears to be insufficient to handle the source terms in the second-order upwind and defect-correction schemes at this Reynolds number.

Cost of Different Convection Schemes

There was initially some concern that the source-term evaluations in the defect-correction and/or second-order upwind convection schemes might be expensive in terms of the parallel run time. In light of the convergence results, it is of interest to know whether the cost per iteration is significantly increased, as this consequence might lead one to favor one convection scheme over another for considerations of run time if both have satisfactory convergence-rate characteristics. The corresponding figure compares the cost of computing the coefficients of the discrete u, v, and p' equations for three convection schemes. The timings were obtained on a vector-unit CM-5 over a fixed number of SIMPLE iterations. Since the smoother and the coefficient computations are the most time-consuming tasks in the SIMPLE algorithm, the cost of the inner iterations (the "solver") is included for comparison purposes (the solid line in the figure). The point-Jacobi inner iterations per outer iteration are distributed among the two momentum equations and the p' system of equations.


The timings were obtained over a range of problem sizes. The x-axis in the figure plots problem size in terms of the virtual processor ratio VP; VP is preferred over the number of grid points so that the results can be carried over to CM-5s with more processors. The coefficient cost scales linearly with problem size, and with the defect-correction scheme it requires about the same time as solving the equations. If more inner iterations were used, or if the more costly line-Jacobi method were used, the fraction of the overall run time due to the computation of coefficients would decrease. The linear scaling with VP is possible due to the uniform boundary-coefficient computation implementation discussed earlier. The figure also shows that second-order upwinding of the convection terms costs noticeably more than the other schemes. Additional testing has shown that the first-order upwind, hybrid, central-difference, and defect-correction schemes all use roughly the same amount of time.

More detail is shown in the following figure, which breaks down the time spent computing coefficients into computation and interprocessor communication. Because the difference stencils are compact, only nearest-neighbor processing elements need to communicate in the calculation of the equation coefficients. These are "NEWS"-type communications on the CM-5. In the present implementation, the coefficient computations for the momentum equations require a fixed number of NEWS communications for the defect-correction, central-differencing, and first-order upwind schemes; second-order upwinding requires more. Some of the communication operations are needed because the formulation supports nonuniform grids, and therefore some geometric quantities need to be communicated in addition to the nearby velocities. The additional NEWS communication is apparent in the figure. Similarly, the second-order upwind scheme involves more computation than the other schemes.


Coincidentally, the additional computation and interprocessor communication of the second-order upwind convection scheme offset each other in terms of their effect on the parallel efficiency. With either convection scheme the trend is essentially the same; an earlier figure gave the variation of E with VP for central-differencing.

Restriction and Prolongation Procedures

The discretization of the convection terms on coarse grids is a key issue, because the coarse-grid problem must be a reasonable approximation to the fine-grid discretized equation in order to obtain good corrections. In addition, for the formulation given in the background section, one must also say how the coarse-grid source terms are computed and how the corrections are interpolated to the fine grid. The restriction and prolongation procedures affect both the stability and the convergence rate. In this section three restriction procedures and two prolongation procedures are compared on two model problems with different physical characteristics, to assess the effect of the intergrid transfer procedures on the multigrid convergence rate.

For finite-volume discretizations, conservation is the natural restriction procedure for the equation residuals, because the terms in the discrete equations represent integrals over an area. The method of integration for source terms determines the actual restriction procedure. For piecewise-constant treatment of source terms in a cell-centered finite-volume discretization, the mass residual in a coarse-grid control volume is the sum of the mass residuals in the four fine-grid control volumes which comprise the coarse-grid control volume. This restriction procedure is used for the residuals of the continuity equation in every case tested. If the mass residual is summed and v1^h and v2^h are restricted by cell-face averaging (described below), the right-hand side of the coarse-grid continuity equation is identically zero [ref], which implies that the velocity field on coarse grids also satisfies the continuity equation, in addition


to the velocity field on the finest grid. However, it is not necessary to have identically zero coarse-grid source terms, even in the continuity equation.

Restriction procedure 1 obtains the initial coarse-grid solutions not by restricting the solutions, but instead by taking the most recently computed values on the coarse grid; these values will be from the previous multigrid cycle. The u-momentum equation residuals are summed over the six fine-grid u control volumes which comprise the coarse-grid u control volume under consideration; only half the contribution is taken from the cell-face neighbor u control volumes, due to the staggered grid.

For the restriction procedure denoted 2, u, v, and the momentum equation residuals are restricted by cell-face averaging. Cell-face averaging refers to the averaging of the two fine-grid u velocity components immediately above and below the coarse-grid u location, which lie on the same coarse-grid p control volume face. Similar treatment is applied to v. The coarse-grid pressures are obtained by averaging the four nearest fine-grid pressures.

Restriction procedure 3 indicates a weighted average of six fine-grid u velocity components, the cell-face ones and their nearest neighbors on either side; the cell-face fine-grid u velocity components contribute twice as much as their neighbors. Similar treatment is applied for v and for the momentum equation residuals. The coarse-grid pressures are obtained by averaging the four nearest fine-grid pressures, as in restriction procedure 2.

For the prolongation procedures, 1 and 2 indicate bilinear and biquadratic interpolation, respectively. The bilinear interpolation procedure is identical to that used by Shyy and Sun [ref], in which the two nearest coarse-grid corrections along a line (x = constant for u) are used to compute the correction at the location of the fine-grid u velocity component by linear interpolation. Similar treatment is adopted for the v corrections. A sketch of the cell-face averaging appears below.
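The cell-face averaging of restriction procedure 2 can be sketched as follows for the u component, assuming factor-2 coarsening; the exact index mapping depends on the staggered-grid storage conventions and is illustrative here.

      SUBROUTINE RSTRU (UF, UC, NIC, NJC)
C     Cell-face averaging: a coarse-grid u value is the average of
C     the two fine-grid u values lying on the same coarse-grid p
C     control volume face, one above the other.
      INTEGER NIC, NJC, I, J
      REAL*8 UF(2*NIC,2*NJC), UC(NIC,NJC)
      DO 10 J = 1, NJC
         DO 10 I = 1, NIC
            UC(I,J) = 0.5D0*( UF(2*I-1,2*J-1) + UF(2*I-1,2*J) )
   10 CONTINUE
      RETURN
      END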


To compute the corrections on the "in-between" fine-grid lines, the now-available fine-grid corrections are interpolated linearly. Corrections for pressure are interpolated linearly from the four nearest coarse-grid values.

The biquadratic interpolation procedure 2 is similar to the procedure used by Bruneau and Jouron [ref]. It finishes in exactly the same way as the bilinear interpolation, but is preceded by a quadratic (instead of linear) interpolation in the y-direction and an averaging in the x-direction. Thus the three nearest correction quantities on the coarse grid (above and below the fine-grid u location) are used to interpolate in the y-direction for a correction located at the position of the fine-grid u velocity component. After this y-direction interpolation there are two corrections defined on each face of the coarse-grid u control volumes, at the locations corresponding to the locations of the fine-grid u velocity components. These are injected to give the fine-grid corrections at these points, after a weighted averaging in the x-direction. For example, on a uniform grid this pre-injection averaging takes the form

u_c,corr(I,J) <- [ u_c,corr(I-1,J) + 2 u_c,corr(I,J) + u_c,corr(I+1,J) ] / 4,

where u_c,corr and the capitalized indices indicate that the correction quantities are still defined on the coarse grid; they are positioned to correspond with the fine-grid u locations. After the averaged corrections are injected to the fine grid, the fine-grid corrections are defined along every other line x = constant. The corrections on the "in-between" lines are linearly interpolated from the injected, averaged corrections. Similar treatment is adopted for the v corrections. Corrections for pressure are interpolated biquadratically from the nine nearest coarse-grid values.

The table below compares the various intergrid transfer procedures in terms of the work units required to reach a prescribed convergence tolerance on the finest grid level. The notation (p,r) indicates the numbers of the prolongation and restriction procedures adopted. The convergence tolerance on the fine grid is prescribed by


an estimate of the truncation error of the fine-grid discretization, which is derived in the next chapter. The criterion is typically not very stringent, so the table results best reflect differences in the initial convergence rate rather than the asymptotic convergence rate.

                 Number of work units to converge
    (p,r)        Cavity flow        Back-step flow
    (1,1)          [..]                [..]
    (1,2)          [..]                [..]
    (1,3)          [..]                [..]
    (2,1)          [..]                [..]
    (2,2)          [..]                [..]
    (2,3)          [..]                [..]

Table: The effect of different restriction and prolongation procedures on the convergence rate of the pressure-correction multigrid algorithm, for a multi-level lid-driven cavity flow problem and for a multi-level symmetric backward-facing step flow. The defect-correction approach is used.

Numerical experiments with the number of pre- and post-smoothing iterations have shown that for the cavity flow V cycles with few smoothing iterations provide enough smoothing, while more smoothing iterations per cycle are needed for the symmetric backward-facing step flow computation. With less smoothing, the number of work units to reach convergence generally increases, even though the number of work units per cycle is smaller.

The restriction procedure used appears to be very important to the convergence rate in either flow problem. Restriction procedure 1 (summation of residuals) appears to perform better than 2 or 3; the discussion presented earlier suggested this result. However, since the residuals are summed instead of averaged, they are typically larger, with more spatial variation also. As a result, more smoothing iterations are needed to ensure stability of the multigrid iterations. For r = 1, it appears that the bilinear interpolation procedure (p = 1) converges slightly faster than the biquadratic procedure.


The performance of the other restriction procedures appears to depend on the prolongation procedure. In both problems, the best results for r = 2 or r = 3 are obtained when the corresponding prolongation procedure (p = 1 or p = 2, respectively) is used. In the backward-facing step flow, the results for cell-face averaging (r = 2) are better than the six-point averaging by a significant amount; the same is true for the cavity flow, but to a lesser degree. The effect of Reynolds number for each flow problem should be considered in future work.

The later figures give a different look at the relative performance of the restriction procedures: cell-face averaging of solutions and residuals contrasted with summation of residuals only. The focus is on the asymptotic convergence rate, as opposed to the initial convergence rate considered in the table. The u-momentum equation average residual (the L1 norm divided by the number of grid points) is plotted on each grid level against work units. V cycles and bilinear interpolation (p = 1) were used for the symmetric backward-facing step flow calculation. The computations have been carried far beyond the point at which convergence was declared in the table. The dashed line shows the estimated truncation error on the fine grid, used to declare convergence for the table. Brandt and Ta'asan [ref] have argued that converging much below the truncation error level is of limited value: further multigrid cycles reduce the algebraic error, but not necessarily the differential error.

With the averaging restriction procedure, the initial multigrid convergence rate is rapid but levels off significantly after a modest number of work units. This apparently slow asymptotic multigrid convergence rate is still much better than the single-grid convergence rate for this flow problem, indicating that there is some benefit being obtained from the coarse-grid corrections with this restriction procedure. The corrections are evidently not as large as with the summation restriction procedure, for which the convergence path shows no reduction in the initial rapid convergence rate. It has been verified that


the convergence rate is maintained until the level of double-precision round-off error is reached, although the convergence path is shown only down to a higher level. These figures support the earlier observation that summation of residuals is the restriction procedure appropriate to the finite-volume discretization. The difference between the performance of the two restriction procedures is even more dramatic in the lid-driven cavity flow, shown in the last pair of figures.

The convergence rate of the present multigrid method appears to be comparable to other results in the literature. Sockol [ref] found that a roughly comparable number of work units was needed to obtain convergence for the lid-driven cavity flow for both BGS and SIMPLE; the residuals were summed, as in restriction procedure 1, but the variables were also restricted by cell-face averaging, and W(1,1) cycles were used. Shyy and Sun [ref] needed many more work units to reach convergence using V cycles at the same Reynolds number, but with less resolution on the fine grid; an averaging restriction procedure was used, the convergence criterion was tighter, and there were procedural differences from the present work and that of Sockol which may also account for the differences.

Concluding Remarks

Multigrid techniques are potentially scalable parallel computational methods, both in the numerical sense and in the computational sense. The key issue in applying multigrid techniques to the incompressible Navier-Stokes equations is the connection between the evolving solutions on the various grid levels, which includes the transfer of information between coarse and fine grids, i.e. the restriction and prolongation procedures, and the formulation of the coarse-grid problem, i.e. the choice of the coarse-grid convection scheme. These factors also influence the stability of the multigrid iterations.


The restriction procedure for finite-volume discretizations should be summing of residuals. Also, it was found unnecessary to restrict the solution variables. The convergence rates in both types of flow problems, shear- and pressure-driven, were significantly accelerated when the residuals were summed instead of averaged. However, because the residuals are then larger, more smoothing is found to be necessary to avoid stability problems in the symmetric backward-facing step flow. The bilinear prolongation procedure appears to be preferable to the biquadratic prolongation procedure. The convergence rates which have been achieved in the model problems are comparable to other results in the literature.

In terms of cost per iteration, it appears that the pressure-correction type smoother is comparable to the locally-coupled explicit method on the CM-5, whereas for serial computations the latter has been favored by some [ref]. Both algorithms consist of basically the same operations, with roughly twice as much influence on the parallel run time from the coefficient computations for BRB. The coefficient computation cost is comparable to the smoothing cost for the SIMPLE method, but for BRB the former is the dominant consideration. In that respect, the uniform implementation for boundary-coefficient computations described earlier and the choice of convection scheme are very important considerations. Using the second-order upwind scheme, the cost per iteration of SIMPLE (assuming point-Jacobi inner iterations) is roughly twice as much as with the defect-correction scheme, although there is negligible effect on the parallel efficiency.


Figure: Schematic of a V(3,2) multigrid cycle, which has three smoothing iterations on the "downstroke" of the V and two smoothing iterations on the "upstroke." The levels run from the fine grid at the top to the coarse grid at the bottom.


Figure: Comparison of the total parallel run time for SIMPLE and BRB on a vector-unit CM-5 for a fixed number of iterations over a range of problem sizes. The flow problem which was timed was lid-driven cavity flow.


Figure: Comparison of the parallel run times for SIMPLE and BRB, decomposed into contributions from the coefficient computations and the solution steps in these algorithms. The times were obtained on a vector-unit CM-5 for a fixed number of iterations over a range of problem sizes. The convection terms are central-differenced.


Figure: Comparison of the parallel run time for SIMPLE and BRB, decomposed into contributions from parallel computation (node CPU) and nearest-neighbor interprocessor communication ("NEWS"), plotted against VP. The timings were made on a vector-unit CM-5 for a fixed number of iterations over a range of problem sizes.


Figure: Decrease in the norm of the u-momentum equation residual as a function of the number of SIMPLE iterations, for different convection schemes. The results are for a single-grid simulation of the lid-driven cavity flow. The alternating line-Jacobi method is used for the inner iterations; the results do not change significantly with the point-Jacobi or the SLUR solver.


Figure: Comparison between two convection schemes in terms of parallel run time. The total (computation plus communication) time spent computing coefficients over the timed SIMPLE iterations on a vector-unit CM-5 is plotted against the virtual processor ratio VP. "Solver time" is the time spent on the point-Jacobi inner iterations per SIMPLE iteration for the u, v, and p' systems of equations. It is just coincidental that for the defect-correction and central-difference cases the coefficient computations and the solver time are about equal.


Figure: For the second-order upwind and defect-correction schemes, the time spent in coefficient computations over the timed SIMPLE iterations, decomposed into contributions from computation (denoted "CPU") and from nearest-neighbor interprocessor communication (denoted "NEWS"). These quantities are plotted against the virtual processor ratio VP. Times are for a vector-unit CM-5.


Figure: Parallel efficiency E of the CM-5 SIMPLE code for a range of problem sizes, E = T1/(np Tp), where T1 is the serial execution time, estimated by multiplying the measured computation time per processor by the number of processors np, and Tp is the elapsed CM-5 run time, including computation, interprocessor, and front-end-to-processor types of communication. Curves are shown for the second-order upwind and defect-correction schemes with point-Jacobi inner iterations.


Figure: Convergence path on each grid level for a multi-level V-cycle FMG-FAS computation of the symmetric backward-facing step flow (defect-correction scheme, point-Jacobi solver). The residual norm on each level is plotted against work units, together with the truncation error norm. Bilinear interpolation (p = 1) and cell-face averaging for restriction (r = 2) are used.


Figure: Convergence path on each grid level for a multi-level V-cycle FMG-FAS computation of the symmetric backward-facing step flow (defect-correction scheme, point-Jacobi solver). Bilinear interpolation (p = 1) and summation of residuals for restriction (r = 1) are used.


Figure: Convergence path on each grid level for a multi-level V-cycle FMG-FAS computation of the lid-driven cavity flow (defect-correction scheme, point-Jacobi solver). The residual norm on each level is plotted against work units, together with the truncation error norm. Bilinear interpolation (p = 1) and cell-face averaging for restriction (r = 2) are used.


Figure: Convergence path on each grid level for a multi-level V-cycle FMG-FAS computation of the lid-driven cavity flow (defect-correction scheme, point-Jacobi solver). Bilinear interpolation (p = 1) and summation of residuals for restriction (r = 1) are used.


CHAPTER
IMPLEMENTATION AND PERFORMANCE ON THE CM-5

This chapter describes the implementation on the CM-5 of the multigrid method studied previously, and applies the parallel code to two model flow problems to assess the performance, both in terms of the convergence rate and the cost per iteration. The major implementational consideration for the CM-5 multigrid algorithm is the storage problem. The starting procedure, by which an initial guess is generated for the fine grid, is an important practical technique whose cost on parallel computers is of interest. Also, the starting procedure is important in the sense that the initial guess can affect the stability of the subsequent multigrid iterations and the convergence rate. The cycling strategy is discussed next; it also affects both the run time and the convergence rate. Because of the non-negligible smoothing cost of coarse grids, the comparison between V and W cycles in terms of the time per cycle is different than on serial computers and needs to be assessed for the CM-5. The purpose of the chapter is to provide some practical guidance regarding the use of the numerical method on the CM-5, now that the choices for the smoother, the coarse-grid discretization, and the restriction and prolongation procedures have been addressed. Finally, the computational scalability of the parallel implementation is studied using timings for a range of problem sizes and numbers of processors. With the experience gained with regard to the choice of algorithm components and practical techniques, this information gives a clear picture of the potential of the present approach for scaled-speedup performance on massively-parallel SIMD machines.


Storage Problem

Multigrid algorithms pose implementational problems in Fortran because the language does not support recursion. A variable number of multigrid levels must be accommodated, but care must be taken not to waste memory. Let NI(k) and NJ(k) be arrays denoting the grid extents on the k-th multigrid level, where k = 1 refers to the coarsest grid and k = kmax is the finest grid. The dimension extents on the fine grid are parameters of the problem. For an array A, the different grid levels are made explicit by adding a third array dimension. This is a natural, albeit naive, storage declaration:

      PARAMETER ( NIMAX = ..., NJMAX = ..., KMAX = ... )
      REAL*8 A( NIMAX, NJMAX, KMAX )

Unfortunately, this approach wastes storage, because every grid level is dimensioned to the extents of the finest grid. The coarse grids are significantly smaller, though, decreasing in size by a factor of 4 for each level beneath the top level (the fine grid). The total amount of memory used in this approach is the number of arrays n_array multiplied by the storage cost of each array,

Storage_naive = NI(kmax) NJ(kmax) kmax n_array.

The actual storage needed is only

Storage = sum over k of NI(k) NJ(k) n_array = sum over k of 4^(-(kmax-k)) NI(kmax) NJ(kmax) n_array,

which approaches (4/3) NI(kmax) NJ(kmax) n_array as kmax increases. Thus the wasted storage is approximately (kmax - 4/3) NI(kmax) NJ(kmax) n_array when the naive approach is used. Clearly this can become the dominating factor very quickly as the number of levels increases.
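The storage arithmetic is easily checked; the short program below, for an assumed six-level 257 x 257 hierarchy (the sizes are illustrative only), prints the naive and the actual requirements per array.

      PROGRAM STORE
C     Compare naive multigrid storage (every level dimensioned to
C     the fine grid) with the actual requirement, which approaches
C     4/3 of one fine-grid array as the number of levels grows.
      INTEGER KMAX, K, NI, NJ, NAIVE, NEED
      PARAMETER (KMAX = 6)
      NI = 257
      NJ = 257
      NAIVE = NI*NJ*KMAX
      NEED  = 0
      DO 10 K = KMAX, 1, -1
         NEED = NEED + NI*NJ
         NI = NI/2 + 1
         NJ = NJ/2 + 1
   10 CONTINUE
      WRITE (*,*) 'naive storage  =', NAIVE
      WRITE (*,*) 'actual storage =', NEED
      END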


One efficient solution for serial computation is to declare a 1d array of sufficient size to hold all the data on all levels, and to reshape it across subroutine boundaries, taking advantage of the fact that Fortran passes arrays by reference. This practice is typical in serial multigrid codes [ref]. A 1d array section of the appropriate length for the grid level under consideration is passed to a subroutine, where it is received as a 2d array with the dimension extents NI(k) x NJ(k).

On serial computers this reshaping of arrays across subroutine boundaries is possible because the physical layout of the array is linear in the computer's memory. On distributed-memory parallel computers like the CM-5, however, the storage problem is not so easily resolved, because the data arrays are not physically in a single processor's memory; they are distributed among the processors. Instead of being passed by reference, as is the case with Fortran on serial computers, data-parallel arrays are passed to subroutines by "descriptor" on the CM-5. The array descriptor is a front-end array containing information about the array being described: the layout of the physical processor mesh, the virtual subgrid dimensions, the rank and type of the array, the name, and so on.

On the CM-5 the storage problem is resolved using array "aliases." Array aliasing is a form of the Fortran EQUIVALENCE facility used on serial computers. In the multigrid algorithm, storage for each variable is initially declared for all grid levels, explicitly referencing the physical layout of the processors. For example, an array A with fine-grid dimension extents NI(kmax) x NJ(kmax) is declared as follows for a vector-unit CM-5 with the processors arranged in an nIp x nJp mesh:

      PARAMETER ( NSERIAL = (4/3) NI(kmax) NJ(kmax) / (nIp nJp) )
      REAL*8 A( NSERIAL, nIp, nJp )

Actually, the factor 4/3 needs to be increased slightly to account for "array padding." Each physical processor must be assigned exactly the same number of


virtual processors in the SIMD model, since all processors do the same thing at the same time. Thus, in general, the array dimensions on each level must be "padded" to fit exactly onto the processor mesh. Each coarse grid, whose extents are roughly half those of the next finer grid in each direction, must therefore be stored with its dimensions rounded up to multiples of the processor-mesh dimensions, so that every physical processor holds a subgrid of exactly the same shape and size; on the coarsest grids this padding is a significant fraction of the storage. Thus the actual declared storage needs to be slightly more than that shown above.

The array A is mapped to the processors using the compiler directives discussed earlier. The first dimension extent of A is the actual storage needed per physical processor; it is laid out linearly in each physical processor's memory by the SERIAL specification in the LAYOUT compiler directive. The latter two dimensions are parallel (NEWS), laid out across the physical processor mesh. Then, to access the parts of A corresponding to each grid level, array aliases (alternate front-end array descriptors for the same physical data) are created. For example, an equivalence is established between an array section of A and another array whose two dimensions are the grid extents of the level in question. In this way arrays can be referenced inside subroutines as if they had the dimensions of the alias, with both dimensions parallel; a (NEWS, NEWS) layout of the alias can be declared even though, in the calling routine, the data come from an array of a different shape. This feature, array aliasing, is relatively new in the CM-Fortran compiler evolution [ref] and has not yet been implemented by MasPar in their compiler. Previous multigrid algorithms on SIMD computers were restricted to either the naive approach or explicit declaration of arrays on each level [ref].
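A sketch of the declaration is given below. The LAYOUT directive shown is standard CM-Fortran; the aliasing step itself is indicated only schematically, since the exact form of the alias declaration is compiler-version dependent, and the helper shown is hypothetical.

C     Pooled storage for all grid levels: one serial axis holding
C     each processor's share of every level, and a 2d NEWS-parallel
C     processor mesh (illustrative sizes).
      INTEGER NSER, NIP, NJP
      PARAMETER (NIP = 32, NJP = 16, NSER = 1400)
      REAL*8 A(NSER, NIP, NJP)
CMF$  LAYOUT A(:SERIAL, :NEWS, :NEWS)
C     For grid level k, a 2d alias AK with extents NI(k) x NJ(k),
C     both laid out :NEWS, is associated with the section of A
C     that holds level k, e.g. (hypothetical helper routine):
C         CALL MKALIAS (A, KOFF(K), AK, NI(K), NJ(K))
C     Inside subroutines, AK is then used like an ordinary,
C     fully parallel 2d array.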


The latter approach is extremely tedious and leads to very large front-end executable codes, making front-end storage a concern. Thus the present technique for getting around the multigrid storage problem, although requiring some programming diligence, is critical, because it permits much larger multigrid computations to be attempted on SIMD-type parallel computers. As observed earlier for the CM-5, problem sizes of the order of the largest possible problem sizes are necessary to obtain good parallel efficiencies.

Multigrid Convergence Rate and Stability

The "full multigrid" (FMG) startup procedure [ref] begins with an initial guess on the coarsest grid. Smoothing iterations using the pressure-correction method are done until a converged solution has been obtained. Then this coarsest-grid solution is prolongated to the next grid level, and multigrid cycles are initiated at level 2, the next-to-coarsest grid level. Cycling at this level continues until some convergence criterion is met. The solution is then prolongated to the next finer grid and multigrid cycling resumes. This process is repeated until the finest grid level is reached. The converged solution on level kmax - 1, after interpolation to the fine grid, is a much better initial guess than is possible otherwise; the alternative is to use an arbitrary initial guess on the fine grid. For Poisson equations, one V cycle on the finest grid is frequently sufficient to reach a converged solution if the initial guess is obtained by the FMG procedure. The benefit to the convergence rate of a good initial guess more than offsets the cost of the V cycles on the coarse grids leading up to the finest grid level. For the Navier-Stokes equations, the cost-convergence rate trade-off still favors using the FMG procedure on serial computers. For parallel computers, however, the cost of the FMG procedure is more of a concern, due to the inefficiencies of smoothing the coarse grids and the potential need for many coarse-grid cycles.
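In outline, the FMG driver has the following structure; SOLVEC, PROLNG, VCYCLE, and CONVRG are illustrative names for the coarsest-grid solve, the prolongation, one multigrid cycle at level k, and the level-k convergence test.

      SUBROUTINE FMG (KMAX)
C     Full-multigrid startup: converge the coarsest grid, then
C     prolong and cycle at each successively finer level.
      INTEGER KMAX, K
      LOGICAL CONVRG
C     converge the coarsest grid with the smoother alone
      CALL SOLVEC (1)
      DO 10 K = 2, KMAX
C        initial guess on level k from the level k-1 solution
         CALL PROLNG (K-1, K)
   20    CONTINUE
         CALL VCYCLE (K)
         IF (.NOT. CONVRG(K)) GO TO 20
   10 CONTINUE
      RETURN
      END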

PAGE 139

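The control flow of the FMG startup can be sketched compactly. The fragment below is only a skeleton under stated assumptions: the smoother, the prolongation, and the convergence test are stubs standing in for the pressure-correction smoother, bilinear interpolation, and the truncation error criterion developed in the next section.

    ! Skeleton of the FMG startup logic (stubs in place of real routines).
    program fmg_demo
      implicit none
      integer, parameter :: nlevel = 4
      integer :: k, cycles
      call smooth_on(1)                 ! converge the coarsest grid first
      do k = 2, nlevel
         call prolongate(k-1, k)        ! carry the solution up one level
         cycles = 0
         do                             ! multigrid cycles with level k on top
            call vcycle(k)
            cycles = cycles + 1
            if (level_converged(k, cycles)) exit
         end do
      end do
    contains
      subroutine smooth_on(k)
        integer, intent(in) :: k
        print *, 'smooth to convergence on level', k
      end subroutine smooth_on
      subroutine prolongate(kc, kf)
        integer, intent(in) :: kc, kf
        print *, 'prolongate level', kc, ' -> level', kf
      end subroutine prolongate
      subroutine vcycle(k)
        integer, intent(in) :: k
        print *, 'V cycle with top level', k
      end subroutine vcycle
      logical function level_converged(k, cycles)
        integer, intent(in) :: k, cycles
        level_converged = (cycles >= 2)  ! stand-in for a real criterion
      end function level_converged
    end program fmg_demo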
On SIMD computers the smoothing iterations on the coarse grid levels have a fixed baseline time, set by the communication overhead of the front-end-to-processor type. Thus the cost of the FMG procedure is increased compared to serial computation, because coarse-grid smoothing is relatively more costly (less efficient) than fine-grid smoothing. It becomes important, with regard to cost, to minimize the number of coarse-grid cycles without sacrificing the benefit of a good initial guess to the multigrid convergence rate. Tuminaro and Womble [ ] have recently modelled the parallel run time of the FMG cycle on a distributed-memory MIMD computer, a …-node nCUBE. They developed a grid-switching criterion to account for the inefficiencies of smoothing on coarse grids; the criterion effectively reduces the number of coarse-grid cycles taken during the FMG procedure. They have not yet reported numerical tests of their model, but the theoretical results indicate that, with their technique, the cost-convergence rate trade-off can still favor FMG cycles for multigrid methods on parallel computers.

In the next section a truncation error estimate is developed and then used to control the amount of coarse-grid cycling in the FMG procedure. The validity and the numerical characteristics of the truncation error estimate are addressed. In addition to the cost of obtaining the initial guess on the fine grid, the quality of the initial guess can affect both the convergence rate and the stability of the subsequent multigrid iterations, depending on the flow problem and the coarse-grid convection scheme, i.e., the stabilization strategy. The performance of the truncation error criterion in this regard is also studied.
Truncation Error Convergence Criterion for Coarse Grids

The goal of a given discretization and numerical method is to obtain an approximate solution v^h to Eq. … which nearly satisfies the differential equation, i.e., to achieve

    ||A^h u − A^h v^h|| ≤ ε

for some small ε. However, u is unknown, and there are many complicating, interacting factors, due to the grid distribution (resolution), the discretization of the nonlinear terms, and the proper modelling and specification of boundary conditions. Thus the conservative philosophy is usually adopted: assume that the discretized equation is a good approximation to the differential equation, and seek the exact solution to the discrete equation, i.e., seek algebraic convergence,

    ||A^h u^h − A^h v^h|| = ||S^h − A^h v^h|| = ||r^h|| ≤ ε,

again choosing the level ε to accommodate any imposed constraints on the run time. This algebraic criterion is applied on the finest grid in a multigrid computation, the level on which the solution is desired.

The coarse-grid solution obtained in the FMG procedure has only one purpose: to yield a good initial guess on the fine grid. The "best" initial guess is the one that allows the fine-grid criterion to be satisfied quickest. The corresponding coarse-grid solution from which the fine-grid initial guess is obtained may or may not itself satisfy the criterion with ε. It is not always beneficial to the fine-grid convergence rate to obtain the coarse-grid solution to strict tolerances. The utility of a coarse-grid solution, for the purpose of providing a good initial guess on the fine grid, depends more on the difference in the truncation errors of the A^2h and A^h approximations than it does on the accuracy of the coarse-grid solution.
For example, in highly nonlinear equations, or in problems where grid levels are coarsened by factors greater than two, it is immediately apparent that the solution accuracy in the coarse-grid solution to the discrete problem cannot translate into a truly accurate initial guess on the fine grid, no matter how accurately the coarse-grid problem is solved. The usefulness of the coarse-grid solution depends on the smoothness of the physical solution and on the prolongation procedure. Consequently, one expects that the most cost-effective procedure for controlling the FMG cycling will be obtained with a particular set of coarse-grid tolerances that depend on the flow characteristics.

Thus the goal should be to discontinue the FMG cycles on a particular coarse grid level when the truncation-error criterion below is satisfied. Frequently it is satisfied before the algebraic criterion. Similar arguments have been made by Brandt and Ta'asan [ ]. Using the definitions of the truncation error and the residual, the triangle inequality gives

    ||A^h u − A^h v^h|| ≤ ||A^h u − A^h u^h|| + ||A^h u^h − A^h v^h|| = ||τ^h|| + ||r^h||.

Thus, if ||τ^h|| ≤ ε, the differential criterion can be satisfied if the residual is driven below the truncation error,

    ||r^h|| ≤ ||τ^h||.

This is the criterion applied to the coarse grids, while the algebraic criterion is retained for the finest level. To develop an estimate for ||τ^h||, consider an example case of a nonlinear convection-diffusion equation with a constant or position-dependent source term,

    u du/dx − ν d²u/dx² = S.
For a finite-difference discretization with central differencing for both derivative terms, the truncation error at grid point i on the grid with spacing h is given by

    u_i (u_{i+1} − u_{i−1}) / (2h) − ν (u_{i+1} − 2u_i + u_{i−1}) / h² − S_i = τ_i^h,

where u is the differential solution at the position x = ih, and the leading term of τ_i^h is proportional to h². Similarly, on the grid with spacing 2h,

    u_I (u_{I+1} − u_{I−1}) / (4h) − ν (u_{I+1} − 2u_I + u_{I−1}) / (2h)² − S_I = τ_I^{2h}.

The grid points x = 2Ih and x = ih correspond, but I+1 refers to the point at x = x_i + 2h, whereas i+1 refers to the point at x = x_i + h. Assuming the high-order terms are negligible (debatable for fluid flow problems, unless the solution is very smooth) and subtracting the first equation from the second at the grid points of the 2h grid, one obtains, in operator notation,

    A^2h u − S^2h − [A^h u − S^h] = τ^2h − τ^h.

Substituting the most current approximation v^h for u (at the coarse-grid grid points), and the approximate values v^2h = I_h^2h v^h, this expression becomes

    A^2h(I_h^2h v^h) − S^2h − I_h^2h[A^h v^h − S^h] ≈ τ^2h − τ^h.
The term in brackets is just the residual r^h (at the corresponding coarse-grid grid point). For finite-difference discretizations, this residual is presumed to be accurately approximated by I_h^2h r^h. Thus the truncation error of the fine-grid discretization, estimated at the coarse-grid grid points, is

    τ^2h − τ^h ≈ A^2h(I_h^2h v^h) − S^2h − I_h^2h r^h.

This expression, however, is merely the numerically derived part of the coarse-grid source term, S^2h_numerical, in Eq. …. Thus

    τ^2h − τ^h ≈ S^2h_numerical − S^2h.

The convergence criterion based on this truncation error estimate becomes

    ||r^h|| ≤ ||S^2h_numerical − S^2h|| / (2^p − 1),

where p is the order of the discretization. The norms used on each side should be divided by the appropriate number of grid points (since they are defined on different grid levels), so that the quantities represented are comparable. The L1 norm is used here; on a grid with N points, the L1 norm of a vector v is

    ||v|| = (1/N) Σ over all i,j of |v_ij|.

This criterion is very convenient: it is a way of setting the coarse-grid tolerances in the FMG procedure automatically. Also, since the additional coarse-grid term S^2h_numerical is already computed as part of the coefficient computations preceding the coarse-grid smoothing, there are no new quantities to be computed and monitored.
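Two points in the preceding development deserve a compact restatement. First, the size of the denominator follows from the leading-order behavior of the truncation error; assuming a scheme of order p and a smooth solution,

    \tau^h = C\,h^p + O(h^{p+1}), \qquad \tau^{2h} = C\,(2h)^p + O(h^{p+1})
    \quad\Longrightarrow\quad \tau^{2h} - \tau^h \approx (2^p - 1)\,\tau^h ,

so the relative truncation error overestimates τ^h by a factor of 1 for a first-order (p = 1) convection scheme and by a factor of 3 for a second-order (p = 2) scheme; these are the denominators that appear below. Second, the stopping test reduces to a comparison of two normalized L1 norms. The function below is an illustrative sketch only (array names and the interface are hypothetical, and denom stands for 2^p − 1):

    ! Truncation-error stopping test (illustrative; names hypothetical).
    logical function te_converged(r_fine, s_num, s_coarse, denom)
      implicit none
      real, intent(in) :: r_fine(:,:)                ! fine-grid residual
      real, intent(in) :: s_num(:,:), s_coarse(:,:)  ! coarse-grid source terms
      real, intent(in) :: denom                      ! 2**p - 1
      real :: rnorm, tau
      rnorm = sum(abs(r_fine)) / real(size(r_fine))  ! average L1 norms
      tau   = sum(abs(s_num - s_coarse)) / (denom * real(size(s_num)))
      te_converged = (rnorm <= tau)
    end function te_converged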
Numerical Characteristics of the FMG Procedure

The following issues are addressed: the validity and utility of the analysis above leading to the truncation error criterion; the performance of the resulting FMG procedure based on that criterion, in terms of the cost and the initial residual level on the fine grid; and the characteristics of the convergence path through the FMG cycling, as a function of the flow problem and the coarse-grid convection scheme.

Two flow problems with very different physical characteristics are considered: the lid-driven cavity flow at Reynolds number … and a symmetric backward-facing step flow at Reynolds number …. Streamlines, velocity, vorticity, and pressure contours for the two model flow problems are shown in Figures … and …, to clarify the problem specification and bring out their different physical features. In the streamline plots, the contours both inside and outside the recirculation regions are spaced evenly. However, because the recirculation regions are fairly weak in both problems, the spacing between contour levels is set to be smaller within the recirculation regions, in order to bring out the flow pattern.

The lid-driven cavity flow is a recirculating flow where convection and cross-stream diffusion balance each other in most of the domain, and the pressure gradient is important only in the upper-left corner. In contrast, the symmetric backward-facing step flow is aligned with the grid for much of the domain. The pressure gradient balances viscous diffusion, as in channel-type flows. These problems are challenging in different ways and are representative of much broader cross-sections of interesting flow situations.

Figures … show the convergence path of the u-momentum residual in the lid-driven cavity flow for different coarse-grid convergence criteria. The residual is plotted for the current outermost level during the FMG procedure. Also, the plot is continued for the first three multigrid cycles on the finest grid level, to show the initial multigrid convergence rate on the fine grid. The finest grid level was … × …, and seven multigrid levels were used; the coarsest grid is … × …. The defect-correction approach was used (first-order upwinding on coarse grids and defect-correction on the finest level).
V(…) cycles, with bilinear interpolation for the prolongation procedure and, for the restriction procedure, piecewise-constant summation of the residuals only, were used. The relaxation factors were ω_u = ω_v = … and ω_p = …, and point-Jacobi inner iterations were used, with ν_u = …, ν_v = …, and ν_c = …. In the symmetric backward-facing step results given below, the same procedures are used, except that in the smoother the relaxation factors are ω_u = ω_v = … and ω_p = …. The fine grid is … × … and five multigrid levels are used.

In Figure …, the truncation error criterion is applied with the denominator set to 1. This is the "right" denominator according to the analysis behind the criterion, since the outermost levels during the FMG cycling on coarse grids are first-order accurate in the convection term, provided convection is important in the flow problem. The tolerances given by the truncation error criterion are graded, because the truncation error is larger on coarser grids. The spacing between the levels is uneven, though, and depends on the evolving solution. For the cavity flow, ||τ^h|| with the denominator equal to 1 converged to … and … for levels … through …; on the finest grid, the truncation error estimate converges to ….

The figure shows a jump in the residual level, going from coarse grid to fine grid, of approximately a factor of four between any two successive levels. This jump is just the ratio of the control-volume sizes. Physically, the equation residuals represent integrated quantities in the finite-volume discretization. Thus, whether on the coarse or the fine grid, the net residual (L1 norm) should be roughly the same or greater (the bilinear/biquadratic interpolations considered here should not be expected to improve the solution, since they are not derived from the physics). In the norm used here, the sum of the residuals is divided by the number of grid points.
Thus, in the best case, one would anticipate the result which has been obtained, with the factor-of-four decrease in the average residual; the fine-grid control volumes are a factor of four smaller than the coarse-grid control volumes. The fact that the maximum jump is achieved indicates that the order of the prolongation procedure is sufficient for the flow problem. In Figure …, the corresponding case for the symmetric backward-facing step flow is shown; the jump in the average u residual between levels is about the same. Similar observations hold for second-order upwinding in both flow problems, using the truncation error criterion with the denominator set equal to three. Thus the results obtained are plausible, and about the best results which are possible would be expected.

Figure … shows the effect of applying a more stringent coarse-grid convergence criterion. In this case the truncation error estimate is again used, but with the denominator set to five. A slight improvement in the initial level of the residual on the finest grid is obtained: after one fine-grid cycle the residual is …, compared to … for the FMG cycle. However, tightening the coarse-grid tolerances even further does not give any benefit. For example, Figure … shows the FMG convergence path when the coarse-grid residual is driven down to a specified value on each level, i.e., when ||r^h|| ≤ ε is applied, with ε = … in Figure …. Also, in the subsequent figure, Figure …, the FMG convergence path is shown for a "graded" set of tolerances. Specifically, for the …-level cavity flow, levels … through … were converged to tolerances reduced by a fixed factor per level. These particular values are all equal to … if, instead of the average L1 norm, the residual is normed according to

    ||r|| = Σ over i,j of |r_ij| / flux,

where flux is a characteristic momentum flux, equal to the Reynolds number in the present flow problem. Shyy and Sun [ ] used this approach. The tolerance on level … was chosen a posteriori, to match the known initial level of the fine-grid residual.
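The distinction between the two normalizations is easy to state in code. The two functions below are a sketch (names are hypothetical): the first is the average L1 norm used here, the second the flux-normalized variant attributed above to Shyy and Sun, with flux a problem-dependent constant (the Reynolds number in the present problems).

    ! Two residual norms compared in the text (illustrative sketch).
    real function l1_average(r)
      implicit none
      real, intent(in) :: r(:,:)
      l1_average = sum(abs(r)) / real(size(r))   ! divide by grid points
    end function l1_average

    real function l1_flux_normalized(r, flux)
      implicit none
      real, intent(in) :: r(:,:)
      real, intent(in) :: flux                   ! characteristic momentum flux
      l1_flux_normalized = sum(abs(r)) / flux
    end function l1_flux_normalized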
The graded set of coarse-grid tolerances is representative of a "best possible guess" that one could make without prior experience. From these figures, there does not appear to be any benefit in converging the coarse grids to tighter tolerances. Furthermore, there is the disadvantage that tighter coarse-grid tolerances require more coarse-grid cycles, and are therefore more expensive in terms of work units, and especially in terms of run time on the CM-5 (the bottom plot). The graded tolerances work almost as well as the truncation error criterion, except that there are a few unnecessary cycles on levels … and …. The trade-off between the run time elapsed during the FMG procedure, on serial and parallel computers, and the initial level of the u residual is summarized in Table ….

    Coarse-grid         V(…) cycles   FMG work   FMG CM      Initial fine-grid
    tolerances          on levels     units      busy time   u residual
    TE, denom = 1       {…}           …          … s         …
    TE, denom = 5       {…}           …          … s         …
    … on all levels     {…}           …          … s         …
    … on all levels     {…}           …          … s         …
    Graded tolerances   {…}           …          … s         …

Table …: Comparison between different sets of coarse-grid tolerances, in terms of the effort expended in the FMG procedure, for the Re = … lid-driven cavity flow and the bilinear interpolation prolongation procedure. The defect-correction stabilization strategy is used.

To judge which case is the "best," one asks how many work units or how much cpu time is required to reach a given level of the residual. A few fine-grid cycles are required to make up the difference in the initial levels of the fine-grid residual. These are charged at a rate of slightly more than … work units per V(…) cycle for this …-level problem with the … × … fine grid, equivalent to about … seconds on a …-VU CM-5. Thus the "1FMG" procedure (the first row) is judged to be the most efficient.
Evidently the cavity flow problem is relatively benign in terms of the effect of convection on the convergence rate characteristics. The truncation error estimate is immediately satisfied on each of the coarse grids in the …-level computation after only one V(…) cycle. Even less smoothing is possible for this problem, even though the Reynolds number is high. Table … clarifies the role of the FMG procedure in this flow problem.

    V(…) cycles     FMG work   FMG CM      Initial fine-grid
    on levels       units      busy time   u residual
    {…}             …          (diverges)  (diverges)
    {…}             …          (diverges)  (diverges)
    {…}             …          … s         …
    {…}             …          … s         …
    {…}             …          … s         …
    {…}             …          … s         …

Table …: Accuracy-effort trade-off between a "1FMG" approach (…th row) and simple V cycling with a zero initial guess on the fine grid (1st row). An approximate solution must be obtained on at least level … in order to avoid divergence, for the …-level, Re = … lid-driven cavity flow problem, when the relaxation factors are ω_u = ω_v = …, ω_c = …. "FMG work units" refers to the work units (proportional to a serial computer's run time) already expended at the point when multigrid cycling on the finest grid level begins. "CM busy time" is the corresponding measure of work on a …-VU CM-5, in seconds. The "×" in the column corresponding to level … means that … SIMPLE iterations were done on the coarsest grid. These data are for the defect-correction strategy.

Thus it is possible to prolong the solution directly from level … (a … × … grid) to the fine grid. However, for the relaxation factors used, an initial guess from an even coarser grid (level … or …) is not accurate enough to prevent the fine-grid V(…) cycles from diverging.

The results in Figures … showed that the initial residual on the fine grid was independent of the degree of accuracy obtained on the coarser grid levels.
Closer examination shows that the initial residual levels on the coarse grid levels during the FMG procedure also do not appear to depend on the degree to which the next coarser grid level is converged. Furthermore, this observation holds for second-order upwinding on all levels in the cavity flow, and for either defect-correction or second-order upwinding in the symmetric backward-facing step flow. The FMG convergence paths for the step flow, using second-order upwinding on all grid levels, are shown in Figures ….

There appears to be a certain maximum amount of accuracy that can be carried over to the next finer grid with the bilinear interpolation prolongation. Since the truncation error convergence criterion does not exceed this amount of accuracy, and indeed the average residual levels are virtually the same if the denominator in the estimate is set to five, the results strongly suggest that the degree of accuracy on a given coarse grid which is exploitable is related to the differential error in the solution, i.e., the truncation error, and not to the algebraic error. Thus the results support the arguments made following the derivation of the criterion above.

With regard to the performance of the truncation error criterion, the defect-correction and second-order upwind stabilization strategies showed similar results in both flow problems. The initial fine-grid residual level and the stability of the subsequent multigrid iterations, however, appear to be strongly dependent on the convection schemes used. Table … summarizes the FMG convergence rates for second-order upwinding in the lid-driven cavity flow. The constant-tolerance and graded-tolerance cases both converged with the defect-correction scheme, but with second-order upwinding they diverge. After several fine-grid cycles, the … case diverges also. The difference between the cases is evident: many more coarse-grid cycles are taken in the cases which diverge.
    Coarse-grid         V(…) cycles   FMG work   FMG CM      Initial fine-grid
    tolerances          on levels     units      busy time   u residual
    TE, denom = …       {…}           …          … s         …
    TE, denom = …       {…}           …          … s         …
    … on all levels     {…}           …          … s         …
    … on all levels     {… ∞}         …          (diverges)  (diverges)
    Graded tolerances   {… ∞}         …          (diverges)  (diverges)

Table …: Comparison between different sets of coarse-grid tolerances, in terms of the effort expended in the FMG procedure, for the Re = … lid-driven cavity flow and the bilinear interpolation prolongation procedure. The second-order upwind stabilization strategy is used.

The source terms in the second-order upwind discretization appear to be a strong destabilizing factor in this flow problem. Furthermore, the fact that the looser constant tolerance at least reaches the fine grid, while the tighter constant tolerance diverges, suggests that the amount of mismatch between the ending coarse-grid residual level and the beginning fine-grid residual level (which is greater for the tighter case) is related to the size of the destabilizing source terms in the initial fine-grid problem. Thus, in addition to being wasteful of work units and/or cpu time, obtaining excessive accuracy on the coarse grids can actually be detrimental to the stability of the multigrid iterations, depending on the discretization scheme. Evidently, with relaxation factors ω_u = ω_v = …, ω_c = …, second-order upwinding with V(…) cycles, and ν_u = …, ν_v = …, and ν_c = … point-Jacobi inner iterations in each SIMPLE outer iteration, the Re = … lid-driven cavity flow is difficult to solve. The multigrid iterations only converge for a relatively small range of coarse-grid tolerances. This range may be hard to find by trial and error. The truncation error criterion is useful in this regard.

Similar observations are made for the symmetric backward-facing step flow. Figures … are the corresponding results for the Re = … symmetric backward-facing step flow, using second-order upwinding on all coarse grid levels in the FMG procedure.
The convergence rate behavior of the second-order upwind scheme in the step flow is similar to that of the defect-correction scheme in the lid-driven cavity flow. For the symmetric backward-facing step flow, a … × … fine grid with five multigrid levels was used. The coarsest grid was … × …. As in the cavity flow cases, V(…) cycles were used, with bilinear interpolation for the prolongation procedure and the same restriction procedure. The relaxation factors were ω_u = ω_v = … and ω_c = …. As in previous cases, …, …, and … point-Jacobi inner iterations were used in each SIMPLE iteration, for the u, v, and p′ systems of equations, respectively.

In Figure …, the convergence path is similar to the cavity flow convergence path, except that in the cavity flow the coarse-grid tolerances given by the truncation error criterion were loose enough that only one cycle was needed on each of the coarse grids, yielding a "1FMG" cycle. In the symmetric backward-facing step flow, more than one cycle is needed on each coarse-grid level to satisfy the truncation error criterion. The truncation error estimate, with the denominator set equal to three (because the coarse-grid discretizations are second-order), converges to graded levels on grid levels … to …; on the finest grid the estimated level is ….

Figures … show the FMG convergence path when tighter coarse-grid tolerances are used, and these results are summarized in Table … below. For the graded set of coarse-grid tolerances, levels … through … were converged to …, …, and …. Each of these levels corresponds to the level … if the norm used is the flux-normalized norm instead of the average L1 norm. As in the cavity flow case, there is only a small effect on the initial solution accuracy on each coarse-grid level. There is no benefit to the initial fine-grid residual level from converging the coarse grids to strict tolerances. The truncation error criterion with the denominator set to three appears to be the most stringent criterion which does not waste any coarse-grid cycles, i.e., it is nearly the optimal cost-residual reduction balance.
The other approaches obtain more accuracy on the coarse grids than can be carried over to the initial fine-grid solution, for the bilinear interpolation prolongation.

    Coarse-grid         V(…) cycles   FMG work   FMG CM      Initial fine-grid
    tolerances          on levels     units      busy time   u residual
    "1FMG" cycle        {…}           …          … s         …
    TE, denom = …       {…}           …          … s         …
    TE, denom = …       {…}           …          … s         …
    … on all levels     {…}           …          … s         …
    … on all levels     {…}           …          … s         …
    Graded tolerances   {…}           …          … s         …

Table …: Comparison between different sets of coarse-grid tolerances, in terms of the effort expended in the FMG procedure, for the Re = … symmetric backward-facing step flow and the bilinear interpolation prolongation procedure. Second-order upwinding is used on all grid levels.

The results for the defect-correction strategy are summarized in the table below. In the cavity flow, the second-order upwind scheme was very difficult to converge when a constant or a graded tolerance was given. In the step flow, it appears that the defect-correction strategy is harder to converge.

    Coarse-grid         V(…) cycles   FMG work   FMG CM      Initial fine-grid
    tolerances          on levels     units      busy time   u residual
    "1FMG" cycle        {…}           …          … s         …
    TE, denom = …       {…}           …          … s         …
    TE, denom = …       {…}           …          … s         …
    … on all levels     {…}           …          … s         …
    … on all levels     {…}           …          … s         …
    Graded tolerances   {…}           …          … s         …

Table …: Comparison between different sets of coarse-grid tolerances, in terms of the effort expended in the FMG procedure, for the Re = … symmetric backward-facing step flow and the bilinear interpolation prolongation procedure. The defect-correction stabilization strategy is used.
Influence of Initial Guess on Convergence Rate

The cost-initial accuracy trade-off was discussed above. In addition, the initial guess on the fine grid is important because it can affect the asymptotic convergence rate and the stability of subsequent fine-grid cycles. In many cases this consideration is more important than the cost-initial accuracy trade-off, since the time spent in the FMG procedure may be very small compared to the overall time required, if many fine-grid cycles are needed. The FMG contribution to the total run time, especially on the CM-5, is not always negligible, though, in particular if one defines convergence according to the truncation error estimate on the finest grid, i.e., differential convergence, as suggested by Brandt and Ta'asan [ ].

Figure … gives the convergence path for the entire computation for the lid-driven cavity flow. In the top plot, the fine-grid average u residual is plotted against the CM busy time for the defect-correction scheme. The defect-correction scheme and the second-order upwind scheme (bottom plot) converge at nearly the same rate. The differences in the initial fine-grid residual level due to the FMG procedure evidently do not persist for very long, and if the purpose is to obtain algebraic convergence, then the difference in CM busy time due to the FMG procedure is insignificant. However, if convergence is declared when the average u residual falls beneath the dotted line (the estimated truncation error level on the fine grid), then the FMG procedure contributes anywhere from …% of the total time, in the case of the truncation error criterion with denominator 1, to …% of the total time, in the case of the constant tolerance criterion.

For the Re = … lid-driven cavity flow, using SIMPLE with ν_u = …, ν_v = …, and ν_c = … inner SLUR iterations and a W(1,1) multigrid cycle, Sockol [ ] reported that … work units and … seconds on an Amdahl … were needed to reach convergence. To reach a similar convergence tolerance, the present computation needed … work units and … seconds on the CM-5. In the previous section, the amount of smoothing used in the present case, V(…) cycles, was observed to be somewhat more than was necessary for this flow problem.
The difference between V(…) cycles and W(1,1) cycles, in terms of work units, is approximately … per cycle. Thirty cycles on the fine grid were taken in the present case. Thus it seems that the present result is comparable to Sockol's.

The fine-grid convergence paths for the symmetric backward-facing step flow (Figure …) are very interesting. The second-order upwind scheme performs remarkably well. The average u residual reaches … in just slightly more than … seconds on the CM-5 and … work units (… V(…) cycles on the … × … fine grid). This convergence rate corresponds to an amplification factor of … per cycle for the L1 norm of the u-residual. Because of the fast convergence rate, the contribution of the startup FMG cycling is a significant fraction of the overall parallel run time.

The defect-correction strategy does not converge as quickly as the second-order upwind scheme in the symmetric backward-facing step flow. Furthermore, for the defect-correction scheme, the fine-grid initial guess evidently affects the rate of convergence. To obtain the convergence paths in the top plot of Figure …, identical procedures and parameters were used for the multigrid iterations beginning on the fine grid. The relaxation factors were ω_u = ω_v = …, ω_c = …, and fixed V(…) cycles were used. The coarse-grid discretizations in the FMG procedure use first-order upwinding, while the fine-grid discretization is modified to produce central-difference accuracy. Thus the sudden rise in the residual level (for all cases except the truncation error criterion with denominator equal to 1) suggests that the first-order upwind and central-difference solutions to this flow problem are very different. It is apparently difficult for the numerical method to evolve the solution from first-order upwind accuracy into central-difference accuracy. Thus there is actually an advantage in not converging the coarse grids to tight tolerances. On the other hand, the "1FMG" procedure has the worst convergence rate of the cases considered.
The conclusion Figure … supports is that there is an optimal solution accuracy on the coarse grids in the FMG procedure, which is related to the differential error in the solution, since the truncation error estimate gives the best result.

Remarks

Both flow problems have strong nonlinearities and are relatively difficult and slow to converge as single-grid computations. The multigrid method allows larger relaxation parameters to be used. Very fast convergence rates can be obtained, but the performance depends on the discretization on coarse grids (the stabilization strategy) and on the initial fine-grid guess. The fact that the truncation error criterion gives the best results in both flow problems, and that both the initial fine- and coarse-grid residuals are relatively independent of how tightly the coarse grids are converged, indicates that there is only a certain amount of accuracy which can be obtained initially, for a given flow problem and coarse-grid discretization scheme, and that this observation is essentially a reflection of the truncation error of the discretization.

The second-order upwind scheme may be prone to large source terms, which can cause the multigrid iterations to diverge, especially if relatively few smoothing iterations are used. This observation was made for the cavity flow. On the other hand, when there is a significant difference between the first-order and central-difference solutions on a given grid, the success of the defect-correction strategy depends strongly on the initial guess on the finest grid (re the step flow results), and in this sense the defect-correction approach is not very robust.

The stability of multigrid iterations is different than for single-grid calculations, and certainly more confusing.
For example, if a single-grid calculation does not converge at a given Reynolds number with a certain set of relaxation parameters, then reducing the relaxation factors is always convergence-enhancing. For multigrid iterations this is not necessarily true. It was observed for the second-order upwind scheme in the Re = … symmetric backward-facing step problem that the iterations diverge using ω_u = ω_v = … with ω_c = …; however, convergence was obtained with ω_u = ω_v = … and ω_c = …. Evidently there is a certain minimum amount of smoothing required. The amount depends on the flow problem as well as on the restriction and prolongation procedures. In other words, reducing the relaxation factors to cope with problems that have strong nonlinearities may simultaneously require increasing the number of smoothing iterations on each level. The converse is also true, although perhaps counterintuitive: reducing the amount of smoothing, for example from V(…) to V(…,1) cycles, may cause stability problems, and increasing the relaxation factors is the appropriate response. By contrast, for single-grid computations, if the number of inner iterations is too low, the relaxation factors are decreased to avoid divergence. Additional testing in the smoothing-relaxation factor parameter space would be desirable, to further clarify this point.

Performance on the CM-5

This section quantifies the cost of multigrid cycling on the CM-5 and discusses the efficiency and scalability of the present algorithm and implementation. In other words, to connect with the preceding section: once the fine grid is reached, what is the best grid schedule to use, how long does each cycle take, and how does this cost scale with the problem size and the number of processors?

In Figure …, the costs of smoothing and prolongation are shown as a function of problem size, for a …-node CM-5 and a …-node CM-5. During a multigrid cycle these costs are incurred at each grid level.
In a V(…) cycle, for example, … SIMPLE iterations are done at every grid level, along with one restriction from, and one prolongation to, every grid level except the coarsest. If the finest grid is … × …, then on a …-node CM-5 the subgrid size (VP) is roughly …. The next (coarser) grid is … × … and has a subgrid size of …. Thus, in a two-level V(…) cycle, the total time is the sum of the SIMPLE iterations at the finer VP, one restriction from the finer VP to the coarser VP, the SIMPLE iterations at the coarser VP, and one prolongation back to the finer VP. Thus Figure … is a level-by-level breakdown of the parallel run time used by the smoothing and prolongation multigrid components. The times plotted are total elapsed times, including the processor idle time due to front-end work.

The smoothing cost dominates the cost of the prolongation at every VP. Thus, unless a multigrid cycle with less smoothing is used, the common idealization made on serial computers, that the restriction and prolongation costs are negligible, also holds true on the CM-5. The restriction cost has not been shown, in order to keep the figure clear. It follows the same trend as prolongation, slightly less time-consuming if the residuals alone are restricted (about …% less) and slightly more time-consuming if both solutions and residuals are restricted.

The trend is linear for restriction, prolongation, and smoothing. When residuals only are restricted, the ratio of the times for these three components tends toward … on the …-node CM-5 as the number of grid points increases (i.e., as the subgrid size increases). However, for the …-node CM-5 the time taken by prolongation grows at a slightly greater rate than on the …-node computer. On the …-node CM-5, VP = … corresponds to a … × … grid size, instead of … × … as was the case with the …-node CM-5.
Apparently the global communication patterns needed to accomplish the prolongation are not perfectly scalable on the fat-tree, at least with the current CM Fortran implementation.

Figure … gives the impression that the cost of SIMPLE iterations varies linearly with VP. However, as shown in Figure …, the variation is not actually linear for very small VP. The bar on the left is the CM busy time for … SIMPLE iterations, given as a function of the grid level. The bar on the right is the corresponding CM elapsed time, taken from data points along the smoothing cost curve in Figure …. The busy time records the time spent doing parallel computation and interprocessor communication operations. These operations are very inefficient at small VP on the CM-5, because the vector units are not fully loaded. Thus the busy time does not scale linearly with the subgrid size for small VP, because the efficiency of vectorized computation and interprocessor communication increases as the subgrid size grows. Note, however, that the busy time is always a monotonic function of VP.

The variation of elapsed time, by contrast, stays approximately constant until level … of this sample multigrid cycle. Level … corresponds to VP = … on the …-node CM-5. The elapsed time includes the idle-processor time due to front-end work. As discussed in Chapter …, there are several overhead costs of parallel computation and interprocessor communication. These operations may leave the CM-5 vector units inactive for short periods of time. For small VP, the dominant consideration in this regard is the passing of code blocks, i.e., the front-end-to-processor communication. This cost stays constant with VP, as shown by the elapsed time at small VP in Figure …. The elapsed time is actually larger for VP = … than for VP = …. This observation is reproducible, but its cause is not fully understood; inaccurate timings may be the problem. A computer with a relatively fast front-end and communication network performs closer to the ideal for small VP.
Since the cost of smoothing on the coarse grids does not go to zero as VP → 0, the possibility exists for coarse grids to make a non-negligible contribution to the parallel run time, if the cycling scheme is such that the coarse grids are visited more frequently than the fine grids. Figures … illustrate this point clearly. The cost per multigrid cycle is compared between V and W cycling strategies; specifically, V(…) cycles are compared against W(…) cycles. The timings are obtained on a …-node CM-5. The number of levels is fixed as the finest grid dimensions increase. Both elapsed and busy times are plotted.

The total time per cycle includes the cost of smoothing on the grid levels involved, the restriction and prolongation costs, and the cost of program control and input/output. For a V cycle this time can be modelled as

    T_V = Σ (k = 1..n_level) (n_pre + n_post) s_k + Σ (k = 2..n_level) (r_k + p_k) + C_cycle,

where s_k, r_k, and p_k are the smoothing time per iteration on level k (from Figure …), the restriction time from level k, and the prolongation time to level k. The number of levels is n_level, and n_pre and n_post represent the numbers of pre- and post-smoothing iterations, in this case … and …, respectively. In contrast, W cycles visit the coarse grids much more frequently; their time per cycle can be modelled as

    T_W = Σ (k = 1..n_level) 2^(n_level − k) (n_pre + n_post) s_k + Σ (k = 2..n_level) 2^(n_level − k) (r_k + p_k) + C_cycle.

These expressions are valid for serial computations too. On serial computers, the restriction and prolongation costs are generally negligible, and the smoothing cost per level is basically a factor of four smaller for each of the lower (coarser) grid levels. For parallel computation on the CM-5, the fact that s_k remains approximately constant for the coarsest grids is a problem when many multigrid levels are used. When only three levels are involved there is very little disadvantage to using W cycles, as shown in Figure ….
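Evaluating the two cost models from measured per-level times makes the coarse-grid penalty of W cycles concrete. The program below is a sketch under the assumptions just stated: level nlev is the finest, a W cycle visits level k a total of 2^(nlev − k) times, the per-level timings are stand-in numbers, and the fixed overhead C_cycle is omitted.

    ! Per-cycle time models for V and W cycles (sketch; stand-in timings).
    program cycle_cost
      implicit none
      integer, parameter :: nlev = 5
      real :: s(nlev), r(nlev), p(nlev), tv, tw
      integer :: k, npre, npost, visits
      npre  = 2                       ! hypothetical pre-smoothing count
      npost = 1                       ! hypothetical post-smoothing count
      s = 0.05                        ! smoothing time per iteration, level k
      r = 0.01                        ! restriction time from level k
      p = 0.01                        ! prolongation time to level k
      tv = 0.0
      tw = 0.0
      do k = 1, nlev
         visits = 2**(nlev - k)       ! W-cycle visit count for level k
         tv = tv + (npre + npost)*s(k)
         tw = tw + visits*(npre + npost)*s(k)
         if (k > 1) then              ! no transfers below the coarsest grid
            tv = tv + r(k) + p(k)
            tw = tw + visits*(r(k) + p(k))
         end if
      end do
      print *, 'modelled V-cycle time:', tv, '  W-cycle time:', tw
    end program cycle_cost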
Since it is usually possible to gain some benefit to the convergence rate from more frequent coarse-grid corrections, W cycles are recommended on the CM-5 if the number of multigrid levels is small. However, for … or more levels (Figures … and …), W cycles begin to cost more than they are worth in terms of improved convergence rates. Also, since there is a greater difference between V- and W-cycle elapsed and busy times as more multigrid levels are added (reflecting the relatively larger idle times for coarse grids; recall Figure …), the parallel efficiency of W cycles is less than that of V cycles. In the present work, V cycles have been sufficient to achieve good convergence rates, so no comparisons have been made to W cycles. Such studies need to be made, but on a problem-by-problem basis. For the symmetric backward-facing step flow and the lid-driven cavity flow, it is not expected that W cycles will be advantageous.

In many cases it is acceptable, and even beneficial, to use less than the full complement of multigrid levels, i.e., to increase the problem size keeping the number of levels fixed. Whether or not the computation is for a physically time-dependent flow problem, there exists an implied timestep in iterative numerical techniques. In multigrid computations, the changes in the evolving solution on coarser grid levels are smaller, reflecting the fact that the physical or pseudo-physical development of the solution on the fine grid is occurring on a much smaller scale. Thus the coarsest grid levels may be truncated without deteriorating the convergence rate. Pressure needs to be treated globally, but usually there are enough multigrid cycles taken to ensure that slow development of the pressure field is not a problem, even when the coarsest grid level is not very coarse.

Figures … integrate the information contained in the preceding figures. In Figure …, the variation of the parallel efficiency of …-level V(…) cycles with problem size is summarized.
The problem size is the virtual processor ratio VP of the finest grid level, but of course during the multigrid cycle operations are being done on coarser grids too, where VP is smaller. Figure … is similar to Figure …, obtained using the single-grid pressure-correction algorithm. For small VP, the useful work (the computation) is dominated by the interprocessor and front-end-to-processor communication, resulting in low parallel efficiencies. The efficiency rises as the time spent in computation increases relative to the overhead costs. The highest efficiency obtained is almost …, compared to … for the single-grid method on the CM-5. The burden of additional program control, relatively more expensive coarse-grid smoothing, and the restriction and prolongation tasks adds up to … in terms of the parallel efficiency. Unlike the single-grid case, however, the efficiency does not peak for large problem sizes. The contributions from the less-efficient coarser grids in a multigrid cycle on the CM-5 are significant even when the finest grid has a VP in the thousands. The range of subgrid sizes comprising a …-level multigrid cycle (a realistic cycle) spans three orders of magnitude. Unfortunately, the range of VP in which the multigrid smoother achieves high parallel efficiencies is not as broad. In this regard, the performance of the multigrid method on the MasPar style of SIMD computers is expected to be much better, since there the single-grid method achieved high parallel efficiencies for VP all the way up to the largest problem size. Numerical experiments have not been conducted to study the multigrid method on MasPar SIMD computers, however, because their Fortran compiler is not yet sufficiently developed to address the storage problem.

The efficiency apparently has a small dependence on the number of processors. This dependence is clearly shown in the next figure, Figure …. It is due to the slightly increased time spent in intergrid transfer operations with increasing n_p, observed earlier in Figure ….
Figure … shows the decrease in efficiency with increasing number of processors, for five different subgrid sizes. Again, recall that the subgrid size is for the finest grid, but that much coarser grids are involved in the …-level V(…) cycles. The figure indicates that the rate of decrease in efficiency is the same for every VP, down to at least VP = …. The dashed lines are linear least-squares curve fits to the data. The data points are perturbed about these lines due to variations in the elapsed parallel run time T_p, which varies slightly from timing to timing depending on the workload of the front-end machine. In all cases multiple timings were obtained as a check on reproducibility. Under light front-end loadings (i.e., the middle of the night) the measured T_p did not vary by more than …%.

Figure … combines the information contained in Figures … and …. As in the single-grid case (Figure …), curves of constant efficiency are drawn on a plot of problem size versus the number of processors. The curves are constructed by interpolating in Figure …, using the dashed lines as the data instead of the actual data points, to determine VP at a given (E, n_p) intersection; N is computed from the definition of VP, i.e., N = n_p · VP. The isoefficiency curves are almost linear; in other words, the …-level multigrid algorithm, analyzed on a per-cycle basis, is almost scalable. Each of the isoefficiency curves can be accommodated by an expression of the form

    N − N₀ = constant × (n_p − …)^q,

with q ≈ 1. The symbol N₀ is the initial problem size needed to obtain a particular efficiency E on … processors. Along the isoefficiency curves, "scaled speedup" [ ] is nearly achieved.
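Stated compactly (a restatement, with q ≈ 1 read off the nearly linear curves above), the scaled-speedup property along an isoefficiency curve is:

    N(n_p) - N_0 = c\,n_p^{\,q}, \quad q \approx 1
    \;\Longrightarrow\; \frac{N}{n_p} \approx \text{const}
    \;\Longrightarrow\; T_p \;\propto\; \frac{N}{n_p\,E} \approx \text{const},

that is, the run time per cycle stays essentially fixed as the problem size and the number of processors grow in proportion.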
If the parallel run time T_p at the initial problem size is acceptable, then it can be maintained with the present pressure-based multigrid method as the problem size and the number of processors are increased in proportion. The inner iterations must be point-Jacobi, of course, since the line-iterative method is O(N log N). With the line-iterative method, T_p increases slightly along the isoefficiency curves. The scalability should be nearly the same, though, since nearest-neighbor communications dominate in the cyclic reduction parallel algorithm, due to the data mapping used on the CM-5.

Concluding Remarks

A parallel multigrid algorithm has been formulated and implemented on the CM-5. The focus of the numerical experiments and timings has been on the potential of this approach for achieving scalable parallel computing techniques for application to the incompressible Navier-Stokes equations. The results obtained indicate that the efficiency of the parallel implementation of the nonlinear pressure-based multigrid method approaches … for large problem sizes, and is almost linearly scalable on the CM-5. The cost per V(…) cycle is about … s on a …-vector-unit CM-5, for a …-level problem with a … × … fine grid. The cost per iteration is dominated by the smoothing cost, and thus much attention has been given to the details of the implementation and performance of the single-grid pressure-based method on SIMD computers. Restriction and prolongation are almost negligible, although they are responsible for the deviation from linear computational scalability observed in Figure …. Very large problem sizes can be handled on the CM-5, up to … × … on a …-node machine, provided the storage problem for Fortran multigrid implementations can be resolved.

The speed of the multigrid code was not assessed directly, but reasonable estimates can be made based on the single-grid performance. For the single-grid SIMPLE method using the point-Jacobi solver, … MFlops was achieved on a …-node (…-VU) CM-5.
Since the multigrid cost per …-level cycle is dominated by the smoothing costs, and the multigrid efficiency is … compared to … (about a …% decrease), the speed is roughly … MFlops. Slightly improved efficiency and speed can be obtained with fewer multigrid levels. For unsteady flow calculations, multigrid cycles with a small number of levels may perform reasonably well; this should be investigated.

Several practical recommendations have been made regarding multigrid techniques for parallel computation. V cycles should be used unless the number of multigrid levels is small; W cycles are too expensive, due to the non-negligible coarse-grid smoothing costs. The FMG procedure should be controlled by the truncation error estimate. The FMG procedure can affect not only the time needed to reach the fine grid; the asymptotic convergence rate and stability of the multigrid iterations can be affected as well, as is evident from Figure …. This observation may not carry over to the locally-coupled explicit smoother; it should be tested in the same way. In terms of computational efficiency, the locally-coupled explicit method has nearly the same properties on the CM-5 as the pressure-correction method, although the influence of the coefficient computations on the cost per iteration and on the efficiency is greater.

Several algorithmic factors have been studied; in particular, the coarse-grid discretization (the stabilization strategy) and the restriction procedure are observed to be important to the multigrid convergence rate. It appears that the use of second-order upwinding on all grid levels, together with the restriction procedure that sums the residuals but does not restrict the solutions, provides a very effective approach for both the symmetric backward-facing step flow and the lid-driven cavity flow. Smoothing rates per V(…) cycle of … can be maintained until the residual is driven down to the level of the round-off error. The convergence rate with cell-face averaging for the restriction of both solutions and residuals was considerably slower. Similar results were obtained for the cavity flow.
In terms of the coarse-grid discretization strategy, it appears that the popular defect-correction approach may not be as robust as the second-order upwinding strategy, at least for entering-type flow problems. In these types of flows, i.e., problems with inflow and outflow, the proper formulation of the numerical method (the pressure-correction smoother) is critical for obtaining good convergence rates. Global mass conservation must be explicitly enforced during the course of the iterations. Global mass conservation ensures that the system of pressure-correction equations has a solution, which is identified as an important prerequisite for obtaining reasonable convergence rates in open-boundary problems. The well-posed numerical problem does not distinguish between inflow and outflow at the open boundary: if the numerical treatment of the open boundary condition is reasonable and can induce convergence, the finite-volume staggered-grid pressure-correction method can obtain the correct numerical solution even if inflow occurs at a nominally outflow boundary.

In conclusion, the results of this research indicate that pressure-based multigrid methods are computationally and numerically scalable algorithms on SIMD computers. Taking proper account of the many implementational considerations, high parallel efficiencies can be achieved and maintained as the number of processors and the problem size increase. Likewise, the convergence rate dependence on problem size should be greatly decreased by the multigrid technique. Thus the present approach is viable for massively parallel numerical simulations of the incompressible Navier-Stokes equations, and should be developed further on SIMD computers.
The target machine should have fast nearest-neighbor and front-end-to-processor communication, relative to the speed of computation, so that reasonably high parallel efficiencies can be obtained at small problem sizes. The knowledge and implementations gained in this research are immediately useful for exploiting the current computational capabilities of the CM-5 and MasPar MP SIMD computers, and are practical contributions which will facilitate future research in parallel CFD.
Figure …: Schematic of an FMG V(…) multigrid cycle.
Figure …: Streamfunction, vorticity, and pressure contours for the Re = … lid-driven cavity flow, using the 2nd-order upwind convection scheme. (Panels: Streamfunction; U Velocity Component; Vorticity; Pressure.) The streamfunction contours are evenly spaced within the recirculation bubbles and in the interior of the flow, but the two spacings are not the same; the actual velocities within the recirculation regions are relatively weak compared to the core flow.
Figure …: Streamfunction, vorticity, pressure, and velocity-component contours for the Re = … symmetric backward-facing step flow, using the 2nd-order upwind convection scheme. (Panels: Streamfunction; U Velocity Component; V Velocity Component.) The streamfunction contours are evenly spaced within the recirculation bubbles and in the interior of the flow, but the two spacings are not the same; the actual velocities within the recirculation regions are relatively weak compared to the core flow.
Figure …: The convergence path of the u-residual norm during the FMG procedure, for the Re = … lid-driven cavity flow using the defect-correction stabilization strategy. The truncation error criterion with denominator 1 is used to determine the coarse-grid tolerances. The abscissas plot work units (proportional to a serial computer's cpu time) and CM busy time in seconds.
Figure …: The convergence path of the u-residual norm during the FMG procedure, for the Re = … lid-driven cavity flow using the defect-correction stabilization strategy. The truncation error criterion with denominator 5 is used to determine the coarse-grid tolerances. The abscissas plot work units (proportional to a serial computer's cpu time) and CM busy time in seconds.
Figure …: The convergence path of the u-residual norm during the FMG procedure, for the Re = … lid-driven cavity flow using the defect-correction stabilization strategy. The coarse-grid convergence criterion is ||r^h|| ≤ … on every level. The abscissas plot work units (proportional to a serial computer's cpu time) and CM busy time in seconds.
Figure …: The convergence path of the u-residual norm during the FMG procedure, for the Re = … lid-driven cavity flow using the defect-correction stabilization strategy. The coarse-grid convergence criteria are graded: for levels … through …, ||r^h|| is driven to successively smaller tolerances. The abscissas plot work units (proportional to a serial computer's cpu time) and CM busy time in seconds.
Figure …: The convergence path of the u-residual norm during the FMG procedure, for the Re = … symmetric backward-facing step flow with the defect-correction stabilization strategy. The truncation error criterion with denominator 1 is applied to abbreviate coarse-grid multigrid cycling.
Figure …: The convergence path of the u-residual norm during the FMG procedure, for the Re = … symmetric backward-facing step flow with second-order upwinding on all levels. The truncation error criterion with denominator 3 is applied to abbreviate coarse-grid multigrid cycling.
Figure …: The convergence path of the u-residual norm during the FMG procedure, for the Re = … symmetric backward-facing step flow using second-order upwinding on all levels. The truncation error criterion with denominator 5 is applied to abbreviate coarse-grid multigrid cycling.
Figure …: The convergence path of the u-residual norm during the FMG procedure, for the Re = … symmetric backward-facing step flow using second-order upwinding on all levels. The coarse-grid convergence criterion is ||r^h|| ≤ … on every level.
Figure …: The convergence path of the u-residual norm during the FMG procedure, for the Re = … symmetric backward-facing step flow using second-order upwinding on all levels. The coarse-grid convergence criteria are graded: for levels … through …, ||r^h|| is driven to successively smaller tolerances.
Figure …: The convergence path of the average u-residual norm on the finest grid level, in the …-level Re = … lid-driven cavity flow, for different FMG procedures. The relaxation factors used were ω_u = ω_v = …, ω_c = ….
Figure …: The convergence path of the average u-residual norm on the finest grid level, in the …-level Re = … symmetric backward-facing step flow, for different FMG procedures. The relaxation factors used were ω_u = ω_v = … and ω_c = ….
Figure …: The relative cost of smoothing and prolongation per V cycle, as a function of the problem size (virtual processor ratio VP), for …- and …-node CM-5 computers (… and … processors, respectively). The run times are obtained from V(…) cycles, which have … smoothing iterations, one restriction, and one prolongation at each grid level. Elapsed time (which includes front-end-to-processor communication) is plotted. The restriction cost is slightly less than the prolongation cost when only residuals are restricted (slightly more when solutions are restricted too), but the trend is the same as for prolongation and is therefore not shown, for clarity.
Figure …: Smoothing cost, in terms of elapsed and busy time, on a …-node CM-5, as a function of the multigrid level and virtual processor ratio, for a case with a … × … fine grid. For each level, the elapsed time is the bar on the right (always greater than the busy time). The times correspond to one SIMPLE iteration.
Figure …: Parallel run time per cycle on a …-node CM-5 as a function of the problem size (virtual processor ratio VP). V(…) cycle cost is compared with W(…) cycle cost, in terms of total elapsed time (dashed lines) and busy time (solid lines). As the problem size increases, the number of multigrid levels remains fixed at three.
Figure …: Parallel run time per cycle on a …-node CM-5 as a function of the problem size (virtual processor ratio VP). V(…) cycle cost is compared with W(…) cycle cost, in terms of total elapsed time (dashed lines) and busy time (solid lines). As the problem size increases, the number of multigrid levels remains fixed at five.
Figure …: Parallel run time per cycle on a …-node CM-5 as a function of the problem size (virtual processor ratio VP). V(…) cycle cost is compared with W(…) cycle cost, in terms of total elapsed time (dashed lines) and busy time (solid lines). As the problem size increases, the number of multigrid levels remains fixed at seven.
Figure …: Parallel efficiency of the …-level multigrid algorithm on the CM-5, using V(…) cycles, as a function of the problem size. Efficiency is determined from Eq. …, where T_p is the elapsed time for a fixed number of V(…) cycles and T₁ is the parallel computation time (T_node-cpu) multiplied by the number of processors. The trend is the same as for the single-grid algorithm, indicating the dominant contribution of the smoother to the overall multigrid cost.
Figure …: Parallel efficiency of the …-level multigrid algorithm on the CM-5, using V(…) cycles, as a function of the number of CM-5 nodes, for several problem sizes (curves labeled by the finest-grid VP). Efficiency is determined from Eq. …, where T_p is the elapsed time for a fixed number of V(…) cycles and T₁ is the parallel computation time (T_node-cpu) multiplied by the number of processors. There is only a small fall-off in the efficiency as n_p increases.
Figure …: Isoefficiency curves for the …-level pressure-correction multigrid method, based on timings of a fixed number of V(…) cycles using point-Jacobi inner iterations. The plot is constructed from the linear least-squares curve fits of the data in Figures … and …. The isoefficiency curves have the general form N − N₀ = constant × n_p^q, where q ≈ 1 for the efficiencies shown.
5()(5(1&(6 >@ : ) $PHV 1XPHULFDO 0HWKRGV IRU 3DUWLDO 'LIIHUHQWLDO (TXDWLRQV &RPSXWHU 6FLHQFH DQG $SSOLHG 0DWKHPDWLFV $FDGHPLF 3UHVV 6DQ 'LHJR VHFRQG HGLWLRQ >@ % %HOO 3 &ROHOOD DQG + 0 *OD] $ VHFRQGRUGHU SURMHFWLRQ PHWKRG IRU WKH LQFRPSUHVVLEOH 1DYLHU6WRNHV HTXDWLRQV -RXUQDO RI &RPSXWDWLRQDO 3K\VLFV f f§ >@ ( / %ORVFK DQG : 6K\\ 6HTXHQWLDO SUHVVXUHEDVHG 1DYLHU6WRNHV DOJRULWKPV RQ 6,0' FRPSXWHUVf§FRPSXWDWLRQDO LVVXHV 1XPHULFDO +HDW 7UDQVIHU 3DUW % f f§ >@ 0 ( %UDDWHQ DQG : 6K\\ 6WXG\ RI SUHVVXUH FRUUHFWLRQ PHWKRGV ZLWK PXOWLJULG IRU YLVFRXV IORZ FDOFXODWLRQV LQ QRQRUWKRJRQDO FXUYLOLQHDU FRRUGLQDWHV 1XPHULFDO +HDW 7UDQVIHU >@ $ %UDQGW 0XOWLOHYHO DGDSWLYH VROXWLRQV WR ERXQGDU\YDOXH SUREOHPV 0DWKHn PDWLFV RI &RPSXWDWLRQ >@ $ %UDQGW 0XOWLJULG *XLGH ZLWK $SSOLFDWLRQV WR )OXLG '\QDPLFV /HFn WXUH 1RWHV LQ &RPSXWDWLRQDO )OXLG '\QDPLFV YRQ .DUPDQ ,QVWLWXWH IRU )OXLG '\QDPFLV 5KRGH6DLQW*HQVH %HOJLXP $YDLODEOH IURP 'HSDUWPHQW RI &RPSXWHU 6FLHQFH 8QLYHUVLW\ RI &RORUDGR 'HQYHU &2 >@ $ %UDQGW DQG 6 7DnDVDQ 0XOWLJULG VROXWLRQV WR TXDVLHOOLSWLF VFKHPHV ,Q ( 0 0XUPDQ DQG 6 6 $EDUEDQHO HGLWRUV 3URJUHVV DQG 6XSHUFRPSXWLQJ LQ &RPn SXWDWLRQDO )OXLG '\QDPLFV 3URFHHGLQJV RI 86,VUDHO :RUNVKRS SDJHV %LUNKDXVHU %RVWRQ >@ $ %UDQGW DQG @ $ %UDQGW DQG @ : %ULJJV $ 0XOWLJULG 7XWRULDO 6,$0 3KLODGHOSKLD >@ : %ULJJV DQG 6 ) 0F&RUPLFN ,QWURGXFWLRQ ,Q 6 ) 0F&RUPLFN HGLWRU 0XOWLJULG 0HWKRGV FKDSWHU 6,$0 3KLODGHOSKLD >@ &+ %UXQHDX DQG & -RXURQ $Q HIILFLHQW VFKHPH IRU VROYLQJ VWHDG\ LQFRPSUHVVn LEOH 1DYLHU6WRNHV HTXDWLRQV -RXUQDO RI &RPSXWDWLRQDO 3K\VLFV

PAGE 190

>@ 7 &KDQ DQG 5 6FKUHLEHU 3DUDOOHO QHWZRUNV IRU PXOWLJULG DOJRULWKPV $UFKLn WHFWXUH DQG FRPSOH[LW\ 6,$0 -RXUQDO RI 6FLHQWLILF DQG 6WDWLVWLFDO &RPSXWLQJ >@ 7 ) &KDQ DQG 5 6 7XPLQDUR $ VXUYH\ RI SDUDOOHO PXOWLJULG DOJRULWKPV ,Q $ 1RRU HGLWRU 3DUDOOHO &RPSXWDWLRQV DQG 7KHLU ,PSDFW RQ 0HFKDQLFV $0' SDJHV $60( 1HZ @ $ &KRULQ $ QXPHULFDO PHWKRG IRU VROYLQJ LQFRPSUHVVLEOH YLVFRXV IORZ SUREn OHPV -RXUQDO RI &RPSXWDWLRQDO 3K\VLFV >@ $ &KRULQ 1XPHULFDO VROXWLRQ RI WKH 1DYLHU6WRNHV HTXDWLRQV 0DWKHPDWLFV RI &RPSXWDWLRQ f >@ ( 'HQG\ %ODFN ER[ PXOWLJULG -RXUQDO RI &RPSXWDWLRQDO 3K\VLFV >@ ( 'HQG\ 0 3 ,GD DQG 0 5XWOHGJH $ VHPLFRDUVHQLQJ PXOWLJULG DOJRn ULWKP IRU 6,0' PDFKLQHV 6,$0 -RXUQDO RI 6FLHQWLILF DQG 6WDWLVWLFDO &RPSXWLQJ f f§ >@ 3 9DQ 'RRUPDO DQG 5DLWKE\ (QKDQFHPHQWV RI WKH 6,03/( PHWKRG IRU SUHGLFWLQJ LQFRPSUHVVLEOH IOXLG IORZV 1XPHULFDO +HDW 7UDQVIHU >@ 7 $ (JROI &RPSXWDWLRQDO SHUIRUPDQFH RI &)' FRGHV RQ WKH &RQQHFWLRQ 0Dn FKLQH ,Q +RUVW 6LPRQ HGLWRU 3DUDOOHO &RPSXWDWLRQDO )OXLG '\QDPLFV ,Pn SOHPHQWDWLRQV DQG 5HVXOWV SDJHV 7KH 0,7 3UHVV &DPEULGJH 0$ >@ % )DYLQL DQG *XM 0* WHFKQLTXHV IRU VWDJJHUHG GLIIHUHQFHV ,Q 3DG GRQ DQG + +ROVWHLQ HGLWRUV 0XOWLJULG 0HWKRGV IRU ,QWHJUDO DQG 'LIIHUHQWLDO (TXDWLRQV SDJHV &ODUHQGRQ 3UHVV 2[IRUG >@ + )HU]LJHU DQG 0 3HULF &RPSXWDWLRQDO PHWKRGV IRU LQFRPSUHVVLEOH IORZ ,Q 0 /HVLHXU DQG =LQQ-XVWLQ HGLWRUV 3URFHHGLQJV RI 6HVVLRQ /,9 RI WKH /HV +RXFKHV FRQIHUHQFH RQ &RPSXWDWLRQDO )OXLG '\QDPLFV (OVHYLHU $PVWHUGDP 7KH 1HWKHUODQGV >@ 3 ) )LVFKHU DQG $ 7 3DWHUD 3DUDOOHO VLPXODWLRQ RI YLVFRXV LQFRPSUHVVLEOH IORZV $QQXDO 5HYLHZ RI )OXLG 0HFKDQLFV >@ & $ )OHWFKHU &RPSXWDWLRQDO 7HFKQLTXHV IRU )OXLG '\QDPLFV 6SULQHHU 9HUODJ %HUOLQ >@ 3 2 )UHGHULFNVRQ DQG $ 0F%U\DQ 1RUPDOL]HG FRQYHUJHQFH UDWHV IRU WKH 360* PHWKRG 6,$0 -RXUQDO RI 6FLHQWLILF DQG 6WDWLVWLFDO &RPSXWLQJ >@ *DQQRQ DQG 9DQ 5RVHQGDOH 2Q WKH VWUXFWXUH RI SDUDOOHOLVP LQ D KLJKO\ FRQFXUUHQW SGH VROYHU -RXUQDO RI 3DUDOOHO DQG 'LVWULEXWHG &RPSXWLQJ

PAGE 191

>@ *DUWOLQJ $ WHVW SUREOHP IRU RXWIORZ ERXQGDU\ FRQGLWLRQVf§IORZ RYHU D EDFNZDUGIDFLQJ VWHS ,QWHUQDWLRQDO -RXUQDO IRU 1XPHULFDO 0HWKRGV LQ )OXLGV >@ 8 *KLD 1 *KLD DQG & 7 6KLQ +LJK5H VROXWLRQV IRU LQFRPSUHVVLEOH IORZ XVLQJ WKH 1DYLHU6WRNHV HTXDWLRQV DQG D PXOWLJULG PHWKRG -RXUQDO RI &RPSXWDWLRQDO 3K\VLFV f§ >@ 3 0 *UHVKR ,QFRPSUHVVLEOH IOXLG G\QDPLFV 6RPH IXQGDPHQWDO IRUPXODWLRQ LVVXHV $QQXDO 5HYLHZ RI )OXLG 0HFKDQLFV >@ 3 0 *UHVKR $ VXPPDU\ UHSRUW RQ WKH -XO\ PLQLV\PSRVLXP RQ RXWIORZ ERXQGDU\ FRQGLWLRQV IRU LQFRPSUHVVLEOH IORZ ,Q 3URFHHGLQJV )RXUWK ,QWHUQDn WLRQDO 6\PSRVLXP RQ &RPSXWDWLRQDO )OXLG '\QDPLFV SDJHV 8QLYHUVLW\ RI &DOLIRUQLD DW 'DYLV >@ 3 0 *UHVKR *DUWOLQJ 5 7RUF]\QVNL $ &OLIIH + :LQWHUV 7 *DUUDWW $ 6SHQFH DQG : *RRGULFK ,V WKH VWHDG\ YLVFRXV LQFRPn SUHVVLEOH WZRGLPHQVLRQDO IORZ RYHU D EDFNZDUGIDFLQJ VWHS DW 5H VWDEOH" ,QWHUQDWLRQDO -RXUQDO IRU 1XPHULFDO 0HWKRGV LQ )OXLGV >@ 3 0 *UHVKR DQG 5 / 6DQL 2Q SUHVVXUH ERXQGDU\ FRQGLWLRQV IRU WKH LQFRPn SUHVVLEOH 1DYLHU6WRNHV HTXDWLRQV ,QWHUQDWLRQDO -RXUQDO IRU 1XPHULFDO 0HWKRGV LQ )OXLGV >@ 0 *ULHEHO 6SDUVH JULG PXOWLOHYHO PHWKRGV WKHLU SDUDOOHOL]DWLRQ DQG WKHLU DSn SOLFDWLRQ WR &)' ,Q 5 % 3HO] $ (FHU DQG +DXVHU HGLWRUV 3DUDOOHO &RPSXWDWLRQDO )OXLG '\QDPLFV f SDJHV (OVHYLHU $PVWHUGDP 7KH 1HWKHUODQGV >@ 6 1 *XSWD 0 =XEDLU DQG & ( *URVFK $ PXOWLJULG DOJRULWKP IRU SDUDOOHO FRPSXWHUV &30* -RXUQDO RI 6FLHQWLILF &RPSXWLQJ f >@ / *XVWDIVRQ )L[HG WLPH WLHUHG PHPRU\ DQG VXSHUOLQHDU VSHHGXS ,Q 3URn FHHGLQJV RI WKH )LIWK 'LVWULEXWHG 0HPRU\ &RPSXWLQJ &RQIHUHQFH SDJHV f§ &KDUOHVWRQ 6& ,((( &RPSXWHU 6RFLHW\ 3UHVV >@ : +DFNEXVFK &RQYHUJHQFH RI PXOWLJULG LWHUDWLRQV DSSOLHG WR GLIIHUHQFH HTXDn WLRQV 0DWKHPDWLFV RI &RPSXWDWLRQ f $SULO >@ : +DFNEXVFK 6XUYH\ RI FRQYHUJHQFH SURRIV IRU PXOWLJULG LWHUDWLRQV ,Q )UHKVH 3DOODVFKNH DQG 8 7URWWHQEHUJ HGLWRUV 6SHFLDO 7RSLFV RI $Sn SOLHG 0DWKHPDWLFVf§)XQFWLRQDO $QDO\VLV 1XPHULFDO $QDO\VLV DQG 2SWLPL]DWLRQ SDJHV 1RUWK+ROODQG $PVWHUGDP 7KH 1HWKHUODQGV >@ 7 +DJVWURP &RQGLWLRQV DW WKH GRZQVWUHDP ERXQGDU\ IRU VLPXODWLRQV RI YLVn FRXV LQFRPSUHVVLEOH IORZ 6,$0 -RXUQDO RI 6FLHQWLILF DQG 6WDWLVWLFDO &RPSXWLQJ f >@ ) + +DUORZ DQG ( :HOFK 1XPHULFDO FDOFXODWLRQ RI WLPHGHSHQGHQW YLVFRXV LQFRPSUHVVLEOH IORZ RI IOXLG ZLWK IUHH VXUIDFH 3K\VLFV RI )OXLGV ff§ 'HFHPEHU

PAGE 192

>@ 5 +RFNQH\ DQG & -HVVKRSH 3DUDOOHO &RPSXWHUV $UFKLWHFWXUH 3URJUDPPLQJ DQG $OJRULWKPV $GDP +LOJHU %ULVWRO >@ 7 5 +XJKHV : /LX DQG $ %URRNV 5HYLHZ RI ILQLWHHOHPHQW DQDO\n VLV RI LQFRPSUHVVLEOH YLVFRXV IORZ E\ WKH SHQDOW\ IXQFWLRQ PHWKRG -RXUQDO RI &RPSXWDWLRQDO 3K\VLFV f f§ >@ %5 +XWFKLQVRQ DQG 5DLWKE\ $ PXOWLJULG PHWKRG EDVHG RQ WKH DGGLWLYH FRUUHFWLRQ VWUDWHJ\ 1XPHULFDO +HDW 7UDQVIHU >@ 5 ,VVD 6ROXWLRQ RI WKH LPSOLFLWO\ GLVFUHWLVHG IOXLG IORZ HTXDWLRQV E\ RSHUDWRUn VSOLWWLQJ -RXUQDO RI &RPSXWDWLRQDO 3K\VLFV >@ & -HVSHUVHQ DQG & /HYLW $ FRPSXWDWLRQDO IOXLG G\QDPLFV DOJRULWKP RQ D PDVVLYHO\ SDUDOOHO FRPSXWHU ,QWHUQDWLRQDO -RXUQDO RI 6XSHUFRPSXWHU $SSOLFDn WLRQV ff§ >@ 0 .HONDU DQG 6 9 3DWDQNDU 'HYHORSPHQW RI JHQHUDOL]HG EORFN FRUUHFWLRQ SURFHGXUHV IRU WKH VROXWLRQ RI GLVFUHWL]HG 1DYLHU6WRNHV HTXDWLRQV &RPSXWHU 3K\VLFV &RPPXQLFDWLRQV >@ 9 .XPDU DQG 9 6LQJK 6FDODELOLW\ RI SDUDOOHO DOJRULWKPV IRU WKH DOOSDLUV VKRUWHVWSDWK SUREOHP -RXUQDO RI 3DUDOOHO DQG 'LVWULEXWHG &RPSXWLQJ f§ >@ & /HYLW *ULG FRPPXQLFDWLRQ RQ WKH &RQQHFWLRQ 0DFKLQH $QDO\VLV SHUIRUn PDQFH DQG LPSURYHPHQWV ,Q +RUVW 6LPRQ HGLWRU 6FLHQWLILF $SSOLFDWLRQV RI WKH &RQQHFWLRQ 0DFKLQH SDJHV :RUOG 6FLHQWLILF 1HZ @ ) 6 /LHQ DQG 0 $ /HVFK]LQHU 0XOWLJULG FRQYHUJHQFH DFFHOHUDWLRQ IRU FRPn SOH[ IORZ LQFOXGLQJ WXUEXOHQFH ,Q : +DFNEXVFK DQG 8 7URWWHQEHUJ HGLWRUV 0XOWLJULG 0HWKRGV ,,, SDJHV %LUNK£XVHU %RVWRQ >@ /LQGHQ /RQVGDOH + 5LW]GRUI DQG $ 6FKLLOOHU %ORFNVWUXFWXUHG PXOWLJULG IRU WKH 1DYLHU6WRNHV HTXDWLRQV ([SHULHQFHV DQG VFDODELOLW\ TXHVWLRQV ,Q 5 % 3HO] $ (FHU DQG +DXVHU HGLWRUV 3DUDOOHO &RPSXWDWLRQDO )OXLG '\QDPLFV f SDJHV (OVHYLHU $PVWHUGDP 7KH 1HWKHUODQGV >@ /LQGHQ /RQVGDOH % 6WHFNHO DQG 6WLLEHQ 0XOWLJULG IRU WKH VWHDG\VWDWH LQFRPSUHVVLEOH 1DYLHU6WRNHV HTXDWLRQV D VXUYH\ ,Q ,QWHUQDWLRQDO &RQIHUHQFH IRU 1XPHULFDO 0HWKRGV LQ )OXLGV SDJHV %HUOLQ 6SULQJHU9HUODJ >@ /RQVGDOH DQG $ 6FKLLOOHU 0XOWLJULG HIILFLHQF\ IRU FRPSOH[ IORZ VLPXODWLRQV RQ GLVWULEXWHG PHPRU\ PDFKLQHV 3DUDOOHO &RPSXWLQJ ff§ -DQXDU\ >@ 0DVSDU &RPSXWHU &RUSRUDWLRQ 6XQQ\YDOH &$ 0DVSDU 6\VWHP 2YHUYLHZ >@ 2 $ 0F%U\DQ 3 2 )UHGHULFNVRQ /LQGHQ $ 6FKLLOOHU 6ROFKHQEDFK 6WLLEHQ & $ 7KROH DQG 8 7URWWHQEHUJ 0XOWLJULG PHWKRGV RQ SDUDOOHO FRPSXWHUVf§D VXUYH\ RI UHFHQW GHYHORSPHQWV ,PSDFW RI &RPSXWLQJ LQ 6FLHQFH DQG (QJLQHHULQJ

PAGE 193

>@ $ 0LFKHOVHQ 0HVKDGDSWLYH VROXWLRQ RI WKH 1DYLHU6WRNHV HTXDWLRQV ,Q : +DFNEXVFK DQG 8 7URWWHQEHUJ HGLWRUV 0XOWLJULG 0HWKRGV ,,, SDJHV f§ %LUNK£XVHU %RVWRQ >@ 5 $ 1LFRODLGHV 2Q VRPH WKHRUHWLFDO DQG SUDFWLFDO DVSHFWV RI PXOWLJULG PHWKn RGV 0DWKHPDWLFV RI &RPSXWDWLRQ ff§ >@ 1RUGVWURP 7KH LQIOXHQFH RI RSHQ ERXQGDU\ FRQGLWLRQV RQ WKH FRQYHUJHQFH WR VWHDG\ VWDWH IRU WKH 1DYLHU6WRNHV HTXDWLRQV -RXUQDO RI &RPSXWDWLRQDO 3K\VLFV Of >@ ( 6 2UDQ 3 %RULV DQG ( ) %URZQ )OXLGG\QDPLF FRPSXWDWLRQV RQ D &RQQHFWLRQ 0DFKLQHf§SUHOLPLQDU\ WLPLQJV DQG FRPSOH[ ERXQGDU\ FRQGLWLRQV $,$$ 3DSHU WK $HURVSDFH 6FLHQFHV 0HHWLQJ DQG ([KLELW 5HQR 19 >@ 0 2UWHJD DQG 5 9RLJW 6ROXWLRQ RI SDUWLDO GLIIHUHQWLDO HTXDWLRQV RQ YHFWRU DQG SDUDOOHO FRPSXWHUV 6,$0 5HYLHZ ff§ -XQH >@ $ 2YHUPDQ DQG 9DQ 5RVHQGDOH 0DSSLQJ UREXVW SDUDOOHO PXOWLJULG DOJRn ULWKPV WR VFDODEOH PHPRU\ DUFKLWHFWXUHV ,Q 6 0F&RUPLFN HGLWRU 3URFHHGLQJV RI WKH 7KLUG &RSSHU 0RXQWDLQ &RQIHUHQFH RQ 0XOWLTULG 0HWKRGV 0DUFHO 'HNNHU 1HZ @ 6 9 3DWDQNDU 1XPHULFDO +HDW 7UDQVIHU DQG )OXLG )ORZ +HPLVSKHUH :DVKn LQJWRQ '& >@ 6 9 3DWDQNDU DQG % 6SDOGLQJ $ FDOFXODWLRQ SURFHGXUH IRU KHDW PDVV DQG PRPHQWXP WUDQVIHU LQ WKUHHGLPHQVLRQDO SDUDEROLF IORZV ,QWHUQDWLRQDO -RXUQDO RI +HDW DQG 0DVV 7UDQVIHU >@ 5 3H\UHW DQG 7 7D\ORU &RPSXWDWLRQDO 0HWKRGV IRU )OXLG )ORZ 6SULQJHU 9HUODJ 1HZ @ : + 3UHVV 6 $ 7HXNROVN\ :LOOLDP 7 9HWWHUOLQJ DQG %ULDQ 3 )ODQQHU\ 1XPHULFDO 5HFLSHV LQ )RUWUDQ 7KH $UW RI 6FLHQWLILF &RPSXWLQJ &DPEULGJH 8QLYHUVLW\ 3UHVV /RQGRQ VHFRQG HGLWLRQ >@ & 0 5KLH $ SUHVVXUHEDVHG 1DYLHU6WRNHV VROYHU XVLQJ WKH PXOWLJULG PHWKRG $,$$ -RXUQDO >@ 3 / 5RH %H\RQG WKH 5LHPDQQ SUREOHP SDUW ,Q 0 < +XVVDLQL $ .XPDU DQG 0 6DODV HGLWRUV $OJRULWKPLF 7UHQGV LQ &RPSXWDWLRQDO )OXLG '\QDPLFV SDJHV 6SULQJHU9HUODJ %HUOLQ >@ 5 6FKUHLEHU $Q DVVHVVPHQW RI WKH &RQQHFWLRQ 0DFKLQH 7HFKQLFDO UHSRUW 5,$&6 1$6$ $PHV 5HVHDUFK &HQWHU 0RXQWDLQ 9LHZn &$ $SULO >@ 5 6FKUHLEHU DQG + 6LPRQ 7RZDUGV WKH WHUDIORSV FDSDELOLW\ IRU &)' ,Q +RUVW 6LPRQ HGLWRU 3DUDOOHO &RPSXWDWLRQDO )OXLG '\QDPLFV ,PSOHPHQWDn WLRQV DQG 5HVXOWV FKDSWHU SDJHV 7KH 0,7 3UHVV &DPEULGJH 0$

PAGE 194

>@ 0 + 6FKXOW] 6RPH FKDOOHQJHV LQ PDVVLYHO\ SDUDOOHO FRPSXWDWLRQ ,Q 0 < +XVVDLQL $ .XPDU DQG 0 6DODV HGLWRUV $OJRULWKPLF 7UHQGV LQ &RPSXWDn WLRQDO )OXLG '\QDPLFV SDJHV 6SULQJHU9HUODJ %HUOLQ >@ $ 6HWWDUL DQG $]L] $ JHQHUDOL]DWLRQ RI WKH DGGLWLYH FRUUHFWLRQ PHWKRGV IRU WKH LWHUDWLRQ VROXWLRQ RI PDWUL[ HTXDWLRQV 6,$0 -RXUQDO RI 1XPHULFDO $QDO\VLV >@ 6KDZ DQG 6 6LYDORJDQDWKDQ $ PXOWLJULG PHWKRG IRU UHFLUFXODWLQJ IORZV ,QWHUQDWLRQDO -RXUQDO IRU 1XPHULFDO 0HWKRGV LQ )OXLGV f $SULO >@ 6KDZ DQG 6 6LYDORJDQDWKDQ 2Q WKH VPRRWKLQJ SURSHUWLHV RI WKH 6,03/( SUHVVXUHFRUUHFWLRQ DOJRULWKP ,QWHUQDWLRQDO -RXUQDO IRU 1XPHULFDO 0HWKRGV LQ )OXLGV ff§ $SULO >@ : 6K\\ (IIHFWV RI RSHQ ERXQGDU\ RQ LQFRPSUHVVLEOH 1DYLHU6WRNHV IORZ FRPn SXWDWLRQ 1XPHULFDO H[SHULPHQWV 1XPHULFDO +HDW 7UDQVIHU >@ : 6K\\ &RPSXWDWLRQDO 0RGHOLQJ IRU )OXLG )ORZ DQG ,QWHUIDFLDO 7UDQVSRUW (OVHYLHU $PVWHUGDP 7KH 1HWKHUODQGV >@ : 6K\\ DQG &6 6XQ 'HYHORSPHQW RI D SUHVVXUHFRUUHFWLRQVWDJJHUHGJULG EDVHG PXOWLJULG VROYHU IRU LQFRPSUHVVLEOH UHFLUFXODWLQJ IORZV &RPSXWHUV DQG )OXLGV ff§ >@ : 6K\\ 6 7KDNXU DQG :ULJKW 6HFRQGRUGHU XSZLQG DQG FHQWUDO GLIIHUHQFH VFKHPHV IRU UHFLUFXODWLQJ IORZ FRPSXWDWLRQ $,$$ -RXUQDO >@ & 6LPR DQG ) $UPHUR 8QFRQGLWLRQDO VWDELOLW\ DQG ORQJWHUP EHKDYLRU RI WUDQVLHQW DOJRULWKPV IRU WKH LQFRPSUHVVLEOH 1DYLHU6WRNHV DQG (XOHU HTXDWLRQV &RPSXWDWLRQDO 0HWKRGV LQ $SSOLHG 0HFKDQLFV DQG (QJLQHHULQJ >@ + 6LPRQ HGLWRU 3DUDOOHO &RPSXWDWLRQDO )OXLG '\QDPLFV ,PSOHPHQWDWLRQV DQG 5HVXOWV 7KH 0,7 3UHVV &DPEULGJH 0$ >@ + 6LPRQ : 5 9DQ 'DOVHP DQG / 'DJXP 3DUDOOHO &)' &XUUHQW VWDn WXV DQG IXWXUH UHTXLUHPHQWV ,Q +RUVW 6LPRQ HGLWRU 3DUDOOHO &RPSXWDWLRQDO )OXLG '\QDPLFV ,PSOHPHQWDWLRQV DQG 5HVXOWV FKDSWHU 7KH 0,7 3UHVV &DPn EULGJH 0$ >@ 5 $ 6PLWK DQG $ :HLVHU 6HPLFRDUVHQLQJ PXOWLJULG RQ D K\SHUFXEH 6,$0 -RXUQDO RI 6FLHQWLILF DQG 6WDWLVWLFDO &RPSXWLQJ f f§ >@ 3 0 6RFNRO 0XOWLJULG VROXWLRQ RI WKH 1DYLHU6WRNHV HTXDWLRQV RQ KLJKO\ VWUHWFKHG JULGV ZLWK GHIHFW FRUUHFWLRQ ,Q 6 0F&RUPLFN HGLWRU 3URFHHGLQJV RI WKH 7KLUG &RSSHU 0RXQWDLQ &RQIHUHQFH RQ 0XOWLJULG 0HWKRGV 0DUFHO 'HNNHU 1HZ @ 6 3 6SHNUHLMVH 0XOWLJULG 6ROXWLRQ RI WKH 6WHDG\ (XOHU (TXDWLRQV &:, 7UDFW &HQWUH IRU 0DWKHPDWLFV DQG &RPSXWHU 6FLHQFH $PVWHUGDP 7KH 1HWKHUn ODQGV

PAGE 195

>@ 7KLQNLQJ 0DFKLQHV &RUSRUDWLRQ &DPEULGJH 0$ &0 )RUWUDQ 2SWLPL]DWLRQ 1RWHV 6OLFHZLVH 0RGHO 9HUVLRQ 0DUFK >@ 7KLQNLQJ 0DFKLQHV &RUSRUDWLRQ &DPEULGJH 0$ 3ULVP 8VHUfV *XLGH 9HUVLRQ $SULO >@ 7KLQNLQJ 0DFKLQHV &RUSRUDWLRQ &DPEULGJH 0$ &0 7HFKQLFDO 6XPPDU\ 1RYHPEHU >@ 7KLQNLQJ 0DFKLQHV &RUSRUDWLRQ &DPEULGJH 0$ &0 )RUWUDQ 5HOHDVH 1RWHV 3UHOLPLQDU\ 'RFXPHQWDWLRQ IRU 9HUVLRQ %HWD $SULO >@ 7KLQNLQJ 0DFKLQHV &RUSRUDWLRQ &DPEULGJH 0$ 2SWLPL]LQJ &0)RUWUDQ &RGH RQ WKH &0 $XJXVW >@ 0 & 7KRPSVRQ DQG + )HU]LJHU $Q DGDSWLYH PXOWLJULG WHFKQLTXH IRU WKH LQn FRPSUHVVLEOH 1DYLHU6WRNHV HTXDWLRQV -RXUQDO RI &RPSXWDWLRQDO 3K\VLFV >@ 5 6 7XPLQDUR DQG ( :RPEOH $QDO\VLV RI WKH PXOWLJULG )09 F\FOH RI ODUJH VFDOH SDUDOOHO PDFKLQHV 6,$0 -RXUQDO RI 6FLHQWLILF &RPSXWLQJ f f§ >@ 63 9DQND %ORFNLPSOLFLW PXOWLJULG VROXWLRQ RI 1DYLHU6WRNHV HTXDWLRQV LQ SULPn LWLYH YDULDEOHV -RXUQDO RI &RPSXWDWLRQDO 3K\VLFV >@ 3 :HVVHOLQJ /LQHDU PXOWLJULG PHWKRGV ,Q 6 ) 0F&RUPLFN HGLWRU 0XOWLJULG 0HWKRGV FKDSWHU 6,$0 3KLODGHOSKLD >@ 3 :HVVHOLQJ $ VXUYH\ RI IRXULHU VPRRWKLQJ DQDO\VLV UHVXOWV ,Q : +DFNEXVFK DQG 8 7URWWHQEHUJ HGLWRUV 0XOWLJULG 0HWKRGV ,,, SDJHV %LUNK£XVHU %RVWRQ >@ ( :RPEOH DQG % & @ 3 0 'H =HHXZ DQG ( 9DQ $VVHOW 7KH FRQYHUJHQFH UDWH RI PXOWLOHYHO DOJRn ULWKPV DSSOLHG WR WKH FRQYHFWLRQGLIIXVLRQ HTXDWLRQ 6,$0 -RXUQDO RI 6FLHQWLILF DQG 6WDWLVWLFDO &RPSXWLQJ f $SULO >@ 6 =HQJ DQG 3 :HVVHOLQJ 1XPHULFDO VWXG\ RI D PXOWLJULG PHWKRG ZLWK IRXU VPRRWKLQJ PHWKRGV IRU WKH LQFRPSUHVVLEOH 1DYLHU6WRNHV HTXDWLRQV LQ JHQHUDO FRRUGLQDWHV ,Q 6 0F&RUPLFN HGLWRU 3URFHHGLQJV RI WKH 7KLUG &RSSHU 0RXQWDLQ &RQIHUHQFH RQ 0XOWLJULG 0HWKRGV 0DUFHO 'HNNHU 1HZ
PAGE 196

%,2*5$3+,&$/ 6.(7&+ (GZLQ %ORVFK UHFHLYHG KLV %6 GHJUHH ZLWK KLJK KRQRUV IURP WKH 8QLYHUVLW\ RI )ORULGD LQ UHFHLYHG KLV 06 GHJUHH DOVR IURP 8) LQ DQG DQWLFLSDWHV UHFHLYLQJ KLV 3K' GHJUHH LQ +H ZDQWV WR EH WKH ILUVW SHUVRQ WR GR D FRPSOHWH QXPHULFDO VLPXODWLRQ RI D SUDFWLFDOO\ LPSRUWDQW SK\VLFDO SURFHVV IRU H[DPSOH PHn WHRURORJLFDO RU RFHDQRJUDSKLF SDUWLFOH WUDQVSRUW FRPEXVWLRQ RU WKH PDQXIDFWXULQJ SURFHVVHV RI DOOR\V ZLWK DV OLWWOH PRGHOOLQJ DV SRVVLEOH DQG HQRXJK VSDFH DQG WLPH UHVROXWLRQ VR WKDW WKH SXEOLF ZLOO KDYH QR WURXEOH UHFRJQL]LQJ WKH XWLOLW\ RI KLV ZRUN DQG RI VFLHQWLILF FRPSXWLQJ LQ JHQHUDO $ZD\ IURP ZRUN KH HQMR\V JROI EDVNHWEDOO DQG WUDYHOOLQJ ZLWK KLV ZLIH

PAGE 197

, FHUWLI\ WKDW KDYH UHDG WKLV VWXG\ DQG WKDW LQ P\ RSLQLRQ LW FRQIRUPV WR DFFHSWDEOH VWDQGDUGV RI VFKRODUO\ SUHVHQWDWLRQ DQG LV IXOO\ DGHTXDWH LQ VFRSH DQG TXDOLW\ DV D GLVVHUWDWLRQ IRU WKH GHJUHH RI 'RFWRU RI 3KLORVRSK\ :HL 6K\\ &KDLUPDQ f 3URIHVVRU RI $HURVSDFH (QJLQHHULQJ 0HFKDQLFV DQG (QJLQHHULQJ 6FLHQFH FHUWLI\ WKDW KDYH UHDG WKLV VWXG\ DQG WKDW LQ P\ RSLQLRQ LW FRQIRUPV WR DFFHSWDEOH VWDQGDUGV RI VFKRODUO\ SUHVHQWDWLRQ DQG LV IXOO\ DGHTXDWH LQ VFRSH DQG TXDOLW\ DV D GLVVHUWDWLRQ IRU WKH GHJUHH RI 'RFWRU RI 3KLORVRSK\ &KHQ&KL +VX 3URIHVVRU RI $HURVSDFH (QJLQHHULQJ 0HFKDQLFV DQG (QJLQHHULQJ 6FLHQFH FHUWLI\ WKDW KDYH UHDG WKLV VWXG\ DQG WKDW LQ P\ RSLQLRQ LW FRQIRUPV WR DFFHSWDEOH VWDQGDUGV RI VFKRODUO\ SUHVHQWDWLRQ DQG LV IXOO\ DGHTXDWH LQ VFRSH DQG TXDOLW\ DV D GLVVHUWDWLRQ IRU WKH GHJUHH RI 'RFWRU RI 3KLORVRSK\ %UXFH &DUUROO $VVRFLDWH 3URIHVVRU RI $HURVSDFH (QJLQHHULQJ 0HFKDQLFV DQG (QJLQHHULQJ 6FLHQFH FHUWLI\ WKDW KDYH UHDG WKLV VWXG\ DQG WKDW LQ WR DFFHSWDEOH VWDQGDUGV RI VFKRODUO\ SUHVHQWDWLRQ DKG VFRSH DQG TXDOLW\ DV D GLVVHUWDWLRQ IRU WKH GHJUHH SI RSLQLRQ LW FRQIRUPV V IXOO\ DGHTXDWH LQ FWRU RI 3KLORVRSK\ 'DYLG 0LNRODLWLV $VVRFLDWH 3URIHVVRU RI $HURVSDFH (QJLQHHULQJ 0HFKDQLFV DQG (QJLQHHULQJ 6FLHQFH

PAGE 198

, FHUWLI\ WKDW KDYH UHDG WKLV VWXG\ DQG WKDW LQ P\ RSLQLRQ LW FRQIRUPV WR DFFHSWDEOH VWDQGDUGV RI VFKRODUO\ SUHVHQWDWLRQ DQG LV IXOO\ DGHTXDWH LQ VFRSH DQG TXDOLW\ DV D GLVVHUWDWLRQ IRU WKH GHJUHH RI 'RFWRU RI 3KLORVRSK\ 6DUWDM 6DKQL 3URIHVVRU RI &RPSXWHU DQG ,QIRUPDWLRQ 6FLHQFHV 7KLV GLVVHUWDWLRQ ZDV VXEPLWWHG WR WKH *UDGXDWH )DFXOW\ RI WKH &ROOHJH RI (QJLQHHULQJ DQG WR WKH *UDGXDWH 6FKRRO DQG ZDV DFFHSWHG DV SDUWLDO IXOn ILOOPHQW RI WKH UHTXLUHPHQWV IRU WKH GHJUHH RI 'RFWRU RI 3KLORVRSK\ 'HFHPEHU M& :LQIUHG 0 3KLOOLSV 'HDQ &ROOHJH RI (QJLQHHULQJ .DUHQ $ +ROEURRN 'HDQ *UDGXDWH 6FKRRO

PAGE 199

/' 81,9(56,7< 2) )/25,'$


PRESSURE-BASED METHODS ON SINGLE-INSTRUCTION
STREAM/MULTIPLE-DATA STREAM COMPUTERS
By
EDWIN L. BLOSCH
A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL
OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT
OF THE REQUIREMENTS FOR THE DEGREE OF
DOCTOR OF PHILOSOPHY
UNIVERSITY OF FLORIDA
1994

ACKNOWLEDGEMENTS
I would like to express my thanks to my advisor Dr. Wei Shyy for reflecting
carefully on my results and for directing my research toward interesting issues. I
would also like to thank him for the exceptional personal support and flexibility he
offered me during my last year of study, which was done off-campus. I would also like
to acknowledge the contributions of the other members of my Ph.D. committee, Dr.
Chen-Chi Hsu, Dr. Bruce Carroll, Dr. David Mikolaitis, and Dr. Sartaj Sahni. Dr.
Hsu and Dr. Carroll supervised my B.S. and M.S. degree research studies, respectively,
and Dr. Mikolaitis, in the role of graduate coordinator, enabled me to obtain financial
support from the Department of Energy.
Also I would like to thank Madhukar Rao, Rick Smith and H.S. Udaykumar, for
paying fees on mv behalf and for registering me for classes while I was in California.
Jeff Wright, S. Thakur, Shin-Jye Liang, Guobao Guo and Pedro Lopez-Fernandez
have also made direct and indirect contributions for which I am grateful.
Special thanks go to Dr. Jamie Sethian, Dr. Alexandre Chorin and Dr. Paul Con-
cus of Lawrence Berkeley Laboratory for allowing me to visit LBL and use their
resources, for giving personal words of support and constructive advice, and for the
privilege of interacting with them and their graduate students in the applied mathe¬
matics branch.
Last but not least I would like to thank my wife, Laura, for her patience, her
example, and her frank thoughts on “cups with sliding lids,” “flow through straws,”
and numerical simulations in general.
n

My research was supported in part by the Computational Science Graduate Fel¬
lowship Program of the Office of Scientific Computing in the Department of Energy.
The CM-5s used in this study were partially funded by National Science Foundation
Infrastructure Grant CDA-8722788 (in the computer science department of the Uni¬
versity of California-Berkeley), and a grant of HPC time from the DoD HPC Shared
Resource Center, Army High-Performance Computing Research Center, Minneapolis,
Minnesota.
m

TABLE OF CONTENTS
ACKNOWLEDGEMENTS ii
ABSTRACT vi
CHAPTERS
1 INTRODUCTION 1
1.1 Motivations 1
1.2 Governing Equations 3
1.3 Numerical Methods for Viscous Incompressible Flow 5
1.4 Parallel Computing 7
1.4.1 Data-Parallelism and SIMD Computers 8
1.4.2 Algorithms and Performance 11
1.5 Pres sure-Based Multigrid Methods 13
1.6 Description of the Research 17
2 PRESSURE-CORRECTION METHODS 21
2.1 Finite-Volume Discretization on Staggered Grids 21
2.2 The SIMPLE Method _ 23
2.3 Discrete Formulation of the Pressure-Correction Equation 27
2.4 Well-Posedness of the Pressure-Correction Equation 30
2.4.1 Analysis 30
2.4.2 Verification by Numerical Experiments 33
2.5 Numerical Treatment of Outflow Boundaries 38
2.6 Concluding Remarks 40
3 EFFICIENCY AND SCALABILITY ON SIMD COMPUTERS 53
3.1 Background 53
3.1.1 Speedup and Efficiency 53
3.1.2 Comparison Between CM-2, CM-5, and MP-1 55
3.1.3 Hierarchical and Cut-and-Stack Data Mappings 57
3.2 Implementional Considerations 59
3.3 Numerical Experiments 61
3.3.1 Efficiency of Point and Line Solvers for the Inner Iterations . . 62
3.3.2 Effect of Uniform Boundary Condition Implementation .... 69
3.3.3 Overall Performance 70
3.3.4 Isoefficiency Plot 72
IV

3.4 Concluding Remarks 74
4 A NONLINEAR PRESSURE-CORRECTION MULTIGRID METHOD . . 83
4.1 Background 84
4.1.1 Terminology and Scheme for Linear Equations 86
4.1.2 Full-Approximation Storage Scheme for Nonlinear Equations . 90
4.1.3 Extension to the Navier-Stokes Equations 92
4.2 Comparison of Pressure-Based Smoothers 94
4.3 Stability of Multigrid Iterations 101
4.3.1 Defect-Correction Method 103
4.3.2 Cost of Different Convection Schemes 106
4.4 Restriction and Prolongation Procedures 108
4.5 Concluding Remarks 113
5 IMPLEMENTATION AND PERFORMANCE ON THE CM-5 127
5.1 Storage Problem 128
5.2 Multigrid Convergence Rate and Stability 131
5.2.1 Truncation Error Convergence Criterion for Coarse Grids ... 133
5.2.2 Numerical Characteristics of the FMG Procedure 136
5.2.3 Influence of Initial Guess on Convergence Rate 145
5.2.4 Remarks 148
5.3 Performance on the CM-5 149
5.4 Concluding Remarks 156
REFERENCES 181
BIOGRAPHICAL SKETCH 188
v

Abstract of Dissertation
Presented to the Graduate School of the University of Florida
in Partial Fulfillment of the Requirements for the
Degree of Doctor of Philosophy
PRESSURE-BASED METHODS ON SINGLE-INSTRUCTION
STREAM/MULTIPLE-DATA STREAM COMPUTERS
By
Edwin L. Blosch
Chairman: Dr. Wei Shyy
Major Department: Aerospace Engineering, Mechanics and Engineering Science
Computationally and numerically scalable algorithms are needed to exploit emerg¬
ing parallel-computing capabilities. In this work pressure-based algorithms which
solve the two-dimensional incompressible Navier-Stokes equations are developed for
single-instruction stream/multiple-data stream (SIMD) computers.
The implications of the continuity constraint for the proper numerical treatment
of open boundary problems are investigated. Mass must be conserved globally so that
the system of linear algebraic pressure-correction equations is numerically consistent.
The convergence rate is poor unless global mass conservation is enforced explicitly.
Using an additive-correction technique to restore global mass conservation, flows
which have recirculating zones across the open boundary can be simulated.
The performance of the single-grid algorithm is assessed on three massively-
parallel computers, MasPar’s MP-1 and Thinking Machines’ CM-2 and CM-5. Paral¬
lel efficiencies approaching 0.8 are possible with speeds exceeding that of traditional
vector supercomputers. The following issues relevant to the variation of parallel ef¬
ficiency with problem size are studied: the suitability of the algorithm for SIMD
computation; the implementation of boundary conditions to avoid idle processors;
vi

the choice of point versus line-iterative relaxation schemes; the relative costs of the
coefficient computations and solving operations, and the variation of these costs with
problem size; the effect of the data-array-to-processor mapping; and the relative
speeds of computation and communication of the computer.
A nonlinear pressure-correction multigrid algorithm which has better convergence
rate characteristics than the single-grid method is formulated and implemented on
the CM-5. On the CM-5, the components of the multigrid algorithm are tested over a
range of problem sizes. The smoothing step is the dominant cost. Pressure-correction
methods and the locally-coupled explicit method are equally efficient on the CM-5.
V cycling is found to be much cheaper than W cycling, and a truncation-error based
“full-multigrid” procedure is found to be a computationally efficient and convenient
method for obtaining the initial fine-grid guess. The findings presented enable further
development of efficient, scalable pressure-based parallel computing algorithms.
Vll

CHAPTER 1
INTRODUCTION
1.1 Motivations
Computational fluid dynamics (CFD) is a growing field which brings together
high-performance computing, physical science, and engineering technology. The dis¬
tinctions between CFD and other fields such as computational physics and computa¬
tional chemistry are largely semantic now, because increasingly more interdisplinary
applications are coming within range of the computational capabilities. CFD algo¬
rithms and techniques are mature enough that the focus of research is expected to
shift in the next decade toward the development of robust flow codes, and toward the
application of these codes to numerical simulations which do not idealize either the
physics or the geometry and which take full account of the coupling between fluid
dynamics and other areas of physics [65]. These applications will require formidable
resources, particularly in the areas of computing speed, memory, storage, and in¬
put/output bandwidth [78].
At the present time, the computational demands of the applications are still
at least two orders-of-magnitude beyond the computing technology. For example,
NASA’s grand challenges for the 1990s are to achieve the capability to simulate vis¬
cous, compressible flows with two-equation turbulence modelling over entire aircraft
configurations, and to couple the fluid dynamics simulation with the propulsion and
aircraft control systems modelling. To meet this challenge it is estimated that 1 ter-
aflops computing speed and 50 gigawords of memory will be required [24]. Current
1

2
massively-parallel supercomputers, for example, the CM-5 manufactured by Thinking
Machines, have peak speeds of 0(10 gigaflops) and memories of 0(1 gigaword).
Optimism is sometimes circulated that teraflop computers may be expected by
1995 [68]. In view of the two orders-of-magnitude disparity between the speed of
present-generation parallel computers and teraflops, such optimism should be dimmed
somewhat. Expectations are not being met in part because the applications, which
are the driving force behind the progress in hardware, have been slow to develop. The
numerical algorithms which have seen two decades of development on traditional vec¬
tor supercomputers are not always easy targets for efficient parallel implementation.
Better understanding of the basic concepts and more experience with the present
generation of parallel computers is a prerequisite for improved algorithms and imple¬
mentations.
The motivation of the present work has been the opportunity to investigate issues
related to the use of parallel computers in CFD, with the hope that the knowledge
gained can assist the transition to the new computing technology. The context of the
research is the numerical solution of the 2-d incompressible Navier-Stokes equations,
by a popular and proven numerical method known as the pressure-correction tech¬
nique. A specific objective emerged as the research progressed, namely to develop
and analyze the performance of pressure-correction methods on the single-instruction
stream/multiple-data stream (SIMD) type of parallel computer. Single-grid compu¬
tations were studied first, then a multigrid method was developed and tested.
StMD computers were chosen because they are easier to program than multiple-
instruction stream/multiple-data stream (MIMD) computers (explict message-passing
is not required), because synchronization of the processors is not an issue, and be¬
cause the factors affecting the parallel run time and computational efficiency are
easier to identify and quantify. Also, these are arguably the most powerful machines

3
available right now—Los Alamos National Laboratory has a 1024-node CM-5 with 32
Gbytes of processor memory and is capable of 32 Gflops peak speed. Thus, the code,
the numerical techniques, and the understanding which are the contribution of this
research can be immediately useful for applications on massively parallel computers.
1.2 Governing Equations
The governing equations for 2-d, constant property, time-dependent viscous in¬
compressible flow are the Navier-Stokes equations. They express the principles of
conservation of mass and momentum. In primitive variables and cartesian coordi¬
nates, they may be written
dpu dpv
dx dy
dpu dpu2 dpuv dp d2u d2u
~df + ~d^ + ~d^~ = ~Tx + ^d^ +
dpv dpuv dpv2 dp d2v d2v
^w + ~d^ + ~d^~ = ~d^ + fld^ + ^d^
(1.1)
(1.2)
(1.3)
where u and v are cartesian velocity components, p is the density, p is the fluid’s
molecular viscosity, and p is the pressure. Eq. 1.1 is the mass continuity equation, also
known as the divergence-free constraint since its coordinate-free form is div u = 0.
The Navier-Stokes equations 1.1-1.3 are a coupled set of nonlinear partial differ¬
ential equations of mixed elliptic/parabolic type. Mathematically, they differ from
the compressible Navier-Stokes equations in two important respects that lead to dif¬
ficulties for devising numerical solution techniques.
First, the role of the continuity equation is different in incompressible flow. In¬
stead of a time-dependent equation for the density, in incompressible fluids the conti¬
nuity equation is a constraint on the admissible velocity solutions. Numerical meth¬
ods must be able to integrate the momentum equations forward in time while simul¬
taneously maintaining satisfaction of the continuity constraint. On the other hand,

4
numerical methods for compressible flows can take advantage of the fact that in the
unsteady form each equation has a time-dependent term. The equations are cast
in vector form—any suitable method for time-integration can be employed on the
system of equations as a whole.
The second problem, assuming that a primitive-variable formulation is desired, is
that there is no equation for pressure. For compressible flows, the pressure can be de¬
termined from the equation of state of the fluid. For incompressible flow, an auxiliary
“pressure-Poisson” equation can be derived by taking the divergence of the vector
form of the momentum equations; the continuity equation is invoked to eliminate
the unsteady term in the result. The formulation of the pressure-Poisson equation
requires manipulating the discrete forms of the momentum and continuity equations.
A particular discretization of the Laplacian operator is therefore implied in pressure-
Poisson equation, depending on the discrete gradient and divergence operators. This
operator may not be implementable at boundaries, and solvability constraints can
be violated [30]. Also, the differentiation of the governing equations introduces the
need for additional unphysical boundary conditions on the pressure. Physically, the
pressure in incompressible flow is only defined relative to an (arbitrary) constant.
Thus, the correct boundary conditions are Neumann. However, if the problem has
an open boundary, the governing equations should be supplemented with a boundary
condition on the normal traction [29, 32],
Fn = —p +
1 dun
Re dn
(1.4)
where F is the force, Re is the Reynolds number, and the subscript n indicates the
normal direction. However, Fn may be difficult to prescribe.

5
In practice, a zero-gradient or linear extrapolation for the normal velocity com¬
ponent is a more popular outflow boundary condition. Many outflow boundary con¬
ditions have been analyzed theoretically for incompressible flow (see [30, 31, 38, 56]).
There are even more boundary condition procedures in use. The method used and its
impact on the “solvability’' of the resulting numerical systems of equations depends
on the discretization and the numerical method. This issue is treated in Chapter 2.
1.3 Numerical Methods for Viscous Incompressible Flow
Numerical algorithms for solving the incompressible Navier-Stokes system of equa¬
tions were first developed by Harlow and Welch [39] and Chorin [15, 16]. Descendants
of these approaches are popular today. Harlow and Welch introduced the important
contribution of the staggered-grid location of the dependent variables. On a stag¬
gered grid, the discrete Laplacian appearing in the derivation of the pressure-Poisson
equation has the standard five-point stencil. On colocated grids it still has a five-
point form but, if the central point is located at (i,j), the other points which are
involved are located at (i+2,j), (i-2,j), (i,j+2), and (i,j-2). Without nearest-neighbor
linkages, two uncoupled (“checkerboard”) pressure fields can develop independently.
This pressure-decoupling can cause stability problems, since nonphysical discontinu¬
ities in the pressure may develop [50]. In the present work, the velocity components
are staggered one-half of a control volume to the west and south of the pressure which
is defined at the center of the control volume as shown in Figure 1.1. Figure 1.1 also
shows the locations of all boundary velocity components involved in the discretization
and numerical solution, and representative boundary control volumes for u, v, and p.
In Chorin’s artificial compressibility approach [15] a time-derivative of pressure is
added to the continuity equation. In this manner the continuity equation becomes
an equation for the pressure, and all the equations can be integrated forward in time,

6
either as a system or one at a time. The artificial compressibility method is closely
related to the penalty formulation used in finite-element methods [41]. The equations
are solved simultaneously in finite-element formulations. Penalty methods and the
artificial compressibility approach suffer from ill-conditioning when the equations
have strong nonlinearities or source terms. Because the pressure term is artificial,
they are not time-accurate either.
Projection methods [16, 62] are two-step procedures which first obtain a velocity
field by integrating the momentum equations, and then project this vector field into
a divergence-free space by subtracting the gradient of the pressure. The pressure-
Poisson equation is solved to obtain the pressure. The solution must be obtained
to a high degree of accuracy in unsteady calculations in order to obtain the correct
long-term behavior [76]—every step may therefore be fairly expensive. Furthermore,
the time-step size is limited by stability considerations, depending on the implicitness
of the treatment used for the convection terms.
“Pressure-based” methods for the incompressible Navier-Stokes equations include
SIMPLE [61] and its variants, SIMPLEC [19], SIMPLER [60], and PISO [43]. These
methods are similar to projection methods in the sense that a non-mass-conserving
velocity field is computed first, and then corrected to satisfy continuity. However, they
are not implicit in two steps because the nonlinear convection terms are linearized
explicitly. Instead of a pressure-Poisson equation, an approximate equation for the
pressure or pressure-correction is derived by manipulating the discrete forms of the
momentum and continuity equations. A few iterations of a suitable relaxation method
are used to obtain a partial solution to the system of correction equations, and
then new guesses for pressure and velocity are obtained by adding the corrections
to the old values. This process is iterated until all three equations are satisfied.
The iterations require underrelaxation because of the sequential coupling between

7
variables. Compared to projection methods, pressure-based methods are less implicit
when used for time-dependent problems. However, they can be used to seek the
steady-state directly if desired.
Compared to a fully coupled strategy, the sequential pressure-based approach
typically has slower convergence and less robustness with respect to Reynolds num¬
ber. However, the sequential approach has the important advantage that additional
complexities, for example, chemical reaction, can be easily accommodated by simply
adding species-balance equations to the stack. The overall run time increases since
each governing equation is solved independently, and the total storage requirements
scale linearly with the number of equations solved. On the other hand, the computer
time and storage requirements escalate faster in a fully coupled solution strategy. The
typical way around this problem is to solve simultaneously the continuity and momen¬
tum equations, then solve any additional equations in a sequential fashion. Without
knowing beforehand that the pressure-velocity coupling is the strongest among all the
various flow variables, however, the extra computational effort spent in simultaneous
solution of these equations is unwarranted.
There are other approaches for solving the incompressible Navier-Stokes equa¬
tions, notably methods based on vorticity-streamfunction — or velocity-vorticity
(u — u) formulations, but pressure-based methods are easier, especially with regard to
boundary conditions and possible extension to 3-d domains. Furthermore, they have
demonstrated considerable robustness in computing incompressible flows. A broad
range of applications of pressure-based methods is demonstrated in [73].
1.4 Parallel Computing
General background of parallel computers and their application to the numeri¬
cal solution of partial differential equations is given in Hockney and Jesshope [40]

8
and Ortega and Voigt [58]. Fischer and Patera [23] gave a recent review of parallel
computing from the perspective of the fluid dynamics community. Their “indirect
cost,” the parallel run time, is of primary interest here. The “direct cost” of parallel
computers and their components is another matter entirely. For the iteration-based
numerical methods developed here, the parallel run time is the cost per iteration
multiplied by the number of iterations. The latter is affected by the characteristics of
the particular parallel computer used and the algorithms and implementations em¬
ployed. Parallel computers come in all shapes and sizes, and it is becoming virtually
impossible to give a thorough taxonomy. The background given here is limited to a
description of the type of computer used in this work.
1.4.1 Data-Parallelism and SIMP Computers
Single-instruction stream/multiple-data stream (SIMD) computers include the
connection machines manufactured by the Thinking Machines Corporation, the CM
and CM-2, and the MP-1, MP-2, and MP-3 computers produced by the MasPar Cor¬
poration. These are massively-parallel machines consisting of a front-end computer
and many processor/memory pairs, figuratively, the “back-end.” The back-end pro¬
cessors are connected to each other by a “data network.” The topology of the data
network is a major feature of distributed-memory parallel computers.
The schematic in Figure 1.2 gives the general idea of the SIMD layout. The
program executes on the serial front-end computer. The front-end triggers the syn¬
chronous execution of the “back-end” processors by sending “code blocks” simul¬
taneously to all processors. Actually, the code blocks are sent to an intermediate
“control processor(s).” The control processor broadcasts the instructions contained

9
in the code block, one at a time, to the computing processors. These “front-end-
to-processor” communications take time. This time is an overhead cost not present
when the program runs on a serial computer.
The operands of the instructions, the data, are distributed among the processors’
memories. Each processor operates on its own locally-stored data. The “data” in
grid-based numerical methods are the arrays, 2-d in this case, of dependent variables,
geometric quantities, and equation coefficients. Because there are usually plenty
of grid points and the same governing equations apply at each point, most CFD
algorithms contain many operations to be performed at every grid point. Thus this
“data-parallel” approach is very natural to most CFD algorithms.
Many operations may be done independently on each grid point, but there is cou¬
pling between grid points in physically-derived problems. The data network enters
the picture when an instruction involves another processor’s data. Such “interpro¬
cessor” communication is another overhead cost of solving the problem on a parallel
computer. For a given algorithm, the amount of interprocessor communication de¬
pends on the “data mapping,” which refers to the partitioning of the arrays and the
assignment of these “subgrids” to processors. For a given machine, the speed of the
interprocessor communication depends on the pattern of communication (random or
regular) and the distance between the processors (far away or nearest-neighbor).
The run time of a parallel program depends first on the amount of front-end and
parallel computation in the algorithm, and the speeds of the front-end and back¬
end for doing these computations. In the programs developed here, the front-end
computations are mainly the program control statements (IF blocks, DO loops, etc.).
The front-end work is not sped up by parallel processing. The parallel computations
are the useful work, and by design one hopes to have enough parallel computation

10
to amortize both the front-end computation and the interprocessor and front-end-to-
processor communication, which are the other factors that contribute to the parallel
run time.
From this brief description it should be clear that SIMD computers have four char¬
acteristic speeds: the computation speed of the processors, the communication speed
between processors, and the speed of the front-end-to-processor communication, i.e.
the speed that code blocks are transferred, and the speed of the front-end. These
machine characteristics are not under the control of the programmer. However, the
amount of computation and communication a program contains is determined by the
programmer because it depends on the algorithm selected and the algorithm’s imple¬
mentation (the choice of the data mapping, for example). Thus, the key to obtaining
good performance from SIMD computers is to pick a suitable algorithm, “matched”
in a sense to the architecture, and to develop an implementation which minimizes
and localizes the interprocessor communication. Then, if there is enough parallel
computation to amortize the serial content of the program and the communication
overheads, the speedup obtained will be nearly the number of processors. The actual
performance, because it depends on the computer, the algorithm, and the imple¬
mentation, must be determined by numerical experiment on a program-by-program
basis.
SIMD computers are restricted to exploiting data-parallelism, as opposed to the
parallelism of the tasks in an algorithm. The task-parallel approach is more com¬
monly used, for example, on the Cray C90 supercomputer. Multiple-instruction
stream/multiple-data stream (MIMD) computers, on the other hand, are composed of
more-or-less autonomous processor/memory pairs. Examples include the Intel series
of machines (iPSC/2, iPSC/860, and Paragon), workstation clusters, and the connec¬
tion machine CM-5. However, in CFD, the data-parallel approach is the prevalent

11
one even on MIMD computers. The front-end/back-end programming paradigm is
implemented by selecting one processor to initiate programs on the other processors,
accumulate global results, and enforce synchronization when necessary, a strategy
called single-program-multiple-data (SPMD) [23]. The CM-5 has a special “control
network” to provide automatic synchronization of the processor’s execution, so a
SIMD programming model can be supported as well as MIMD. SIMD is the manner
in which the CM-5 has been used in the present work. The advantage to using the
CM-5 in the SIMD mode is that the programmer does not have to explicitly specify
message-passing. This simplification saves effort and increases the effective speed of
communication because certain time-consuming protocols for the data transfer can
be eliminated.
1.4.2 Algorithms and Performance
The previous subsection discussed data-parallelism and SIMD computers, i.e.
what parallel computing means in the present context and how it is carried out
by SIMD-tvpe computers. To develop programs for SIMD computers requires one
to recognize that unlike serial computers, parallel computers are not black boxes. In
addition to the selection of an algorithm with ample data-parallelism, consideration
must be given to the implementation of the algorithm in specific ways in order to
achieve the desired benefits (speedups over serial computations).
The success of the choice of algorithm and the implementation on a particular
computer is judged by the “speedup” (S) and “efficiency” (E) of the program. The
communications mentioned above, front-end-to-processor and interprocessor, are es¬
sentially overhead costs associated with the SIMD computational model. They would
not be present if the algorithm were implemented on a serial computer, or if such
communications were infinitely fast. If the overhead cost was zero, a parallel program

12
executing on np processors would run np times faster than on a single processor, a
speedup of np. This idealized case would also have a parallel efficiency of 1. The
parallel efficiency E measures the actual speedup in comparison with the ideal.
One is also interested in how speedup, efficiency, and the parallel run time (Tp)
scale with problem size, and with the number of processors used. The objective in
using parallel computers is more than just obtaining a good speedup on a particular
problem size and a particular number of processors. For parallel CFD, the goals are
to either (1) reduce the time (the indirect cost [23]) to solve problems of a given
complexity, to satisfy the need for rapid turnaround times in design work, or (2)
increase the complexity of problems which can be solved in a fixed amount of time.
For the iteration-based numerical methods studied here, there are two considerations:
the cost per iteration, and the number of iterations, respectively, computational and
numerical factors. The total run time is the product of the two.
Gustafson [35] has presented fixed-size and scaled-size experiments whose results
describe how the cost per iteration scales on a particular machine. In the fixed-
size experiment, the efficiency is measured for a fixed problem size as processors are
added. The hope is that the run time is halved when the number of processors is
doubled. However, the run time obviously cannot be reduced indefinitely by adding
more processors because at some point the parallelism runs out—the limit to the
attainable speedup is the number of grid points. In the scaled-size experiment, the
problem size is increased along with the number of processors, to maintain a constant
local problem size for each of the parallel processors. Care must be taken to make
timings on a per iteration basis if the number of iterations to reach the end of the
computation increases with the problem size. The hope in such an experiment is that
the program will maintain a certain high level of parallel efficiency E. The ability

13
to maintain E in the scaled-size experiment indicates that the additional processors
increased the speedup in a one-for-one trade.
1.5 Pressure-Based Multigrid Methods
Multigrid methods are a potential route to both computationally and numerically
scalable programs. Their cost per iteration on parallel computers and convergence
rate is the subject of Chapters 4-5. For sufficiently smooth elliptic problems, the
convergence rate of multigrid methods is independent of the problem size—their op¬
eration count is 0(N). In practice, good convergence rates are maintained as the
problem size increases for Navier-Stokes problems, also, provided suitable multigrid
components—the smoother, restriction and prolongation procedures—and multigrid
techniques are employed. The standard V-cycle full-multigrid (FMG) algorithm has
an almost optimal operation count, 0(log2N) for Poisson equations, on parallel com¬
puters. Provided the multigrid algorithm is implemented efficiently and that the cost
per iteration scales well with the problem size and the number of processors, the
multigrid approach seems to be a promising way to exploit the increased computa¬
tional capabilities that parallel computers offer.
The pressure-based methods mentioned previously involve the solution of three
systems of linear algebraic equations, one each for the two velocity components
and one for the pressure, by standard iterative methods such as successive line-
underrelaxation (SLUR). Hence they inherit the convergence rate properties of these
solvers, i.e. as the problem size grows the convergence rate deteriorates. With the
single-grid techniques, therefore, it will be difficult to obtain reasonable turnaround
times when the problem size is increased into the target range for parallel com¬
puters. Multigrid techniques for accelerating the convergence of pressure-correction

14
methods should be pursued, and in fact they have been within the last five or so
years [70, 74, 80].
However, there are still many unsettled issues. The complexities affecting the
convergence rate of single-grid calculations carry over to the multigrid framework
and are compounded there by the coupling between the evolving solutions on multiple
grid levels, and by the particular “grid-scheduling” used.
Linear multigrid methods have been applied to accelerate the convergence rate for
the solution of the system of pressure or pressure-correction equations [4, 22, 42, 64,
94], However, the overall convergence rate does not significantly improve because the
velocity-pressure coupling is not addressed [4, 22]. Therefore the multigrid strategy
should be applied on the “outer loop,” with the role of the iterative relaxation method
played by the numerical methods described above, e.g. the projection method or the
pressure-correction method. Thus, the generic term “smoother” is prescribed because
it reflects the purpose of the solution of the coupled system of equations going on
inside the multigrid cycle—to smooth the residual so that an accurate coarse-grid
approximation of the fine-grid problem is possible. It is not true that a good solver,
one with a fast convergence rate on single-grid computations, is necessarily a good
smoother of the residual. It is therefore of interest to assess pressure-correction meth¬
ods as potential multigrid smoothers. See Shyy and Sun [74] for more information
on the staggered-grid implementation of multigrid methods, and some encouraging
results.
Staggered grids require special techniques [21, 74] for the transfer of solutions and
residuals between grid levels, since the positions of the variables on different levels
do not correspond. However, they alleviate the “checkerboard” pressure stability
problem [50], and since techniques have already been established [74], there is no

15
reason not to go this route, especially when cartesian grids are used as in the present
work.
Vanka [89] has proposed a new numerical method as a smoother for multigrid
computations, one which has inferior convergence properties as a single-grid method
but apparently yields an effective multigrid method. A staggered-grid finite-volume
discretization is employed. In Vanka’s smoother, the velocity components and pres¬
sure of each control volume are updated simultaneously, so it is a coupled approach,
but the coupling between control volumes is not taken into account, so the calcu¬
lation of new velocities and pressures is explicit. This method is sometimes called
the “locally-coupled explicit” or “block-explicit” pressure-based method. The control
volumes are visited in lexicographic order in the original method which is therefore
aptly called BGS (block Gauss-Seidel). Line-variants have been developed to couple
the flow variables in neighboring control volumes along lines (see [80, 87]).
Linden et al. [50] gave a brief survey of multigrid methods for the steady-state in¬
compressible Navier-Stokes equations. They argue without analysis that BGS should
be preferred over the pressure-correction type methods since the strong local cou¬
pling is likely to have better success smoothing the residual locally. On the other
hand, Sivaloganathan and Shaw [71, 70] have found good smoothing properties for
the pressure-correction approach, although the analysis was simplified considerably.
Sockol [80] has compared the point and line-variants of BGS with the pressure-
correction methods on serial computers, using model problems with different physical
characteristics. SIMPLE and BGS emerge as favorites in terms of robustness with
BGS preferred due to a lower cost per iteration. This preference may or may not
carry over to SIMD parallel computers (see Chapter 4 for comparison). Interesting
applications of multigrid methods to incompressible Navier-Stokes flow problems can
be found in [12, 28, 48, 54].

16
In terms of parallel implementations there are far fewer results although this
field is rapidly growing. Simon [77] gives a recent cross-section of parallel CFD
results. Parallel multigrid methods, not only in CFD but as a general technique
for partial differential equations, have received much attention due to their desirable
0(N) operation count on Poisson equations. However, it is apparently difficult to find
or design parallel computers with ideal communication networks for multigrid [13].
Consequently implementations have been pursued on a variety of machines to see
what performance can be obtained with the present generation of parallel machines,
and to identify and understand the basic issues. Dendy et al.[18] have recently
described a multigrid method on the CM-2. However, to accommodate the data-
parallel programming model they had to dimension their array data on every grid level
to the dimension extents of the finest grid array data. This approach is very wasteful
of storage. Consequently the size of problems which can be solved is greatly reduced.
Recently an improved release of the compiler has enabled the storage problem to be
circumvented with some programming diligence (see Chapter 5). The implementation
developed in this work is one of the first to take advantage of the new compiler feature.
In addition to parallel implementations of serial multigrid algorithms, several
novel multigrid methods have been proposed for SIMD computers [25, 26, 33]. Some
of the algorithms are instrinsically parallel [25, 26] or have increased parallelism
because they use multiple coarse grids, for example [33]. These efforts and others
have been recently reviewed [14. 53, 92]. Most of the new ideas have not been
developed yet for solving the incompressible Navier-Stokes equations.
One of the most prominent concerns addressed in the literature regarding parallel
implementations of serial multigrid methods is the coarse grids. When the number
of grid points is smaller than the number of processors the parallelism is reduced
to the number of grid points. This loss of parallelism may significantly affect the

17
parallel efficiency. One of the routes around the problem is to use multiple coarse
grids [59, 33, 79]. Another is to alter the grid-scheduling to avoid coarse grids. This
approach can lead to computationally scalable implementations [34, 49] but may
sacrifice the convergence rate. “Agglomeration” is an efficiency-increasing technique
used in MIMD multigrid programs which refers to the technique of duplicating the
coarse grid problem in each processor so that computation proceeds independently
(and redundantly). Such an approach can also be scalable [51]. However, most atten¬
tion so far has focused on parallel implementations of serial multigrid algorithms, in
particular on assessing the importance of the coarse-grid smoothing problem for dif¬
ferent machines and on developing techniques to minimize the impact on the parallel
efficiency.
1.6 Description of the Research
The dissertation is organized as follows. Chapter 2 discusses the role of the mass
conservation in the numerical consistency of the single-grid SIMPLE method for open
boundary problems, and explains the relevance of this issue to the convergence rate.
In Chapter 3 the single-grid pressure-correction method is implemented on the MP-1,
CM-2, and CM-5 computers and its performance is analyzed. High parallel efficien¬
cies are obtained at speeds and problem sizes well beyond the current performance of
such algorithms on traditional vector supercomputers. Chapter 4 develops a multigrid
numerical method for the purpose of accelerating the single-grid pressure-correction
method and maintaining the accelerated convergence property independent of the
problem size. The multigrid smoother, the intergrid transfer operators, and the sta¬
bilization strategy for Navier-Stokes computations are discussed. Chapter 5 describes
the actual implementation of the multigrid algorithm on the CM-5, its convergence
rate, and its parallel run time and scalability. The convergence rate depends on the

18
flow problem and the coarse-grid discretization, among other factors. These factors
are considered in the context of the “full-multigrid” (FMG) starting procedure by
which the initial guess on the fine grid is obtained. The cost of the FMG procedure
is a concern for parallel computation [88], and this issue is also addressed. The
results indicate that the FMG procedure may influence the asymptotic convergence
rate and the stability of the multigrid iterations. Concluding remarks in each chapter
summarize the progress made and suggest avenues for further study.
Figure 1.1. Staggered-grid layout of dependent variables, for a small but complete
domain. Boundary values involved in the computation are shown. Representative u,
v, and pressure boundary control volumes are shaded.

[Figure 1.2 graphic. The front end (CM-2 and MP-1) / partition manager (CM-5) runs
serial code and control code and holds scalar data; it sends short blocks of parallel
code to the sequencer (CM-2), the array control unit (MP-1), or the multiple SPARC
nodes (CM-5), which drive the processing elements. Array data are partitioned among
the processor memories. Interprocessor communication network: hypercube + “NEWS”
(CM-2); 3-stage crossbar + “X-Net” (MP-1); fat tree (CM-5).]
Figure 1.2. Layout of the MP-1, CM-2, and CM-5 SIMD computers.

CHAPTER 2
PRESSURE-CORRECTION METHODS
2.1 Finite-Volume Discretization on Staggered Grids
The formulation of the numerical method used in this work begins with the
integration of the governing equations, Eq. 1.1-1.3, over each of the control volumes in the
computational domain. Figure 1.1 shows a model computational domain with u, v,
and p (cell-centered) control volumes shaded. The continuity equation is integrated
over the p control volumes.
Consider the discretization of the u-momentum equation for the control volume
shown in Figure 2.1, whose dimensions are Δx and Δy. The v control volumes are
treated in exactly the same way, except rotated 90°. Integration of Eq. 1.2 over the shaded
region is interpreted as follows for each of the terms:
∬ ∂(ρu)/∂t dx dy = (∂(ρu_P)/∂t) ΔxΔy,   (2.1)

∬ ∂(ρu²)/∂x dx dy = (ρu_e² - ρu_w²) Δy,   (2.2)

∬ ∂(ρuv)/∂y dx dy = (ρu_n v_n - ρu_s v_s) Δx,   (2.3)

∬ ∂p/∂x dx dy = (p_e - p_w) Δy,   (2.4)

∬ ∂/∂x(μ ∂u/∂x) dx dy = (μ ∂u/∂x|_e - μ ∂u/∂x|_w) Δy,   (2.5)

∬ ∂/∂y(μ ∂u/∂y) dx dy = (μ ∂u/∂y|_n - μ ∂u/∂y|_s) Δx.   (2.6)
The lowercase subscripts e, w, n, s indicate evaluation on the control volume faces.
By convention and the mean-value theorem, these are at the midpoint of the faces.
The subscript P in Eq. 2.1 indicates evaluation at the center of the control volume.
Because of the staggered grid, the required pressure values in Eq. 2.4 are already
located on the u control volume faces. The pressure-gradient term is effectively a
second-order central-difference approximation. With colocated grids, however, the
control-volume face pressures are obtained by averaging the nearby pressures. This
averaging results in the pressure at the cell center dropping out of the expression
for the pressure gradient. The central-difference in Eq. 2.4 is effectively taken over
a distance 2Δx on colocated grids. Thus staggered Cartesian grids provide a more
accurate approximation of the pressure-gradient term since the difference stencil is
smaller.
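A one-line check of this claim, using uppercase subscripts for cell-center values as
above: on a colocated grid the face pressures are averages of cell-center values,

p_e = (p_P + p_E)/2,   p_w = (p_W + p_P)/2   ⟹   (p_w - p_e)Δy = ((p_W - p_E)/2) Δy,

so p_P cancels and the difference is effectively taken between pressures 2Δx apart.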
The next step is to approximate the terms which involve values at the control
volume faces. In Eq. 2.2, one of the u_e and one of the u_w factors is replaced by an average
of neighboring values,

(ρu_e² - ρu_w²) Δy = (ρ ((u_E + u_P)/2) u_e - ρ ((u_P + u_W)/2) u_w) Δy,   (2.7)

and in Eq. 2.3, v_n and v_s are obtained by averaging nearby values,

(ρu_n v_n - ρu_s v_s) Δx = (ρ ((v_ne + v_nw)/2) u_n - ρ ((v_se + v_sw)/2) u_s) Δx.   (2.8)
The remaining face velocities in the convection terms, u_n, u_s, u_e, and u_w, are
expressed as a certain combination of the nearby u values—which u values are involved
and what weighting they receive is prescribed by the convection scheme. Some
popular recirculating-flow convection schemes are described in [73, 75].
The control-volume face derivatives in the diffusion terms are evaluated by central
differences,

(μ ∂u/∂x|_e - μ ∂u/∂x|_w) Δy = (μ (u_E - u_P)/Δx - μ (u_P - u_W)/Δx) Δy,   (2.9)

(μ ∂u/∂y|_n - μ ∂u/∂y|_s) Δx = (μ (u_N - u_P)/Δy - μ (u_P - u_S)/Δy) Δx.   (2.10)
The unsteady term in Eq. 2.1 is approximated by a backward Euler scheme. All the
terms are evaluated at the “new” time level, i.e. implicitly.
Thus, the discretized momentum equations for each control volume can be put
into the following general form,

a_P u_P = a_E u_E + a_W u_W + a_N u_N + a_S u_S + b,   (2.11)

where b = (p_w - p_e)Δy + ρ u_P^n ΔxΔy/Δt, the superscript n indicating the previous
time step. The coefficients a_E, a_S, etc. are composed of the terms which multiply
u_E, u_S, etc. in the discretized convection and diffusion terms.
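For concreteness, the following minimal sketch (Python, not the dissertation's code;
uniform grid and central differencing of the convection terms assumed) forms the
Eq. 2.11 coefficients for one u control volume from its face values:

def u_momentum_coeffs(ue, uw, vn, vs, rho, mu, dx, dy, dt):
    # diffusive conductances from the central differences of Eq. 2.9-2.10
    De = Dw = mu * dy / dx
    Dn = Ds = mu * dx / dy
    # convective mass fluxes through the four faces
    Fe, Fw = rho * ue * dy, rho * uw * dy
    Fn, Fs = rho * vn * dx, rho * vs * dx
    # central differencing: each face value is the mean of its two neighbors
    aE = De - 0.5 * Fe
    aW = Dw + 0.5 * Fw
    aN = Dn - 0.5 * Fn
    aS = Ds + 0.5 * Fs
    # implicit (backward Euler) unsteady term enters the diagonal
    aP = aE + aW + aN + aS + (Fe - Fw + Fn - Fs) + rho * dx * dy / dt
    return aP, aE, aW, aN, aS

The source term b then carries the pressure-gradient and old-time-level contributions
as in Eq. 2.11.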
The continuity equation is integrated over a pressure control volume,

∬ (∂(ρu)/∂x + ∂(ρv)/∂y) dx dy = ρ(u_e - u_w)Δy + ρ(v_n - v_s)Δx = 0.   (2.12)
Again the staggered grid is an advantage because the normal velocity components on
each control volume face are already in position—there is no need for interpolation.
2.2 The SIMPLE Method
One SIMPLE iteration takes initial velocity and pressure fields (u*, v*, p*) and
computes new guesses (u, v, p). The intermediate values are denoted with a tilde,
(ũ, ṽ, p̃). In the algorithm below, a_P^u(u*, v*), for example, means that the a_P
coefficient in the u-momentum equation depends on u* and v*. The parameters ν_u, ν_v,
and ν_c are the numbers of “inner” iterations to be taken for the u, v, and continuity
equations, respectively. This notation will be clarified by the following discussion. The
inner iteration count is indicated by a superscript enclosed in parentheses. Finally,
ω_uv and ω_c are the relaxation factors for the momentum and continuity equations.
SIMPLE (u*, v*, p*; ν_u, ν_v, ν_c, ω_uv, ω_c)

Compute u coefficients a_k^u(u*, v*) (k = P, E, W, N, S) and source term b^u(u*, p*)
for each discrete u-momentum equation:

    (a_P^u/ω_uv) u_P = a_N^u u_N + a_S^u u_S + a_E^u u_E + a_W^u u_W + b^u
                       + (1 - ω_uv)(a_P^u/ω_uv) u_P*

Do ν_u iterations to obtain an approximate solution for ũ,
starting with u* as the initial guess:

    u^(n) = G^u u^(n-1) + f^u
    ũ = u^(ν_u)

Compute v coefficients a_k^v(ũ, v*) (k = P, E, W, N, S) and source term b^v(v*, p*)
for each discrete v-momentum equation:

    (a_P^v/ω_uv) v_P = a_N^v v_N + a_S^v v_S + a_E^v v_E + a_W^v v_W + b^v
                       + (1 - ω_uv)(a_P^v/ω_uv) v_P*

Do ν_v iterations to obtain an approximate solution for ṽ,
starting with v* as the initial guess:

    v^(n) = G^v v^(n-1) + f^v
    ṽ = v^(ν_v)

Compute p' coefficients a_k^c (k = P, E, W, N, S) and source term b^c(ũ, ṽ)
for each discrete p' equation:

    a_P^c p'_P = a_N^c p'_N + a_S^c p'_S + a_E^c p'_E + a_W^c p'_W + b^c

Do ν_c iterations to obtain an approximate solution for p',
starting with zero as the initial guess:

    p'^(n) = G^c p'^(n-1) + f^c

Correct ũ, ṽ, and p* at every interior grid point:

    u_P = ũ_P + (p'_w - p'_e)Δy / (a_P^u)_P
    v_P = ṽ_P + (p'_s - p'_n)Δx / (a_P^v)_P
    p_P = p*_P + ω_c p'_P
The algorithm is not as complicated as it looks. The important point to note is
that the major tasks to be done are the computing of coefficients and the solving of
the systems of equations. The symbol G indicates the iteration matrix of whatever
type of relaxation is used in these inner iterations (SLUR in this case), and f is the
corresponding source term.
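As a concrete illustration of what one set of inner iterations does, the sketch below
(Python with numpy standing in for the data-parallel Fortran; point-Jacobi is used
for simplicity in place of SLUR) applies ν sweeps of x^(n) = G x^(n-1) + f to the
five-point system of Eq. 2.11 without ever forming G explicitly:

import numpy as np

def jacobi_inner(aP, aE, aW, aN, aS, b, x0, sweeps):
    # rows of x run south->north and columns west->east; boundary values
    # in x0 are held fixed and only interior points are updated
    x = x0.copy()
    for _ in range(sweeps):
        xn = (aE * np.roll(x, -1, axis=1) + aW * np.roll(x, 1, axis=1)
              + aN * np.roll(x, -1, axis=0) + aS * np.roll(x, 1, axis=0)
              + b) / aP
        x[1:-1, 1:-1] = xn[1:-1, 1:-1]
    return x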
In the SIMPLE pressure-correction method [61], the averages in Eq. 2.7 and 2.8
are lagged in order to linearize the resulting algebraic equations. The governing
equations are solved sequentially. First, the u momentum equation coefficients are
computed and an updated u field is computed by solving the system of linear
algebraic equations. The pressures in Eq. 2.4 are lagged. The v momentum equation is
solved next to update v. The continuity equation, recast in terms of pressure
corrections, is then set up and solved. These pressure corrections are coupled to velocity
corrections. Together they are designed to correct the velocity field so that it satisfies
the continuity constraint, while simultaneously correcting the pressure field so that
momentum conservation is maintained.
The relationship between the velocity and pressure corrections is derived from
the momentum equation, as described in the next section. The resulting system
of equations is fully coupled, as one might expect knowing the elliptic nature of
pressure in incompressible fluids, and is therefore expensive to solve. However, if the
resulting system of pressure-correction equations were solved exactly, the divergence-
free constraint and the momentum equations (with old values of u and v present in
the nonlinear convection terms) would be satisfied. This approach would constitute
an implicit method of time integration for the linearized equations. The time-step
size would have to be limited to avoid stability problems caused by the linearization.
To reduce the computational cost, the SIMPLE prescription is to use an approximate
relationship between the velocity and pressure corrections (hence the label
“semi-implicit”). Variations on the original SIMPLE approximation have shown
better convergence rates for simple flow problems, but in discretizations on curvilinear
grids and other problems with significant contributions from source terms, the
performance is no better than the original SIMPLE method (see the results in [4]).
The goal of satisfying the divergence-free constraint can still be attained, if the
system of pressure-correction equations is converged to strict tolerances, because the
discrete continuity equations are still being solved. But satisfaction of the momentum
equations cannot be maintained with the approximate relationship. Consequently it
is no longer desirable to solve the p'-system of equations to strict tolerances. It¬
erations are necessary to find the right velocities and pressures which satisfy all
three equations. Furthermore, since the equation coefficients are changing from one
iteration to the next, it is pointless to solve the momentum equations to strict tol¬
erances. In practice, only a few iterations of a standard scheme such as successive
line-underrelaxation (SLUR) are performed.
The single “outer” iteration outlined above is repeated many times, with
underrelaxation to prevent the iterations from diverging. In this sense a two-level iterative
procedure is being employed. In the outer iterations, the momentum and pressure-
correction equations are iteratively updated based on the linearized coefficients and
sources, and inner iterations are applied to partially solve the systems of linear
algebraic equations.
The fact that only a few inner iterations are taken on each system of equations
suggests that the asymptotic convergence rate of the iterative solver, which is the
usual means of comparison between solvers, does not necessarily dictate the
convergence rate of the outer iterative process. Braaten and Shyy [4] have found that the
convergence rate of the outer iterations actually decreases when the pressure-correction
equation is solved to a much stricter tolerance than the momentum equations. They
concluded that the balance between the equations is important. Because u, v, and
p' are segregated, the overall convergence rate is strongly dependent on the
particular flow problem, the grid distribution and quality, and the choice of relaxation
parameters.
In contrast to projection methods, which are two-step but treat the convection
terms explicitly (or more recently by solving a Riemann problem [2]) and are therefore
restricted from taking too large a time-step, the pressure-correction approach is fully
implicit with no time-step limitation, but many iterations may be necessary. The
projection methods are formalized as time-integration techniques for semi-discrete
equations. SIMPLE is an iterative method for solving the discretized Navier-Stokes
system of coupled nonlinear algebraic equations. But the details given above should
make it clear that these techniques bear strong similarities—specifically, a single
SIMPLE iteration would be a projection method if the system of pressure-correction
equations were solved to strict tolerances at each iteration. It would be interesting to
do some numerical comparisons between projection methods and pressure-correction
methods to further clarify the similarity.
2.3 Discrete Formulation of the Pressure-Correction Equation
The discrete pressure-correction equation is obtained from the discrete momentum
and continuity equations as follows. The velocity field which has been newly obtained
by solving the momentum equations was denoted by (ü, v) earlier. The pressure field
after the momentum equations are solved still has the initial value p*. So ft, i, and
p* satisfy the tt-momentum equation
apüp = apiiE + awüw + cinÜm + asüs + [p*w — p*)Ay, (2.13)
and the corresponding v-momentum equation. The corrected (continuity-satisfying)
velocity field (u, v) satisfies the u-momentum equation with the corrected pressure
field p,

a_P u_P = a_E u_E + a_W u_W + a_N u_N + a_S u_S + (p_w - p_e)Δy,   (2.14)
and likewise for the v-momentum equation. Additive corrections are assumed, i.e.

u = ũ + u',   (2.15)
v = ṽ + v',   (2.16)
p = p* + p'.   (2.17)
Subtracting Eq. 2.13 from Eq. 2.14 gives the desired relationship between the pressure
and u corrections,

a_P u'_P = Σ_{k=E,W,N,S} a_k u'_k + (p'_w - p'_e)Δy,   (2.18)

with a similar expression for the v corrections.
If Eq. 2.18 is used as is, then the nearby velocity corrections in the summation need
to be replaced by similar expressions involving pressure corrections. This requirement
brings in more velocity corrections and more pressure corrections, and so on, leading
to an equation which involves the pressure corrections at every grid point. The
resulting system of equations would be expensive to solve. Thus, the summation
term is dropped in order to obtain a compact expression for the velocity correction in
terms of pressure corrections. At convergence, the pressure corrections (and therefore
the velocity corrections) go to zero, so the precise form of the approximate pressure-
velocity correction relationship does not figure in the final converged solution.
The discrete form of the pressure-correction equation follows by first substituting
the simplified version of Eq. 2.18 into Eq. 2.15,

u_P = ũ_P + u'_P = ũ_P + (Δy/a_P)(p'_w - p'_e),   (2.19)
and then substituting this into the continuity equation Eq. 2.12 (with an analogous
formula for v_P). The result is

(ρΔy²/a_P(u_e))(p'_P - p'_E) - (ρΔy²/a_P(u_w))(p'_W - p'_P)
  + (ρΔx²/a_P(v_n))(p'_P - p'_N) - (ρΔx²/a_P(v_s))(p'_S - p'_P) = b,   (2.20)

where the source term b is

b = ρũ_w Δy - ρũ_e Δy + ρṽ_s Δx - ρṽ_n Δx.   (2.21)
Recall that Eq. 2.20 and Eq. 2.21 are written for the pressure control volumes, so that
there is some interpretation required. The term a_P(u_e) in Eq. 2.20 is the appropriate
a_P for the discretized u-momentum equation, Eq. 2.13. In other words, u_P in Eq. 2.13
is actually u_e, u_w, u_n, or u_s in Eq. 2.20 and 2.21, relative to the pressure control
volumes on the staggered grid. Eq. 2.20 can be rearranged into the same general form
as Eq. 2.11. From Eq. 2.21, it is apparent that the right-hand side term is the net
mass flux entering the control volume, which should be zero in incompressible flow.
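A minimal sketch (Python; illustrative only, interior control volumes of a uniform
grid) of how the Eq. 2.20-2.21 coefficients are formed, with the momentum-equation
a_P values at the faces passed in:

def pressure_correction_coeffs(ue, uw, vn, vs, aPu_e, aPu_w, aPv_n, aPv_s,
                               rho, dx, dy):
    # ue, uw, vn, vs are the tilde (post-momentum-solve) face velocities
    aE = rho * dy**2 / aPu_e
    aW = rho * dy**2 / aPu_w
    aN = rho * dx**2 / aPv_n
    aS = rho * dx**2 / aPv_s
    aP = aE + aW + aN + aS                           # Eq. 2.23, interior cell
    b = rho * (uw - ue) * dy + rho * (vs - vn) * dx  # net mass source, Eq. 2.21
    return aP, aE, aW, aN, aS, b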
In the formulation of the pressure-correction equation for boundary control volumes,
one makes use of the fact that the normal velocity components on the boundaries
are known from either Dirichlet or Neumann boundary conditions, so no velocity
correction is required there. Consequently, the formulation of Eq. 2.20 for boundary
control volumes does not require any prescription of boundary p' values [60] when
velocity boundary conditions are prescribed. Without the summation from Eq. 2.18,
it is apparent that a zero velocity correction for the outflow boundary u-velocity
component is obtained when p'_w = p'_e—in effect, a Neumann boundary condition on
pressure is implied. This boundary condition is appropriate for an incompressible
fluid because it is physically consistent with the governing equations, in which only
the pressure gradient appears. There is a unique pressure gradient but the level is
adjustable by any constant amount. If it happens that there is a pressure specified
on the boundary, for example by Eq. 1.4, then the correction there will be zero,
providing a boundary condition for Eq. 2.20. Thus, it seems that there are no concerns
over the specification of boundary conditions for the p' equations.
2.4 Well-Posedness of the Pressure-Correction Equation
2.4.1 Analysis
To better understand the characteristics of the pressure-correction step in the
SIMPLE procedure, consider a model 3x3 computational domain, so that 9 algebraic
equations for the pressure corrections are obtained. Number the control volumes as
shown in Figure 2.3. Then the system of p' equations can be written
[  a_P^1  -a_E^1     0     -a_N^1     0        0        0        0        0    ]
[ -a_W^2   a_P^2  -a_E^2     0     -a_N^2      0        0        0        0    ]
[    0    -a_W^3   a_P^3     0        0     -a_N^3      0        0        0    ]
[ -a_S^4     0       0      a_P^4  -a_E^4      0     -a_N^4      0        0    ]
[    0    -a_S^5     0     -a_W^5   a_P^5   -a_E^5      0     -a_N^5      0    ]
[    0       0    -a_S^6     0     -a_W^6    a_P^6      0        0     -a_N^6  ]
[    0       0       0     -a_S^7     0        0      a_P^7   -a_E^7      0    ]
[    0       0       0        0    -a_S^8      0     -a_W^8    a_P^8   -a_E^8  ]
[    0       0       0        0       0     -a_S^9      0     -a_W^9    a_P^9  ]

acting on the vector (p'_1, p'_2, ..., p'_9)^T, with right-hand side entries

ρ[(u_e^i - u_w^i)Δy + (v_n^i - v_s^i)Δx],   i = 1, ..., 9,   (2.22)
where the superscript designates the cell location and the subscript designates the
coefficient linking the point in question, P, and the neighboring node. The right-hand
side velocities are understood to be tilde quantities as in Eq. 2.21.
In finite-volume discretizations, fluxes are estimated at the control volume faces
which are common to adjacent control volumes, so if the governing equations are
cast in conservation law form, as they are here, the discrete efflux of any quantity
out of one control volume is guaranteed to be identical to the influx into its neighbor.
There is no possibility of internal sources or sinks. In fact this is what makes finite-
volume discretizations preferable to finite-difference discretizations. The following
relationships, using control volume 5 in Figure 2.3 as an example, follow from Eq. 2.20
and the internal consistency of finite-volume discretizations:
a_P^5 = a_E^5 + a_W^5 + a_N^5 + a_S^5,   (2.23)

a_W^5 = a_E^4,   a_E^5 = a_W^6,   a_S^5 = a_N^2,   a_N^5 = a_S^8,   (2.24)

u_w^5 = u_e^4,   u_e^5 = u_w^6,   v_s^5 = v_n^2,   v_n^5 = v_s^8.   (2.25)
Eq. 2.23 states that the coefficient matrix is pentadiagonal and diagonally dominant
for the interior control volumes. Furthermore, when the natural boundary condition
(zero velocity correction) is applied, the appropriate term in Eq. 2.20 for the boundary
under consideration does not appear, and therefore the pressure-correction equations
for the boundary control volumes also satisfy Eq. 2.23. If a pressure boundary
condition is applied so that the corresponding pressure correction is zero, then one would
set p'_E = 0 in Eq. 2.20, for example, which would give a_W + a_N + a_S < a_P. Thus,
either way, the entire coefficient matrix in Eq. 2.22 is diagonally dominant. However,
with the natural prescription for boundary treatment, no diagonal term exceeds the
sum of its off-diagonal terms.
Thus, the system of equations Eq. 2.22 is linearly dependent with the natural
(velocity) boundary conditions, which can be verified by adding the 9 equations
above. Because of Eq. 2.23 and Eq. 2.24 all terms on the left-hand side of Eq. 2.22
identically cancel one another. At all interior control volume interfaces, the right-
hand side terms identically cancel due to Eq. 2.25, and the remaining source terms
are simply the boundary mass fluxes. This cancellation is equivalent to a discrete
statement of the divergence theorem

∫_Ω ∇·u dΩ = ∫_∂Ω u·n d(∂Ω),   (2.26)
where Ω is the domain under consideration and n is the unit vector in the direction
normal to its boundary ∂Ω.
Due to the linear dependence of the left-hand side of Eq. 2.22, the boundary mass
fluxes must also sum to zero in order for the system of equations to be consistent.
No solution exists if the linearly dependent system of equations is inconsistent. The
situation can be likened to a steady-state heat conduction problem with source terms
and adiabatic boundaries. Clearly, a steady-state solution only exists if the sum of
the source terms is zero. If there is a net heat source, then the temperature inside
the domain will simply rise without bound if an iterative solution strategy (quasi
time-marching) is used. Likewise, the net mass source in flow problems with open
boundaries must sum to zero for the pressure-correction equation to have a solution.
In other words, global mass conservation is required in discrete form in order for a
solution to exist. The interesting point to note is that during the course of SIMPLE
iterations, when the pressure-correction equation is executed, the velocity field does
not usually conserve mass globally in flow problems with open boundaries, unless
explicit measures are taken to enforce global mass conservation. The purpose of solving
the pressure-correction equations is to drive the local mass sources to zero by suitable
velocity corrections. But the pressure-correction equations which are supposed to
accomplish this purpose do not have a solution unless the net mass source is already
zero. For domains with closed boundaries, global mass conservation is obviously not
an issue.
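The solvability condition is easy to verify numerically. The sketch below
(illustrative only, not from the dissertation) assembles the p'-matrix for the 3 x 3
domain of Figure 2.3 with uniform unit coefficients and checks that a solution can
exist only when the right-hand sides, the local mass sources, sum to zero:

import numpy as np

n = 3
A = np.zeros((n * n, n * n))
for i in range(n * n):
    r, c = divmod(i, n)
    for dr, dc in ((0, 1), (0, -1), (1, 0), (-1, 0)):
        rr, cc = r + dr, c + dc
        if 0 <= rr < n and 0 <= cc < n:      # natural (velocity) BCs: the
            A[i, rr * n + cc] = -1.0         # boundary terms simply drop out
            A[i, i] += 1.0                   # Eq. 2.23: a_P = sum of neighbors

ones = np.ones(n * n)
print(np.allclose(ones @ A, 0.0))            # True: the rows are dependent
b = np.random.rand(n * n)
print(abs(ones @ b) < 1e-12)                 # almost surely False: inconsistent
print(abs(ones @ (b - b.mean())) < 1e-12)    # True once the net source is removed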
Furthermore, this problem does not only show up when the initial guess is bad.
In the backward-facing step flow discussed below, the initial guess is zero everywhere
except for inflow, which obviously is the worst case as far as a net mass source is
concerned (all inflow and no outflow). But even if one starts with a mass-conserving
initial guess, during the course of iterations the outflow velocity boundary condition
which is necessary to solve the momentum equations will reset the outflow so that
the global mass-conservation constraint is violated.
2.4.2 Verification by Numerical Experiments
Support for the preceding discussion is provided by numerical simulation of two
model problems, a lid-driven cavity flow and a backward-facing step flow. The
configurations are shown along with other relevant data in Figure 2.2.
Figure 2.4 shows the outer-loop convergence paths for the lid-driven cavity flow
and the backward-facing step flow, both at Re = 100. The quantities plotted in
Figure 2.4 are the log10 of the global residuals for each governing equation, obtained
by summing up the local residuals, each of which is obtained by subtracting the
left-hand side of the discretized equations from the right-hand side. For the cavity
flow there are no mass fluxes across the boundary so, as mentioned earlier, the global
mass conservation condition is always satisfied when the algorithm reaches the point
of solving the system of p'-equations. The residuals have dropped to 10^-7 after 150
iterations, which is very rapid convergence, indicating that good pressure and velocity
corrections are being obtained.
In the backward-facing step flow, however, the flowfield is very slow to develop
because no global mass conservation measure is enforced. During the course of
iterations, the mass flux into the domain from the left is not matched by an equal flux
through the outflow boundary, and consequently the system of pressure-correction
equations which is supposed to produce a continuity-satisfying velocity field does not
have a solution. Correspondingly one observes that the outer-loop convergence rate
is about 10 times worse than for the cavity flow.
Also, note that the momentum convergence path of the backward-facing step flow
in Figure 2.4 tends to follow the continuity equation, indicating that the pressure and
velocity fields are strongly coupled. The present flow problem bears some similarity to
a fully-developed channel flow, in which the streamwise pressure-gradient and cross-
stream viscous diffusion are balanced, so the observation that pressure and velocity
are strongly coupled is intuitively correct. Thus, the convergence path is controlled
by the development of the pressure field. The slow convergence rate problem is due
to the inconsistency of the system of pressure-correction equations.
The inner-loop convergence path (the SLUR iterations) for the p'-system of
equations must be examined to determine the manner in which the inner-loop
inconsistency leads to poor outer-loop convergence rates. Table 2.1 shows the leading
eigenvalues of the successive line-underrelaxation iteration matrices for the p'-system of
equations at an intermediate iteration, for which the outer-loop residuals had dropped
to approximately 10^-2.
Largest 3 eigenvalues    Cavity Flow    Back-Step Flow
       λ_1                  1.0             1.0
       λ_2                  0.956           0.996
       λ_3                  0.951           0.984

Table 2.1. Largest eigenvalues of iteration matrices during an intermediate iteration,
applying the successive line-underrelaxation iteration scheme to the p'-system of
equations.
In both model problems the spectral radius is 1.0 because the p'-system of
equations is linearly dependent. The next largest eigenvalue is smaller in the cavity flow
computation than in the step flow computation, which means a faster asymptotic
convergence rate. However, the difference between 0.996 and 0.956 is not large enough
to produce the significant difference observed in the outer convergence path.
Figure 2.5 shows the inner-loop residuals of the SLUR procedure during an
intermediate iteration. The two momentum equations are well-conditioned and converge
to a solution within 4 iterations. In Figure 2.5 for the cavity flow case, the p'-equation
converges to zero, although this happens at a slower rate than for the two momentum
equations because of the diffusive nature of the equation. In Figure 2.5 for the back-
step flow, the inner-loop residual stalls at a nonzero value, which is in fact the
initial level of inconsistency in the system of equations, i.e. the global mass deficit.
Given that the system of p'-equations which is being solved does not satisfy the
global continuity constraint, however, the significance or utility of the p'-field that
has been obtained is unknown.
In practice, the overall procedure may still be able to lead to a converged
solution, as in the present case. It appears that the outflow extrapolation procedure,
a zero-gradient treatment utilized here, can help induce the overall computation to
converge to the right solution [72]. Obviously, such a lack of satisfaction of global
mass conservation is not desirable in view of the slow convergence rate.
Further study suggests that the iterative solution to the inconsistent system of
p'-equations converges on a unique pressure gradient, i.e. the difference between p'
values at any two points tends to a constant value, even though the p'-field does not
in general satisfy any of the equations in the system. This relationship is shown in
Figure 2.6, in which the convergence of the difference in p' between the lower-left and
upper-right locations in the domain of the cavity and backward-facing step flows is
plotted. Also shown is the value of p' at the lower-left corner of the domain. For the
cavity flow, there is a solution to the system of p'-equations, and it is obtained by
the SLUR technique in about 10 iterations. Thus all the pressure corrections and the
differences between them tend towards constant values. In the backward-facing step
flow, however, the individual pressure corrections increase linearly with the number
of iterations, symptomatic of the inconsistency in the system of equations. The
differences between p' values approach a constant, however. The rate at which this
unique pressure-gradient field is obtained depends on the eigenvalues of the iteration
matrix.
To resolve the inconsistency problem in the p'-system of equations and thereby
improve the outer-loop convergence rate in the backward-facing step flow, global mass
conservation has been explicitly enforced during the sequential solution procedure.
The procedure used is to compute the global mass deficit and then add a constant
value to the outflow boundary u-velocities to restore global mass conservation.
Alternatively, corrections can be applied at every streamwise location by considering
control volumes whose boundaries are the inflow plane, the top and bottom walls
of the channel, and the x = constant line at the specified streamwise location. The
artificially-imposed convection has the effect of speeding up the development of the
pressure field, whose normal development is diffusion-dominated. It is interesting to
note that this physically-motivated approach is in essence an acceleration of
convergence of the line-iterative method via the technique called additive correction [45, 69].
The strategy is to adjust the residual on the current line to zero by adding a
constant to all the unknowns in the line. This procedure is done for every line, for every
iteration, and generally produces improvement in the SLUR solution of a system of
equations. Kelkar and Patankar [45] have gone one step further by applying additive
corrections like an injection step of a multigrid scheme, a so-called block correction
technique. This technique is exploited to its fullest by Hutchinson and Raithby [42].
Given a fine-grid solution and a coarse grid, discretized equations for the correction
quantities on the coarse grid are obtained by summing the equations for each of the
fine-grid cells within a given coarse grid cell. A solution is then obtained (by direct
methods in [45]) which satisfies conservation of mass and momentum. The corrections
are then distributed uniformly to the fine grid cells which make up the coarse grid
cell, and the iterative solution on the fine grid is resumed. However, experience has
shown that the net effect of such a treatment for complex flow problems is limited.
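A minimal sketch (Python; uniform spacing of the outflow faces assumed) of the
global version of this fix, measuring the mass deficit and spreading it uniformly
over the outflow u-velocities:

import numpy as np

def enforce_global_mass(u_in, u_out, dy, rho=1.0):
    # u_in, u_out: normal velocity components on the inflow/outflow faces
    deficit = rho * (np.sum(u_in) - np.sum(u_out)) * dy
    return u_out + deficit / (rho * len(u_out) * dy)

y = np.linspace(0.0, 1.0, 41)              # hypothetical face coordinates
yc = 0.5 * (y[:-1] + y[1:])                # face-center locations
u_in = 6.0 * yc * (1.0 - yc)               # parabolic inflow, average = 1
u_out = np.zeros_like(u_in)                # worst case: all inflow, no outflow
u_out = enforce_global_mass(u_in, u_out, dy=y[1] - y[0])

After the correction the inflow and outflow fluxes match, so the system of
p'-equations is consistent when the inner iterations begin.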
Figure 2.7 illustrates the improved convergence rate of the continuity equation for
the inner and outer loops, in the backward-facing step flow, when conservation of mass
is explicitly enforced. The inner-loop data is from the 10th outer-loop iteration. In
Figure 2.7, the cavity flow convergence path is also shown to facilitate the comparison.
For the back-step, the overall convergence rate is improved by an order of magnitude,
becoming slightly faster than the cavity flow case. This result reflects the improved
inner-loop performance, also shown in Figure 2.7. The improved performance for the
pressure-correction equation comes at the expense of a slightly slower convergence
rate for the momentum equations, because of the nonlinear convection term.
In short, it has been shown that a consistency condition, which is physically the
requirement of global mass conservation, is critical for meaningful pressure corrections
to be guaranteed. Given natural (velocity) boundary conditions, which lead to a
linearly dependent system of pressure-correction equations, satisfaction of the global
continuity constraint is the only way that a solution can exist, and therefore the only
way that the inner-loop residuals can be driven to zero. For the model backward-
facing step flow in a channel with length L = 4 and a 21 x 9 mesh, the mass-
conservation constraint is enforced globally or at every streamwise location by an
additive-correction technique. This technique produces a 10-fold increase in the
convergence rate. Physically, modifying the u velocities has the same effect as adding
a convection term to the Poisson equation for the p'-field, which otherwise develops
very slowly. A coarse grid was used to demonstrate the need for enforcing global
mass conservation. On a finer grid, this issue becomes more critical. In the next
section, the solution accuracy aspects related to mass conservation will be addressed,
and the computations will be conducted with more adequate grid resolution.
2.5 Numerical Treatment of Outflow Boundaries
Continuing with the theme of well-posedness, the next numerical issue to be
discussed is the choice of outflow boundary location. If fluid flows into the domain at
a boundary where extrapolation is applied, then, traditionally, the problem is not
considered to be well-posed, because the information which is being transported into
the domain does not participate in the solution to the problem [60]. Numerically,
however, accurate solutions can be obtained using first-order extrapolation for the
velocity components on a boundary where inflow is occurring [72]. Here the open
boundary treatment for both steady and time-dependent flow problems is investigated further.
Figures 2.9 and 2.8 present streamfunction contours for a time-dependent flow
problem, impulsively started backward-facing step flow, using central differencing
for the convection terms and first-order backward differencing in time. A parabolic
inflow velocity profile is specified, while outflow boundary velocities are obtained by
first-order extrapolation. The Reynolds number, based on the average inflow velocity
u_avg and the channel height H, is 800. The expansion ratio H/h is 2, as in the model
problem described in Figure 2.2. Time-accurate simulations were performed for two
channel configurations, one with length L = 8 (81 x 41 mesh) and the other with
length L = 16 (161 x 41 mesh). This flow problem has been the subject of some
recent investigations focusing on open boundary conditions [30, 31].
For each time step, the SIMPLE algorithm is used to iteratively converge on a
solution to the unsteady form of the governing equations, explicitly enforcing global
conservation of mass during the course of iterations. In the present study, convergence
was declared for a given time step when the global residuals had been reduced below
10^-4. The time-step size was twice the viscous time scale in the y-direction, i.e.
Δt = 2Δy²/ν. Thus a fluid particle entering the domain at the average velocity u = 1
travels 2 units downstream during a time step.
Figure 2.8 shows the formation of alternating bottom/top wall recirculation regions
during startup which gradually become thinner and more elongated as they drift
downstream. For the L = 16 simulation (Figure 2.8), the transient flowfield has as
many as four separation bubbles at T = 32, the latter two of which are eventually
washed out of the domain. In the L = 8 simulation (Figure 2.9) the streamfunction
plots are at times corresponding to those shown in Figure 2.8. Note that between
T = 11 and T = 32, a secondary bottom wall recirculation zone forms and drifts
downstream, exiting without reflection through the downstream boundary. The time
evolution of the flowfield for the L = 8 and L = 16 simulations is virtually identical.
As can be observed, the facts that a shorter channel length was used in Figure 2.9
and that a recirculating cell may pass through the open boundary do not affect the
solutions. Figure 2.10 compares the computed time histories of the bottom wall
reattachment and top wall separation points between the two computations. The
L = 8 and L = 16 curves overlap perfectly. The steady-state solutions for
both the L = 8 and L = 16 channel configurations are also shown in Figures 2.9
and 2.8, respectively. Although the outflow boundary cuts the top wall separation
bubble approximately in half, there is no apparent difference between the computed
streamfunction contours for 0 < x < 8. Furthermore, the convergence rate is not
affected by the choice of outflow boundary location.
Figure 2.11 compares the steady-state u and v velocity profiles at x = 7
between the two computations. The accuracy of the computed results is assessed by
comparison with an FEM numerical solution reported by Gartling [27]. Figure 2.11
establishes quantitatively that the two simulations differ negligibly over 0 < x < 8
(the v profile differs on the order of 10^-3). The velocity scale for the problem is 1.
Neither v profile agrees perfectly with the solution obtained by Gartling, which may
be attributed to the need for conducting further grid refinement studies in the present
work and/or Gartling’s work.
Evidently the location of the open boundary is not critical to obtaining a
converged solution. This observation indicates that the downstream information is
completely accounted for by the continuity equation. The correct pressure field can
develop because the system of p'-equations requires only the boundary mass flux
specification. If the global continuity constraint is satisfied, the pressure-correction equation
is consistent regardless of whether there is inflow or outflow at the boundary where
extrapolation is applied. The numerical well-posedness of the open boundary
computation results in virtually identical flowfield development for the time-dependent
L = 8 and L = 16 simulations, as well as steady-state solutions which agree with each
other and follow closely Gartling's benchmark data [27].
2.6 Concluding Remarks
In order for the SIMPLE pressure-correction method to be a well-posed
numerical procedure for open boundary problems, explicit steps must be taken to ensure
the numerical consistency of the pressure-correction system of equations during the
course of iterations. For the discrete problem with the natural boundary treatment
for pressure, i.e. normal velocity specified at all boundaries, global mass conservation
is the solvability constraint which must be satisfied in order that the system of
p'-equations is consistent. Without a globally mass-conserving procedure enforced
during each iterative step, the utility of the pressure corrections obtained at each
iteration cannot be guaranteed. Overall convergence may still occur, albeit very slowly.
In this regard, the poor outer-loop convergence behavior simply reflects the (poor)
convergence rate of the inner-loop iterations of the SLUR technique. In general, the
inner-loop residual stalls at the initial level of inconsistency of the system of
p'-equations, which physically is the global mass deficit. The convergence
rate can be improved dramatically by explicitly enforcing mass conservation using
an additive-correction technique. The results of numerical simulations of backward-
facing step flow illustrate and support these conclusions.
The mass-conservation constraint also has implications for the issue of proper
numerical treatment of open boundaries where inflow is occurring. Specifically, the
conventional viewpoint that inflow cannot occur at open boundaries without Dirichlet
prescription of the inflow variables can be rebutted, on the grounds that the
numerical problem is well-posed if the normal velocity components satisfy the continuity
constraint.
Figure 2.1. Staggered grid u control volume and the nearby variables which are
involved in the discretization of the u-momentum equation.
[Figure 2.2 graphic: lid-driven cavity with lid velocity U = 1 (left) and
backward-facing step channel with inflow profile U(y) and height H (right).]
Figure 2.2. Description of two model problems. Both are at Re = 100. The cavity
is a square with a top wall sliding to the left, while the backward-facing step is a
4 x 1 rectangular domain with an expansion ratio H/h = 2 and a parabolic inflow
(average inflow velocity = 1). The cavity flow grid is 9 x 9 and the step flow grid is
21 x 9. The meshes and the velocity vectors are shown.
Figure 2.3. Model 3x3 computational domain with numbered control volumes, for
discussion of Eq. 2.22. The staggered velocity components which refer to control
volume 5 are also indicated.

[Figure 2.4 graphic: log10 of residual vs. number of iterations; left panel Re = 100
cavity flow, right panel Re = 100 back-step flow.]
Figure 2.4. Outer-loop convergence paths for the Re = 100 lid-driven cavity and
backward-facing step flows, using central differencing for the convection terms.
Legend: p' equation; u momentum equation; v momentum equation.
[Figure 2.5 graphic: two panels, residual ratio vs. number of iterations.]
Figure 2.5. Inner-loop convergence paths for the Re = 100 lid-driven cavity and
backward-facing step flows. The vertical axis is the log10 of the ratio of the current
residual to the initial residual. Legend: p' equation; u momentum equation;
v momentum equation.
[Figure 2.6 graphic: inner loop for cavity flow (left) and for back-step flow (right).]
Figure 2.6. Variation of p' with inner-loop iterations. The dashed line is the value
of p' at the lower-left control volume, while the solid line is the difference between
p'_lowerleft and p'_upperright.
[Figure 2.7 graphic: outer-loop (left) and inner-loop (right) convergence paths vs.
number of iterations.]
Figure 2.7. Outer-loop and inner-loop convergence paths of the p' equation for the
backward-facing step model problem, with and without enforcing the continuity
constraint. (1) conservation of mass not enforced; (2) continuity enforced globally;
(3) cavity flow.

[Figure 2.8 graphic, panels: T = 15, T = 20, T = 32, T = ∞.]
Figure 2.8. Time-dependent flowfield for impulsively started backward-facing step
flow, Re = 800. The domain has length L = 16. Streamfunction contours are plotted
at several instants during the evolution to the steady state, which is the last frame.
[Figure 2.9 graphic, panels: T = 15, T = 20, T = 32, T = ∞.]
Figure 2.9. Time-dependent flowfield for impulsively started backward-facing step
flow, Re = 800. The domain has length L = 8. Streamfunction contours are plotted
at several instants during the evolution to the steady state, which is the last frame.
[Figure 2.10 graphic: reattachment/separation locations vs. time.]
Figure 2.10. Time-dependent location of the bottom wall reattachment point and top
wall separation point for Re = 800 impulsively started backward-facing step flow. The
curves for both the L = 8 and L = 16 computations are shown; they overlap identically.
[Figure 2.11 graphic: u and v velocity profiles at x = 7 for the Re = 800 back-step flow.]
Figure 2.11. Comparison of the u- and v-components of velocity profiles at x = 7.0
for the L = 16 and L = 8 backward-facing step simulations at Re = 800, with central
differencing. (o) indicates the grid-independent FEM solution obtained by Gartling.
The v profile is scaled up by 10^3.

CHAPTER 3
EFFICIENCY AND SCALABILITY ON SIMD COMPUTERS
The previous chapter considered an issue which was important because of its
implications for the convergence rate in open boundary problems. The present chapter
shifts gears to focus on the cost and efficiency of pressure-correction methods on
SIMD computers.
As discussed in Chapter 1, the eventual goal is to understand the indirect cost [23],
i.e. the parallel run time, of such methods on SIMD computers, and how this cost
scales with the problem size and the number of processors. The run time is just the
number of iterations multiplied by the cost per iteration. This chapter considers the
cost per iteration.
3.1 Background
The discussion of SIMD computers in Chapter 1 indicated similarities in the
general layout of such machines and in the factors which affect program performance.
More detail is given in this section to better support the discussion of results.
3.1.1 Speedup and Efficiency
Speedup S is defined as

S = T_1 / T_p,   (3.1)

where T_p is the measured run time using n_p processors. In the present work T_1 is
the run time of the parallel algorithm on one processor, including both serial and
parallel computational work, but excluding the front-end-to-processor and
interprocessor communication. On a MIMD machine it is sometimes possible to actually time
the program on one processor, but each SIMD processor is not usually a capable
serial computer by itself, so T_1 must be estimated. The timing tools on the CM-2
and CM-5 are very sophisticated, and can separately measure the time elapsed by
the processors doing computation, doing various kinds of communication, and doing
nothing (waiting for an instruction from the front-end, which might be finishing up
some serial work before it can send another code block). Thus, it is possible to make
a reasonable estimate for T_1.
Parallel efficiency is the ratio of the actual speedup to the ideal (n_p), which reflects
the overhead costs of doing the computation in parallel:

E = S_actual / S_ideal = (T_1 / T_p) / n_p.   (3.2)

If T_comp is the time in seconds spent by each of the n_p processors doing useful work
(computation), T_inter-proc is the time spent by the processors doing interprocessor
communication, and T_fe-to-proc is the time elapsed through front-end-to-processor
communication, then each of the processors is busy a total of T_comp + T_inter-proc
seconds and the total run time on multiple processors is T_comp + T_inter-proc + T_fe-to-proc
seconds. Assuming that the parallelism is high, i.e. a high percentage of the virtual
processors are not idle, a single processor would need n_p T_comp time to do the same
work. Thus, T_1 = n_p T_comp, and from Eq. 3.2 E can be expressed as

E = 1 / (1 + (T_inter-proc + T_fe-to-proc)/T_comp) = 1 / (1 + T_comm/T_comp).   (3.3)

Since time is work divided by speed, E depends on both machine-related factors and
implementational factors through Eq. 3.3. High parallel efficiency is not necessarily
a product of fast processors or fast communications considered alone; instead it is
the relative speeds that are important, and the relative amount of communication
and computation in the program. Consider the machine-related factors first.
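In code form, Eq. 3.3 is just (a trivial helper, shown with made-up timing inputs
for illustration):

def parallel_efficiency(t_comp, t_inter_proc, t_fe_to_proc):
    # E = 1 / (1 + (T_inter-proc + T_fe-to-proc) / T_comp)
    return 1.0 / (1.0 + (t_inter_proc + t_fe_to_proc) / t_comp)

print(parallel_efficiency(8.0, 1.5, 0.5))   # -> 0.8 for these hypothetical times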
3.1.2 Comparison Between CM-2, CM-5, and MP-1
A 32-node CM-5 with vector units, a 16k-processor CM-2, and a 1k-processor
MP-1 were used in the present study. The CM-5 has 4 GBytes total memory, while
the CM-2 has 512 MBytes, and the MP-1 has 64 MBytes. The peak speeds of these
computers are 4, 3.5, and 0.034 Gflops, respectively, in double precision. Per
processor, the peak speeds are 32, 7, and 0.033 Mflops, with memory bandwidths of 128,
25, and 0.67 MBytes/s [67, 83]. Clearly these are computers with very different
capabilities, even taking into account the fact that peak speeds, which are based only on
the processor speed under ideal conditions, are not an accurate basis for comparison.
In the CM-2 and CM-5 the front-end computers are Sun-4 workstations, while
in the MP-1 the front-end is a Decstation 5000. From Eq. 3.3, it is clear that the
relative speeds of the front-end computer and the processors are important. Their
ratio determines the importance of the front-end-to-processor type of communication.
On the CM-2 and MP-1, there is just one of these intermediate processors, called
either a sequencer or an array control unit, respectively, while on the 32-node CM-5
the 32 SPARC microprocessors have the role of sequencers.
Each SPARC node broadcasts to four vector units (VUs) which actually do the
work. Thus a 32-node CM-5 has 128 independent processors. In the CM-2 the
“processors” are more often called processing elements (PEs), because each one consists of
a floating-point unit coupled with 32 bit-serial processors. Each bit-serial processor
is the memory manager for a single bit of a 32-bit word. Thus, the 16k-processor
CM-2 actually has only 512 independent processing elements. This strange CM-2
processor design came about basically as a workaround which was introduced to
improve the memory bandwidth for floating-point calculations [66]. Compared to the
CM-5 VUs, the CM-2 processors are about one-fourth as fast, with larger overhead
costs associated with memory access and computation. The MP-1 has 1024 4-bit
processors—compared to either the CM-5 or CM-2 processors, the MP-1 processors
are very slow. The generic term “processing element” (PE), which is used occasionally
in the discussion below, refers to either one of the VUs, one of the 512 CM-2
processors, or one of the MP-1 processors, whichever is appropriate.
For the present study, the processors are either physically or logically imagined
to be arranged as a 2-d mesh, which is a layout that is well supported by the data
networks of each of the computers. The data network of the 32-node CM-5 is a
fat tree of height 3, which is similar to a binary tree except that the bandwidth stays
constant upwards from height 2 at 160 MBytes/s (details in [83]). One can expect
approximately 480 MBytes/s for regular grid communication patterns (i.e. between
nearest-neighbor SPARC nodes) and 128 MBytes/s for random (global)
communications. The randomly-directed messages have to go farther up the tree, so they are
slower. The CM-2 network (a hypercube) is completely different from the fat-tree
network and its performance for regular grid communication between nearest-neighbor
processors is roughly 350 MBytes/s [67]. The grid network on the CM-2 is called
NEWS (North-East-West-South). It is a subset of the hypercube connections
selected at run time. The MP-1 has two networks: regular communications use X-Net
(1.25 GBytes/s, peak), which connects each processor to its eight nearest neighbors,
and random communications use a 3-stage crossbar (80 MBytes/s, peak).
To summarize the relative speeds of these three SIMD computers, it is sufficient
for the present study to observe that the MP-1 has very fast nearest-neighbor
communication compared to its computational speed, while the exact opposite is true for
the CM-2. The ratio of nearest-neighbor communication speed to computation speed
is smaller still for the CM-5 than for the CM-2. Again, from Eq. 3.3, one expects that
these differences will be an important factor influencing the parallel efficiency.
3.1.3 Hierarchical and Cut-and-Stack Data Mappings
When there are more array elements (grid points) than processors, each processor
handles multiple grid points. Which grid points are assigned to which processors is
determined by the “data mapping,” also called the data layout. The processors repeat
any instructions the appropriate number of times to handle all the array elements
which have been assigned to them. A useful idealization for SIMD machines, however,
is to pretend there are always as many processors as grid points. Then one speaks of
the “virtual processor” ratio (VP), which is the number of array elements assigned to
each physical processor. The way the data arrays are partitioned and mapped to the
processors is a main concern in developing a parallel implementation. The layout of
the data determines the amount of communication in a given program.
When the virtual processor ratio is 1, there are an equal number of processors
and array elements and the mapping is just one-to-one. When VP > 1 the mapping
of data to processors is either “hierarchical,” in CM-Fortran, or “cut-and-stack,” in
MP-Fortran. These mappings are also termed “block” and “cyclic” [85], respectively,
in the emerging High-Performance Fortran standard. The relative merits of these
different approaches have not been completely explored yet.
In cut-and-stack mapping, nearest-neighbor array elements are mapped to nearest-
neighbor physical processors. When the number of array elements exceeds the
number of processors, additional memory layers are created. VP is just the number of
memory layers. In the general case, nearest-neighbor virtual processors (i.e. array
elements) will not be mapped to the same physical processor. Thus, the cost of a
nearest-neighbor communication of distance one will be proportional to VP, since the
nearest neighbors of each virtual processor will be on a different physical processor.
In the hierarchical mapping, contiguous pieces of an array (“virtual subgrids”) are
mapped to each processor. The “subgrid size” for the hierarchical mapping is
synonymous with VP. The distinction between hierarchical and cut-and-stack mapping
is clarified by Figure 3.1.
In hierarchical mapping, for VP > 1, each virtual processor has nearest neighbors
in the same virtual subgrid, that is, on the same physical processor. Thus, for
hierarchical mapping on the CM-2, interprocessor communication breaks down into two
types (with different speeds)—on-processor and off-processor. Off-processor
communication on the CM-2 has the NEWS speed given above, while on-processor
communication is somewhat faster, because it is essentially just a memory operation. A more
detailed presentation and modelling of nearest-neighbor communication costs for the
hierarchical mapping on the CM-2 is given in [3]. The key idea is that with
hierarchical mapping on the CM-2 the relative amount of on-processor and off-processor
communication is the area-to-perimeter ratio of the virtual subgrid.
For the CM-5, there are three types of interprocessor communication: (1) between
virtual processors on the same processor (that is, the same VU), (2) between virtual
processors on different VUs but on the same SPARC node, and (3) between virtual
processors on different SPARC nodes. Between different SPARC nodes (number 3),
the speed is 480 MBytes/s as mentioned above. On the same VU the speed is 16
GBytes/s. (The latter number is just the aggregate memory bandwidth of the 32-
node CM-5.) Thus, although off-processor NEWS communication is slow compared
to computation on the CM-2 and CM-5, good efficiencies can still be achieved as a
consequence of the data mapping, which allows the majority of communication to be
of the on-processor type.
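The two mappings are easy to contrast for a 1-d array of n elements on p processors
(a toy sketch, not tied to either compiler):

def block_owner(i, n, p):
    # hierarchical ("block"): contiguous subgrids of n/p elements each
    return i // (n // p)

def cyclic_owner(i, n, p):
    # cut-and-stack ("cyclic"): elements dealt out in turn; VP memory layers
    return i % p

n, p = 16, 4
print([block_owner(i, n, p) for i in range(n)])    # [0,0,0,0,1,1,1,1,2,...]
print([cyclic_owner(i, n, p) for i in range(n)])   # [0,1,2,3,0,1,2,3,0,...]

Under the block mapping most distance-one neighbors share a processor; under the
cyclic mapping they never do, so every nearest-neighbor shift is off-processor.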
3.2 Implementational Considerations
The cost per SIMPLE iteration depends on the choice of relaxation method
(solver) for the systems of equations, the number of inner iterations (ν_u, ν_v, and ν_c),
the computation of coefficients for each system of equations, the correction step, and
the convergence checking and serial work done in program control. The pressure-
correction equation, since it is not underrelaxed, typically needs to be given more
iterations than the momentum equations, and consequently most of the effort is
expended during this step of the SIMPLE method. This is another reason why the
convergence rate of the p'-equations discussed in Chapter 2 is important. Typically
ν_u and ν_v are the same and are < 3, and ν_c < 5ν_u.
In developing a parallel implementation of the SIMPLE algorithm, the first
consideration is the method of solving the u, v, and p' systems of equations. For serial
computations, successive line-underrelaxation using the tridiagonal matrix algorithm
(TDMA, whose operation count is O(N)) is a good choice because the cost per
iteration is optimal and there is long-distance coupling between flow variables (along
lines), which is effective in promoting convergence in the outer iterations. The TDMA
is intrinsically serial. For parallel computations, a parallel tridiagonal solver must be
used (parallel cyclic reduction in the present work). In this case the cost per
iteration depends not only on the computational workload (O(N log2 N)) but also on
the amount of communication generated by the implementation on a particular
machine. For these reasons, timing comparisons are made for several implementations
of both point- and line-Jacobi solvers used during the inner iterations of the SIMPLE
algorithm.
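For reference, parallel cyclic reduction eliminates unknowns from all rows of a
tridiagonal system simultaneously, needing about log2 N steps of increasingly
distant-neighbor data. A compact numpy sketch for a single system (illustrative
only; the actual implementation operates on whole 2-d arrays of lines in
data-parallel Fortran):

import numpy as np

def pcr_solve(a, b, c, d):
    # solves a[i]x[i-1] + b[i]x[i] + c[i]x[i+1] = d[i], with a[0] = c[-1] = 0
    a, b, c, d = (v.astype(float).copy() for v in (a, b, c, d))
    n, stride = len(b), 1
    idx = np.arange(n)
    while stride < n:
        has_m, has_p = idx >= stride, idx < n - stride
        am, bm, cm, dm = (np.roll(v, stride) for v in (a, b, c, d))
        ap, bp, cp, dp = (np.roll(v, -stride) for v in (a, b, c, d))
        alpha = np.where(has_m, -a / bm, 0.0)   # eliminates x[i - stride]
        gamma = np.where(has_p, -c / bp, 0.0)   # eliminates x[i + stride]
        a, b, c, d = (alpha * np.where(has_m, am, 0.0),
                      b + alpha * np.where(has_m, cm, 0.0)
                        + gamma * np.where(has_p, ap, 0.0),
                      gamma * np.where(has_p, cp, 0.0),
                      d + alpha * np.where(has_m, dm, 0.0)
                        + gamma * np.where(has_p, dp, 0.0))
        stride *= 2
    return d / b                                # each row is now decoupled

# check against a dense solve on a small diagonally dominant system
n = 8
a = np.r_[0.0, -np.ones(n - 1)]
c = np.r_[-np.ones(n - 1), 0.0]
b = 4.0 * np.ones(n)
d = np.arange(1.0, n + 1.0)
A = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
assert np.allclose(A @ pcr_solve(a, b, c, d), d)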
Generally, point-Jacobi iteration is not sufficiently effective for complex flow
problems. However, as part of a multigrid strategy, good convergence rates can be
obtained (see Chapters 4 and 5). Furthermore, because it only involves the fastest type
of interprocessor communication, that which occurs between nearest-neighbor
processors, point-Jacobi iteration provides an upper bound for parallel efficiency against
which other solvers can be compared.
The second consideration is the treatment of boundary computations. In the
present implementation, the coefficients and source terms for the boundary control
volumes are computed using the interior control volume formula and mask arrays.
Oran et al. [57] have called this trick the uniform boundary condition approach.
All coefficients can be computed simultaneously. The problem with computing the
boundary coefficients separately is that some of the processors are idle, which
decreases E. For the CM-5, which is “synchronized MIMD” instead of strictly SIMD,
there exists limited capability to handle both boundary and interior coefficients
simultaneously without formulating a single all-inclusive expression. However, this
capability cannot be utilized if either the boundary or interior formulas involve
interprocessor communication, which is the case here. As an example of the uniform
approach, consider the source terms for the north boundary u control volumes, which
are computed by the formula

b = a_N u_N + (p_w - p_e)Δy   (3.4)

Recall that a_N represents the discretized convective and diffusive flux terms, u_N is
the boundary value, and, in the pressure gradient term, Δy is the vertical dimension
of the u control volume and p_w/p_e are the west/east u-control-volume face pressures
on the staggered grid. Similar modifications show up in the south, east, and west
boundary u control volume source terms. To compute the boundary and interior
source terms simultaneously, the following implementation is used:

b = a_boundary u_boundary + (p_w - p_e)Δy   (3.5)

where

u_boundary = u_N I_N + u_S I_S + u_E I_E + u_W I_W   (3.6)

and

a_boundary = a_N I_N + a_S I_S + a_E I_E + a_W I_W   (3.7)

I_N, I_S, I_E, and I_W are the mask arrays, which have the value 1 for the respective
boundary control volumes and 0 everywhere else. They are initialized once, at the
beginning of the program. Then, every iteration, there are four extra nearest-neighbor
communications. A comparison of the uniform approach with an implementation that
treats each boundary separately is discussed in the results.
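A minimal numpy sketch (illustrative; numpy standing in for the data-parallel
Fortran) of Eq. 3.5-3.7 for the u control volumes of a small hypothetical grid:

import numpy as np

ny, nx = 6, 8                                # hypothetical grid of u cells
IN = np.zeros((ny, nx)); IN[-1, :] = 1.0     # mask arrays of Eq. 3.6-3.7,
IS = np.zeros((ny, nx)); IS[0, :] = 1.0      # initialized once at startup
IE = np.zeros((ny, nx)); IE[:, -1] = 1.0
IW = np.zeros((ny, nx)); IW[:, 0] = 1.0

def uniform_source(aN, aS, aE, aW, uN, uS, uE, uW, pw, pe, dy):
    # one fully parallel expression covers boundary and interior cells:
    # the masks are zero in the interior, so Eq. 3.5 reduces to the
    # pressure-gradient term there
    a_bnd = aN * IN + aS * IS + aE * IE + aW * IW
    u_bnd = uN * IN + uS * IS + uE * IE + uW * IW
    return a_bnd * u_bnd + (pw - pe) * dy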
3.3 Numerical Experiments
The SIMPLE algorithm for two-dimensional laminar flow has been timed on a
range of problem sizes from 8 x 8 to 1024 x 1024 which, on the CM-5, covers up
to VP = 8192. The convection terms are central-differenced. A fixed number (100)
of outer iterations are timed using as a model flow problem the lid-driven cavity
flow at Re = 1000. The timings were made with the “Prism” timing utility on
the CM-2 and CM-5, and the “dpuTimer” routines on the MP-1 [52, 86]. These
utilities can be inaccurate if the front-end machine is heavily loaded, which was the
case with the CM-2. Thus, on the CM-2 all cases were timed three times and the
fastest times were used, as recommended by Thinking Machines [82]. Prism times
every code block and accumulates totals in several categories, including computation
time for the nodes (Tcomp), “NEWS” communication (Tnews), and irregular-pattern
“SEND” communication. It is also possible to infer the front-end-to-processor time, $T_{fe-to-proc}$, from the difference

between the processor busy time and the elapsed time. In the results $T_{comm}$ is the sum of the “NEWS” and “SEND” interprocessor times. The front-end-to-processor communication is separate. Additionally, the component tasks of the algorithm have been timed, namely the coefficient computations ($T_{coeff}$), the solver ($T_{solve}$), and the velocity-correction and convergence-checking parts.
3.3.1 Efficiency of Point and Line Solvers for the Inner Iterations
Figure 3.2, based on timings made on the CM-5, illustrates the difference in parallel efficiency for SIMPLE using point-Jacobi and line-Jacobi iterative solvers. E is computed from Eq. 3.3 by timing the $T_{comm}$ and $T_{comp}$ introduced above. Problem size is given in terms of the virtual processor ratio VP previously defined.
There are two implementations, each with a different data layout, for point-Jacobi iteration. One ignores the distinction between virtual processors which are on the same physical processor and those which are on different physical processors. Each array element is treated as if it were a processor. Thus, interprocessor communication is generated whenever data is to be moved, even if the two virtual processors doing the communication happen to be on the same physical processor. To be more precise, a call to the run-time communication library is generated for every array element. Then, those array elements (virtual processors) which actually reside on the same physical processor are identified and the communication is done as a memory operation, but the unnecessary overhead of calling the library is incurred. Obviously there is an inefficiency associated with pretending that there are as many processors as array elements, but the tradeoff is that this is the most straightforward, and indeed the intended, way to do the programming. In Figure 3.2, this approach is labelled “NEWS,” with the symbol “o.” The other implementation is labelled “on-VU,” with

the symbol “+,” to indicate that interprocessor communication between virtual processors on the same physical processor is being eliminated; the programming is in a sense being done “on-VU.”
To indicate to the compiler the different layouts of the data which are needed, the programmer inserts compiler directives. For the “NEWS” version, the arrays are laid out as shown in this example for a 1k x 1k grid and an 8 x 16 processor layout on the CM-5:

REAL*8 A(1024,1024)
$CMF LAYOUT A(:BLOCK=128 :PROCS=8, :BLOCK=64 :PROCS=16)
Thus, the subgrid shape is 128 x 64, with a subgrid size (VP) of 8192 (this happens to be the biggest problem size for my program on a 32-node CM-5 with 4 GBytes of memory). When shifting all the data to their east nearest-neighbor, for example, by far the large majority of transfers are on-VU and could be done without real interprocessor communication. But there are only 2 dimensions in A, so data-parallel program statements cannot specifically access certain array elements, i.e. the ones on the perimeter of the subgrid. Thus it is not possible with the “NEWS” layout to treat interior virtual processors differently from those on the perimeter, and consequently data shifts between the interior virtual processors generate interprocessor communication even though it is unnecessary.
In the “on-VU” version, a different data layout is used which makes explicit to the
compiler the boundary between physical processors. The arrays are laid out without
virtual processors:
$CMF LAYOUT A(:SERIAL,:SERIAL,:BLOCK=1 :PROCS=8,:BLOCK=1 :PROCS=16)
The declaration must be changed accordingly, to A(128,64,8,16). Normally it is
inconvenient to work with the arrays in this manner. Thus the approach taken here

is to use an “array alias” of A [84]. In other words, this is an EQUIVALENCE function for the data-parallel arrays (similar to the Fortran 77 EQUIVALENCE concept), which equates A(1024,1024) with A(128,64,8,16), with the different LAYOUTs given above. It is the alias instead of the original A which is used in the on-VU point-Jacobi implementation. In the solver, the “on-VU” layout is used; everywhere else, the more convenient “NEWS” layout is used. The actual mechanism by which the equivalencing of distributed arrays is accomplished is not too difficult to understand. The front-end computer stores “array descriptors,” which contain the array layout, the starting address in processor memory, and other information. The actual layout in each processor's memory is linear and does not change, but multiple array descriptors can be generated for the same data. This descriptor multiplicity is what array aliasing accomplishes. With the “on-VU” programming style, the compiler does not generate communication when the shift of data is along a SERIAL axis. Thus, interprocessor communication is generated only when the virtual processors involved are on different physical processors, i.e. only when it is truly necessary. The difference in the amount of communication is substantial for large subgrid sizes.
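A rough model of why the layouts differ (our own illustration, not code from the timed program): under hierarchical mapping, only subgrid-perimeter elements generate true interprocessor traffic for a nearest-neighbor shift.

def east_shift_traffic(Sx, Sy, Px, Py):
    # Subgrids of shape Sx x Sy on a Px x Py processor layout; every
    # element moves one place east, but only the easternmost column of
    # each subgrid crosses a physical-processor boundary.
    total = Sx * Sy * Px * Py
    off_proc = Sy * Px * Py
    return off_proc / total          # = 1/Sx

# For the 128 x 64 subgrid above, only 1/128 of the moves are truly
# interprocessor, yet the "NEWS" layout pays the run-time-library call
# for every element; the "on-VU" alias pays it only where necessary.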
For both the “NEWS” and the “on-VU” curves in Figure 3.2, E is initially very low, but as VP increases, E rises until it reaches a peak value of about 0.8 for the “NEWS” version and 0.85 for the “on-VU” version. The trend is due to the amortization of the front-end-to-processor and off-VU (between VUs which are physically under control of different SPARC nodes) communication. The former contributes a constant overhead cost per Jacobi iteration to $T_{comm}$, while the latter has a $VP^{1/2}$ dependency [3]. However, it does not appear from Figure 3.2 that these two terms' effects can be distinguished from one another.
For VP > 2k, the CM-5 is computing roughly 3/4 of the time for the implementation which uses the “NEWS” version of point-Jacobi, with the remainder split evenly

between front-end-to-processor communication and on-VU interprocessor communication. It appears that the “on-VU” version has more front-end-to-processor communication per iteration, so there is, in effect, a price of more front-end-to-processor communication to pay in exchange for less interprocessor communication. Consequently it takes VP > 4k to reach peak efficiency instead of 2k with the “NEWS” version. For VP > 4k, however, E is about 5-10% higher than for the “NEWS” version because the on-VU communication has been replaced by straight memory operations.
The observed difference would be even greater if a larger part of the total parallel run time was spent in the solver. For the large VP cases in Figure 3.2, approximately equal time was spent computing coefficients and solving the systems of equations. “Typical” numbers of inner iterations were used, 3 each for the u and v momentum equations, and 9 for the p' equation. From Figure 3.2, then, it appears that the advantage of the “on-VU” version over the “NEWS” version of point-Jacobi relaxation within the SIMPLE algorithm is around 0.1 in E, for large problem sizes.
Red/black analogues to the “NEWS” and “on-VU” versions of point-Jacobi iteration have also been tested. Red/black point iteration done in the “on-VU” manner does not generate any additional front-end-to-processor communication, and therefore takes almost an identical amount of time as point-Jacobi. Thus red/black point iterations are recommended when the “on-VU” layout is used, due to their improved convergence rate. However, with the “NEWS” layout, red/black point iteration generates two code blocks instead of one, and halves the amount of computation per code block. This results in a substantial (about 35% for the VP = 8k case) increase in run time. Thus, if using “NEWS” layouts, red/black point iteration is not cost-effective.
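For concreteness, a red/black sweep in the same data-parallel style is sketched below (our NumPy illustration with periodic boundaries, not the CM code): checkerboard masks select the half of the points updated in each half-sweep, so the black half-sweep already sees the new red values.

import numpy as np

def redblack_sweep(ap, ae, aw, an, as_, b, phi):
    i, j = np.indices(phi.shape)
    for color in (0, 1):                      # red, then black
        update = (ae*np.roll(phi, -1, 1) + aw*np.roll(phi, 1, 1) +
                  an*np.roll(phi, -1, 0) + as_*np.roll(phi, 1, 0) + b) / ap
        phi = np.where((i + j) % 2 == color, update, phi)
    return phi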

There are also two implementations of line-Jacobi iteration. In both, one inner
iteration consists of forming a tridiagonal system of equations for the unknowns in
each vertical line by moving the east/west terms to the right-hand side, solving the
multiple systems of equations simultaneously, and repeating the procedure for the
horizontal lines.
In the first version, parallel cyclic reduction is used to solve the multiple tridiagonal systems of equations (see [44] for a clear presentation). This involves combining equations to decouple the system into even and odd equations. The result is two tridiagonal systems of equations, each half the size of the original. The reduction step is repeated $\log_2 N$ times, where $N$ is the number of unknowns in each line. Thus, the computational operation count is $O(N \log_2 N)$. Interprocessor communication occurs for every unknown at every step, so the communication operation count is also $O(N \log_2 N)$. However, the distance for communication increases by a factor of 2 at every step of the reduction. For the first step, nearest-neighbor communication occurs, while for the second step the distance is 2, then 4, etc. Thus, the net communication speed is slower than the nearest-neighbor type of communication. Figure 3.2 confirms this argument: E peaks at about 0.5, compared to 0.8 for point-Jacobi iteration. In other words, for VP > 4k, interprocessor communication takes as much time as computation with the line-Jacobi solver using cyclic reduction.
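A minimal sketch of parallel cyclic reduction is given below (our illustration, assuming the standard even/odd elimination formulas). All n equations a[i]x[i-1] + b[i]x[i] + c[i]x[i+1] = d[i] (with a[0] = c[n-1] = 0) are reduced simultaneously, and the neighbor distance s doubles every step, which is exactly the increasing-distance communication pattern described above.

import numpy as np

def pcr_solve(a, b, c, d):
    # Assumes a well-conditioned (e.g. diagonally dominant) system.
    n = len(b)
    a, b, c, d = (np.asarray(v, dtype=float).copy() for v in (a, b, c, d))
    s = 1
    while s < n:
        def sh(v, k, fill=0.0):               # neighbor values at distance k
            out = np.full(n, fill)
            if k > 0:
                out[k:] = v[:-k]
            else:
                out[:k] = v[-k:]
            return out
        alpha = -a / sh(b, s, fill=1.0)       # eliminate coupling to x[i-s]
        gamma = -c / sh(b, -s, fill=1.0)      # eliminate coupling to x[i+s]
        a, c, b, d = (alpha*sh(a, s),
                      gamma*sh(c, -s),
                      b + alpha*sh(c, s) + gamma*sh(a, -s),
                      d + alpha*sh(d, s) + gamma*sh(d, -s))
        s *= 2
    return d / b                              # fully decoupled: b[i]*x[i] = d[i]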
In the second version, the multiple tridiagonal systems of equations are solved using the standard TDMA along the lines. To implement this version, one must remap the arrays from (:NEWS,:NEWS) to (:NEWS,:SERIAL) for the vertical lines, and to (:SERIAL,:NEWS) for the horizontal lines. This change from rectangular subgrids to 1-d slices is the most time-consuming step, involving a global communication of data (“SEND” instead of “NEWS”). Applied along the serial dimension, the TDMA does not generate any interprocessor communication. Some

front-end-to-processor communication is generated by the incrementing of the DO-loop index, but unrolling the DO-loop helps to amortize this overhead cost to some extent. Thus, in Figure 3.2 E is approximately constant at 0.14, except for very small VP. The global communication is much slower than computation, and consequently there is not enough computation to amortize the communication. Furthermore, the constant E implies from Eq. 3.3 that $T_{comm}$ and $T_{comp}$ both scale in the same way with problem size. It is evident that $T_{comp} \sim VP$ because the TDMA is $O(N)$. Thus constant E implies $T_{comm} \sim VP$. This means doubling VP doubles $T_{comm}$, indicating the communication speed has reached its peak, which further indicates that the full bandwidth of the fat-tree is being utilized.
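For contrast, the TDMA itself is sketched below (our illustration): the work is O(N), but the forward-elimination recurrence makes it inherently serial along each line, which is what forces the (:SERIAL) remapping above.

import numpy as np

def tdma(a, b, c, d):
    # Solves a[i]x[i-1] + b[i]x[i] + c[i]x[i+1] = d[i], with a[0] = c[-1] = 0.
    n = len(b)
    cp, dp, x = np.empty(n), np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0]/b[0], d[0]/b[0]
    for i in range(1, n):                     # forward elimination (serial)
        m = b[i] - a[i]*cp[i-1]
        cp[i] = c[i]/m
        dp[i] = (d[i] - a[i]*dp[i-1]) / m
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):            # back substitution (serial)
        x[i] = dp[i] - cp[i]*x[i+1]
    return x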
The disappointing performance of the standard line-iterative approach using the
TDMA points out the important fact that, for the CM-5, global communication
within inner iterations is intolerable. There is not enough computation to amortize
slow communication in the solver for any problem size. With parallel cyclic reduction,
where the regularity of the data movement allows faster communication, the efficiency
is much higher, although still significantly lower than for point-iterations. Additional
improvement can be sought by using the “on-VU” data layout to implement the
line-iterative solver within each processor’s subgrid. This implementation essentially
trades interprocessor communication for the front-end-to-PE type of communication,
and in practice a front-end bottleneck develops. For the remainder of the discussion,
all line-Jacobi results refer to the parallel cyclic reduction implementation.
On the MP-1, the front-end-to-processor communication is not a major concern,
as can be inferred from Figure 3.3. The efficiency of the SIMPLE algorithm using
the point-Jacobi solver is plotted for each machine for the range of problem sizes
corresponding to the cases solved on the MP-1. The CM-2 and CM-5 can solve
much larger problems, so for comparison purposes only part of their data is shown.

Also, because the computers have different numbers of processors, the number of grid
points is used instead of VP to define the problem size.
As in Figure 3.2, each curve exhibits an initial rise corresponding to the amortization of the front-end-to-processor communication and, for the CM-2 and CM-5, the off-processor “NEWS” communication. On the MP-1, peak E is reached for small problems (VP > 32). Due to the MP-1's relatively slow processors, the computation time quickly amortizes the front-end-to-processor communication time as VP increases. Furthermore, because the relative speed of X-Net communication is fast, the peak E is high, 0.85. On the CM-2, the peak E is 0.4, and this efficiency is reached for approximately VP > 128. On the CM-5, the peak E is 0.8, but this efficiency is not reached until VP > 2k. If computation is fast, then the rate of increase of E with VP depends on the relative cost of on-processor, off-processor, and front-end-to-processor communication. If the on-processor communication is fast, larger VP is required to reach peak E. Thus, on the CM-5, the relatively fast on-VU communication is simultaneously responsible for the good (0.8) peak E and the fact that very large problem sizes (VP > 2k, 64 times larger than on the MP-1) are needed to reach this peak E.
The aspect ratio of the virtual subgrid constitutes a secondary effect of the data layout on the efficiency for hierarchical mapping. The major influence on E comes from VP, i.e. the subgrid size, but the subgrid shape matters, too. This dependence comes into play due to the different speeds of the on-processor and off-processor types of communication. Higher-aspect-ratio subgrids have higher perimeter-to-area ratios, and thus relatively more off-processor communication than square subgrids.
Figure 3.4 gives some idea of the relative importance of the subgrid aspect ratio
effect. Along each curve the number of grid points is fixed, but the grid dimensions
vary, which, for a given processor layout, causes the subgrid shape (aspect ratio) to

vary. For example, on the CM-5 with an 8 x 16 processor layout, the following grids
were used corresponding to the VP = 1024 CM-5 curve: 256 x 512, 512 x 256, 680 x
192, and 1024 x 128. These cases give subgrid aspect ratios of 1, 4, 7, and 16. Tnews
is the time spent in “NEWS” type of interprocessor communication and Tcomp is the
time spent doing computation during 100 SIMPLE iterations. The solver for these
results is point-Jacobi relaxation.
For the VP = 1024 CM-5 case, increasing the aspect ratio from 1 to 16 causes $T_{news}/T_{comp}$ to increase from 0.3 to 0.5. This increase in $T_{news}/T_{comp}$ increases the run time for 100 iterations from 15 s to 20 s, and decreases the efficiency from 0.61 to 0.54. For the VP = 8192 CM-5 case, increasing the aspect ratio from 1 to 16 causes $T_{news}/T_{comp}$ to increase from 0.19 to 0.27. This increase in $T_{news}/T_{comp}$ increases the run time for 100 iterations from 118 s to 126 s, and decreases the efficiency from 0.74 to 0.72. Thus, the aspect ratio effect diminishes as VP increases, due to the increasing area of the subgrid. In other words, the variation in the perimeter length matters less, percentage-wise, as the area increases. The CM-2 results are similar. However, on the CM-2 the on-PE type of communication is slower, relative to the computational speed, than on the CM-5. Thus, $T_{news}/T_{comp}$ ratios are higher on the CM-2.
3.3.2 Effect of Uniform Boundary Condition Implementation
In addition to the choice of solver, the treatment of boundary coefficient computations was discussed earlier as an important consideration affecting parallel efficiency. Figure 3.5 compares the implementation described in the introductory section of this chapter to an implementation which treats the boundary control volumes separately from the interior control volumes. The latter approach involves some 1-d operations which leave some processors idle.

The results indicated in Figure 3.5 were obtained on the CM-2, using point-Jacobi relaxation as the solver. With the uniform approach, the ratio of the time spent computing coefficients, $T_{coeff}$, to the time spent solving the equations, $T_{solve}$, remains constant at 0.6 for VP > 256. Both $T_{coeff}$ and $T_{solve} \sim VP$ in this case, so doubling VP doubles both $T_{coeff}$ and $T_{solve}$, leaving their ratio unchanged. The value 0.6 reflects the relative cost of coefficient computations compared to point-Jacobi iteration. There are three equations for which coefficients are computed and 15 total inner iterations, 3 each for the u and v equations and 9 for the p' equation. Thus if more inner iterations are taken, the ratio of $T_{coeff}$ to $T_{solve}$ will decrease, and vice versa. With the 1-d implementation, $T_{coeff}/T_{solve}$ increases until VP > 1024. Both $T_{coeff}$ and $T_{solve}$ scale with VP asymptotically, but Figure 3.5 shows that $T_{coeff}$ has an apparently very significant square-root component due to the boundary operations. If N is the number of grid points and $n_p$ is the number of processors, then $VP = N/n_p$. For boundary operations, $N^{1/2}$ control volumes are computed in parallel with only $n_p^{1/2}$ processors; hence the $VP^{1/2}$ contribution to $T_{coeff}$. From Figure 3.5, it appears that very large problems are required to reach the point where the interior coefficient computations amortize the boundary coefficient computations. Even for large VP, when $T_{coeff}/T_{solve}$ is approaching a constant, this constant is larger, approximately 0.8 compared to 0.6 for the uniform approach, due to the additional front-end-to-processor communication which is intrinsic to the 1-d formulation.
3.3.3 Overall Performance
Table 3.1 summarizes the relative performance of SIMPLE on the CM-2, CM-5,
and MP-1 computers, using point- and line-iterative solvers and the uniform boundary

condition treatment. In the first three cases the “NEWS” implementation of point-
Jacobi relaxation is the solver, while the last two cases are for the line-Jacobi solver
using cyclic reduction.
Machine        Solver        Problem Size   VP     Run Time   Time/Iter./Pt.   Speed (MFlops)   % of Peak
512 PE CM-2    Point-Jacobi  512 x 1024     1024   188 s      2.6 x 10^-6 s    147              4
128 VU CM-5    Point-Jacobi  736 x 1472     8192   137 s      1.3 x 10^-6 s    417              10
1024 PE MP-1   Point-Jacobi  512 x 512      256    316 s      1.2 x 10^-5 s    44*              59
512 PE CM-2    Line-Jacobi   512 x 1024     1024   409 s      7.8 x 10^-6 s    133              3
128 VU CM-5    Line-Jacobi   736 x 1472     8192   453 s      4.2 x 10^-6 s    247              6
Table 3.1. Performance results for the SIMPLE algorithm for 100 iterations of the
model problem. The solvers are the point-Jacobi (“NEWS”) and line-Jacobi (cyclic
reduction) implementations. 3, 3, and 9 inner iterations are used for the u, v, and p’
equations, respectively. * The speeds are for double-precision calculations, except on
the MP-1.
In Table 3.1, the speeds reported are obtained by comparing the timings with the identical code timed on a Cray C90, using the Cray hardware performance monitor to determine Mflops. In terms of Mflops, the CM-2 version of the SIMPLE algorithm's performance appears to be consistent with other CFD algorithms on the CM-2. Jesperson and Levit [44] report 117 Mflops for a scalar implicit version of an approximate factorization Navier-Stokes algorithm using parallel cyclic reduction to solve the tridiagonal systems of equations. This result was obtained for a 512 x 512 simulation of 2-d flow over a cylinder using a 16k CM-2, as in the present study (a different execution model was used; see [3, 47] for details). The measured time per time-step per grid point was 1.6 x 10^-5 seconds. By comparison, the performance of the SIMPLE algorithm for the 512 x 1024 problem size using the line-Jacobi solver is

133 Mflops and 7.8 x 10^-6 seconds per iteration per grid point. Egolf [20] reports that the TEACH Navier-Stokes combustor code, based on a sequential pressure-based method with a solver that is comparable to point-Jacobi relaxation, obtains a performance which is 3.67 times better than a vectorized Cray X-MP version of the code, for a model problem with 3.2 x 10^4 nodes. The present program runs 1.6 times faster than a single Cray C90 processor for a 128 x 256 problem (32k grid points). One Cray C90 processor is about 2-4 times faster than a Cray X-MP. Thus, the present code runs comparably fast.
3.3.4 Isoefficiency Plot
Figures 3.2-3.4 addressed the effects of the inner-iterative solver, the boundary treatment, the data layout, and the variation of parallel efficiency with problem size for a fixed number of processors. Varying the number of processors is also of interest and, as discussed in Chapter 1, an even more practical numerical experiment is to vary $n_p$ in proportion with the problem size, i.e. the scaled-size model.

Figure 3.6, which is based on the point-Jacobi MP-1 timings, incorporates the above information into one plot, which has been called an isoefficiency plot by Kumar and Singh [46]. The lines are paths along which the parallel efficiency E remains constant as the problem size and the number of processors $n_p$ vary. Using the point-Jacobi solver and the uniform boundary coefficient implementation, each SIMPLE iteration has no substantial contribution from operations which are less than fully parallel or from operations whose time depends on the number of processors. The efficiency is only a function of the virtual processor ratio, thus the lines are straight. Much of the parameter space is covered by efficiencies between 0.6 and 0.8.
The reason that the present implementation is linearly scalable is that the operations are all scalable: each SIMPLE iteration has predominantly nearest-neighbor

communication and computation and full parallelism. Thus, $T_P$ depends on VP. Local communication speed does not depend on $n_p$.
$T_1$ depends on the problem size N. Thus, as N and $n_p$ are increased in proportion, starting from some initial ratio, the efficiency from Eq. 3.3 stays constant. If the initial problem size is large and the corresponding parallel run time is acceptable, then one can quickly get to very large problem sizes while still maintaining $T_P$ constant by increasing $n_p$ a relatively small amount (along the E = 0.85 curve). If the desired run time is smaller, then initially (i.e. starting from small $n_p$) the efficiency will be lower. Then the scaled-size experiment requires relatively more processors to get to a large problem size along the constant-efficiency (constant $T_P$ for point-Jacobi iterations) curve. Thus, the most desirable situation occurs when the efficiency is high for an initially small problem size.
For this case the fixed-time and scaled-size methods are equivalent, because $T_1$ per iteration depends linearly on the problem size N. However, this is not the case when the SIMPLE inner iterations are done with the line-Jacobi solver using parallel cyclic reduction. Cyclic reduction requires $(13\log_2 N + 1)N$ operations to solve a tridiagonal system of N equations [44]. Thus, $T_1 \sim (13\log_2 N + 1)N$ and, on $n_p = N$ processors, $T_P \sim 13\log_2 N + 1$, because every processor is active during every step of the reduction and there are $13\log_2 N + 1$ steps. Since VP = 1, every processor's time is proportional to the number of steps, assuming each step costs about the same.
In the scaled-size approach, one doubles $n_p$ and N together, which therefore gives $T_1 \sim (26\log_2 2N + 2)N$ and $T_P \sim 13\log_2 2N + 1$. The efficiency is 1, but $T_P$ is increased and $T_1$ is more than doubled. In the fixed-time approach, then, one concludes that N must be increased by a factor which is less than two, and $n_p$ must be doubled, in order to maintain constant $T_P$. If a plot like Figure 3.6 is constructed, it should be done with $T_1$ instead of N as the measure of problem size. In that case, the lines

of constant efficiency would be described by $T_1 \sim n_p^a$ with $a > 1$. The ideal case is $a = 1$. In addition to the operation count, there is another factor which reduces the scalability of cyclic reduction, namely that the time per step is not actually constant as was assumed above; later steps require communication over longer distances, which is slower. In practice, however, no more than a few steps are necessary, because the coupling between widely-separated equations becomes very weak. As the system is reduced, the diagonal becomes much larger than the off-diagonal terms, which can then be neglected and the reduction process abbreviated.
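The bookkeeping behind these statements can be written out explicitly; assuming Eq. 3.3 has the usual form $E = T_1/(n_p T_P)$, the scaled-size experiment for cyclic reduction works out as

$T_1 \sim (13\log_2 N + 1)N, \qquad T_P \sim 13\log_2 N + 1 \quad (n_p = N),$

$E = \frac{T_1}{n_p T_P} \sim \frac{(13\log_2 N + 1)N}{N\,(13\log_2 N + 1)} = 1,$

$N \to 2N,\; n_p \to 2N: \qquad T_P \sim 13\log_2 2N + 1 = (13\log_2 N + 1) + 13,$

so the efficiency stays fixed while the parallel run time grows by a constant with each doubling, which is the $a > 1$ behavior just described.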
In short, the basic prerequisite for scaled-size constant efficiency is that the amount of work per SIMPLE iteration varies with VP and that the overheads and inefficiencies, specifically the time spent in communication and the fraction of idle processors, do not grow relative to the useful computational work as $n_p$ and N are increased proportionally. The SIMPLE implementation developed here using the point-iterative solvers, Jacobi and red/black, has this linear computational scalability property.
On the other hand, the number of iterations required by point-iterative methods grows faster than the problem size, so although $T_P$ per iteration can be maintained constant while the problem size and $n_p$ are scaled up, the convergence rate deteriorates. Hence the total run time (cost per iteration multiplied by the number of iterations) increases. This lack of numerical scalability of standard iterative methods like point-Jacobi relaxation is the motivation for the development of multigrid strategies.
3.4 Concluding Remarks
The SIMPLE algorithm, especially using point-iterative methods, is efficient on SIMD machines and can maintain a relatively high efficiency as the problem size and the number of processors are scaled up. However, boundary coefficient computations

need to be folded in with interior coefficient computations to achieve good efficiencies at smaller problem sizes. For the CM-5, the inefficiency caused by idle processors in a 1-d boundary treatment was significant over the entire range of problem sizes tested. The line-Jacobi solver based on parallel cyclic reduction leads to a lower peak E (0.5 on the CM-5) than the point-Jacobi solver (0.8), because there is more communication and on average this communication is less localized. On the other hand, the asymptotic convergence rates of the two methods are also different and need to be considered on a problem-by-problem basis. The speeds which are obtained with the line-iterative method are consistent and comparable with other CFD algorithms on SIMD computers.
The key factor in obtaining high parallel efficiency for the SIMPLE algorithm on the computers used is fast nearest-neighbor communication relative to the speed of computation. On the CM-2 and CM-5, hierarchical mapping allows on-processor communication to dominate the slower off-processor form(s) of communication for large VP. The efficiency is low for small problems because of the relatively large contribution to the run time from the front-end-to-processor type of communication, but this type of communication is constant and becomes less important as the problem size increases.
Once the peak E is reached, the efficiency is determined by the balance of computation and on-processor communication speeds. For the CM-5, using a point-Jacobi solver, E approaches approximately 0.8, while on the CM-2 the peak efficiency is 0.4, which reflects the fact that the CM-5 vector units have a better balance, at least for the operations in this algorithm, than the CM-2 processors.

The rate at which E approaches the peak value depends on the relative contributions of on- and off-processor communication and front-end-to-processor communication to the total run time. On the CM-5, VP > 2k is required to reach peak E. This

problem size is about one-fourth the maximum size which can be accommodated, and yet still larger than many computations on traditional vector supercomputers. Clearly a gap is developing between the size of problems which can be solved efficiently in parallel and the size of problems which are small enough to be solved on serial computers.
For parallel computations of all but the largest problems, then, the data layout issue is very important: in going from a square subgrid to one with an aspect ratio of 16, for a VP = 1k case on the CM-5, the run time increased by 25%. On the MP-1, hierarchical mapping is not needed, because the processors are slow compared to the X-Net communication speed. The peak E is 0.85 with the point-Jacobi solver, and this performance is obtained for VP > 32, which is about one-eighth the size of the largest case possible for this machine. Thus, with regard to achieving efficient performance in the teraflops range, the comparison given here suggests a preference for numerous slow processors instead of fewer fast ones, but such a computer may be difficult and expensive to build.

[Figure 3.1 diagram: an 8-element array A on a 4 x 1 layout of processors (PE 0-PE 3), shown under the cut-and-stack mapping (MP-Fortran), with memory layers, and the hierarchical mapping (CM-Fortran), with 2 x 1 virtual subgrids.]
Figure 3.1. Mapping an 8-element array A onto 4 processors. For the cut-and-stack mapping, nearest-neighbor array elements are mapped to nearest-neighbor physical processors. For the hierarchical mapping, nearest-neighbor array elements are mapped to nearest-neighbor virtual processors, which may be on the same physical processor.

[Figure 3.2 plot: Efficiency vs. VP. Curves: Point-Jacobi (on-VU), Point-Jacobi (NEWS), Line-Jacobi (Cyclic Red.), Line-Jacobi (TDMA).]
Figure 3.2. Parallel efficiency, E, as a function of problem size and solver, for the CM-5 cases. The number of grid points is the virtual processor ratio, VP, multiplied by the number of processors, 128. E is computed from Eq. 3.3. It reflects the relative amount of communication, compared to computation, in the algorithm.

[Figure 3.3 plot: E vs. Problem Size.]
Figure 3.3. Comparison between the CM-2, CM-5 and MP-1. The variation of parallel efficiency with problem size is shown for the model problem, using point-Jacobi relaxation as the solver. E is calculated from Eq. 3.3, and $T_1 = n_p T_{comp}$ for the CM-2 and CM-5, where $T_{comp}$ is measured. For the MP-1 cases, $T_1$ is the front-end time, scaled down to the estimated speed of the MP-1 processors (0.05 Mflops).

[Figure 3.4 plot: Aspect Ratio Effect; $T_{news}/T_{comp}$ vs. subgrid aspect ratio.]
Figure 3.4. Effect of subgrid aspect ratio on interprocessor communication time, $T_{news}$, for the hierarchical data mapping (CM-2 and CM-5). $T_{news}$ is normalized by $T_{comp}$ in order to show how the aspect ratio effect varies with problem size, without the complication of the fact that $T_{comp}$ varies also.

[Figure 3.5 plot: Effect of Implementation; $T_{coeff}/T_{solve}$ vs. VP.]
Figure 3.5. Normalized coefficient computation time as a function of problem size, for two implementations (on the CM-2). In the 1-d case the boundary coefficients are handled by 1-d array operations. In the 2-d case the uniform implementation computes both boundary and interior coefficients simultaneously. $T_{coeff}$ is the time spent computing coefficients in a SIMPLE iteration; $T_{solve}$ is the time spent in point-Jacobi iterations. There are 15 point-Jacobi iterations ($\nu_u = \nu_v = 3$ and $\nu_c = 9$).

[Figure 3.6 plot: Isoefficiency Curves; constant-E paths vs. # Processors (MP-1) and problem size.]
Figure 3.6. Isoefficiency curves based on the MP-1 cases and SIMPLE method with
the point-Jacobi solver. Efficiency E is computed from Eq. 3.3. Along lines of
constant E the cost per SIMPLE iteration is constant with the point-Jacobi solver
and the uniform boundary condition implementation.

CHAPTER 4
A NONLINEAR PRESSURE-CORRECTION MULTIGRID METHOD
The single-grid timing results focused on the cost per iteration in order to elucidate
the computational issues which influence the parallel run time and the scalability. But
the parallel run time is the cost per iteration multiplied by the number of iterations.
For scaling to large problem sizes and numbers of processors, the numerical method
must scale well with respect to convergence rate, also.
The convergence rate of the single-grid pressure-correction method deteriorates with increasing problem size. This trait is inherited from the smoothing property of the stationary linear iterative method, point- or line-Jacobi relaxation, used to solve the systems of u, v, and p' equations during the course of SIMPLE iterations. Point-Jacobi relaxation requires $O(N^2)$ iterations, where N is the number of grid points, to decrease the solution error by a specified amount [1]. In other words, the number of iterations increases faster than the problem size.
At best the cost per iteration stays constant as the number of processors $n_p$ increases in proportion to the problem size. Thus, the total run time increases in the scaled-size experiment using single-grid pressure-correction methods, due to the increased number of iterations required. This lack of numerical scalability is a serious disadvantage for parallel implementations, since the target problem size for parallel computation is very large.
Multigrid methods can maintain good convergence rates as the problem size increases. For Poisson equations, problem-size-independent convergence rates can be obtained [36, 55]. The recent book by Briggs [10] introduces the major concepts in

the context of Poisson equations. See also [11, 37, 90] for surveys and analyses of multigrid convergence properties for more general linear equations. For a description of practical techniques and special considerations for fluid dynamics, see the important early papers by Brandt [5, 6]. However, there are many unresolved issues for application to the incompressible Navier-Stokes equations, especially with regard to their implementation and performance on parallel computers. The purpose of this chapter is to describe the relevant convergence-rate and stability issues for multigrid methods in the context of application to the incompressible Navier-Stokes equations, with numerical experiments used to illustrate the points made, in particular regarding the role of the restriction and prolongation procedures.
4.1 Background
The basic concept is the use of coarse grids to accelerate the asymptotic convergence rate of an inner iterative scheme. The inner iterative method is called the “smoother” for reasons to be made clear shortly. In the context of the present application to the incompressible Navier-Stokes equations, the single-grid pressure-correction method is the inner iterative scheme. Because the pressure-correction algorithm also uses inner iterations (to solve the systems of u, v, and p' equations), the multigrid method developed here actually has three nested levels of iterations.
A multigrid V cycle begins with a certain number of smoothing iterations on the
fine grid, where the solution is desired. Figure 4.1 shows a schematic of a V(3,2) cycle.
In this case three pressure-correction iterations are done first. Then residuals and
variables are restricted (averaged) to obtain coarse-grid values for these quantities.
The solution to the coarse-grid discretized equation provides a correction to the fine-
grid solution. Once the solution on the coarse grid is obtained, the correction is
interpolated (prolongated) to the fine grid and added back into the solution there.

Some post-smoothing iterations, two in this case, are needed to eliminate errors
introduced by the interpolation. Since it is usually too costly to attempt a direct
solution on the coarse grid, this smoothing-correction cycle is applied recursively,
leading to the V cycle shown.
The next section describes how such a procedure can accelerate the convergence
rate of an iterative method, in the context of linear equations. The multigrid scheme
for nonlinear scalar equations and the Navier-Stokes system of equations is then
described. Brandt [5] was the first to formalize the manner in which coarse grids
could be used as a convergence-acceleration technique for a given smoother. The
idea of using coarse grids to generate initial guesses for fine-grid solutions was around
much earlier.
The cost of the multigrid algorithm, per cycle, is dominated by the smoothing cost,
as will be shown in Chapter 5. Thus, with regard to the parallel run time per multigrid
iteration, the smoother is the primary concern. Also, with regard to the convergence
rate, the smoother is important. The single-grid convergence rate characteristics
of pressure-correction methods, the dependence on Reynolds number, flow problem,
and the convection scheme, carry over to the multigrid context. However, in the
multigrid method the smoother’s role is, as the name implies, to smooth the fine-grid
residual, which is a different objective than to solve the equations quickly. A smooth
fine-grid residual equation can be approximated accurately on a coarser grid. The
next section describes an alternate pressure-based smoother, and compares its cost
against the pressure-correction method on the CM-5.
Stability of multigrid iterations is also an important unresolved issue. There are
two ways in which multigrid iterations can be caused to diverge. First, the single-grid
smoothing iterations can diverge, for example if central-differencing is used there are
possibly stability problems if the Reynolds number is high. Second, poor coarse-grid

corrections can cause divergence if the smoothing is insufficient. In a sense this latter
issue, the scheme and intergrid transfer operators which prescribe the coordination
between coarse and fine grids in the multigrid procedure, is the key issue. In the next
section two “stabilization strategies” are described. Then, the impact of different
restriction and prolongation procedures on the convergence rate is studied in the
context of two model problems, lid-driven cavity flow and flow past a symmetric
backward-facing step. These two particular flow problems have different physical
characteristics, and therefore the numerical experiments should give insight into the
problem-dependence of the results.
4.1.1 Terminology and Scheme for Linear Equations
The discrete problem to be solved can be written $A^h u^h = S^h$, corresponding to some differential equation $L[u] = S$. The set of values $u^h$ is defined by

$\{u^h_{i,j}\} = u(ih, jh), \quad (i,j) \in ([0:N],[0:N]) = \Omega^h$   (4.1)

Similarly, $u^{2h}$ is defined on the coarser grid $\Omega^{2h}$ with grid spacing $2h$. The variable $u$ can be a scalar or a vector, and the operator $A$ can be linear or nonlinear.
For linear equations, the “correction scheme” (CS) is frequently used. A two-level multigrid cycle using CS accelerates the convergence of an iterative method (with iteration matrix $P$) by the following procedure:

Do $\nu$ fine-grid iterations:  $v^h \leftarrow P^{\nu} v^h$
Compute residual on $\Omega^h$:  $r^h = A^h v^h - S^h$
Restrict $r^h$ to $\Omega^{2h}$:  $r^{2h} = I_h^{2h} r^h$
Solve exactly for $e^{2h}$:  $A^{2h} e^{2h} = -r^{2h}$
Correct $v^h$ on $\Omega^h$:  $(v^h)_{new} = (v^h)_{old} + I_{2h}^h e^{2h}$
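A compact rendering of this two-level cycle for a 1-d Poisson problem is sketched below. This is our illustration, not code from the dissertation: it assumes weighted-Jacobi smoothing, full-weighting restriction, linear interpolation, homogeneous Dirichlet boundaries, grids with 2m+1 points, and a fixed number of coarse sweeps in place of the exact coarse solve.

import numpy as np

def smooth(u, S, h, nu, w=2.0/3.0):
    # Weighted Jacobi for (-u[i-1] + 2u[i] - u[i+1])/h^2 = S[i].
    for _ in range(nu):
        u[1:-1] = (1-w)*u[1:-1] + w*0.5*(u[:-2] + u[2:] + h*h*S[1:-1])
    return u

def residual(u, S, h):
    r = np.zeros_like(u)
    r[1:-1] = (-u[:-2] + 2*u[1:-1] - u[2:])/(h*h) - S[1:-1]
    return r                                  # r^h = A^h v^h - S^h

def two_grid(u, S, h, nu1=3, nu2=2):
    u = smooth(u, S, h, nu1)                  # pre-smoothing
    r = residual(u, S, h)
    r2 = np.zeros(len(u)//2 + 1)
    r2[1:-1] = 0.25*(r[1:-2:2] + 2*r[2:-1:2] + r[3::2])   # full weighting
    e2 = np.zeros_like(r2)
    e2 = smooth(e2, -r2, 2*h, 200)            # A^{2h} e^{2h} = -r^{2h} (approx.)
    e = np.zeros_like(u)
    e[::2] = e2                               # prolongate: copy coarse points...
    e[1::2] = 0.5*(e2[:-1] + e2[1:])          # ...and interpolate between them
    return smooth(u + e, S, h, nu2)           # correct, then post-smooth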

$I_h^{2h}$ and $I_{2h}^h$ symbolize the restriction and prolongation procedures. The quantity $v^h$ is the current approximation to the discrete solution $u^h$. The algebraic error is the difference between them, $e^h = u^h - v^h$. The discretization error is the difference between the exact solutions of the continuous and discrete problems, $e_{discr} = u - u^h$. The truncation error is obtained by substituting the exact solution into the discrete equation,

$\tau^h = A^h u - S^h = A^h u - A^h u^h$   (4.2)

The notation above follows Briggs [10].
The two-level multigrid cycle begins on the fine grid with $\nu$ iterations of the smoother. Standard iterative methods all have the “smoothing property,” which is that the various eigenvector-decomposed components of the solution error are damped at a rate proportional to their corresponding eigenvalues, i.e. the high-frequency errors are damped faster than the low-frequency (smooth) errors. Thus, the convergence rate of the smoothing iterations is initially rapid, but deteriorates as smooth error components, those with large eigenvalues, dominate the remaining error. The purpose of transferring the problem to a coarser grid is to make these smooth error components appear more oscillatory with respect to the grid spacing, so that the initial rapid convergence rate is obtained for the elimination of these smooth errors by coarse-grid iterations. Since the coarse grid $\Omega^{2h}$ has only 1/4 as many grid points as $\Omega^h$ (in 2-d), the smoothing iterations on the coarse grid are cheaper as well as more effective in reducing the smooth error components than on the fine grid.
In the correction scheme, the coarse-grid problem is an equation for the algebraic error,

$A^{2h} e^{2h} = -r^{2h}$   (4.3)

approximating the fine-grid residual equation for the algebraic error. To obtain the coarse-grid source term, $r^{2h}$, the restriction procedure $I_h^{2h}$ is applied to the fine-grid residual $r^h$,

$r^{2h} = I_h^{2h} r^h$   (4.4)

Eq. 4.4 is an averaging type of operation. Two common restriction procedures are straight injection of fine-grid values to the corresponding coarse-grid points, and averaging $r^h$ over a few fine-grid points near the corresponding coarse-grid point. The initial error on the coarse grid is taken as zero.
After the solution for $e^{2h}$ is obtained, this coarse-grid quantity is interpolated to the fine grid and used to correct the fine-grid solution,

$v^h \leftarrow v^h + I_{2h}^h e^{2h}$   (4.5)

For $I_{2h}^h$, common choices are bilinear or biquadratic interpolation.

In practice the solution for $e^{2h}$ is obtained by recursion on the two-level cycle; $(A^{2h})^{-1}$ is not explicitly computed. On the coarsest grid, direct solution may be feasible if the equation is simple enough. Otherwise a few smoothing iterations can be applied.
Recursion on the two-level algorithm leads to a “V cycle,” as shown in Figure 4.1. A simple V(3,2) cycle is shown. Three smoothing iterations are taken before restricting to the next coarser grid, and two iterations are taken after the solution has been corrected. The purpose of the latter smoothing iterations is to smooth out any high-frequency noise introduced by the prolongation. Other cycles can be envisioned; in particular, the W cycle is popular [6]. The cycling strategy is called the “grid schedule,” since it is the order in which the various grid levels are visited.
The most important consideration for the correction scheme has been saved for last, namely the definition of the coarse-grid discrete operator $A^{2h}$. One possibility is

to discretize the original differential equation directly on the coarse grid. However, this choice is not always the best one. The convergence-rate benefit from the multigrid strategy is derived from the particular coarse-grid approximation to the fine-grid discrete problem, not the continuous problem. Because the coarse-grid solutions and residuals are obtained by particular averaging procedures, there is an implied averaging procedure for the fine-grid discrete operator $A^h$ which should be honored to ensure a useful homogenization of the fine-grid residual equation. This issue is critical when the coefficients and/or dependent variables of the governing equations are not smooth [17].

For the Poisson equation, the Galerkin approximation $A^{2h} = I_h^{2h} A^h I_{2h}^h$ is the right choice. The discretized equation coefficients on the coarse grid are obtained by applying suitable averaging and interpolation operations to the fine-grid coefficients, instead of by discretizing the governing equation on a grid with a coarser mesh spacing. Briggs has shown, by exploiting the algebraic relationship between bilinear interpolation and full-weighting restriction operators, that initially smooth errors begin in the range of interpolation and finish, after the smoothing-correction cycle is applied, in the null space of the restriction operator [10]. Thus, if the fine-grid smoothing eliminates all the high-frequency error components in the solution, one V cycle using the correction scheme is a direct solver for the Poisson equation. The convergence rate of multigrid methods using the Galerkin approximation is more difficult to analyze if the governing equations are more complicated than Poisson equations, but significant theoretical advantages for application to general linear problems have been indicated [90].
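In 1-d the Galerkin construction can be written out explicitly. The sketch below is our illustration, with full weighting as $I_h^{2h}$ and linear interpolation as $I_{2h}^h$; for the Poisson stencil the triple product reproduces (up to rounding) the $2h$-grid discretization, which is one way to see why the Galerkin choice is the right one here.

import numpy as np

def galerkin_coarse(nc=7):
    nf = 2*nc + 1                          # fine interior points
    h = 1.0/(nf + 1)
    # Fine-grid 1-d Poisson operator (interior points only).
    Ah = (np.diag(2.0*np.ones(nf)) - np.diag(np.ones(nf-1), 1)
          - np.diag(np.ones(nf-1), -1)) / h**2
    P = np.zeros((nf, nc))                 # linear interpolation I_{2h}^h
    for k in range(nc):
        P[2*k, k] = 0.5
        P[2*k + 1, k] = 1.0
        P[2*k + 2, k] = 0.5
    R = 0.5 * P.T                          # full weighting I_h^{2h}
    return R @ Ah @ P                      # ~ tridiag(-1, 2, -1)/(2h)^2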

4.1.2 Full-Approximation Storage Scheme for Nonlinear Equations
The brief description given above does not bring out the complexities inherent in the application to nonlinear problems. There is only experience, derived mostly from numerical experiments, to guide the choice of the restriction/prolongation procedures and the smoother. Furthermore, the linkage between the grid levels requires special considerations because of the nonlinearity.

The correction scheme using the Galerkin approximation can be applied to the nonlinear Navier-Stokes system of equations [94]. However, in order to use CS for nonlinear equations, linearization is required. The best coarse-grid correction only improves the fine-grid solution to the linearized equation. Also, for complex equations, considerable expense is incurred in computing $A^{2h}$ by the Galerkin approximation. The commonly adopted alternative is the intuitive one, to let $A^{2h}$ be the differential operator $L$ discretized on the grid with spacing $2h$ instead of $h$. In exchange for a straightforward problem definition on the coarse grid, though, special restriction and prolongation procedures may be necessary to ensure the usefulness of the resulting corrections. Numerical experiments on a problem-by-problem basis are necessary to determine good choices for the restriction and prolongation procedures for Navier-Stokes multigrid methods.
The full-approximation storage (FAS) scheme [5] is preferred over the correction
scheme for nonlinear problems. The coarse-grid corrections generated by FAS improve
the solution to the full nonlinear problem instead of just the linearized one. The
discretized equation on the fine grid is, again,
$A^h u^h = S^h$   (4.6)

The approximate solution $v^h$ after a few fine-grid iterations defines the residual on the fine grid,

$A^h v^h = S^h + r^h$   (4.7)

A correction, the algebraic error $e^h_{alg} = u^h - v^h$, is sought which satisfies

$A^h(v^h + e^h_{alg}) = S^h$   (4.8)

The residual equation is formed by subtracting Eq. 4.7 from Eq. 4.8 and cancelling $S^h$,

$A^h(v^h + e^h) - A^h(v^h) = -r^h$   (4.9)

where the subscript “alg” is dropped for convenience. For linear equations the $A^h v^h$ terms cancel, leaving Eq. 4.3. Eq. 4.9 does not simplify for nonlinear equations. Assuming that the smoother has done its job, $r^h$ is smooth and Eq. 4.9 is the same as the coarse-grid residual equation

$A^{2h}(v^{2h} + e^{2h}) - A^{2h}(v^{2h}) = -r^{2h}$   (4.10)

at coarse-grid points.
The error $e^{2h}$ is to be found, interpolated back to $\Omega^h$ according to $e^h = I_{2h}^h e^{2h}$, and added to $v^h$ so that Eq. 4.8 is satisfied. The known quantities are $v^{2h}$, which is a “suitable” restriction of $v^h$, and $r^{2h}$, likewise a restriction of $r^h$. Different restrictions can be used for residuals and solutions. Thus, Eq. 4.10 can be written

$A^{2h}(I_h^{2h} v^h + e^{2h}) = A^{2h}(I_h^{2h} v^h) - I_h^{2h} r^h$   (4.11)

Since Eq. 4.11 is not an equation for $e^{2h}$ alone, one solves instead for the sum $u^{2h} = I_h^{2h} v^h + e^{2h}$. Expanding $r^h$ and regrouping terms, Eq. 4.11 can be written

$A^{2h}(u^{2h}) = A^{2h}(I_h^{2h} v^h) - I_h^{2h} r^h$   (4.12)

$\quad = [A^{2h}(I_h^{2h} v^h) - I_h^{2h}(A^h v^h) + I_h^{2h} S^h - S^{2h}] + S^{2h}$   (4.13)

$\quad = S^{2h}_{numerical} + S^{2h}$   (4.14)
Eq. 4.14 is similar to Eq. 4.6 except for the extra numerically derived source term. Once $u^{2h} = I_h^{2h} v^h + e^{2h}$ is obtained, the coarse-grid approximation to the fine-grid error, $e^{2h}$, is computed by first subtracting the initial coarse-grid solution $I_h^{2h} v^h$,

$e^{2h} = u^{2h} - I_h^{2h} v^h$   (4.15)

then interpolating back to the fine grid and combining with the current solution,

$v^h \leftarrow v^h + I_{2h}^h(e^{2h})$   (4.16)
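The FAS loop of Eqs. 4.12-4.16 can be condensed into a short sketch for a 1-d model nonlinear problem, $-u'' + u^2 = S$. This is our illustration, not the dissertation's solver: injection restricts the solution, full weighting restricts the residual, and a pointwise Newton update plays the role of the smoother (grids again have 2m+1 points).

import numpy as np

def A(u, h):
    Au = np.zeros_like(u)
    Au[1:-1] = (-u[:-2] + 2*u[1:-1] - u[2:])/(h*h) + u[1:-1]**2
    return Au

def smooth(u, S, h, nu):
    for _ in range(nu):
        for i in range(1, len(u)-1):           # pointwise Newton step
            f = (-u[i-1] + 2*u[i] - u[i+1])/(h*h) + u[i]**2 - S[i]
            u[i] -= f / (2.0/(h*h) + 2.0*u[i])
    return u

def fas_two_grid(u, S, h, nu1=3, nu2=2):
    u = smooth(u, S, h, nu1)
    r = A(u, h) - S                            # fine-grid residual r^h
    u2 = u[::2].copy()                         # I_h^{2h} v^h (injection)
    r2 = np.zeros_like(u2)
    r2[1:-1] = 0.25*(r[1:-2:2] + 2*r[2:-1:2] + r[3::2])
    S2 = A(u2, 2*h) - r2                       # coarse source, cf. Eq. 4.12
    w = smooth(u2.copy(), S2, 2*h, 50)         # approximate coarse solve
    e2 = w - u2                                # Eq. 4.15
    e = np.zeros_like(u)
    e[::2] = e2
    e[1::2] = 0.5*(e2[:-1] + e2[1:])           # prolongate, Eq. 4.16
    return smooth(u + e, S, h, nu2)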
4.1.3 Extension to the Navier-Stokes Equations
The incompressible Navier-Stokes equations are a system of coupled, nonlinear equations. Consequently, the FAS scheme given above for single nonlinear equations needs to be modified.

The variables $u_1$, $u_2$, and $u_3$ represent the Cartesian velocity components and the pressure, respectively. Corresponding subscripts are used to identify each equation's source term, residual, and discrete operator in the formulation below. The three equations for momentum and mass conservation are treated as if part of the following matrix equation,

$\begin{bmatrix} A_1^h & 0 & G_x^h \\ 0 & A_2^h & G_y^h \\ G_x^h & G_y^h & 0 \end{bmatrix} \begin{bmatrix} u_1^h \\ u_2^h \\ u_3^h \end{bmatrix} = \begin{bmatrix} S_1^h \\ S_2^h \\ S_3^h \end{bmatrix}$   (4.17)
The continuity equation source term is zero on the finest grid, $\Omega^h$, but on coarser grid levels it may not be zero. Thus, for the sake of generality it is included in Eq. 4.17.

Thus, for the $u_1$-momentum equation, Eq. 4.8 is modified to account for the pressure-gradient term $G_x^h u_3^h$, which is also an unknown. The approximate solutions are

$v_1^h$, $v_2^h$, and $v_3^h$, corresponding to $u_1^h$, $u_2^h$, and $u_3^h$. For the $u_1$-momentum equation, the approximate solution satisfies

$A_1^h v_1^h + G_x^h v_3^h = S_1^h + r_1^h$   (4.18)
The fine-grid residual equation corresponding to Eq. 4.9 is modified to

$A_1^h(v_1^h + e_1^h) - A_1^h(v_1^h) + G_x^h(v_3^h + e_3^h) - G_x^h(v_3^h) = -r_1^h$   (4.19)

which is approximated on the coarse grid by the corresponding coarse-grid residual equation,

$A_1^{2h}(v_1^{2h} + e_1^{2h}) - A_1^{2h}(v_1^{2h}) + G_x^{2h}(v_3^{2h} + e_3^{2h}) - G_x^{2h}(v_3^{2h}) = -r_1^{2h}$   (4.20)
The known terms are $v_1^{2h} = I_h^{2h} v_1^h$ (and likewise $v_3^{2h} = I_h^{2h} v_3^h$) and $r_1^{2h} = I_h^{2h} r_1^h$.
Expanding $r_1^h$ and regrouping terms, Eq. 4.19 can be written

$A_1^{2h}(u_1^{2h}) + G_x^{2h}(u_3^{2h}) = A_1^{2h}(I_h^{2h} v_1^h) + G_x^{2h}(I_h^{2h} v_3^h) - I_h^{2h} r_1^h$
$\quad = [A_1^{2h}(I_h^{2h} v_1^h) + G_x^{2h}(I_h^{2h} v_3^h) - I_h^{2h}(A_1^h v_1^h + G_x^h v_3^h) + I_h^{2h} S_1^h - S_1^{2h}] + S_1^{2h}$   (4.21)
$\quad = S^{2h}_{1,numerical} + S_1^{2h}$
Since Eq. 4.21 includes numerically derived source terms in addition to the physical ones, the coarse-grid variables are not in general the same as those that would be obtained from a discretization of the original continuous governing equations on the coarse grid.
The $u_2$-momentum equation is treated similarly, and the coarse-grid continuity equation is

$G_x^{2h} u_1^{2h} + G_y^{2h} u_2^{2h} = G_x^{2h}(I_h^{2h} u_1^h) + G_y^{2h}(I_h^{2h} u_2^h) - I_h^{2h} r_3^h$   (4.22)

The system of equations (Eq. 4.17) is solved by either the pressure-correction method (sequential) or the locally-coupled explicit method described in the next section.
In addition to the choice of the smoother, the specification of the coarse-grid
discrete problem (A2h) is critical to the convergence rate, and to the stability of
the multigrid iterations as well. In the description of the FAS scheme for the 2-d
incompressible Navier-Stokes equations presented earlier, no mention was made of
the coarse grid discretization. Intuitively, one would use the same discretization for
each of the terms as on the fine grid. For example, if the convection terms are central-
differenced on the fine grid, then central-differencing should be used on the coarse
grid, also. However, with such an approach numerical stability frequently becomes a
problem, particularly in high Reynolds number flow problems.
4.2 Comparison of Pressure-Based Smoothers
The single-grid convergence rate of pressure-correction methods for the incompressible Navier-Stokes equations depends strongly on the discretization of the nonlinear convection terms, the Reynolds number, and the importance of the pressure-velocity coupling in the fluid dynamics. The grid size and quality can also affect the convergence rate in curvilinear formulations. These issues carry over to the multigrid context and are complicated by the interplay between the evolving solutions on the multiple grid levels.
Two pressure-based methods are popular smoothers. The first is the pressure-correction method studied in Chapters 2 and 3, and the other is Vanka's locally-coupled explicit method [89], briefly introduced in Chapter 1. Much attention has been focused on comparing the performance of these two methods in the multigrid

context, i.e. as smoothers. The semi-implicit pressure-correction methods, due to
their implicitness, are better single-grid solvers.
In the locally-coupled explicit method, pressure and velocity are updated in a coupled manner instead of sequentially. A finite-volume implementation on a staggered grid is employed. The pressure and the velocities on the faces of each p control volume are updated simultaneously.
However the simultaneous update of pressure and velocity is only for one control
volume at a time. Underrelaxation is again necessary due to the decoupling between
control volumes. The control volumes are traversed by the lexicographical ordering
with the most recently updated u and v values used when available. Thus the original
method is called BGS (for “block Gauss-Seidel”). After one sweep of the grid each
u and v have been updated twice and each pressure once. A red-black ordering
suitable for parallel computation has been developed in this research. By analogy,
this algorithm is called BRB (block red-black).
For the $(i,j)$th pressure control volume, the continuity equation is written in terms of the velocity corrections needed to restore mass conservation:

$(u'_{i,j} - u'_{i+1,j})\,\Delta y + (v'_{i,j} - v'_{i,j+1})\,\Delta x = R^c_{i,j}$   (4.23)

where $R^c_{i,j}$ is the mass residual in the $(i,j)$th control volume. The notation follows the development in Chapter 2, except that now that pressure and velocity are coupled it is necessary to refer to the $(i,j)$ indices on occasion. In Figure 2.3, $u_w$ is $u_{i,j}$, $u_e$ is $u_{i+1,j}$, $v_s$ is $v_{i,j}$, and $v_n$ is $v_{i,j+1}$.
The discrete u-momentum correction equation for the east face of the $(i,j)$th p control volume is written

$(a_P^u)_{i+1,j}\, u'_{i+1,j} - p'_{i,j}\,\Delta y = \sum_{k=E,W,N,S} a_k^u u_k + (p_{i,j} - p_{i+1,j})\,\Delta y - (a_P^u)_{i+1,j}\, u_{i+1,j} = -R^u_{i+1,j}$   (4.24)

The discretized momentum correction equations for the three other faces of the pressure control volume are written analogously, giving a system of five equations in five unknowns,

$\begin{bmatrix} (a_P^u)_{i,j} & 0 & 0 & 0 & \Delta y \\ 0 & (a_P^u)_{i+1,j} & 0 & 0 & -\Delta y \\ 0 & 0 & (a_P^v)_{i,j} & 0 & \Delta x \\ 0 & 0 & 0 & (a_P^v)_{i,j+1} & -\Delta x \\ -\Delta y & \Delta y & -\Delta x & \Delta x & 0 \end{bmatrix} \begin{bmatrix} u'_{i,j} \\ u'_{i+1,j} \\ v'_{i,j} \\ v'_{i,j+1} \\ p'_{i,j} \end{bmatrix} = \begin{bmatrix} -R^u_{i,j} \\ -R^u_{i+1,j} \\ -R^v_{i,j} \\ -R^v_{i,j+1} \\ -R^c_{i,j} \end{bmatrix}$   (4.25)
The solution of this matrix equation is done by hand for $p'_{i,j}$:

$p'_{i,j} = -\,\dfrac{R^c_{i,j} + \frac{\Delta y\,R^u_{i,j}}{(a_P^u)_{i,j}} - \frac{\Delta y\,R^u_{i+1,j}}{(a_P^u)_{i+1,j}} + \frac{\Delta x\,R^v_{i,j}}{(a_P^v)_{i,j}} - \frac{\Delta x\,R^v_{i,j+1}}{(a_P^v)_{i,j+1}}}{(\Delta y)^2\left[\frac{1}{(a_P^u)_{i,j}} + \frac{1}{(a_P^u)_{i+1,j}}\right] + (\Delta x)^2\left[\frac{1}{(a_P^v)_{i,j}} + \frac{1}{(a_P^v)_{i,j+1}}\right]}$   (4.26)
The velocity corrections are found by back-substitution. The entire procedure is
summarized in the following algorithm.
BRB($u^*$, $v^*$, $p^*$; $\omega_{uv}$, $\omega_c$)
  Compute u coefficient $a_P^u(u^*, v^*)$ and residual $R^u_{i,j}$, $\forall (i,j)$
  Compute v coefficient $a_P^v(u^*, v^*)$ and residual $R^v_{i,j}$, $\forall (i,j)$
  Compute $p'_{i,j}$, back-substitute for $u'_{i,j}$, $u'_{i+1,j}$, $v'_{i,j}$, $v'_{i,j+1}$, $\forall (i,j)$ with $i + j$ odd
  Correct all u, v, and odd p:
    $\bar{u}_{i,j} = u^*_{i,j} + \omega_{uv}\, u'_{i,j}$
    (analogous corrections for $\bar{u}_{i+1,j}$, $\bar{v}_{i,j}$, $\bar{v}_{i,j+1}$, and $p_{i,j}$)
  Compute u coefficient $a_P^u(\bar{u}, \bar{v})$ and residual $R^u_{i,j}$, $\forall (i,j)$
  Compute v coefficient $a_P^v(\bar{u}, \bar{v})$ and residual $R^v_{i,j}$, $\forall (i,j)$
  Compute $p'_{i,j}$, back-substitute for $u'_{i,j}$, $u'_{i+1,j}$, $v'_{i,j}$, $v'_{i,j+1}$, $\forall (i,j)$ with $i + j$ even
  Correct all u, v, and even p:
    $u_{i,j} = \bar{u}_{i,j} + \omega_{uv}\, u'_{i,j}$
    (analogous corrections for $u_{i+1,j}$, $v_{i,j}$, $v_{i,j+1}$, and $p_{i,j}$)
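For concreteness, the local solve inside one such update is sketched below, using the sign conventions of Eqs. 4.25-4.26 as reconstructed above (our illustration; argument names are hypothetical). In BRB these formulas are evaluated simultaneously for all control volumes of one color, then the other.

def local_update(apu_w, apu_e, apv_s, apv_n,
                 Ru_w, Ru_e, Rv_s, Rv_n, Rc, dx, dy):
    # Eq. 4.26: closed-form pressure correction from the 5 x 5 system.
    num = (Rc + dy*Ru_w/apu_w - dy*Ru_e/apu_e
              + dx*Rv_s/apv_s - dx*Rv_n/apv_n)
    den = (dy*dy*(1.0/apu_w + 1.0/apu_e)
           + dx*dx*(1.0/apv_s + 1.0/apv_n))
    dp = -num / den
    # Back-substitution into the four momentum correction equations.
    du_w = (-Ru_w - dy*dp) / apu_w      # u'_{i,j}
    du_e = (-Ru_e + dy*dp) / apu_e      # u'_{i+1,j}
    dv_s = (-Rv_s - dx*dp) / apv_s      # v'_{i,j}
    dv_n = (-Rv_n + dx*dp) / apv_n      # v'_{i,j+1}
    return dp, du_w, du_e, dv_s, dv_n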

In general, the convergence rate in the multigrid context is different between SIMPLE and BRB. Linden et al. [50] stated a preference for the locally-coupled explicit smoother rather than pressure-correction methods. The argument the authors gave was that the local coupling of variables is better suited to produce local smoothing of residuals, i.e. faster resolution of the local variations in the solution. This is believed to allow a more accurate coarse-grid approximation of the fine-grid problem. Similar reasoning appears to have been applied in the original development [89], by Ferziger and Peric [22], and by Ghia et al. [28]. Linden et al. [50] did a simplified Fourier analysis of locally-coupled smoothing for the Stokes equations and confirmed good smoothing properties of the locally-coupled explicit method. Shaw and Sivaloganathan [71] have found that SIMPLE (with the SLUR solver) also has good smoothing properties for the Stokes equations, assuming that the pressure-correction equation is solved completely during each iteration. Thus there is some analytical evidence that both pressure-correction methods and the locally-coupled explicit technique are suitable as multigrid smoothers. However, the analytical work is oversimplified; numerical comparisons are needed on a problem-by-problem basis.
Sockol [80] has compared the performance of BGS, two line-updating variations on BGS, and the SIMPLE method with successive line-underrelaxation for the inner iterations. Three model flow problems with different physical characteristics and varying grid aspect ratios were tested: lid-driven cavity flow, channel flow, and a combined channel/cavity flow (“open cavity”). In terms of work units, Sockol found that all four smoothers were competitive for lid-driven cavity flow over a range of Re from 100 to 5000. For the developing channel flow, BGS and its line-updating variants converged faster than SIMPLE on square grids, but as the grid aspect ratio increased, SIMPLE became competitive.

Brandt and Yavneh [8] have developed a line-relaxation-based multigrid method
which handles pressure and velocity sequentially. Good convergence rates were observed
for "entering-type" flow problems, in which the flow has a dominant direction
and is aligned with grid lines. Line-relaxation has the effect of providing non-isotropic
error smoothing properties to match the physics of the problem. Wesseling [91] analyzed
several line-relaxation methods, and concluded that alternating line-Jacobi
relaxation had robust smoothing properties and, somewhat unexpectedly, that it was
a better choice than SLUR.
For pressure-based smoothers, numerical experimentation apparently has created
some intuition regarding the relative performance of sequential and locally-coupled
smoothers in model flow problems, but many of the issues have not been investigated
systematically. Further research perhaps should not be directed toward the goal of
picking one method over the other. General conclusions are unlikely because the
convergence rate is dependent on the particular flow problem. Instead, both types of
smoothers should continue to be implemented and tested in the multigrid context,
not to determine a preference but rather to build understanding for their application
to complex flow problems.
The costs per iteration of BRB and SIMPLE are comparable on serial computers.
If ν_u = ν_v = 1 and ν_c = 4 successive line-underrelaxation inner iterations are used,
SIMPLE costs about 30% more per iteration than BGS [80]. BGS and BRB are
identical in terms of run time on a serial computer.
The relative cost is different on parallel computers, though. Figures 4.2, 4.3 and
4.4 compare the parallel run time per iteration of BRB with SIMPLE on a 128-VU
CM-5 (32 SPARC nodes, each controlling 4 vector units), for a fixed number of
iterations (500) of the single-grid BRB and SIMPLE solvers. The convection terms
are central-differenced and, for SIMPLE, point-Jacobi inner iterations are used with
ν_u = ν_v = 3 and ν_c = 9. The problem size is given in terms of the virtual processor
ratio; the largest problem size in Figures 4.2, 4.3 and 4.4 is 10^6 grid points.
Figure 4.2 indicates that SIMPLE and BRB have virtually the same cost per 500
iterations and that this cost scales linearly with the problem size on a fixed number
of processors. Figure 4.3 shows that BRB requires almost twice as much time on
coefficient computations, but only about half as much on solving for the pressure
changes and back-substituting. The coefficient computation cost would be exactly
twice that of SIMPLE except for the small contribution from the computation of the
p'-equation coefficients in the SIMPLE procedure.
Figure 4.4 shows the amount of time spent on computation and interprocessor
communication. The interprocessor communication cost is relatively small compared
to the computation cost. Also, the sum of the two is less than the total elapsed
time shown in Figure 4.2, due to front-end-to-processor communication. The ratio of
the computation time to the total elapsed time is essentially the parallel efficiency.
Thus, the results shown in Figures 4.2-4.4 are summarized by the point-Jacobi curve
in Figure 3.2. Furthermore, the breakdown into communication and computation is
approximately the same for both SIMPLE and BRB, so in terms of efficiency, similar
characteristics are expected for BRB as were observed in Chapter 3 for SIMPLE.
In Figures 4.2-4.4 the SIMPLE timings would be different if line-Jacobi inner iterations
were used instead of point-Jacobi inner iterations. The parallel efficiency is
reduced and the actual parallel run time is greater. One line-Jacobi inner iteration
(consisting of two tridiagonal solves, one treating the unknowns implicitly along horizontal
lines and the other along vertical lines) using the cyclic reduction method
introduced in Chapter 3 takes about 8-10 times as long as one point-Jacobi iteration
on the CM-5. Line-Jacobi inner iterations are therefore not preferred over point-Jacobi
inner iterations for use in the SIMPLE algorithm unless the benefit to the
convergence rate is substantial.
The line-updating variants of BRB (see [80, 87]) compare even less favorably with
BRB than the line-Jacobi SIMPLE method does with the point-Jacobi SIMPLE
method; they are not suitable for SIMD computation. The line-updating
variations on BGS couple pressures and velocities between control volumes along a
line as well as within each control volume. By contrast, in sequential pressure-based
methods, line-iterative methods are used within the context of solving the individual
systems of equations, so only a single variable is involved.
On the staggered grid, the unknowns which are to be updated simultaneously in
the line-variant of BRB are, for a constant-j line, $\{p_{2,j}, u_{3,j}, p_{3,j}, \ldots, u_{ni-1,j}, p_{ni-1,j}\}$.
To set up the tridiagonal system of equations for solving for these unknowns simultaneously
requires coefficient and source-term data to be moved from arrays which
have the same layout as the u and p arrays. But this data must be moved to an
array(s) which has a longer dimension in the i-direction. Instead of having dimension
ni, the array which contains the unknowns, diagonals, and right-hand sides has
dimension 2ni. The elements 1:ni for the constant-j line of u and the u coefficient
arrays, $(u, a_P, a_E, a_W, a_N, a_S, b^u)$, must be moved into positions 1:2ni:2. Similar data
movement is required for the p coefficients and data. Thus, "SEND"-type communication
will be generated during each iteration to set up the tridiagonal system of
equations along the lines. This type of communication is prohibitively expensive in
an algorithm where all the other operations are relatively fast and efficient.
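The data motion just described can be pictured with a small serial sketch; the array and routine names are illustrative assumptions. On a SIMD machine each strided assignment below maps to general ("SEND"-type) interprocessor communication, because the source and destination elements live on different processors.

      SUBROUTINE ILEAVE( U, P, W, NI, J )
C     Sketch of the setup data motion for the line-variant of BRB:
C     copy the u and p unknowns on a constant-j line into the
C     interleaved positions 1:2*NI:2 and 2:2*NI:2 of a work array.
      INTEGER NI, J, I
      REAL*8 U(NI,*), P(NI,*), W(2*NI)
      DO 10 I = 1, NI
         W(2*I-1) = U(I,J)
         W(2*I)   = P(I,J)
 10   CONTINUE
      END

The same interleaving must be repeated for each of the diagonal and right-hand-side arrays, which is why the setup cost dominates.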
Thus, if line-relaxation smoothing is required to solve a particular flow problem, for
either a single-grid or a multigrid computation on the CM-5, the pressure-correction
methods should be used. Otherwise, either BRB or SIMPLE-type methods can be
used, if time per iteration is the only consideration. With ν_u = ν_v = 3, ν_c = 9, and
point-Jacobi inner iterations, SIMPLE and BRB have essentially the same parallel
cost and efficiency.
4.3 Stability of Multigrid Iterations
It is well known that central-difference discretizations of the convection terms in
the Navier-Stokes equations may be unstable if cell Peclet numbers are greater than
two, depending on the boundary conditions [73]. The coarse-grid level(s) have higher
cell Peclet numbers. Consequently, multigrid iterations may diverge, driven by the
divergence of smoothing iterations on coarse grids, if central-differencing is used. The
convection terms on coarse grids may need to be upwinded for stability. However,
second-order accuracy is usually desired on the finest grid. The "stabilization strategy"
is the approach used to provide stability of the coarse-grid discretizations while
simultaneously providing second-order accuracy for the fine-grid solution.
The naive stabilization strategy is to simply discretize the convection terms with
first-order upwinding on the coarse-grid levels and with second-order central-differencing
on the finest grid. Unfortunately, the naive approach does not work: there is a "mismatch"
between the solutions on neighboring levels if different convection schemes
are employed, resulting in poor coarse-grid corrections. In practice divergence usually
results. The coarse-grid discretization needs to be consistent with the fine-grid discretization
so that an accurate approximation of the fine-grid residual equation is
generally possible.
In the present work a "defect-correction" stabilization strategy is employed as
in [80, 81, 87, 89]. The convection terms on all coarse grids are discretized by first-order
upwinding. The convection terms on the finest grid are also upwinded, but a
source-term correction is applied which allows second-order central-difference accuracy
to be obtained when the multigrid iterations have converged.
Another approach is to use a stable second-order accurate convection scheme,
e.g. second-order upwinding, on all grid levels [74]. Shyy and Sun [74] have used
different convection schemes on all grid levels and compared the convergence rates.
Central-differencing, first-order upwinding, and second-order upwinding were tested
for Re = 100 and Re = 1000 lid-driven cavity flow problems. Comparable convergence
rates were obtained for all three convection schemes, whereas for single-grid
computations there are relatively large differences in the convergence rates. Central-differencing
was unstable for the Re = 1000 case, but a hybrid strategy with second-order
upwinding on the coarsest three grid levels and central-differencing on the finer
grid levels remedied the problem without deteriorating the convergence rate. Further
study of this issue is conducted in Chapter 5, in which the convergence rate and stability
characteristics of second-order upwinding on all grid levels are contrasted with
the defect-correction strategy.
A third possibility is simply to add extra numerical viscosity to the physical
viscosity on coarse grids. This technique has been investigated by Fourier analysis
for a model linear convection-diffusion equation in [93]. The authors' best strategy
was the one in which the amount of numerical viscosity was taken to be proportional
to the grid spacing on the next (finer) multigrid level. For the Navier-Stokes equations
this brute-force approach is not expected to perform very well, because the solutions on
the fine grids are frequently not just a smooth continuation of the lower-Reynolds-number
flow problems being solved on the coarse grid levels. Rather, fundamental
changes in the fluid dynamics occur as the Reynolds number increases.

4.3.1 Defect-Correction Method
In the defect-correction approach, the discretized equations for a variable $\phi$ are
derived as follows. In general, the equations have the form

\[ a^{ce}_P \phi_P = a^{ce}_E \phi_E + a^{ce}_W \phi_W + a^{ce}_N \phi_N + a^{ce}_S \phi_S + b_P, \quad (4.27) \]

where the superscript "ce" denotes central-differencing of the convection terms.
To form the discrete defect-correction equation, the corresponding first-order upwinded
discrete equation is added to and subtracted from Eq. 4.27 and rearranged to give

\[
a^{u1}_P \phi_P = a^{u1}_E \phi_E + a^{u1}_W \phi_W + a^{u1}_N \phi_N + a^{u1}_S \phi_S + b_P
+ \left[ (a^{u1}_P - a^{ce}_P)\phi_P - (a^{u1}_E - a^{ce}_E)\phi_E - (a^{u1}_W - a^{ce}_W)\phi_W - (a^{u1}_N - a^{ce}_N)\phi_N - (a^{u1}_S - a^{ce}_S)\phi_S \right], \quad (4.28)
\]

where the superscript "u1" denotes first-order upwinding of the convection terms.
The term in brackets is equal to the difference in residuals, so Eq. 4.28 can be written

\[ a^{u1}_P \phi_P = a^{u1}_E \phi_E + a^{u1}_W \phi_W + a^{u1}_N \phi_N + a^{u1}_S \phi_S + b_P + [r^{u1} - r^{ce}]. \quad (4.29) \]

To obtain the updated solution, the difference in residuals is lagged. Thus Eq. 4.29
for the solution at iteration counter "n+1," with the residuals evaluated at iteration
counter "n," is written

\[ a^{u1}_P \phi^{n+1}_P = a^{u1}_E \phi^{n+1}_E + a^{u1}_W \phi^{n+1}_W + a^{u1}_N \phi^{n+1}_N + a^{u1}_S \phi^{n+1}_S + b_P + [r^{u1} - r^{ce}]^n. \quad (4.30) \]

Moving the first five terms on the right-hand side to the left-hand side, Eq. 4.30 can
be rewritten concisely as

\[ [r^{u1}]^{n+1} = [r^{u1}]^n - [r^{ce}]^n, \quad (4.31) \]

in which it is easily seen that satisfaction of the second-order central-difference
discretized equations, $r^{ce} \to 0$, is recovered when $[r^{u1}]^{n+1}$ is approximately equal
to $[r^{u1}]^n$.
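A minimal serial sketch of the lagged source-term evaluation implied by Eqs. 4.29-4.31 follows. The coefficient storage scheme (center coefficient at index 0, then E, W, N, S at 1 through 4) and all names are illustrative assumptions, not the dissertation's code; the residual convention matches Eq. 4.29.

      SUBROUTINE DCSRC( PHI, B, AU1, ACE, SDC, NI, NJ )
C     Sketch of the lagged defect-correction source term of Eq. 4.31.
C     AU1 and ACE hold the first-order upwind and central-difference
C     coefficients: index 0 is the center (P), 1..4 are E, W, N, S.
C     Residual convention: r = aP*phiP - sum(ak*phik) - bP.
      INTEGER NI, NJ, I, J
      REAL*8 PHI(NI,NJ), B(NI,NJ), SDC(NI,NJ)
      REAL*8 AU1(NI,NJ,0:4), ACE(NI,NJ,0:4), RU1, RCE
      DO 10 J = 2, NJ-1
         DO 10 I = 2, NI-1
            RU1 = AU1(I,J,0)*PHI(I,J)
     &          - AU1(I,J,1)*PHI(I+1,J) - AU1(I,J,2)*PHI(I-1,J)
     &          - AU1(I,J,3)*PHI(I,J+1) - AU1(I,J,4)*PHI(I,J-1)
     &          - B(I,J)
            RCE = ACE(I,J,0)*PHI(I,J)
     &          - ACE(I,J,1)*PHI(I+1,J) - ACE(I,J,2)*PHI(I-1,J)
     &          - ACE(I,J,3)*PHI(I,J+1) - ACE(I,J,4)*PHI(I,J-1)
     &          - B(I,J)
C           Source term added to the upwind equation at step n+1.
            SDC(I,J) = RU1 - RCE
 10   CONTINUE
      END

Note that the source term $b_P$ cancels in the difference, so only the coefficient differences actually matter; the two residuals are computed explicitly here for clarity.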
Table 4.1 compares the convergence rates for single-grid SIMPLE computations
using four popular convection schemes, for a lid-driven cavity flow problem. The
purpose is to gain some intuition regarding the convergence properties of the defect-correction
scheme. For all the cases presented in the table, the grid size was 81 x 81.
The table gives the number of iterations required to converge both of the momentum
equations to the level $\|r^u\| < 10^{-5}$, where the $L_1$ norm, divided by the number
of grid points, is used.
The inner iterative procedure for computing an approximate solution to the u, v,
and p' systems of equations, during the course of the outer iterations of the SIMPLE
algorithm, is listed in column 2. In the line-Jacobi method, all the horizontal lines
are solved simultaneously, followed by the vertical lines, during a single inner iteration.
The SLUR procedure (the same technique as in Chapter 2) also alternates between
horizontal and vertical lines. In addition, the grid lines are swept one at a time in the
direction of increasing i or j, in the Gauss-Seidel fashion, instead of all at once as in
the line-Jacobi method. The number of inner iterations for each governing equation
was ν_u = ν_v = 3 and ν_c = 9 in the Re = 1000 problem. These parameters were
increased to 5, 5, and 10 for the Re = 3200 flow. The inner iteration damping factor
for the line-Jacobi iterative method was 0.7.
For the Re = 1000 cases, the SIMPLE relaxation factors are 0.4 for the momentum
equations and 0.7 for the pressure. The convergence rate of defect-correction
iterations is not quite as good as central-differencing or first-order upwinding, but it is
slightly better than second-order upwinding.
                                          Convection Scheme
Flow Problem       Inner Iterative   First-order   Defect        Central         Second-order
                   Method            Upwinding     Correction    Differencing    Upwinding
Re = 1000 Cavity   Point-Jacobi        2745          3947          1769            4419
Re = 1000 Cavity   Line-Jacobi         2442          3497          1543            3610
Re = 1000 Cavity   SLUR                2433          3482          1534            3568
Re = 3200 Cavity   Point-Jacobi       16526        > 20000        12302          > 20000
Re = 3200 Cavity   Line-Jacobi        16462        > 20000        12032          > 20000
Re = 3200 Cavity   SLUR               16458        > 20000        11985          > 20000

Table 4.1. Number of single-grid SIMPLE iterations to converge to $\|r^u\| < 10^{-5}$, for
the lid-driven cavity flow on an 81 x 81 grid. The $L_1$ norm is used, normalized by
the number of grid points.
This result is anticipated for cases where central-differencing does not have stability
problems, since the defect-correction discretization is a less-implicit version of
central-differencing. Likewise one should expect the convergence rate of SIMPLE
with the defect-correction convection scheme to be slightly slower than with the
first-order upwind scheme, due to the presence of source terms which vary with the
iterations. The method (line-Jacobi, point-Jacobi, SLUR) used for inner iterations
has no influence on the convergence rate for either Reynolds number tested. From
experience it appears that the lid-driven cavity flow is unusual in this regard. For
most problems the inner iterative procedure makes a significant difference in the
convergence rate.
For the Re = 3200 cases, the relaxation factors were reduced until a converged
solution was possible using central-differencing. Then these relaxation factors, 0.1
for the momentum equations and 0.3 for pressure, were used in conjunction with
the other convection schemes. Actually, in the lid-driven cavity flows, the pressure
plays a minor role in comparison with the balance between convection and diffusion.
Consequently, the pressure relaxation factor can be varied between 0.1 and 0.5 with
negligible impact on the convergence rate. The convergence rate is very sensitive to
the momentum relaxation factor, however. The Re = 3200 cavity flow is hard to
converge, and neither the defect-correction nor the second-order upwind scheme succeeds
for these relaxation factors. Second-order central-differencing does not normally
look this good either. The lid-driven cavity flow is a special case for which central-difference
solutions can be obtained for relatively high Reynolds numbers, due to
the shear-driven nature of the flow and the relative unimportance of the pressure
gradient. For the Re = 3200 case, the convergence paths of the four convection
schemes tested are shown in Figure 4.5. None of the convection schemes is diverging,
but the amount of smoothing appears to be insufficient to handle the source terms
in the second-order upwind and defect-correction schemes for this Reynolds number.
4.3.2 Cost of Different Convection Schemes
There was initially some concern that the source-term evaluations in the defect-correction
and/or second-order upwind convection schemes might be expensive in
terms of the parallel run time. In light of Figure 4.3, it is of interest to know whether
the cost per iteration is significantly increased, as this consequence might lead one
to favor one convection scheme over another for considerations of run time, if both
have satisfactory convergence rate characteristics. Figure 4.6 compares the cost of
computing the coefficients of the discrete u, v, and p' equations, for three convection
schemes. The timings were obtained on a 32-node (128 vector unit) CM-5 for 500
SIMPLE iterations.
Since the smoother and the coefficient computations are the most time-consuming
tasks in the SIMPLE algorithm, the cost of the inner iterations (the “solver”) is
included for comparison purposes (the solid line). There are 15 point-Jacobi inner
iterations per outer iteration, distributed 3 each on the momentum equations and 9
on the p'-system of equations.

The timings were obtained over a range of problem sizes, for 500 SIMPLE iterations.
The x-axis in Figure 4.6 plots problem size in terms of the virtual processor
ratio VP. VP is preferred over the number of grid points so that the results can
be carried over to CM-5s with more processors. The coefficient cost scales linearly
with problem size and, with the defect-correction scheme, requires about the same
time as solving the equations. If more inner iterations were used, or the more costly
line-Jacobi method were used, the fraction of the overall run time due to the computation
of coefficients would decrease. The linear scaling with VP is possible due to the
uniform boundary coefficient computation implementation, discussed in Chapter 3.
The figure also shows that second-order upwinding of the convection terms costs
more than the other schemes, by approximately 50%. Additional testing has shown
that the first-order upwind, hybrid, central-difference, and defect-correction schemes
all use roughly the same amount of time.
More details are shown in Figure 4.7, which breaks down the time spent computing
coefficients into computation and interprocessor communication. Because the
difference stencils are compact, only nearest-neighbor processing elements need to
communicate in the calculation of the equation coefficients. These are "NEWS"-type
communications on the CM-5. In the present implementation, the coefficient
computations for the momentum equations require 9 NEWS communications for the
defect-correction, central-differencing, and first-order upwind schemes. Second-order
upwinding requires at least 13 NEWS communications. In the present implementation
17 communication operations are needed, because the formulation supports
nonuniform grids and therefore some geometric quantities need to be communicated
in addition to the nearby velocities. The additional NEWS communication is apparent
in Figure 4.7. Similarly, the second-order upwind scheme involves more computation
than the other schemes.

Coincidentally, the additional computation and interprocessor communication of
the second-order upwind convection scheme offset each other in terms of their effect
on the parallel efficiency. With either convection scheme the trend is essentially the
same, as shown in Figure 4.8. Figure 3.2 gave the variation of E with VP for central-differencing.
4.4 Restriction and Prolongation Procedures
The discretization of the convection terms on coarse grids is a key issue, because the
coarse-grid problem must be a reasonable approximation to the fine-grid discretized
equation in order to obtain good corrections. In addition, for the formulation given
in the background section, one must also say how the coarse-grid source terms are
computed, and how the corrections are interpolated to the fine grid. The restriction
and prolongation procedures affect both the stability and the convergence rate. In
this section, three restriction procedures and two prolongation procedures are
compared on two model problems with different physical characteristics to assess the
effect of the intergrid transfer procedures on the multigrid convergence rate.
For finite-volume discretizations, conservation is the natural restriction procedure
for the equation residuals, because the terms in the discrete equations represent
integrals over an area. The method of integration for source terms determines the
actual restriction procedure. For piecewise constant treatment of source terms in a
cell-centered finite-volume discretization, the mass residual in a coarse-grid control
volume is the sum of the mass residuals in the four fine-grid control volumes which
comprise the coarse-grid control volume. This restriction procedure is used for the
residuals of the continuity equation in every case tested.
If the mass residual is summed, and u and v are restricted by cell-face averaging
(described below), the right-hand side of Eq. 4.22 is identically zero [80], which implies
that the velocity field on the coarse grids also satisfies the continuity equation, in addition
to the velocity field on the finest grid. However, it is not necessary to have identically
zero coarse-grid source terms, even in the continuity equation.
Restriction procedure "3" obtains the initial coarse-grid solutions not by restricting
the solutions, but instead by taking the most recently computed values on the
coarse grid. These values will be from the previous multigrid cycle. The u-momentum
equation residuals are summed over the six fine-grid u control volumes which comprise
the coarse-grid u control volume under consideration. Only half the contribution is
taken from the cell-face neighbor u control volumes, due to the staggered grid.
For the restriction procedure denoted "1," u, v, and the momentum equation
residuals are restricted by cell-face averaging. Cell-face averaging refers to the averaging
of the two fine-grid u velocity components immediately above and below the
coarse-grid u location, which are on the same coarse-grid p control volume face. Similar
treatment is applied to v. The coarse-grid pressures are obtained by averaging
the four nearest fine-grid pressures.
The restriction procedure "2" indicates a weighted average of six fine-grid u velocity
components, the cell-face ones and their nearest neighbors on either side. The
cell-face fine-grid u velocity components contribute twice as much as their neighbors.
Similar treatment is applied for v, and for the momentum equation residuals. The
coarse-grid pressures are obtained by averaging the four nearest fine-grid pressures,
as in restriction procedure 1.
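The following serial sketch illustrates two of the operations above on a uniform grid: piecewise-constant summation of mass residuals (used in every case) and cell-face averaging of u (restriction procedure 1). The fine/coarse index mapping and the array names are illustrative assumptions.

      SUBROUTINE RSTRCT( RCF, UF, RCC, UC, NIC, NJC )
C     Sketch of restriction on a uniform staggered grid: the
C     coarse-cell mass residual is the sum of the four fine-cell
C     residuals, and coarse u is the average of the two fine u
C     values lying on the same coarse control-volume face.
      INTEGER NIC, NJC, I, J, I2, J2
      REAL*8 RCF(2*NIC,2*NJC), UF(2*NIC,2*NJC)
      REAL*8 RCC(NIC,NJC), UC(NIC,NJC)
      DO 10 J = 1, NJC
         DO 10 I = 1, NIC
            I2 = 2*I - 1
            J2 = 2*J - 1
            RCC(I,J) = RCF(I2,J2)   + RCF(I2+1,J2)
     &               + RCF(I2,J2+1) + RCF(I2+1,J2+1)
            UC(I,J)  = 0.5D0*( UF(I2,J2) + UF(I2,J2+1) )
 10   CONTINUE
      END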
For the prolongation procedures, "1" and "2" indicate bilinear and biquadratic
interpolation, respectively. The bilinear interpolation procedure is identical to that
used by Shyy and Sun [74], in which the two nearest coarse-grid corrections along a
line x = constant (for u) are used to compute the correction at the location of the
fine-grid u velocity component, by linear interpolation. Similar treatment is adopted
for v corrections. To compute the corrections on the "in-between" fine-grid lines, the
available fine-grid corrections are interpolated linearly. Corrections for pressure are
interpolated linearly from the four nearest coarse-grid values.
The biquadratic interpolation procedure, "2," is similar to the procedure used
by Bruneau and Jouron [12]. It finishes in exactly the same way as the bilinear interpolation,
but is preceded by a quadratic (instead of linear) interpolation in the
y-direction, and an averaging in the x-direction. Thus, the three nearest correction
quantities on the coarse grid (above and below the fine-grid u location) are used to
interpolate in the y-direction for a correction located at the position of the fine-grid
u velocity component. After this y-direction interpolation there are two corrections
defined on each face of the coarse-grid u control volumes, at the locations corresponding
to the locations of the fine-grid u velocity components. These are injected to give
the fine-grid corrections at these points after a weighted averaging in the x-direction.
For example, on a uniform grid this pre-injection averaging goes like

\[ u_{c,corr}(I,J) = \left( u_{c,corr}(I+1,J) + 2\,u_{c,corr}(I,J) + u_{c,corr}(I-1,J) \right)/4, \quad (4.32) \]

where $u_{c,corr}$ and the capitalized indices indicate that the correction quantities are
still defined on the coarse grid; they are positioned to correspond with the fine-grid
u locations. After the averaged corrections are injected to the fine grid, the fine-grid
corrections are defined along every other line x = constant. The corrections
on "in-between" lines are linearly interpolated from the injected, averaged corrections.
Similar treatment is adopted for the v corrections. Corrections for pressure
are interpolated biquadratically from the nine nearest coarse-grid values.
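As a concrete illustration of the one-dimensional stage of these interpolations, the sketch below linearly interpolates coarse-grid corrections to the two fine-grid locations inside each coarse interval, using the 3/4-1/4 weights of a uniform cell-centered arrangement. The names, weights, and index mapping are illustrative assumptions rather than the exact procedure of [74].

      SUBROUTINE PLONG1( CC, CF, NC )
C     Sketch of the 1-d stage of bilinear prolongation: each coarse
C     interval contains two fine-grid locations, at 1/4 and 3/4 of
C     the interval, which receive linearly interpolated corrections
C     from the two nearest coarse-grid values.
      INTEGER NC, JC, JF
      REAL*8 CC(NC), CF(2*NC)
      DO 10 JC = 1, NC-1
         JF = 2*JC
         CF(JF)   = 0.75D0*CC(JC) + 0.25D0*CC(JC+1)
         CF(JF+1) = 0.25D0*CC(JC) + 0.75D0*CC(JC+1)
 10   CONTINUE
      END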
Table 4.2 below compares the various intergrid transfer procedures in terms of
the work units required to reach a prescribed convergence tolerance on the finest grid
level. The notation (p,r) indicates the number of the prolongation and restriction
procedures adopted. The convergence tolerance on the fine grid is prescribed by
an estimate of the truncation error of the fine-grid discretization, which is derived
in Chapter 5. The criterion is typically not very stringent, so the table results best
reflect differences in the initial convergence rate instead of the asymptotic convergence
rate.
                 Number of work units to converge
            Re = 1000 Cavity          Re = 400 Back-Step
(p,r)       V(2,1)      V(3,2)        V(2,1)      V(3,2)
(1,1)        19.0        23.6          123.2        95.7
(2,1)        21.8        28.5          110.0       166.6
(1,2)        16.9        24.4          168.9       181.7
(2,2)        20.2        20.5          263.5       122.4
(1,3)        12.7        13.6          div          51.8
(2,3)        14.1        13.8          239.5        59.6

Table 4.2. The effect of different restriction and prolongation procedures on the convergence
rate of the pressure-correction multigrid algorithm, for a 7-level cavity flow
problem with a 322 x 322 fine grid, and for a 5-level symmetric backward-facing step
flow with a 322 x 82 fine grid. The defect-correction approach is used.
Numerical experiments with the number of pre- and post-smoothing iterations
have shown that for the cavity flow, V(2,1) cycles provide enough smoothing. V(3,2)
cycles are needed for the symmetric backward-facing step flow computation. With
less smoothing, the number of work units to reach convergence generally increases
even though the number of work units per cycle is smaller.
The restriction procedure used appears to be very important to the convergence
rate in either flow problem. Restriction procedure 3 appears to perform better
than 1 or 2. The discussion presented earlier suggested this result. However, since the
residuals are summed instead of averaged, they are typically larger and show more spatial
variation. As a result, more smoothing iterations are needed to ensure stability
of the multigrid iterations. For r = 3, it appears that the bilinear interpolation
procedure (p = 1) converges slightly faster than the biquadratic procedure.

The performance of the other restriction procedures appears to depend on the
prolongation procedure. In both problems the best results for r = 1 or r = 2 are
obtained when the corresponding (p = 1 or p = 2) prolongation procedure is used.
In the backward-facing step flow, the results for cell-face averaging (r = 1) are better
than those for the six-point averaging (r = 2) by a significant amount. The same is true
for the cavity flow, but to a lesser degree. The effect of Reynolds number for each flow
problem should be considered in future work.
Figures 4.9 and 4.10 give a different look at the relative performance of restriction
procedures 1 and 3: cell-face averaging of solutions and residuals contrasted
with summation of residuals only. The focus is on the asymptotic convergence rate,
as opposed to the initial convergence rate considered in Table 4.2. The u-momentum
equation average residual (the $L_1$ norm divided by the number of grid points) is
plotted on each grid level against work units. V(3,2) cycles and bilinear interpolation
(p = 1) were used for the symmetric backward-facing step flow calculation.
The computations have been carried far beyond the point at which convergence
was declared in Table 4.2. The dashed line shows the estimated truncation error on
the fine grid used to declare convergence for the table. Brandt and Yavneh have
argued that this level of convergence should be sufficient [9]. Further multigrid cycles
reduce the algebraic error but not necessarily the differential error.
With restriction procedure 1, Figure 4.9, the initial multigrid convergence rate is
rapid, but levels off significantly after about 100 work units. This apparently slow
asymptotic multigrid convergence rate is still much better than the single-grid convergence
rate for this flow problem, indicating that some benefit is being obtained
from the coarse-grid corrections with restriction procedure 1. The corrections are
evidently not as large as with restriction procedure 3 (Figure 4.10), because that case
shows no reduction in the initial rapid convergence rate. It has been verified that
the convergence rate is maintained until the level of double-precision roundoff error
(-15.0) is reached, although the convergence path is shown only down to -8.0. These
figures support the earlier observation that restriction procedure 3 is appropriate
to the finite-volume discretization. The difference between the performance of
restriction procedures 1 and 3 is even more dramatic in the lid-driven cavity flow,
Figures 4.11 and 4.12.
The convergence rate of the present multigrid method appears to be comparable
to other results in the literature. Sockol [80] found that roughly 30 work units were
needed to obtain convergence for the lid-driven cavity flow at Re = 1000, for both
BGS and SIMPLE. The residuals were summed as in restriction procedure 3, but the
variables were also restricted, by cell-face averaging. W(1,1) cycles were used. Shyy
and Sun [74] needed many more work units to reach convergence, using V cycles at
the same Reynolds number but with less resolution on the fine grid (81 x 81). The
restriction procedure 1 was used. The convergence criterion was tighter, and there
were procedural differences from the present work and that of Sockol which may also
account for the differences.
4.5 Concluding Remarks
Multigrid techniques are potentially scalable parallel computational methods,
both in the numerical sense and the computational sense. The key issue for applying
multigrid techniques to the incompressible Navier-Stokes equations is the connection
between the evolving solutions on the various grid levels, which includes the transfer
of information between coarse and fine grids, i.e. the restriction and prolongation
procedures, and the formulation of the coarse-grid problem, i.e. the choice of the
coarse-grid convection scheme. These factors also influence the stability of multigrid
iterations.

The restriction procedure for finite-volume discretizations should be summation of
residuals. Also, it was found unnecessary to restrict the solution variables. The
convergence rates in both types of flow problems, shear-driven and pressure-driven, were
significantly accelerated when the residuals were summed instead of averaged. However,
because the residuals are larger, more smoothing is found to be necessary to
avoid stability problems, in the symmetric backward-facing step flow. The bilinear
prolongation procedure appears to be preferable to the biquadratic prolongation
procedure. The convergence rates which have been achieved in the model problems
are comparable to other results in the literature.
In terms of cost per iteration, it appears that the pressure-correction type smoother
is comparable to the locally-coupled explicit method on the CM-5, whereas for serial
computations the latter has been favored by some [80]. Both algorithms consist of
basically the same operations, but for BRB the coefficient computations contribute
roughly twice as much to the parallel run time. The coefficient computation
cost is comparable to the smoothing cost for the SIMPLE method, but for BRB the
former is the dominant consideration. In that respect, the uniform implementation
for boundary coefficient computations described in Chapter 3 and the choice of convection
scheme are very important considerations. Using the second-order upwind
scheme, the cost per iteration of SIMPLE, assuming 3, 3, and 9 point-Jacobi inner
iterations, is roughly twice that with the defect-correction scheme,
although there is negligible effect on the parallel efficiency.

Figure 4.1. Schematic of a V(3,2) multigrid cycle, which has three smoothing iterations
on the "downstroke" of the V and two smoothing iterations on the "upstroke." In the
figure, grid levels 1 (coarse) through 4 (fine) are labeled, and the annotation (3) denotes
3 smoothing iterations.

Figure 4.2. Comparison of the total parallel run time for SIMPLE and BRB on a 128
vector-unit CM-5 for 500 iterations over a range of problem sizes. The flow problem
which was timed was Re = 1000 lid-driven cavity flow.

Figure 4.3. Comparison of the parallel run times for SIMPLE and BRB, decomposed
into contributions from the coefficient computations and the solution steps in these
algorithms. The times were obtained on a 128 vector-unit CM-5 for 500 iterations over
a range of problem sizes. The convection terms are central-differenced.

Figure 4.4. Comparison of the parallel run time for SIMPLE and BRB, decomposed
into contributions from parallel computation and nearest-neighbor interprocessor
communication ("NEWS"). The timings were made on a 128 vector-unit CM-5
for 500 iterations over a range of problem sizes.

Figure 4.5. Decrease in the norm of the u-momentum equation residual as a function
of the number of SIMPLE iterations, for different convection schemes. The results
are for a single-grid simulation of Re = 3200 lid-driven cavity flow on an 81 x 81
grid. The alternating line-Jacobi method is used for the inner iterations. The results
do not change significantly with the point-Jacobi or the SLUR solver.

Figure 4.6. Comparison between two convection schemes, in terms of parallel run
time. The total (computation + communication) time spent computing coefficients
over 500 SIMPLE iterations, on a 128-VU CM-5, is plotted against the virtual processor
ratio, VP. "Solver time" is the time spent on 15 point-Jacobi inner iterations
per SIMPLE iteration: 3, 3, and 9 for the u, v, and p' systems of equations. It is just
coincidental that, for the defect-correction and central-difference cases, the coefficient
computations and the solver time are about equal.

Figure 4.7. For the second-order upwind and defect-correction schemes, the time
spent in coefficient computations for 500 SIMPLE iterations is decomposed into contributions
from computation, denoted "CPU," and from nearest-neighbor interprocessor
communication, denoted "NEWS." These quantities are plotted against the
virtual processor ratio, VP. Times are for a 128-VU CM-5.

Figure 4.8. Parallel efficiency, E, for a range of problem sizes. $E = T_1/(n_p T_p)$, where
$T_1$ is the serial execution time, estimated by multiplying the measured computation
time per processor by the number of processors, $n_p$, and $T_p$ is the elapsed CM-5
run time, including computation, interprocessor, and front-end-to-processor types of
communication. The plot compares the second-order upwind and defect-correction
schemes, with the point-Jacobi solver and 3, 3, and 9 inner iterations.

Figure 4.9. Convergence path on each grid level for a 5-level V(3,2) multigrid cycle.
The fine grid is 322 x 82. The flow problem is a Re = 400 symmetric backward-facing
step flow. Bilinear interpolation (p = 1) and cell-face averaging for restriction (r = 1)
are used. (Plot annotations: residual norm and truncation error norm versus work
units on levels 1-5; FMG-FAS V(3,2) cycles; 321 x 81 grid, 5 levels; defect-correction
scheme; point-Jacobi solver (3,3,9); relaxation factors (.7,.7,.5); (p,r) = (1,1).)

Figure 4.10. Convergence path on each grid level for a 5-level V(3,2) multigrid cycle.
The fine grid is 322 x 82. The flow problem is a Re = 400 symmetric backward-facing
step flow. Bilinear interpolation (p = 1) and summation of residuals for restriction
(r = 3) are used. (Plot annotations: residual norm and truncation error norm versus
work units on levels 1-5; FMG-FAS V(3,2) cycles; 321 x 81 grid, 5 levels; defect-correction
scheme; point-Jacobi solver (3,3,9); relaxation factors (.7,.7,.5); (p,r) = (1,3).)

Figure 4.11. Convergence path on each grid level for a 7-level V(2,1) multigrid cycle.
The fine grid is 322 x 322. The flow problem is Re = 1000 lid-driven cavity flow.
Bilinear interpolation (p = 1) and cell-face averaging for restriction (r = 1) are used.
(Plot annotations: residual norm and truncation error norm versus work units on
levels 3-7; FMG-FAS V(2,1) cycles; 321 x 321 grid, 7 levels; defect-correction scheme;
point-Jacobi solver (3,3,9); relaxation factors (.7,.7,.5); (p,r) = (1,1).)

Figure 4.12. Convergence path on each grid level for a 7-level V(2,1) multigrid cycle.
The fine grid is 322 x 322. The flow problem is Re = 1000 lid-driven cavity flow.
Bilinear interpolation (p = 1) and summation of residuals for restriction (r = 3) are
used. (Plot annotations: residual norm and truncation error norm versus work units
on levels 3-7; FMG-FAS V(2,1) cycles; 321 x 321 grid, 7 levels; defect-correction
scheme; point-Jacobi solver (3,3,9); relaxation factors (.7,.7,.5); (p,r) = (1,3).)

CHAPTER 5
IMPLEMENTATION AND PERFORMANCE ON THE CM-5
This chapter describes the implementation on the CM-5 of the multigrid method
studied previously, and applies the parallel code to two model flow problems to assess
the performance both in terms of the convergence rate and the cost per iteration. The
major implementational consideration for the CM-5 multigrid algorithm is the storage
problem.
The starting procedure by which an initial guess is generated for the fine grid is an
important practical technique whose cost on parallel computers is of interest. Also,
the starting procedure is important in the sense that the initial guess can affect the
stability of the subsequent multigrid iterations and the convergence rate. The cycling
strategy is discussed next. It also affects both the run time and the convergence rate.
Because of the non-negligible smoothing cost of coarse grids, the comparison between V
and W cycles in terms of the time per cycle is different than on serial computers and
needs to be assessed for the CM-5. The purpose of the chapter is to provide some
practical guidance regarding the use of the numerical method on the CM-5, now that
the choice for the smoother, the coarse-grid discretization, and the restriction and
prolongation procedures has been addressed.
Finally, the computational scalability of the parallel implementation is studied
using timings for a range of problem sizes and numbers of processors. With the
experience gained with regard to the choice of algorithm components and practical
techniques, this information gives a clear picture of the potential of the present
approach for scaled-speedup performance on massively-parallel SIMD machines.
5.1 Storage Problem
Multigrid algorithms pose implementational problems in Fortran, because the
language does not support recursion. A variable number of multigrid levels must be
accommodated, but care must be taken not to waste memory. Let NI(k) and NJ(k)
be arrays denoting the grid extents on the kth multigrid level, where k = 1 refers
to the coarsest grid and k = kmax is the finest grid. The dimension extents on the
fine grid are parameters of the problem. For an array A, the different grid levels
are made explicit by adding a third array dimension. This is a natural albeit naive
storage declaration,

      PARAMETER ( NI(kmax) = 1024, NJ(kmax) = 1024, kmax = 7 )
      REAL*8 A( NI(kmax), NJ(kmax), kmax )
Unfortunately, this approach wastes storage because every grid level is dimensioned
to the extents of the finest grid. The coarse grids are significantly smaller,
though, decreasing in size by a factor of 4 for each level beneath the top level (the fine
grid). The total amount of memory used in this approach is the number of arrays,
$n_{array}$, multiplied by the storage cost of each array,

\[ \text{Storage} = NI(k_{max})\,NJ(k_{max})\,k_{max}\,n_{array}. \quad (5.1) \]

The actual storage needed is only

\[ \text{Storage} = \sum_{k=1}^{k_{max}} NI(k)\,NJ(k)\,n_{array} = \sum_{k=1}^{k_{max}} \frac{NI(k_{max})\,NJ(k_{max})}{4^{(k_{max}-k)}}\,n_{array}. \quad (5.2) \]

The actual storage needed approaches $(4/3)NI(k_{max})NJ(k_{max})n_{array}$ as $k_{max}$ increases.
Thus the wasted storage is $(k_{max} - 4/3)NI(k_{max})NJ(k_{max})n_{array}$ when the
naive approach is used. Clearly this can become the dominating factor very quickly
as the number of levels increases.
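As a quick check of these formulas, take the declaration above ($k_{max} = 7$, a 1024 x 1024 fine grid): the naive scheme allocates $7 \cdot 1024^2\,n_{array}$ words, while the actual requirement is bounded by $(4/3) \cdot 1024^2\,n_{array}$, so roughly $(7 - 4/3) \cdot 1024^2\,n_{array} \approx 5.7 \cdot 1024^2\,n_{array}$ words, over 80% of the allocation, are wasted.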

129
One efficient solution for serial computation is to declare a 1-d array of sufficient
size to hold all the data on all levels and to reshape it across subroutine boundaries,
taking advantage of the fact that Fortran passes arrays by reference. This practice
is typical in serial multigrid algorithms [63]. A 1-d array section of the appropriate
length for the grid level under consideration is passed to a subroutine where it is
received as a 2-d array with the dimension extents NI(k) x NJ(k).
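A minimal serial sketch of this technique follows; the sizes and routine names are illustrative assumptions, and the work array is deliberately over-allocated for brevity rather than sized exactly by Eq. 5.2.

      PROGRAM MGSTOR
C     Sketch of the serial storage trick: all levels live in one
C     1-d array, and a section of it is passed to a routine that
C     receives it as a 2-d array with the level's extents.
      PARAMETER ( NIF = 17, NJF = 17, KMAX = 3 )
      REAL*8 WORK( NIF*NJF*2 )
      INTEGER NI(KMAX), NJ(KMAX), IOFF(KMAX), K
      NI(KMAX) = NIF
      NJ(KMAX) = NJF
      DO 10 K = KMAX-1, 1, -1
         NI(K) = ( NI(K+1) + 1 )/2
         NJ(K) = ( NJ(K+1) + 1 )/2
 10   CONTINUE
C     Offset of each level within the 1-d work array.
      IOFF(1) = 1
      DO 20 K = 2, KMAX
         IOFF(K) = IOFF(K-1) + NI(K-1)*NJ(K-1)
 20   CONTINUE
C     Pass the section for level K; SMOOTH sees a 2-d array.
      DO 30 K = 1, KMAX
         CALL SMOOTH( WORK(IOFF(K)), NI(K), NJ(K) )
 30   CONTINUE
      END

      SUBROUTINE SMOOTH( A, NI, NJ )
      INTEGER NI, NJ, I, J
      REAL*8 A(NI,NJ)
C     Placeholder operation on the reshaped 2-d view.
      DO 40 J = 1, NJ
         DO 40 I = 1, NI
            A(I,J) = 0.0D0
 40   CONTINUE
      END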
On serial computers, this reshaping of arrays across subroutine boundaries is possible
because the physical layout of the array is linear in the computer's memory. On
distributed-memory parallel computers like the CM-5, however, the storage problem
is not so easily resolved, because the data arrays are not physically in a single processor
memory; they are distributed among the processors. Instead of being passed
by reference as is the case with Fortran on serial computers, data-parallel arrays are
passed to subroutines by "descriptor" on the CM-5. The array descriptor is a front-end
array containing 18 elements. The descriptor contains information about the
array being described: the layout of the physical processor mesh, the virtual subgrid
dimensions, the rank and type of the array, the name, and so on.
On the CM-5 the storage problem is resolved using array "aliases." Array aliasing
is a form of the Fortran EQUIVALENCE function used on serial computers. In the
multigrid algorithm, storage for each variable is initially declared for all grid levels,
explicitly referencing the physical layout of the processors. For example, an array A
with fine-grid dimension extents NI(kmax) x NJ(kmax) is declared as follows for a
128-VU CM-5 with the processors arranged in an (nip = 8) x (njp = 16) mesh:

      PARAMETER ( Nserial = (4/3)*NI(kmax)*NJ(kmax)/np, nip = 8, njp = 16 )
      REAL*8 A( Nserial, nip, njp )
Actually, the factor 4/3 needs to be increased slightly to account for "array
padding." Each physical processor must be assigned exactly the same number of
virtual processors in the SIMD model, since all processors do the same thing at the
same time. Thus, in general the array dimensions on each level must be "padded"
to fit exactly onto the processor mesh. For example, an 80 x 80 fine grid with 5
multigrid levels has coarse grids with dimensions 40 x 40, 20 x 20, 10 x 10 and 5 x 5.
To fit onto the processor mesh with exactly the same subgrid shape and size for each
physical processor, assuming an 8 x 16 processor mesh, the storage allocated must
be 88 x 96 + 48 x 48 + 24 x 32 + 16 x 16 + 8 x 16 (on the coarsest grid VP = 1).
Thus the actual declared storage needs to be slightly more than that shown above.
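The padding rule can be expressed as a one-line function. This is a sketch under the assumption that each array extent (grid points plus two boundary layers, e.g. 82 for the 80 x 80 grid) is rounded up to a multiple of the processor-mesh extent, which reproduces the 88 x 96, 48 x 48, 24 x 32, 16 x 16, and 8 x 16 figures quoted above.

      INTEGER FUNCTION NPAD( N, NPROC )
C     Round the array extent N up to a multiple of the number of
C     physical processors NPROC along that dimension, so that every
C     processor holds an identical virtual subgrid.
      INTEGER N, NPROC
      NPAD = ( (N + NPROC - 1)/NPROC )*NPROC
      END

For instance, NPAD(82,8) = 88 and NPAD(82,16) = 96.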
The array A is mapped to the processors using the compiler directives discussed in
Chapter 3. The first dimension extent of A is the actual storage needed per physical
processor. It is laid out linearly in each physical processor’s memory by the :SERIAL
specification in the LAYOUT compiler directive (recall Chapter 3 example). The
latter two dimensions are parallel (:NEWS), laid out across the physical processor
mesh.
Then, to access the A arrays corresponding to each grid level, array aliases (alternate
front-end array descriptors for the same physical data) are created as described
in Chapter 3. For example, an equivalence is established between the "array section"
A(1:88*96/(8*16), 1:8, 1:16) and another array with dimensions (1:88, 1:96). In this
way arrays can be referenced inside subroutines as if they had the dimensions of
the alias, with both dimensions parallel. In this case a (:NEWS,:NEWS) layout of
A(88,96) can be declared, even though in the calling routine the data come from an
array of a different shape.
This feature, array aliasing, is relatively new in the CM-Fortran compiler evolution
(version 2.1-Beta [84]) and has not yet been implemented by MasPar in their compiler.
Previous multigrid algorithms on SIMD computers were restricted to either the naive
approach or explicit declaration of arrays on each level [18]. The latter approach is
extremely tedious and leads to very large front-end executable codes, making front-end
storage a concern. Thus, the present technique for getting around the multigrid
storage problem, although requiring some programming diligence, is critical because
it permits much larger multigrid computations to be attempted on SIMD-type parallel
computers. As observed in Chapter 3, for the CM-5, problem sizes near the largest
possible are necessary to obtain good parallel efficiencies.
5.2 Multigrid Convergence Rate and Stability
The "full multigrid" (FMG) startup procedure [11] is shown in Figure 5.1. It
begins with an initial guess on the coarsest grid. Smoothing iterations using the
pressure-correction method are done until a converged solution has been obtained.
Then this coarsest-grid solution is prolongated to the next grid level and multigrid
cycles are initiated (at level 2, the "next-to-coarsest" grid level). Cycling at this level
continues until some convergence criterion is met. The solution is prolongated to the
next finer grid and multigrid cycling resumes. This process is repeated until the finest
grid level is reached. The converged solution on level kmax - 1, after interpolation to
the fine grid, is a much better initial guess than is possible otherwise. The alternative
is to use an arbitrary initial guess on the fine grid.
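Schematically, the FMG startup can be written as the following driver loop. The routine names are placeholders for the operations described above, not actual library calls; the convergence test stands for whatever criterion is applied on each level (the truncation-error criterion developed below on coarse levels, and Eq. 5.4 on the finest level).

      SUBROUTINE FMG( KMAX )
C     Schematic FMG driver: solve the coarsest grid, then for each
C     finer level prolongate the solution and cycle until the
C     level's convergence criterion is met.
      INTEGER KMAX, K
      LOGICAL CONVRG
      CALL SOLVC
      DO 10 K = 2, KMAX
         CALL PLONG( K-1, K )
 5       CALL VCYCLE( K )
         IF ( .NOT. CONVRG(K) ) GO TO 5
 10   CONTINUE
      END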
For Poisson equations, one V cycle on the finest grid is frequently sufficient to
reach a converged solution, if the initial guess is obtained by the FMG procedure.
The benefit to the convergence rate of a good initial guess more than offsets the cost
of the V cycles on coarse grids leading up to the finest grid level. For Navier-Stokes
equations the cost/convergence rate tradeoff still favors using the FMG procedure,
on serial computers. For parallel computers, however, the cost of the FMG procedure
is more of a concern, due to the inefficiencies of smoothing the coarse grids, and the
potential need for many coarse-grid cycles.

On SIMD computers, the smoothing iterations on the coarse grid levels have a
fixed baseline time set by the communication overhead of the front-end-to-processor
type. Thus, the cost of the FMG procedure is increased compared to serial computation,
because coarse-grid smoothing is relatively more costly (less efficient) than
fine-grid smoothing. It becomes important, with regard to cost, to minimize the
number of coarse-grid cycles without sacrificing the benefit of a good initial guess to
the multigrid convergence rate.
Tuminaro and Womble [88] have recently modelled the parallel run time of the
FMG cycle on a distributed-memory MIMD computer, a 1024-node nCUBE2. They
developed a grid-switching criterion to account for the inefficiencies of smoothing on
coarse grids. The grid-switching criterion effectively reduces the number of coarse-grid
cycles taken during the FMG procedure. They have not yet reported numerical
tests of their model, but the theoretical results indicate that the cost/convergence
rate tradeoff can still favor FMG cycles for multigrid methods on parallel computers,
with their technique. In the next section a truncation error estimate is developed
and then used to control the amount of coarse-grid cycling in the FMG procedure.
The validity and the numerical characteristics of the truncation error estimate are
addressed.
In addition to the cost of obtaining the initial guess on the fine grid, the quality
of the initial guess can affect both the convergence rate and the stability of the
subsequent multigrid iterations, depending on the flow problem and the coarse-grid
convection scheme, i.e. the stabilization strategy. The performance of the truncation
error criterion in this regard is also studied.

5.2.1 Truncation Error Convergence Criterion for Coarse Grids
The goal of a given discretization and numerical method is to obtain an approximate
solution to Eq. 4.6, $v^h$, which nearly satisfies the differential equation, i.e. to
achieve

\[ \|A^h u - A^h v^h\| < \epsilon \quad (5.3) \]

for some small $\epsilon$. However, u is unknown and there are many complicating, interacting
factors due to the grid distribution, resolution, the discretization of the nonlinear
terms and the proper modelling and specification of boundary conditions. Thus the
conservative philosophy is usually adopted: assume that the discretized equation is
a good approximation to the differential equation and seek the exact solution to the
discrete equation, i.e. seek algebraic convergence,

\[ \|A^h u^h - A^h v^h\| = \|S^h - A^h v^h\| = \|r^h\| < \epsilon, \quad (5.4) \]

again choosing the level $\epsilon$ to accommodate any imposed constraints on the run time.
Eq. 5.4 is applied on the finest grid in a multigrid computation, the level on which
the solution is desired.
The coarse-grid solution obtained in the FMG procedure has only one purpose:
to yield a good initial guess on the fine grid. The "best" initial guess is the one that
allows Eq. 5.4 to be satisfied on the fine grid quickest. The corresponding coarse-grid
solution from which the fine-grid initial guess is obtained may or may not itself satisfy
Eq. 5.4 with $\epsilon \to 0$. It is not always beneficial to the fine-grid convergence rate
to obtain the coarse-grid solution to strict tolerances.
The utility of a coarse-grid solution for the purpose of providing a good initial
guess on the fine grid depends more on the difference in the truncation errors of the
$Q^{2h}$ and $Q^h$ approximations than it does on the accuracy of the coarse-grid solution.
For example, in highly nonlinear equations or in problems where grid levels are
coarsened by factors greater than two, it is immediately apparent that the accuracy of
the coarse-grid solution to the discrete problem cannot translate into
a truly accurate initial guess on the fine grid, no matter how accurately the coarse-grid
problem is solved. The usefulness of the coarse-grid solution depends on the
smoothness of the physical solution and the prolongation procedure.
Consequently, one expects that the most cost-effective procedure for controlling
the FMG cycling will be obtained with a particular set of coarse-grid tolerances
that depend on the flow characteristics. Thus the goal should be to discontinue the
FMG cycles on a particular coarse-grid level when Eq. 5.3 is satisfied. Frequently
Eq. 5.3 is satisfied before Eq. 5.4. Similar arguments have been made by Brandt and
Ta'asan [7].
Using the definitions of the truncation error, Eq. 4.2, and the residual, the triangle
inequality gives

\[ \|A^{2h}u - A^{2h}v^{2h}\| \le \|A^{2h}u - A^{2h}u^{2h}\| + \|A^{2h}u^{2h} - A^{2h}v^{2h}\| = \|\tau^{2h}\| + \|r^{2h}\|. \quad (5.5) \]

Thus, if

\[ \|\tau^{2h}\| = \epsilon/2, \quad (5.6) \]

Eq. 5.3 can be satisfied if the residual is less than the truncation error,

\[ \|r^{2h}\| \le \|\tau^{2h}\|. \quad (5.7) \]

Eq. 5.7 is the criterion applied to the coarse grids, while Eq. 5.4 is retained for the
finest level.
To develop an estimate for $\|\tau^{2h}\|$ in Eq. 5.7, consider an example case of a 1-D
nonlinear convection-diffusion equation with a constant or position-dependent source
term,

\[ u\frac{du}{dx} - \nu\frac{d^2u}{dx^2} = S. \quad (5.8) \]

For a finite-difference discretization with central-differencing for both derivative
terms, the truncation error at grid point "i" on the grid with spacing h is given by

\[ u_i\left(\frac{u_{i+1}-u_{i-1}}{2h}\right) - \nu\left(\frac{u_{i+1}-2u_i+u_{i-1}}{h^2}\right) - S_i = \tau^h_i = \frac{u_i h^2}{6}\,u'''_i - \frac{\nu h^2}{12}\,u''''_i + \cdots, \quad (5.9) \]

where u is the differential solution at the position x = ih.
Similarly on the grid with spacing 2h,

\[ u_I\left(\frac{u_{I+1}-u_{I-1}}{2(2h)}\right) - \nu\left(\frac{u_{I+1}-2u_I+u_{I-1}}{(2h)^2}\right) - S_I = \tau^{2h}_I = \frac{4u_I h^2}{6}\,u'''_I - \frac{4\nu h^2}{12}\,u''''_I + \cdots. \quad (5.10) \]

The grid points $x_I$ and $x_i$ correspond, but I+1 refers to the point at
x = $x_I$ + 2h, whereas i+1 refers to the point at x = $x_i$ + h. Assuming the high-order
terms are negligible (debatable for fluid flow problems unless the solution is very
smooth), and subtracting the first equation from the second (at the grid points of
$\Omega^{2h}$), one obtains

\[
\left[ u_I\left(\frac{u_{I+1}-u_{I-1}}{2(2h)}\right) - \nu\left(\frac{u_{I+1}-2u_I+u_{I-1}}{(2h)^2}\right) - S_I \right]
- \left[ u_i\left(\frac{u_{i+1}-u_{i-1}}{2h}\right) - \nu\left(\frac{u_{i+1}-2u_i+u_{i-1}}{h^2}\right) - S_i \right] = 3\tau^h. \quad (5.11)
\]

In operator notation,

\[ \left[ A^{2h}u - S^{2h} \right] - \left[ A^h u - S^h \right] = 3\tau^h. \quad (5.12) \]

Substituting the most current approximation $v^h$ for u (at the coarse-grid points),
and the approximate values $v^{2h} = I_h^{2h}v^h$, this expression becomes

\[ \left[ A^{2h}(I_h^{2h}v^h) - S^{2h} \right] - I_h^{2h}\left[ A^h v^h - S^h \right] \approx 3\tau^h. \quad (5.13) \]

The term in brackets is just the residual $r^h$ (at the corresponding coarse-grid
point). For finite-difference discretizations this residual is presumed to be accurately
approximated by $I_h^{2h}r^h$. Thus the truncation error of the fine-grid discretization,
estimated at the coarse-grid points, is

\[ \tau^h \approx \frac{A^{2h}(I_h^{2h}v^h) - S^{2h} - I_h^{2h}r^h}{3}. \quad (5.14) \]

This expression, however, is merely the numerically derived part of the coarse-grid
source term, $S^{2h}_{numerical}$ in Eq. 4.14. Thus

\[ \tau^h \approx \frac{S^{2h}_{numerical}}{3}. \quad (5.15) \]

The convergence criterion based on this truncation error estimate, Eq. 5.7, becomes

\[ \|r^h\| \le \frac{\|S^{2h}_{numerical}\|}{3}. \quad (5.16) \]
The norms used on each side of the equation should be divided by the appropriate
number of grid points (since they are defined on different grid levels), so that the
quantities represented are comparable. The $L_1$ norm is used here; on a grid with
$N^2$ points, the $L_1$ norm of a vector v is

\[ \|v\| = \frac{1}{N^2}\sum_{i,j} |v_{i,j}|. \quad (5.17) \]
Eq. 5.16 is very convenient. It is a way of setting the coarse-grid tolerances in the
FMG procedure automatically. Also, since the additional coarse-grid term $S^{2h}_{numerical}$
is already computed as part of the coefficient computations preceding the coarse-grid
smoothing, there are no new quantities to be computed and monitored.
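A serial sketch of this stopping test follows; the array and routine names are assumptions. It compares the grid-normalized $L_1$ norm of the current residual with one third of the grid-normalized $L_1$ norm of $S^{2h}_{numerical}$, per Eqs. 5.16-5.17.

      LOGICAL FUNCTION TAUCNV( R, NI, NJ, SNUM, NIC, NJC )
C     Sketch of the coarse-grid stopping test, Eq. 5.16: both norms
C     are L1 norms divided by the number of points on their level.
      INTEGER NI, NJ, NIC, NJC, I, J
      REAL*8 R(NI,NJ), SNUM(NIC,NJC), RNORM, TNORM
      RNORM = 0.0D0
      DO 10 J = 1, NJ
         DO 10 I = 1, NI
            RNORM = RNORM + DABS( R(I,J) )
 10   CONTINUE
      TNORM = 0.0D0
      DO 20 J = 1, NJC
         DO 20 I = 1, NIC
            TNORM = TNORM + DABS( SNUM(I,J) )
 20   CONTINUE
      RNORM  = RNORM/DBLE( NI*NJ )
      TNORM  = TNORM/DBLE( NIC*NJC )
      TAUCNV = ( RNORM .LE. TNORM/3.0D0 )
      END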
5.2.2 Numerical Characteristics of the FMG Procedure
The following issues are addressed: the validity/utility of the analysis above leading
to Eq. 5.16, the performance of the resulting FMG procedure based on the truncation
error convergence criterion in terms of the cost and the initial residual level
on the fine grid, and the characteristics of the convergence path through the FMG
cycling as a function of the flow problem and the coarse-grid convection scheme.
Two flow problems with very different physical characteristics are considered, the
lid-driven cavity flow at Reynolds number 5000 and a symmetric backward-facing step
flow at Reynolds number 300. Streamlines, velocity, vorticity and pressure contours
for the two model flow problems are shown in Figures 5.2 and 5.3, to clarify the
problem specification and bring out their different physical features. In the streamline
plots, the contours are evenly spaced within each region, both inside and outside the
recirculation regions. However, because the recirculation regions are fairly weak in both
problems, a smaller contour increment is used within the recirculation regions in
order to bring out the flow pattern.
The lid-driven cavity flow is a recirculating flow where convection and cross-stream
diffusion balance each other in most of the domain and the pressure gradient is
important only in the upper-left corner. In contrast, the symmetric backward-facing
step flow is aligned with the grid for much of the domain. The pressure gradient
balances viscous diffusion as in channel-type flows. These problems are challenging
in different ways and are representative of much broader cross-sections of interesting
flow situations.
Figures 5.4-5.7 show the convergence path of the u-momentum residual in the
lid-driven cavity flow for different coarse-grid convergence criteria. The residual is
plotted for the current outermost level during the FMG procedure. The plot is also
continued for the first three multigrid cycles on the finest grid level, to show the initial
multigrid convergence rate on the fine grid. The finest grid level was 321 x 321 and
seven multigrid levels were used; the coarsest grid is 6 x 6. The defect-correction
approach was used: first-order upwinding on coarse grids and defect-correction on the
finest level. V(3,2) cycles were used, with bilinear interpolation for the prolongation
procedure and restriction procedure 3, piecewise-constant summation of the residuals
only. The relaxation factors were ω_uv = 0.5 and ω_p = 0.5, and point-Jacobi inner
iterations were used, with ν_u = ν_v = 3 and ν_c = 9. In the symmetric backward-facing
step results given below, the same procedures are used, except that in the smoother the
relaxation factors are ω_uv = 0.6 and ω_c = 0.4. The fine grid is 321 x 81 and five
multigrid levels are used.
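The overall FMG flow with this stopping test can be outlined as follows. This is a
minimal sketch with hypothetical solver, prolongation, and convergence-test callables,
not the actual CM-Fortran implementation:

    def fmg(levels, v_cycle, prolong, converged, initial_guess):
        # levels[0] is the coarsest grid; levels[-1] is the finest.
        v = initial_guess(levels[0])
        for k, lev in enumerate(levels):
            if k > 0:
                v = prolong(v, lev)          # bilinear interpolation up one level
            while not converged(lev, v):     # truncation error tolerance, Eq. 5.16
                v = v_cycle(lev, v)          # V(3,2) cycle using levels 0..k
        return v                             # starting solution for fine-grid cycling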
In Figure 5.4, the truncation error criterion, Eq. 5.16, is applied with the denominator
set to 1. This is the "right" denominator according to the analysis behind
Eq. 5.16, since the outermost levels during the FMG cycling on coarse grids are first-
order accurate in the convection term, provided convection is important in the flow
problem. The tolerances given by the truncation error criterion are graded, because
the truncation error is larger on coarser grids. The spacing between the levels is
uneven, though, and depends on the evolving solution. For the cavity flow, the
tolerance given by Eq. 5.16, with the denominator equal to 1, converged to +0.2,
-0.4, -1.2, -1.9 and -2.6 for levels 2 through 6. On the finest grid the truncation
error estimate converges to -3.0.
The figure shows a jump in the residual level, going from coarse grid to fine grid,
of approximately -0.6 between any two successive levels. This jump is just log10(1/4).
Physically, the equation residuals represent integrated quantities in the finite-volume
discretization. Thus, whether on the coarse or the fine grid, the net residual (L1
norm) should be roughly the same (or greater, because the bilinear/biquadratic
interpolations considered here should not be expected to improve the solution, since they
are not derived from the physics). In the norm used here the sum of the residuals
is divided by the number of grid points. Thus, in the best case one would anticipate
the result which has been obtained, a factor of 4 decrease in the average
residual: the fine-grid control volumes are a factor of 4 smaller than the coarse-grid
control volumes. The fact that the maximum jump is achieved indicates that the
order of the prolongation procedure is sufficient for the flow problem. In Figure 5.8
the corresponding case for the symmetric backward-facing step flow is shown. The
jump in the average u residual between levels is about -0.4. Similar observations
hold for second-order upwinding in both flow problems, using the truncation error
criterion with the denominator set equal to three. Thus, the results obtained are
plausible and are about the best that could be expected.
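The size of the jump follows directly from the definition of the norm, Eq. 5.17: if
the residual sum R is conserved while the number of grid points quadruples, the
grid-averaged norm changes by

    \log_{10} \frac{R/N^2}{R/(N/2)^2} = \log_{10} \frac{1}{4} \approx -0.602.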
Figure 5.5 shows the effect of applying a more stringent coarse-grid convergence
criterion. In this case the truncation error estimate is again used, but with the
denominator set to five. A slight improvement in the initial level of the residual on the
finest grid is obtained. After 1 fine-grid cycle, the residual is -3.5, compared to -3.25
for the 1-FMG cycle. However, tightening the coarse-grid tolerances even further
does not give any benefit. For example, Figure 5.6 shows the FMG convergence path
when the coarse-grid residual is driven down to a specified value on each level, i.e.
when

    \| r^h \| < t    (5.18)

is applied, with t = -3.0 in Figure 5.6. Also, in the subsequent figure, Figure 5.7,
the FMG convergence path is shown for a "graded" set of tolerances. Specifically, for
the 7-level cavity flow, levels 2 through 6 were converged to -0.7, -1.3, -1.9, -2.5 and
-3.1, respectively (a factor of 1/4 reduction per level). These particular values are all
equal to -2.4 if, instead of Eq. 5.17, the residual is normed according to

    \| v \| = \sum_{\mathrm{all}\ i,j} \frac{| v_{i,j} |}{\mathit{flux}},    (5.19)

where flux is a characteristic momentum flux, equal to the Reynolds number in the
present flow problem. Shyy and Sun [74] used this approach. The tolerance on level 6,
-3.1, was chosen a posteriori to match the known initial level of the fine-grid residual.
The graded set of coarse-grid tolerances is representative of a "best possible guess"
that one could make without prior experience.
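Such a graded schedule is easy to generate. A sketch follows (a hypothetical helper;
the spacing of log10(4) per level is implied by the factor-of-1/4 reduction noted above):

    import math

    def graded_tolerances(finest_tol, n_levels):
        # Tolerances for levels 2..n_levels, loosest on the coarsest level,
        # spaced by log10(4), i.e. a factor-of-1/4 reduction per level.
        step = math.log10(4.0)
        return [finest_tol + step * (n_levels - k) for k in range(2, n_levels + 1)]

    print(graded_tolerances(-3.1, 6))  # approximately [-0.7, -1.3, -1.9, -2.5, -3.1]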
From these figures, there does not appear to be any benefit in converging the
coarse grids to tighter tolerances. Furthermore, there is the disadvantage that tighter
coarse-grid tolerances require more coarse-grid cycles and are therefore more expensive
in terms of work units, and especially in terms of run time on the CM-5 (the
bottom plot). The graded tolerances work almost as well as the truncation error
criterion, except that there are a few unnecessary cycles on levels 2 and 3.
The tradeoff between the run time elapsed during the FMG procedure on serial
and parallel computers, and the initial level of the u residual, is summarized in
Table 5.1.
Coarse-grid            Number of V(3,2) cycles    FMG          FMG CM-5     Initial level of
tolerances             on levels {1 ... 6}        work units   busy time    fine-grid u residual
T.E. w/denom. = 1      {x 1 1 1 1 1}              2.2          2.3 s        -3.25
T.E. w/denom. = 5      {x 2 3 6 6 5}              11.5         10.9 s       -3.54
-3.0 on all levels     {x 15 17 14 10 3}          11.1         20.2 s       -3.46
-5.0 on all levels     {x 24 27 26 30 32}         69.3         64.1 s       -3.57
Graded tolerances      {x 6 8 8 6 5}              11.9         13.0 s       -3.54

Table 5.1. Comparison between different sets of coarse-grid tolerances in terms of the
effort expended in the FMG procedure, for the Re = 5000 lid-driven cavity flow and
the bilinear interpolation prolongation procedure. The defect-correction stabilization
strategy is used.
To judge which case is the “best,” one asks how many work units or how much
cpu time is required to reach a given level of the residual. A few fine-grid cycles are
required to make up the difference in the initial levels of the fine-grid residual. These
are charged at a rate of slightly more than 6.25 work units per V(3,2) cycle for this
7-level problem with the 321 x 321 fine grid, equivalent to about 1.5 seconds on a
128-VU CM-5. Thus, the “1-FMG” procedure (the first row) is judged to be the
most efficient.
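The work-unit accounting behind these numbers can be checked with a small model.
This sketch assumes 2D coarsening (each coarser level has one quarter the points)
and neglects the restriction and prolongation costs:

    def v_cycle_work_units(n_levels, n_smooth=5):
        # A work unit is one SIMPLE iteration on the finest grid. A V(3,2)
        # cycle does n_smooth = 3 + 2 smoothing iterations per level; level k
        # (k = n_levels is finest) holds 4**(k - n_levels) of the fine-grid work.
        return sum(n_smooth * 4.0 ** (k - n_levels) for k in range(1, n_levels + 1))

    print(v_cycle_work_units(7))  # about 6.7, close to the per-cycle figure quoted above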

Evidently, the cavity flow problem is relatively benign in terms of the effect of
convection on the convergence rate characteristics. The truncation error estimate is
immediately satisfied on each of the coarse grids in the 7-level computation after only
1 V(3,2) cycle. Even less smoothing is possible for this problem, even though the
Reynolds number is high. Table 5.2 clarifies the role of the FMG procedure in this
flow problem.
Number of V(3,2) cycles    FMG          FMG CM-5     Initial level of
on levels {1 ... 6}        work units   busy time    fine-grid u residual
{0 0 0 0 0 0}              diverges
{x 0 0 0 0 0}              diverges
{x 1 0 0 0 0}              0.006        0.22 s       -2.44
{x 1 1 0 0 0}              0.031        0.50 s       -2.73
{x 1 1 1 0 0}              0.135        0.88 s       -3.03
{x 1 1 1 1 0}              0.550        1.42 s       -3.16
{x 1 1 1 1 1}              2.216        2.25 s       -3.25

Table 5.2. Accuracy/effort tradeoff between a "1-FMG" approach (7th row) and
simple V cycling with a zero initial guess on the fine grid (1st row). An approximate
solution must be obtained on at least level 2 in order to avoid divergence for the
7-level Re = 5000 lid-driven cavity flow problem, when the relaxation factors are
ω_uv = ω_c = 0.5. "FMG work units" refers to the work units (proportional to a serial
computer's run time) already expended at the point when multigrid cycling on the
finest grid level begins. "CM-5 busy time" is the corresponding measure of work on
a 128-VU CM-5, in seconds. The "x" in the column corresponding to level 1 means
that 2 SIMPLE iterations were done on the coarsest grid. These data are for the
defect-correction strategy.
Thus, it is possible to prolong the solution directly from level 3, a 21 x 21 grid,
to the fine grid. However, for the relaxation factors used, an initial guess on an even
coarser grid (level 1 or 2) is not accurate enough to prevent the fine-grid V(3,2) cycles
from diverging.
The results in Figures 5.6-5.7 showed that the initial residual on the fine grid was
independent of the degree of accuracy obtained on the coarser grid levels. Closer
examination shows that the initial residual levels on the coarse grid levels during
the FMG procedure also do not appear to depend on the degree to which the next
coarser grid level is converged. Furthermore, this observation holds for second-order
upwinding on all levels in the cavity flow, and for either defect-correction or second-
order upwinding in the symmetric backward-facing step flow. The FMG convergence
paths for the step flow, using second-order upwinding on all grid levels, are shown in
Figures 5.9-5.12.
There appears to be a certain maximum amount of accuracy that can be carried
over to the next finer grid with the bilinear interpolation prolongation. Since
the truncation error convergence criterion does not exceed this amount of accuracy,
and indeed the average residual levels are virtually the same if the denominator in
Eq. 5.16 is set to five, the results strongly suggest that the degree of accuracy on a
given coarse grid which is exploitable is related to the differential error in the solution,
i.e. the truncation error, and not the algebraic error. Thus, the results support
the arguments made in the paragraph following Eq. 5.4.
With regard to the performance of the truncation error criterion, the defect-
correction and second-order upwind stabilization strategies showed similar results in
both flow problems. The initial fine-grid residual level and the stability of the
subsequent multigrid iterations, however, appear to be strongly dependent on the
convection schemes used. Table 5.3 summarizes the FMG convergence rates for second-order
upwinding in the lid-driven cavity flow.
The -3.0 and graded-tolerance cases both converged with the defect-correction
scheme, but with second-order upwinding they diverge. After several fine-grid cycles
the -2.0 case diverges also. The difference between the cases is evident: many more
coarse-grid cycles are taken in the cases which diverge. The source terms in the
second-order upwind discretization appear to be a strong destabilizing factor in this
flow problem.

Coarse-grid            Number of V(3,2) cycles    FMG          FMG CM-5     Initial level of
tolerances             on levels {1 ... 6}        work units   busy time    fine-grid u residual
T.E. w/denom. = 1      {x 1 2 2 2 5}              9.4          8.8 s        -2.88
T.E. w/denom. = 5      {x 4 14 7 16 18}           37.7         40.0 s       -3.50
-2.0 on all levels     {x 35 22 19 6 1}           6.9          28.6 s       -3.17
-3.0 on all levels     {x 45 34 74 35 ∞}          diverges
Graded tolerances      {x 23 14 19 20 ∞}          diverges

Table 5.3. Comparison between different sets of coarse-grid tolerances in terms of the
effort expended in the FMG procedure, for the Re = 5000 lid-driven cavity flow and the
bilinear interpolation prolongation procedure. The second-order upwind stabilization
strategy is used.
Furthermore, the fact that the -2.0 constant tolerance at least reaches the fine
grid while the constant -3.0 tolerance diverges suggests that the amount of mismatch
between the ending coarse-grid residual level and the beginning fine-grid residual
level, which is greater for the -3.0 case than the -2.0 case, is related to the size of the
destabilizing source terms in the initial fine-grid problem. Thus, in addition to being
wasteful of work units and/or cpu time, obtaining excessive accuracy on the coarse
grids can actually be detrimental to the stability of multigrid iterations, depending on
the discretization scheme. Evidently, with relaxation factors ω_uv = ω_c = 0.5, second-
order upwinding, V(3,2) cycles, and ν_u = ν_v = 3 and ν_c = 9 point-Jacobi
inner iterations in each SIMPLE outer iteration, the Re = 5000 lid-driven cavity
flow is difficult to solve. The multigrid iterations converge only for a relatively small
range of coarse-grid tolerances. This range may be hard to find by trial and error;
the truncation error criterion is useful in this regard.
Similar observations are made for the symmetric backward-facing step flow.
Figures 5.9-5.12 are the corresponding results for the Re = 300 symmetric backward-
facing step flow, using second-order upwinding on all coarse-grid levels in the FMG
procedure. The convergence rate behavior of the second-order upwind scheme in the
step flow is similar to the defect-correction scheme results in the lid-driven cavity
flow. For the symmetric backward-facing step flow, a 321 x 81 fine grid with 5
multigrid levels was used. The coarsest grid was 21 x 6. As in the cavity flow cases,
V(3,2) cycles were used, with bilinear interpolation for the prolongation procedure
and restriction procedure 3. The relaxation factors were ω_uv = 0.6 and ω_c = 0.4. As
in previous cases, 3, 3, and 9 point-Jacobi inner iterations were used in each SIMPLE
iteration for the u, v, and p' systems of equations, respectively.
In Figure 5.9, the convergence path is similar to the cavity flow convergence
path, except that in the cavity flow the coarse-grid tolerances given by Eq. 5.15
were loose enough that only one cycle was needed on each of the coarse grids, yielding
a "1-FMG" cycle. In the symmetric backward-facing step flow, more than one cycle
is needed on each coarse-grid level to satisfy the truncation error criterion. The
truncation error estimate (with the denominator set equal to three, because the coarse-
grid discretizations are second-order) converges to the following values on grid levels
2 to 4: -0.8, -2.9, -4.0. On the finest grid the estimated level is -4.9.
Figures 5.10-5.12 show the FMG convergence path when tighter coarse-grid tolerances
are used, and these results are summarized in Table 5.4 below. For the graded
set of coarse-grid tolerances, levels 2 through 4 were converged to -3.1, -3.7, and -4.3.
Each of these levels corresponds to the level -2.1 if the norm used is Eq. 5.19 instead
of the average L1 norm.
As in the cavity flow case, there is only a small effect on the initial solution
accuracy on each coarse-grid level. There is no benefit to the initial fine-grid residual
level from converging the coarse grids to strict tolerances. The truncation error criterion
with the denominator set to 5 appears to be the most stringent criterion which does
not waste any coarse-grid cycles, i.e. it is nearly the optimal cost/residual-reduction
balance. The other approaches obtain more accuracy on the coarse grids than can be
carried over to the initial fine-grid solution, for the bilinear interpolation prolongation.
Coarse-grid            Number of V(3,2) cycles    FMG          FMG CM-5     Initial level of
tolerances             on levels {1 ... 4}        work units   busy time    fine-grid u residual
"1-FMG" cycle          {x 1 1 1}                  2.2          1.2 s        -2.88
T.E. w/denom. = 1      {x 2 2 3}                  5.9          3.0 s        -3.82
T.E. w/denom. = 5      {x 4 4 4}                  8.6          4.8 s        -4.63
-3.0 on all levels     {x 23 7 1}                 6.5          8.1 s        -4.37
-5.0 on all levels     {x 45 16 10}               27.0         21.5 s       -5.10
Graded tolerances      {x 21 9 5}                 13.8         10.8 s       -4.71

Table 5.4. Comparison between different sets of coarse-grid tolerances in terms of
the effort expended in the FMG procedure, for the Re = 300 symmetric backward-
facing step flow and the bilinear interpolation prolongation procedure. Second-order
upwinding is used on all grid levels.
The results for the defect-correction strategy are summarized in the table below.
In the cavity flow, the second-order upwind scheme was very difficult to converge
when a constant or a graded tolerance was given. In the step flow, it appears that
the defect-correction strategy is harder to converge.
Coarse-grid            Number of V(3,2) cycles    FMG          FMG CM-5     Initial level of
tolerances             on levels {1 ... 4}        work units   busy time    fine-grid u residual
"1-FMG" cycle          {x 1 1 1}                  2.2          1.0 s        -2.99
T.E. w/denom. = 1      {x 2 2 2}                  4.3          1.9 s        -3.34
T.E. w/denom. = 5      {x 5 6 5}                  12.3         5.1 s        -4.18
-3.0 on all levels     {x 22 9 1}                 7.3          6.9 s        -4.00
-5.0 on all levels     {x 32 24 53}               100.1        37.8 s       -4.24
Graded tolerances      {x 21 12 21}               41.4         17.2 s       -4.22

Table 5.5. Comparison between different sets of coarse-grid tolerances in terms of the
effort expended in the FMG procedure, for the Re = 300 symmetric backward-facing
step flow and the bilinear interpolation prolongation procedure. The defect-correction
stabilization strategy is used.
5.2.3 Influence of Initial Guess on Convergence Rate
The cost/initial-accuracy tradeoff was discussed above. In addition, the initial
guess on the fine grid is important because it can affect the asymptotic convergence
rate and stability of subsequent fine-grid cycles. In many cases this consideration is
more important than the cost/initial-accuracy tradeoff, since the time spent in the
FMG procedure may be very small compared to the overall time required if many
fine-grid cycles are needed. The FMG contribution to the total run time, especially
on the CM-5, is not always negligible, though, in particular if one defines convergence
according to the truncation error estimate on the finest grid, i.e. differential
convergence, as suggested by Brandt and Ta'asan [7].
Figure 5.13 gives the convergence path for the entire computation for the lid-
driven cavity flow. In the top plot, the fine-grid average u residual is plotted against
the CM-5 busy time for the defect-correction scheme. The defect-correction scheme
and second-order upwind scheme (bottom plot) converge at nearly the same rate. The
differences in the initial fine-grid residual level due to the FMG procedure evidently
do not persist for very long, and if the purpose is to obtain algebraic convergence,
Eq. 5.4, then the difference in CM-5 busy time due to the FMG procedure is
insignificant. However, if convergence is declared when the average u residual falls beneath
the dotted line, the estimated truncation error level on the fine grid, then the FMG
procedure contributes anywhere from 10% of the total time, in the case of the
truncation error criterion with denominator 1, to 80% of the total time, in the case of the
constant -5.0 criterion.
For the Re = 5000 lid-driven cavity flow, using SIMPLE with ν_u = ν_v = 1 and
ν_c = 4 inner SLOR iterations and a W(1,1) multigrid cycle, Sockol [80] reported that
86 work units and 800 seconds on an Amdahl 5980 were needed to reach convergence.
To reach a similar convergence tolerance, the present computation needed 200 work
units and 64 seconds on the CM-5. In the previous section, the amount of smoothing
used in the present case, V(3,2) cycles, was observed to be somewhat more than was
necessary for this flow problem. The difference between V(3,2) cycles and W(1,1)
cycles in terms of work units is approximately 3 per cycle. Thirty cycles on the
fine grid were taken in the present case. Thus, it seems that the present result is
comparable to Sockol's result.
The fine-grid convergence paths for the symmetric backward-facing step flow,
Figure 5.14, are very interesting. The second-order upwind scheme performs remarkably
well. The average u residual reaches -8.0 in just slightly more than 20 seconds on
the CM-5 and 140 work units (20 V(3,2) cycles on the 321 x 81 fine grid). This
convergence rate corresponds to an amplification factor of 0.6 per cycle for the L1
norm of the u-residual. Because of the fast convergence rate, the contribution of the
startup FMG cycling is a significant fraction of the overall parallel run time.
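(Twenty cycles at an amplification factor of 0.6 reduce the residual norm by
20 log10(0.6) ≈ -4.4 decades, consistent with the drop to -8.0 observed here.)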
The defect-correction strategy does not converge as quickly as the second-order
upwind scheme in the symmetric backward-facing step flow. Furthermore, for the
defect-correction scheme, the fine-grid initial guess evidently affects the rate of
convergence. To obtain the convergence paths in the top plot of Figure 5.14, identical
procedures and parameters were used for the multigrid iterations beginning on the
fine grid. The relaxation factors were ω_uv = ω_c = 0.5 and fixed V(3,2) cycles were
used.
The coarse-grid discretizations in the FMG procedure use first-order upwinding,
while the fine-grid discretization is modified to produce central-difference accuracy.
Thus, the sudden rise in the residual level for all cases (except the truncation error
criterion with denominator equal to 1) suggests that the first-order upwind and
central-difference solutions to this flow problem are very different. It is apparently
difficult for the numerical method to evolve the solution from first-order upwind
accuracy into central-difference accuracy. Thus, there is actually an advantage in not
converging the coarse grids to tight tolerances. On the other hand, the "1-FMG"
procedure has the worst convergence rate of the cases considered. The conclusion
Figure 5.14 supports is that there is an optimal solution accuracy on the coarse grids
in the FMG procedure, which is related to the differential error in the solution, since
the truncation error estimate gives the best result.
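For concreteness, a minimal 1D convection-diffusion sketch of the defect-correction
idea follows (illustrative only; the actual smoother operates on the 2D staggered-grid
momentum and pressure-correction systems). The stable first-order operator is the
one inverted, while the difference between the low- and high-order operators is fed
back through the source term, so the converged solution carries central-difference
accuracy:

    import numpy as np

    def apply_operators(u, c=1.0, nu=0.01, h=0.1):
        # Interior-point operators for c*du/dx - nu*d2u/dx2 (assumes c > 0):
        conv_low = c * (u[1:-1] - u[:-2]) / h              # first-order upwind
        conv_high = c * (u[2:] - u[:-2]) / (2.0 * h)       # central difference
        diff = nu * (u[2:] - 2.0 * u[1:-1] + u[:-2]) / h**2
        return conv_low - diff, conv_high - diff           # A_low u, A_high u

    def defect_correction_rhs(u, s):
        # Each outer iteration solves A_low u_new = s + (A_low - A_high) u_old;
        # at convergence A_high u = s is satisfied even though only the stable
        # low-order operator is ever inverted.
        a_low_u, a_high_u = apply_operators(u)
        return s + a_low_u - a_high_u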
5.2.4 Remarks
Both flow problems have strong nonlinearities and are relatively difficult and
slow to converge as single-grid computations. The multigrid method allows larger
relaxation parameters to be used, and very fast convergence rates can be obtained,
but the performance depends on the discretization on coarse grids (the stabilization
strategy) and the initial fine-grid guess. The fact that the truncation error criterion
gives the best results in both flow problems, and that the initial fine- and coarse-grid
residual levels are relatively independent of how tightly the coarse grids are converged,
indicates that there is only a certain amount of accuracy which can be
obtained initially for a given flow problem and coarse-grid discretization scheme,
and that this observation is essentially a reflection of the truncation error of the
discretization.
The second-order upwind scheme may be prone to large source terms which can
cause the multigrid iterations to diverge, especially if relatively few smoothing
iterations are used. This observation was made for the cavity flow. On the other hand,
when there is a significant difference between the first-order and central-difference
solutions on a given grid, the success of the defect-correction strategy depends strongly
on the initial guess on the finest grid (recall the step-flow results) and, in this sense,
the defect-correction approach is not very robust.
The stability of multigrid iterations is different from that of single-grid calculations,
and certainly more confusing. For example, if a single-grid calculation does not
converge at a given Reynolds number with a certain set of relaxation parameters, then
reducing the relaxation factors is always convergence-enhancing. For multigrid
iterations this is not necessarily true. For the Re = 300 symmetric backward-facing
step flow with the second-order upwind scheme, it was observed that the iterations
diverge using ω_uv = 0.3 with ω_c = 0.2, yet convergence was obtained
with ω_uv = 0.6 and ω_c = 0.4. Evidently, there is a certain minimum amount of
smoothing required; the amount depends on the flow problem as well as the restriction
and prolongation procedures. In other words, reducing the relaxation factors to
cope with problems that have strong nonlinearities may simultaneously require
increasing the number of smoothing iterations on each level. The converse is also true,
although perhaps counterintuitive: reducing the amount of smoothing, for example
from V(3,2) to V(2,1) cycles, may cause stability problems, and increasing the relaxation
factors is the appropriate response. By contrast, for single-grid computations, if the
number of inner iterations is too low, the relaxation factors are decreased to avoid
divergence. Additional testing in the smoothing/relaxation-factor parameter space
would be desirable to further clarify this point.
5.3 Performance on the CM-5
This section quantifies the cost of multigrid cycling on the CM-5, and discusses
the efficiency and scalability of the present algorithm and implementation. In other
words, to connect with the preceding section: once the fine grid is reached, what is
the best grid schedule to use, how long does each cycle take, and how does this cost
scale with the problem size and the number of processors?
In Figure 5.15, the costs of smoothing and prolongation are shown as a function
of problem size, for a 32-node CM-5 and a 512-node CM-5. During a multigrid
cycle these costs are incurred for each grid level. In a V(3,2) cycle, for example, 5
SIMPLE iterations are done at every grid level, along with one restriction from and
one prolongation to every grid level except the coarsest. If the finest grid is 770 x 770,
then on a 32-node CM-5 the subgrid size (VP) is roughly 4800. The next (coarser)
grid is 385 x 385 and has a subgrid size of 1225. Thus in a two-level V(3,2) cycle,
the total time is the sum of 5 SIMPLE iterations at VP = 4800, one restriction from
VP = 4800 to VP = 1225, 5 SIMPLE iterations at VP = 1225, and one prolongation
from VP = 1225 to VP = 4800. Thus, Figure 5.15 is a level-by-level breakdown of
the parallel run time used by the smoothing and prolongation multigrid components.
The times plotted are total elapsed times, including the processor idle time due to
front-end work.
The smoothing cost dominates the cost of the prolongation at every VP. Thus,
unless a multigrid cycle with less smoothing is used, the common idealization that the
restriction and prolongation costs are negligible on serial computers also holds true on
the CM-5. The restriction cost has not been shown, in order to keep the figure clear.
It follows the same trend as prolongation, being slightly less time-consuming if the
residuals alone are restricted (about 25% less), and slightly more time-consuming
if both solutions and residuals are restricted.
The trend is linear for restriction, prolongation, and smoothing alike. When only
residuals are restricted, the ratio of the times for these three components tends
toward 1:2:13 on the 32-node CM-5 as the number of grid points increases (i.e. as
the subgrid size increases).
However, for the 512-node CM-5, the time taken by prolongation grows at a
slightly greater rate than on the 32-node computer. On the 512-node CM-5, VP =
4800 corresponds to a 3080 x 3080 grid size, instead of 770 x 770 as was the case
with the 32-node CM-5. Apparently, the global communication patterns needed to
accomplish the prolongation are not perfectly scalable on the fat-tree, at least with
the current CM-Fortran implementation.
Figure 5.15 gives the impression that the cost of SIMPLE iterations varies linearly
with VP. However, as shown in Figure 5.16, the variation is not actually linear for
very small VP. The bar on the left is the CM-5 busy time for 5 SIMPLE iterations,
given as a function of the grid level. The bar on the right is the corresponding CM-5
elapsed time, taken from data points along the smoothing cost curve in Figure 5.15.
The busy time records the time spent doing parallel computation and interprocessor
communication operations. These operations are very inefficient at small VP on the
CM-5 because the vector units are not fully loaded. Thus, the busy time does not
scale linearly with the subgrid size for small VP because the efficiency of vectorized
computation and interprocessor communication increases as the subgrid size grows.
Note however that the busy time is always a monotonic function of VP.
The variation of elapsed time, by contrast, stays approximately constant until level
5 of this sample multigrid cycle. Level 5 corresponds to VP = 36 on the 32-node
CM-5. The elapsed time includes the idle-processor time due to front-end work. As
discussed in Chapter 3, there are several overhead costs of parallel computation and
interprocessor communication. These operations may leave the CM-5 vector units
inactive for short periods of time. For small VP the dominant consideration in this
regard is the passing of code blocks, i.e. the front-end-to-processor communication.
This cost stays constant with VP, as shown by the elapsed time at small VP
in Figure 5.16. The elapsed time is actually larger for VP = 1 than VP = 2. This
observation is reproducible, but its cause is not fully understood; inaccurate timings
may be the problem. A computer with a relatively fast front-end and communication
network performs closer to the ideal for small VP.

Since the cost of smoothing on the coarse grids does not go to zero as VP → 0, the
possibility exists for coarse grids to make a nonnegligible contribution to the parallel run
time, if the cycling scheme is such that the coarse grids are visited more frequently
than the fine grids. Figures 5.17-5.18 illustrate this point clearly. The cost per
multigrid cycle is compared between V and W cycling strategies; specifically, V(3,2)
cycles are compared against W(3,2) cycles. The timings are obtained on a 32-node
CM-5. The number of levels is fixed as the finest grid dimensions increase. Both
elapsed and busy times are plotted.
The total time per cycle includes the cost of smoothing on the grid levels involved,
the restriction and prolongation costs, and the cost of program control and
input/output. For a V cycle, this time can be modelled as

    \mathrm{Time}(\mathrm{V\ cycle}) = \sum_{k=1}^{n_{level}} s_k (n_{pre} + n_{post}) + \sum_{k=2}^{n_{level}} (r_k + p_k),    (5.20)

where s_k, r_k, and p_k are the smoothing time per iteration on level k (from
Figure 5.15), the restriction time from level k, and the prolongation time to level k.
The number of levels is n_{level}, and n_{pre} and n_{post} represent the numbers of pre-
and post-smoothing iterations, in this case 3 and 2, respectively. In contrast, W cycles
visit the coarse grids much more frequently. Their time per cycle can be modelled
by

    \mathrm{Time}(\mathrm{W\ cycle}) = \sum_{k=1}^{n_{level}} s_k (n_{pre} + n_{post})\, 2^{(n_{level}-k)} + \sum_{k=2}^{n_{level}} (r_k + p_k)\, 2^{(n_{level}-k)}.    (5.21)
These expressions are valid for serial computations, too. On serial computers, the
restriction and prolongation costs are generally negligible, and the smoothing cost
per level s_k is basically a factor of 4 smaller on each successively coarser grid level. For
parallel computation on the CM-5, the fact that s_k remains approximately constant
for the coarsest grids is a problem when many multigrid levels are used.
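The two cycle-time models are straightforward to evaluate; the following sketch
assumes per-level timing arrays s, r, and p measured as in Figure 5.15, with index 0
the coarsest level:

    def v_cycle_time(s, r, p, n_pre=3, n_post=2):
        # Eq. 5.20: each level is smoothed once per cycle; restriction and
        # prolongation do not apply on the coarsest level (index 0).
        n = len(s)
        smooth = sum((n_pre + n_post) * s[k] for k in range(n))
        transfer = sum(r[k] + p[k] for k in range(1, n))
        return smooth + transfer

    def w_cycle_time(s, r, p, n_pre=3, n_post=2):
        # Eq. 5.21: level k (0-based; n-1 is finest) is visited 2**(n-1-k)
        # times per cycle, so the coarse-grid terms no longer become negligible.
        n = len(s)
        smooth = sum((n_pre + n_post) * s[k] * 2 ** (n - 1 - k) for k in range(n))
        transfer = sum((r[k] + p[k]) * 2 ** (n - 1 - k) for k in range(1, n))
        return smooth + transfer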
When only three levels are involved, there is very little disadvantage to using W
cycles, as shown in Figure 5.17. Since it is usually possible to gain some benefit to the
convergence rate by more frequent coarse-grid corrections, W cycles are recommended
on the CM-5 if the number of multigrid levels is small. However, for 5 or more levels,
Figures 5.18 and 5.19, W cycles begin to cost more than they are worth in terms of
improved convergence rates. Also, since there is a greater difference between V- and
W-cycle elapsed and busy times as more multigrid levels are added, reflecting the
relatively larger idle times for coarse grids (recall Figure 5.16), the parallel efficiency
of W cycles is less than that of V cycles.
In the present work V cycles have been sufficient to achieve good convergence
rates so no comparisons have been made to W cycles. Such studies need to be made,
but on a problem-by-problem basis. For the symmetric backward-facing step flow
and lid-driven cavity flow, it is not expected that W cycles will be advantageous.
In many cases it is acceptable, and even beneficial, to use less than the full
complement of multigrid levels, i.e. to increase the problem size keeping the number of levels
fixed. Whether or not the computation is for a physically time-dependent flow
problem, there exists an implied time step in iterative numerical techniques. In multigrid
computations, the changes in the evolving solution on coarser grid levels are smaller,
reflecting the fact that the physical or pseudo-physical development of the solution
on the fine grid is occurring on a much smaller scale. Thus, the coarsest grid levels
may be truncated without deteriorating the convergence rate. Pressure needs to be
treated globally, but usually there are enough multigrid cycles taken to ensure that
slow development of the pressure field is not a problem, even when the coarsest grid
level is not very coarse.
Figures 5.20-5.22 integrate the information contained in the preceding figures. In
Figure 5.20, the variation of the parallel efficiency of 7-level V(3,2) cycles with problem
size is summarized. The problem size is the virtual processor ratio VP of the finest
grid level, but of course during the multigrid cycle operations are also done on
coarser grids, where VP is smaller.
Figure 5.20 is similar to Figure 3.2, obtained using the single-grid pressure-correction
algorithm. For small VP the useful work (the computation) is dominated by the
interprocessor and front-end-to-processor communication, resulting in low parallel
efficiencies. The efficiency rises as the time spent in computation increases relative
to the overhead costs. The highest efficiency obtained is almost 0.65, compared to 0.8
for the single-grid method on the CM-5. The burden of additional program control,
relatively more expensive coarse-grid smoothing, and the restriction and prolongation
tasks adds up to 0.15 in terms of the parallel efficiency.
Unlike the single-grid case, however, the efficiency does not peak for large problem
sizes. The contributions from the less-efficient coarser grids in a multigrid cycle on the
CM-5 are significant even when the finest grid has VP ≈ 8k. The range of subgrid
sizes comprising a realistic 7-level multigrid cycle spans three orders
of magnitude. Unfortunately, the range of VP in which the multigrid smoother
achieves high parallel efficiencies is not as broad. In this regard the performance
of the multigrid method on the MasPar style of SIMD computers is expected to
be much better, since the single-grid method achieved high parallel efficiencies from
VP > 32 all the way up to the largest problem size. Numerical experiments have
not been conducted to study the multigrid method on MasPar SIMD computers,
however, because their Fortran compiler is not yet sufficiently developed to address
the storage problem.
The efficiency in Figure 5.20 apparently has a small dependence on the number of
processors. This dependence is clearly shown in the next figure, Figure 5.21. The
dependence is due to the slightly increased time spent in intergrid transfer operations
with increasing n_p, observed earlier in Figure 5.15. Figure 5.21 shows the decrease in
efficiency with increasing number of processors for five different subgrid sizes. Again,
recall that the subgrid size is for the finest grid, but that much coarser grids are
involved in the 7-level V(3,2) cycles. The figure indicates that the rate of decrease in
efficiency is the same for every VP down to at least VP = 320.

The dashed lines are linear least-squares curve fits to the data. The data points
are perturbed about these lines due to variations in the elapsed parallel run time T_p,
which varies slightly from timing to timing depending on the workload of the front-end
machine. In all cases multiple timings were obtained as a check on reproducibility.
Under light front-end loadings (i.e. in the middle of the night), the measured T_p did
not vary by more than +/-20%.
Figure 5.22 combines the information contained in Figures 5.20 and 5.21. As
in the single-grid case, Figure 3.6, curves of constant efficiency are drawn on a plot
of problem size versus the number of processors. The curves are constructed by
interpolating in Figure 5.21, using the dashed lines as the data instead of the actual
data points, to determine VP at a given (E, n_p) intersection. N is computed from
the definition of VP, i.e. N = n_p VP.
The isoefficiency curves are almost linear; in other words, the 7-level multigrid
algorithm, analyzed on a per-cycle basis, is almost scalable. Each of the isoefficiency
curves can be accommodated by an expression of the form

    N - N_0 = \mathrm{constant}\,(n_p - 32)^q,    (5.22)

with q ≈ 1.1. The symbol N_0 is the initial problem size needed to obtain a particular
E on 32 processors.
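Evaluating a point on an isoefficiency curve from Eq. 5.22 is then a one-liner; in
this sketch the constant and N_0 are placeholders standing in for the least-squares
fits behind Figure 5.22:

    def isoefficiency_problem_size(n_p, n0, constant, q=1.1):
        # Eq. 5.22: problem size N that holds the efficiency fixed as the
        # machine grows beyond the 32-processor baseline, where N = n0.
        return n0 + constant * (n_p - 32) ** q

    # Hypothetical fit values, for illustration only:
    print(isoefficiency_problem_size(256, n0=1.0e5, constant=4.0e3))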
Along the isoefficiency curves, "scaled speedup" [35] is nearly achieved. If the
parallel run time T_p at the initial problem size is acceptable, then it can be maintained
with the present pressure-based multigrid method as the problem size and the number
of processors are increased in proportion. The inner iterations must be point-Jacobi,
of course, since the line-iterative method is O(N log2 N). With the line-iterative
method T_p increases slightly along the isoefficiency curves. The scalability should
be nearly the same, though, since nearest-neighbor communications dominate in the
cyclic-reduction parallel algorithm due to the data mapping used on the CM-5.
5.4 Concluding Remarks
A parallel multigrid algorithm has been formulated and implemented on the CM-5.
The focus of the numerical experiments and timings has been on the potential of
this approach for achieving scalable parallel computing techniques for
application to the incompressible Navier-Stokes equations.
The results obtained indicate that the efficiency of the parallel implementation
of the nonlinear pressure-based multigrid method approaches 0.65 for large problem
sizes, and is almost linearly scalable on the CM-5. The cost per V(3,2) cycle is about
1.5 s on a 128-vector-unit CM-5 for a 7-level problem with a 321 x 321 fine grid. The
cost per iteration is dominated by the smoothing cost, and thus much attention has
been given to the details of the implementation and performance of the single-grid
pressure-based method on SIMD computers. Restriction and prolongation are almost
negligible, although they are responsible for the deviation from linear computational
scalability observed in Figure 5.22. Very large problem sizes can be handled on the
CM-5, up to 3074 x 3074 on a 32-node machine, provided the storage problem for
Fortran multigrid implementations can be resolved.
The speed of the multigrid code was not assessed directly, but reasonable estimates
can be made based on the single-grid performance. For the single-grid SIMPLE
method using the point-Jacobi solver, 417 MFlops was achieved on a 32-node (128-VU)
CM-5. Since the multigrid cost per 7-level cycle is dominated by the smoothing
costs and the multigrid efficiency is 0.65 compared to 0.8 (about a 20% decrease), the
speed is roughly 333 MFlops. Slightly improved efficiency and speed can be obtained
with fewer multigrid levels. For unsteady flow calculations, multigrid cycles with a
small number of levels may perform reasonably well; this should be investigated.
Several practical recommendations have been made regarding multigrid
techniques for parallel computation. V cycles should be used unless the number of
multigrid levels is small; W cycles are too expensive, due to the nonnegligible
coarse-grid smoothing costs. The FMG procedure should be controlled by
the truncation error estimate, Eq. 5.16. The FMG procedure can affect not only
the time needed to reach the fine grid, but also the asymptotic convergence rate
and stability of the multigrid iterations, as is evident from Figure 5.14. This
observation may not carry over to the locally-coupled explicit smoother; it should be
tested in the same way. In terms of computational efficiency the locally-coupled
explicit method has nearly the same properties on the CM-5 as the pressure-correction
method, although the influence of the coefficient computations on the cost per
iteration and efficiency is greater.
Several algorithmic factors have been studied; in particular, the coarse-grid
discretization (the stabilization strategy) and the restriction procedure are observed to
be important to the multigrid convergence rate. It appears that the use of second-
order upwinding on all grid levels together with restriction procedure 3, summing the
residuals but not restricting the solutions, provides a very effective approach for both
the symmetric backward-facing step flow and the lid-driven cavity flow. Smoothing
rates per V(3,2) cycle of 0.6 can be maintained until the residual is driven down to
the level of the roundoff error. The convergence rate with cell-face averaging for the
restriction of solutions and residuals was considerably slower. Similar results were
obtained for the cavity flow.

In terms of the coarse-grid discretization strategy, it appears that the popular
defect-correction approach may not be as robust as the second-order upwinding
strategy, at least for entering-type flow problems. In these types of flows, i.e.
problems with inflow and outflow, the proper formulation of the numerical method (the
pressure-correction smoother) is critical for obtaining good convergence rates. Global
mass conservation must be explicitly enforced during the course of the iterations.
Global mass conservation ensures that the system of pressure-correction equations has
a solution, which is identified as an important prerequisite for obtaining reasonable
convergence rates in open-boundary problems. The well-posed numerical problem
does not distinguish between inflow and outflow at the open boundary: if the
numerical treatment of the open boundary condition is reasonable and can induce
convergence, the finite-volume staggered-grid pressure-correction method can obtain
the correct numerical solution even if inflow occurs at a nominally outflow boundary.
In conclusion, the results of this research indicate that pressure-based multigrid
methods are computationally and numerically scalable algorithms on SIMD
computers. Taking proper account of the many implementational considerations, high
parallel efficiencies can be achieved and maintained as the number of processors and
the problem size increase. Likewise, the convergence rate dependence on problem
size should be greatly decreased by the multigrid technique. Thus the present
approach is viable for massively-parallel numerical simulations of the incompressible
Navier-Stokes equations, and should be developed further on SIMD computers. The
target machine should have fast nearest-neighbor and front-end-to-processor
communication compared to the speed of computation, so that reasonably high parallel
efficiencies can be obtained at small problem sizes. The knowledge and implementations
gained in this research are immediately useful for exploiting the current
computational capabilities of the CM-5 and MP-2 SIMD computers, and are practical
contributions which will facilitate future research in parallel CFD.

Figure 5.1. Schematic of an FMG V(3,2) multigrid cycle.

Figure 5.2. Streamfunction, u velocity component, vorticity, and pressure contours for
the Re = 5000 lid-driven cavity flow, using the second-order upwind convection scheme.
The streamfunction contours are evenly spaced within the recirculation bubbles and in
the interior of the flow, but the two spacings are not the same; the velocities within
the recirculation regions are relatively weak compared to the core flow.

Figure 5.3. Streamfunction, vorticity, pressure, and velocity component contours for
the Re = 300 symmetric backward-facing step flow, using the second-order upwind
convection scheme. The streamfunction contours are evenly spaced within the
recirculation bubbles and in the interior of the flow, but the two spacings are not the
same; the velocities within the recirculation regions are relatively weak compared to
the core flow.

Figure 5.4. The convergence path of the u-residual norm during the FMG procedure
for the Re = 5000 lid-driven cavity flow, using the defect-correction stabilization
strategy. The truncation error criterion, with denominator 1, is used to determine
the coarse-grid tolerances. The abscissas plot work units (proportional to a serial
computer's cpu time) and CM-5 busy time.

Figure 5.5. The convergence path of the u-residual norm during the FMG procedure
for the Re = 5000 lid-driven cavity flow, using the defect-correction stabilization
strategy. The truncation error criterion, with denominator 5, is used to determine
the coarse-grid tolerances. The abscissas plot work units (proportional to a serial
computer's cpu time) and CM-5 busy time.

Figure 5.6. The convergence path of the u-residual norm during the FMG procedure
for the Re = 5000 lid-driven cavity flow, using the defect-correction stabilization
strategy. The coarse-grid convergence criterion is ||r^h|| < -3.0 on every level. The
abscissas plot work units (proportional to a serial computer's cpu time) and CM-5
busy time.

Figure 5.7. The convergence path of the u-residual norm during the FMG procedure
for the Re = 5000 lid-driven cavity flow, using the defect-correction stabilization
strategy. The coarse-grid convergence criteria are graded: for levels 2-6, ||r^h|| <
-0.7, -1.3, -1.9, -2.5, -3.1. The abscissas plot work units (proportional to a serial
computer's cpu time) and CM-5 busy time.

Figure 5.8. The convergence path of the u-residual norm during the FMG procedure
for the Re = 300 symmetric backward-facing step flow, with the defect-correction
stabilization strategy. The truncation error criterion, with denominator 1, is applied
to abbreviate coarse-grid multigrid cycling.

Figure 5.9. The convergence path of the u-residual norm during the FMG procedure
for the Re = 300 symmetric backward-facing step flow, with second-order upwinding
on all levels. The truncation error criterion, with denominator 1, is applied to
abbreviate coarse-grid multigrid cycling.

Figure 5.10. The convergence path of the u-residual norm during the FMG procedure
for the Re = 300 symmetric backward-facing step flow, using second-order upwinding
on all levels. The truncation error criterion, with denominator 5, is applied to
abbreviate coarse-grid multigrid cycling.

Figure 5.11. The convergence path of the u-residual norm during the FMG procedure
for the Re = 300 symmetric backward-facing step flow, using second-order upwinding
on all levels. The coarse-grid convergence criterion is ||r^h|| < -3.0 on every level.

Figure 5.12. The convergence path of the u-residual norm during the FMG procedure
for the Re = 300 symmetric backward-facing step flow, using second-order upwinding
on all levels. The coarse-grid convergence criteria are graded: for levels 2-4, ||r^h|| <
-2.5, -3.1, -3.7.

Figure 5.13. The convergence path of the average u-residual norm on the finest grid
level in the 7-level Re = 5000 lid-driven cavity flow, for different FMG procedures.
The relaxation factors used were ω_uv = ω_c = 0.5.

Figure 5.14. The convergence path of the average u-residual norm on the finest grid
level in the 5-level Re = 300 symmetric backward-facing step flow, for different FMG
procedures. The relaxation factors used were ω_uv = 0.6 and ω_c = 0.4.

Figure 5.15. The relative cost of smoothing and prolongation per V cycle, as a
function of the problem size (virtual processor ratio VP), for 32- and 512-node CM-5
computers (128 and 2048 processors, respectively). The run times are obtained from
V(3,2) cycles, which have 5 smoothing iterations, 1 restriction, and 1 prolongation at
each grid level. Elapsed time (which includes front-end-to-processor communication)
is plotted. The restriction cost is slightly less than the prolongation cost when only
residuals are restricted, and slightly more when solutions are restricted too, but the
trend is the same as for prolongation and is therefore not shown, for clarity.

[Bar chart; x-axis: multigrid level 1-9 with virtual processor ratios VP = 1, 2, 6, 15,
36, 153, 561, 2145, 8385.]
Figure 5.16. Smoothing cost, in terms of elapsed and busy time on a 32-node CM-5,
as a function of the multigrid level, for a case with a 1024 x 1024 fine grid. For each
level the elapsed time is the bar on the right (always greater than the busy time).
The times correspond to one SIMPLE iteration.

Figure 5.17. Parallel run time per cycle on a 32-node CM-5, as a function of the
problem size. V(3,2) cycle cost is compared with W(3,2) cycle cost in terms of total
elapsed time (dashed lines) and busy time (solid lines). As the problem size increases
the number of multigrid levels remains fixed at three.

Figure 5.18. Parallel run time per cycle on a 32-node CM-5, as a function of the
problem size. V(3,2) cycle cost is compared with W(3,2) cycle cost in terms of total
elapsed time (dashed lines) and busy time (solid lines). As the problem size increases
the number of multigrid levels remains fixed at five.

Figure 5.19. Parallel run time per cycle on a 32-node CM-5, as a function of the
problem size. V(3,2) cycle cost is compared with W(3,2) cycle cost in terms of total
elapsed time (dashed lines) and busy time (solid lines). As the problem size increases
the number of multigrid levels remains fixed at seven.

Figure 5.20. Parallel efficiency of the 7-level multigrid algorithm on the CM-5, as a
function of the problem size, using V(3,2) cycles. Efficiency is determined from
Eq. 3.3, where T_p is the elapsed time for a fixed number of V(3,2) cycles and T_1 is the
parallel computation time (T_node-cpu) multiplied by the number of processors. The
trend is the same as for the single-grid algorithm, indicating the dominant contribution
of the smoother to the overall multigrid cost.

[Plot of parallel efficiency versus 32 to 512 CM-5 nodes, with dashed least-squares
fits; curves labelled VP = 6400, 3300, 2100, and 320, among the five subgrid sizes.]
Figure 5.21. Parallel efficiency of the 7-level multigrid algorithm on the CM-5, as
a function of the number of processors, for several problem sizes. Efficiency is
determined from Eq. 3.3, where T_p is the elapsed time for a fixed number of V(3,2)
cycles, and T_1 is the parallel computation time (T_node-cpu) multiplied by the number
of processors. There is only a small fall-off in the efficiency as n_p increases.

Figure 5.22. Isoefficiency curves for the 7-level pressure-correction multigrid method,
based on timings of a fixed number of V(3,2) cycles, using point-Jacobi inner
iterations. The plot is constructed from linear least-squares curve fits of the
data in Figures 5.21 and 5.20. The isoefficiency curves have the general form of
Eq. 5.22, with q ≈ 1.1 for the efficiencies shown.

REFERENCES
[1] W. F. Ames. Numerical Methods for Partial Differential Equations. Computer
Science and Applied Mathematics. Academic Press, San Diego, second edition,
1977.
[2] J. B. Bell. P. Colella, and H. M. Glaz. A second-order projection method for
the incompressible Navier-Stokes equations. Journal of Computational Physics,
85(2) :257—283, 1989.
[3] E. L. Blosch and W. Shyy. Sequential pressure-based Navier-Stokes algorithms
on SIMD computers—computational issues. Numerical Heat Transfer, Part B,
26(2): 115—132, 1994.
[4] M. E. Braaten and W. Shyy. Study of pressure correction methods with multigrid
for viscous flow calculations in nonorthogonal curvilinear coordinates. Numerical
Heat Transfer, 11:417-442, 1987.
[5] A. Brandt. Multi-level adaptive solutions to boundary-value problems. Mathe¬
matics of Computation, 31:333-390, 1977.
[6] A. Brandt. 1984 Multigrid Guide with Applications to Fluid Dynamics. Lec¬
ture Notes in Computational Fluid Dynamics, von Karman Institute for Fluid
Dynamcis, Rhode-Saint-Genése, Belgium, 1984. Available from Department of
Computer Science, University of Colorado, Denver, CO.
[7] A. Brandt and S. Ta'asan. Multigrid solutions to quasi-elliptic schemes. In E. M.
Murman and S. S. Abarbanel, editors, Progress and Supercomputing in Com¬
putational Fluid Dynamics, Proceedings of U.S.-Israel Workshop, 1984, pages
235-255. Birkháuser, Boston, 1985.
[8] A. Brandt and I. Yavneh. On multigrid solution of high-Reynolds incompressible
entering flows. Journal of Computational Physics, 101:151-164, 1992.
[9] A. Brandt and I. Yavneh. Accelerated multigrid convergence and high-Reynolds
recirculating flows. SIAM Journal of Scientific Computing, 14(3):607—626, May
1993.
[10] W. Briggs. A Multigrid Tutorial. SIAM, Philadelphia, 1987.
[11] W. Briggs and S. F. McCormick. Introduction. In S. F. McCormick, editor,
Multigrid Methods, chapter 1. SIAM, Philadelphia, 1987.
[12] C.-H. Bruneau and C. Jouron. An efficient scheme for solving steady incompress¬
ible Navier-Stokes equations. Journal of Computational Physics, 89:389-413,
1990.
182

183
[13] T. Chan and R. Schreiber. Parallel networks for multigrid algorithms: Archi¬
tecture and complexity. SIAM Journal of Scientific and Statistical Computing,
6:698-711, 1985.
[14] T. F. Chan and R. S. Tuminaro. A survey of parallel multigrid algorithms.
In A. K. Noor, editor. Parallel Computations and Their Impact on Mechanics,
AMD-86, pages 155-170. ASME, New York, 1988.
[15] A. J. Chorin. A numerical method for solving incompressible viscous flow prob¬
lems. Journal of Computational Physics, 2:12-26, 1967.
[16] A. J. Chorin. Numerical solution of the Navier-Stokes equations. Mathematics
of Computation, 22(106):745-762, 1967.
[17] J. E. Dendy. Black box multigrid. Journal of Computational Physics, 48:366-
386, 1982.
[18] J. E. Dendy, M. P. Ida, and J. M. Rutledge. A semicoarsening multigrid algo¬
rithm for SIMD machines. SIAM Journal of Scientific and Statistical Computing,
13(6): 1460—1469, 1992.
[19] J. P. Van Doormal and G. D. Raithby. Enhancements of the SIMPLE method
for predicting incompressible fluid flows. Numerical Heat Transfer, 7:147-163,
1984.
[20] T. A. Egolf. Computational performance of CFD codes on the Connection Ma¬
chine. In Horst D. Simon, editor, Parallel Computational Fluid Dynamics: Im¬
plementations and Results, pages 271-280. The MIT Press, Cambridge, MA,
1992.
[21] B. Favini and G. Guj. MG techniques for staggered differences. In D. J. Pad-
don and H. Holstein, editors, Multigrid Methods for Integral and Differential
Equations, pages 253-262. Clarendon Press, Oxford, 1985.
[22] J. H. Ferziger and M. Peric. Computational methods for incompressible flow.
In M. Lesieur and J. Zinn-Justin, editors, Proceedings of Session LTV of the Les
Houches conference on Computational Fluid Dynamics. Elsevier, Amsterdam,
The Netherlands. 1993.
[23] P. F. Fischer and A. T. Patera. Parallel simulation of viscous incompressible
flows. Annual Review of Fluid Mechanics, 27:483-527, 1994.
[24] C. A. J. Fletcher. Computational Techniques for Fluid Dynamics. Sprineer-
Verlag, Berlin, 1991.
[25] P. O. Frederickson and 0. A. McBryan. Normalized convergence rates for the
PSMG method. SIAM Journal of Scientific and Statistical Computing, 12:221—
229, 1981.
[26] D. Gannon and J. Van Rosendale. On the structure of parallelism in a highly concurrent PDE solver. Journal of Parallel and Distributed Computing, 3:106-135, 1986.
[27] D. K. Gartling. A test problem for outflow boundary conditions—flow over a
backward-facing step. International Journal for Numerical Methods in Fluids,
11:953-967, 1990.
[28] U. Ghia, K. N. Ghia, and C. T. Shin. High-Re solutions for incompressible flow using the Navier-Stokes equations and a multigrid method. Journal of Computational Physics, 48:387-411, 1982.
[29] P. M. Gresho. Incompressible fluid dynamics: Some fundamental formulation
issues. Annual Review of Fluid Mechanics, 23:413-454, 1991.
[30] P. M. Gresho. A summary report on the 14 July 91 minisymposium on outflow boundary conditions for incompressible flow. In Proceedings, Fourth International Symposium on Computational Fluid Dynamics, pages 436-442, University of California at Davis, 1991.
[31] P. M. Gresho, D. K. Gartling, J. R. Torczynski, K. A. Cliffe, K. H. Winters, T. J. Garratt, A. Spence, and J. W. Goodrich. Is the steady viscous incompressible two-dimensional flow over a backward-facing step at Re = 800 stable? International Journal for Numerical Methods in Fluids, 17:501-541, 1993.
[32] P. M. Gresho and R. L. Sani. On pressure boundary conditions for the incompressible Navier-Stokes equations. International Journal for Numerical Methods in Fluids, 7:1111-1145, 1987.
[33] M. Griebel. Sparse grid multilevel methods, their parallelization, and their application to CFD. In R. B. Pelz, A. Ecer, and J. Hauser, editors, Parallel Computational Fluid Dynamics ’92, pages 161-174. Elsevier, Amsterdam, The Netherlands, 1993.
[34] S. N. Gupta, M. Zubair, and C. E. Grosch. A multigrid algorithm for parallel
computers: CPMG. Journal of Scientific Computing, 7(3):263-279, 1992.
[35] J. L. Gustafson. Fixed time, tiered memory, and superlinear speedup. In Proceedings of the Fifth Distributed Memory Computing Conference, pages 1255-1260, Charleston, SC, 1990. IEEE Computer Society Press.
[36] W. Hackbusch. Convergence of multi-grid iterations applied to difference equations. Mathematics of Computation, 34(150):425-440, April 1980.
[37] W. Hackbusch. Survey of convergence proofs for multi-grid iterations. In J. Frehse, D. Pallaschke, and U. Trottenberg, editors, Special Topics of Applied Mathematics—Functional Analysis, Numerical Analysis, and Optimization, pages 151-164. North-Holland, Amsterdam, The Netherlands, 1980.
[38] T. Hagstrom. Conditions at the downstream boundary for simulations of viscous, incompressible flow. SIAM Journal on Scientific and Statistical Computing, 12(4):843-858, 1991.
[39] F. H. Harlow and J. E. Welch. Numerical calculation of time-dependent viscous incompressible flow of fluid with free surface. Physics of Fluids, 8(12):2182-2189, December 1965.
[40] R. Hockney and C. Jesshope. Parallel Computers: Architecture, Programming
and Algorithms. Adam Hilger, Bristol, 1981.
[41] T. J. R. Hughes, W. K. Liu, and A. Brooks. Review of finite-element analysis of incompressible viscous flow by the penalty function method. Journal of Computational Physics, 30(1):1-60, 1979.
[42] B. R. Hutchinson and G. D. Raithby. A multigrid method based on the additive correction strategy. Numerical Heat Transfer, 9:511-537, 1986.
[43] R. I. Issa. Solution of the implicitly discretised fluid flow equations by operator-splitting. Journal of Computational Physics, 61:40-65, 1985.
[44] D. C. Jespersen and C. Levit. A computational fluid dynamics algorithm on a massively parallel computer. International Journal of Supercomputer Applications, 3(4):9-27, 1989.
[45] K. M. Kelkar and S. V. Patankar. Development of generalized block correction
procedures for the solution of discretized Navier-Stokes equations. Computer
Physics Communications, 53:329-336, 1989.
[46] V. Kumar and V. Singh. Scalability of parallel algorithms for the all-pairs
shortest-path problem. Journal of Parallel and Distributed Computing, 13:124-
138, 1991.
[47] C. Levit. Grid communication on the Connection Machine: Analysis, performance, and improvements. In Horst D. Simon, editor, Scientific Applications of the Connection Machine, pages 316-332. World Scientific, New York, 1989.
[48] F. S. Lien and M. A. Leschziner. Multigrid convergence acceleration for complex flow including turbulence. In W. Hackbusch and U. Trottenberg, editors, Multigrid Methods III, pages 277-288. Birkhäuser, Boston, 1991.
[49] J. Linden, G. Lonsdale, H. Ritzdorf, and A. Schüller. Block-structured multigrid for the Navier-Stokes equations: Experiences and scalability questions. In R. B. Pelz, A. Ecer, and J. Hauser, editors, Parallel Computational Fluid Dynamics ’92, pages 267-278. Elsevier, Amsterdam, The Netherlands, 1993.
[50] J. Linden, G. Lonsdale, B. Steckel, and K. Stüben. Multigrid for the steady-state incompressible Navier-Stokes equations: A survey. In International Conference for Numerical Methods in Fluids, pages 57-68, Berlin, 1990. Springer-Verlag.
[51] G. Lonsdale and A. Schüller. Multigrid efficiency for complex flow simulations on distributed memory machines. Parallel Computing, 19(1):23-32, January 1993.
[52] MasPar Computer Corporation, Sunnyvale, CA. MasPar System Overview, 1993.
[53] O. A. McBryan, P. O. Frederickson, J. Linden, A. Schüller, K. Solchenbach, K. Stüben, C. A. Thole, and U. Trottenberg. Multigrid methods on parallel computers—a survey of recent developments. Impact of Computing in Science and Engineering, 3:1-75, 1991.
[54] J. A. Michelsen. Mesh-adaptive solution of the Navier-Stokes equations. In W. Hackbusch and U. Trottenberg, editors, Multigrid Methods III, pages 301-312. Birkhäuser, Boston, 1991.
[55] R. A. Nicolaides. On some theoretical and practical aspects of multigrid methods. Mathematics of Computation, 33(147):933-952, 1979.
[56] J. Nordstrom. The influence of open boundary conditions on the convergence to steady state for the Navier-Stokes equations. Journal of Computational Physics, 85(1):210-244, 1989.
[57] E. S. Oran, J. P. Boris, and E. F. Brown. Fluid-dynamic computations on a
Connection Machine—preliminary timings and complex boundary conditions.
AIAA Paper 90-0335, 28th Aerospace Sciences Meeting and Exhibit, Reno, NV,
1990.
[58] J. M. Ortega and R. G. Voigt. Solution of partial differential equations on vector and parallel computers. SIAM Review, 27(2):149-240, June 1985.
[59] A. Overman and J. Van Rosendale. Mapping robust parallel multigrid algorithms to scalable memory architectures. In S. McCormick, editor, Proceedings of the Third Copper Mountain Conference on Multigrid Methods. Marcel Dekker, New York, 1993.
[60] S. V. Patankar. Numerical Heat Transfer and Fluid Flow. Hemisphere, Washington, D.C., 1980.
[61] S. V. Patankar and D. B. Spalding. A calculation procedure for heat, mass and
momentum transfer in three-dimensional parabolic flows. International Journal
of Heat and Mass Transfer, 15:1787-1806, 1972.
[62] R. Peyret and T. D. Taylor. Computational Methods for Fluid Flow. Springer-
Verlag, New York, 1983.
[63] W. H. Press, S. A. Teukolsky, W. T. Vetterling, and B. P. Flannery. Numerical Recipes in Fortran: The Art of Scientific Computing. Cambridge University Press, Cambridge, second edition, 1992.
[64] C. M. Rhie. A pressure-based Navier-Stokes solver using the multigrid method.
AIAA Journal, 27:1017-1018, 1989.
[65] P. L. Roe. Beyond the Riemann problem, part 1. In M. Y. Hussaini, A. Kumar,
and M. D. Salas, editors, Algorithmic Trends in Computational Fluid Dynamics,
pages 341-367. Springer-Verlag, Berlin, 1991.
[66] R. Schreiber. An assessment of the Connection Machine. Technical report, RIACS, NASA Ames Research Center, Mountain View, CA, April 1990.
[67] R. Schreiber and H. D. Simon. Towards the teraflops capability for CFD. In Horst D. Simon, editor, Parallel Computational Fluid Dynamics: Implementations and Results, chapter 16, pages 313-342. The MIT Press, Cambridge, MA, 1992.
[68] M. H. Schultz. Some challenges in massively parallel computation. In M. Y. Hussaini, A. Kumar, and M. D. Salas, editors, Algorithmic Trends in Computational Fluid Dynamics, pages 59-63. Springer-Verlag, Berlin, 1991.
[69] A. Settari and K. Aziz. A generalization of the additive correction methods for the iterative solution of matrix equations. SIAM Journal on Numerical Analysis, 10:506-521, 1973.
[70] G. J. Shaw and S. Sivaloganathan. A multigrid method for recirculating flows.
International Journal for Numerical Methods in Fluids, 8(4):417-440, April
1988.
[71] G. J. Shaw and S. Sivaloganathan. On the smoothing properties of the SIMPLE pressure-correction algorithm. International Journal for Numerical Methods in Fluids, 8(4):441-461, April 1988.
[72] W. Shyy. Effects of open boundary on incompressible Navier-Stokes flow computation: Numerical experiments. Numerical Heat Transfer, 12:157-178, 1987.
[73] W. Shyy. Computational Modeling for Fluid Flow and Interfacial Transport.
Elsevier, Amsterdam, The Netherlands, 1994.
[74] W. Shyy and C.-S. Sun. Development of a pressure-correction/staggered-grid based multigrid solver for incompressible recirculating flows. Computers and Fluids, 22(1):51-76, 1993.
[75] W. Shyy, S. Thakur, and J. Wright. Second-order upwind and central difference
schemes for recirculating flow computation. AIAA Journal, 30:923-931, 1992.
[76] J. C. Simo and F. Armero. Unconditional stability and long-term behavior of transient algorithms for the incompressible Navier-Stokes and Euler equations. Computer Methods in Applied Mechanics and Engineering, 111:111-154, 1994.
[77] H. D. Simon, editor. Parallel Computational Fluid Dynamics: Implementations
and Results. The MIT Press, Cambridge, MA, 1992.
[78] H. D. Simon, W. R. Van Dalsem, and L. Dagum. Parallel CFD: Current status and future requirements. In Horst D. Simon, editor, Parallel Computational Fluid Dynamics: Implementations and Results, chapter 1. The MIT Press, Cambridge, MA, 1992.
[79] R. A. Smith and A. Weiser. Semicoarsening multigrid on a hypercube. SIAM Journal on Scientific and Statistical Computing, 13(6):1314-1329, 1992.
[80] P. M. Sockol. Multigrid solution of the Navier-Stokes equations on highly
stretched grids with defect correction. In S. McCormick, editor, Proceedings of
the Third Copper Mountain Conference on Multigrid Methods. Marcel Dekker,
New York, 1993.
[81] S. P. Spekreijse. Multigrid Solution of the Steady Euler Equations. CWI Tract 46. Centre for Mathematics and Computer Science, Amsterdam, The Netherlands, 1988.
[82] Thinking Machines Corporation, Cambridge, MA. CM Fortran Optimization
Notes: Slicewise Model Version 1.0, March 1991.
[83] Thinking Machines Corporation, Cambridge, MA. Prism User’s Guide Version
1.2, April 1991.
[84] Thinking Machines Corporation, Cambridge, MA. CM-5 Technical Summary,
November 1992.
[85] Thinking Machines Corporation, Cambridge, MA. CM Fortran Release Notes,
Preliminary Documentation for Version 2.1 Beta 1, April 1993.
[86] Thinking Machines Corporation, Cambridge, MA. Optimizing CM-Fortran Code
on the CM-5, August 1993.
[87] M. C. Thompson and J. H. Ferziger. An adaptive multigrid technique for the incompressible Navier-Stokes equations. Journal of Computational Physics, 82:94-121, 1989.
[88] R. S. Tuminaro and D. E. Womble. Analysis of the multigrid FMV cycle on large-scale parallel machines. SIAM Journal on Scientific Computing, 14(5):1159-1173, 1993.
[89] S. P. Vanka. Block-implicit multigrid solution of Navier-Stokes equations in primitive variables. Journal of Computational Physics, 65:138-158, 1986.
[90] P. Wesseling. Linear multigrid methods. In S. F. McCormick, editor, Multigrid
Methods, chapter 2. SIAM, Philadelphia, 1987.
[91] P. Wesseling. A survey of Fourier smoothing analysis results. In W. Hackbusch and U. Trottenberg, editors, Multigrid Methods III, pages 105-127. Birkhäuser, Boston, 1991.
[92] D. E. Womble and B. C. Young. Multigrid on massively-parallel computers.
In Proceedings of the Fifth Distributed Memory Computing Conference, pages
559-563, Charleston, SC, 1990. IEEE Computer Society Press.
[93] P. M. De Zeeuw and E. J. Van Asselt. The convergence rate of multi-level algorithms applied to the convection-diffusion equation. SIAM Journal on Scientific and Statistical Computing, 6(2):492-503, April 1985.
[94] S. Zeng and P. Wesseling. Numerical study of a multigrid method with four
smoothing methods for the incompressible Navier-Stokes equations in general
coordinates. In S. McCormick, editor, Proceedings of the Third Copper Mountain
Conference on Multigrid Methods. Marcel Dekker, New York, 1993.

BIOGRAPHICAL SKETCH
Edwin Blosch received his B.S. degree with high honors from the University of
Florida in 1989, received his M.S. degree, also from U.F., in 1991, and anticipates
receiving his Ph.D. degree in 1994. He wants to be the first person to carry out a
complete numerical simulation of a practically important physical process, for
example meteorological or oceanographic particle transport, combustion, or the
manufacturing processes of alloys, with as little modelling as possible and with
enough spatial and temporal resolution that the public will have no trouble
recognizing the utility of his work and of scientific computing in general. Away
from work he enjoys golf, basketball, and travelling with his wife.

I certify that I have read this study and that in my opinion it conforms
to acceptable standards of scholarly presentation and is fully adequate, in
scope and quality, as a dissertation for the degree of Doctor of Philosophy.
Wei Shyy, Chairman
Professor of Aerospace Engineering,
Mechanics and Engineering Science
I certify that I have read this study and that in my opinion it conforms
to acceptable standards of scholarly presentation and is fully adequate, in
scope and quality, as a dissertation for the degree of Doctor of Philosophy.
Chen-Chi Hsu
Professor of Aerospace Engineering,
Mechanics and Engineering Science
I certify that I have read this study and that in my opinion it conforms
to acceptable standards of scholarly presentation and is fully adequate, in
scope and quality, as a dissertation for the degree of Doctor of Philosophy.
Bruce Carroll
Associate Professor of Aerospace
Engineering, Mechanics and
Engineering Science
I certify that I have read this study and that in my opinion it conforms
to acceptable standards of scholarly presentation and is fully adequate, in
scope and quality, as a dissertation for the degree of Doctor of Philosophy.
David Mikolaitis
Associate Professor of Aerospace
Engineering, Mechanics and
Engineering Science

I certify that I have read this study and that in my opinion it conforms
to acceptable standards of scholarly presentation and is fully adequate, in
scope and quality, as a dissertation for the degree of Doctor of Philosophy.
Sartaj Sahni
Professor of Computer and
Information Sciences
This dissertation was submitted to the Graduate Faculty of the College
of Engineering and to the Graduate School and was accepted as partial
fulfillment of the requirements for the degree of Doctor of Philosophy.
December 1994
Winfred M. Phillips
Dean, College of Engineering
Karen A. Holbrook
Dean, Graduate School
