
Multi-Level Graph Partitioning

Permanent Link: http://ufdc.ufl.edu/UFE0021171/00001

Material Information

Title: Multi-Level Graph Partitioning
Physical Description: 1 online resource (39 p.)
Language: english
Creator: Aurora, Pawan K
Publisher: University of Florida
Place of Publication: Gainesville, Fla.
Publication Date: 2007

Subjects

Subjects / Keywords: coarsening, compression, graph, level, multi, partitioning, performance, profile, refining, uncoarsening
Computer and Information Science and Engineering -- Dissertations, Academic -- UF
Genre: Computer Engineering thesis, M.S.
bibliography   ( marcgt )
theses   ( marcgt )
government publication (state, provincial, territorial, dependent)   ( marcgt )
born-digital   ( sobekcm )
Electronic Thesis or Dissertation

Notes

Abstract: Graph partitioning is an important problem that has extensive applications in many areas, including scientific computing, VLSI design, and task scheduling. The multi-level graph partitioning algorithm reduces the size of the graph gradually by collapsing vertices and edges over various levels, partitions the smallest graph and then uncoarsens it to construct a partition for the original graph. Also, at each step of uncoarsening the partition is refined as the degree of freedom increases. In this thesis, we have implemented the multi-level graph partitioning algorithm and used the Fiduccia Mattheyses algorithm for refining the partition at each level of uncoarsening. Along with the few published heuristics we have tried one of our own for handling dense nodes during the coarsening phase. We present our results and compare them to those of the Metis software that is the current state of the art package for graph partitioning.
General Note: In the series University of Florida Digital Collections.
General Note: Includes vita.
Bibliography: Includes bibliographical references.
Source of Description: Description based on online resource; title from PDF title page.
Source of Description: This bibliographic record is available under the Creative Commons CC0 public domain dedication. The University of Florida Libraries, as creator of this bibliographic record, has waived all rights to it worldwide under copyright law, including all related and neighboring rights, to the extent allowed by law.
Statement of Responsibility: by Pawan K Aurora.
Thesis: Thesis (M.S.)--University of Florida, 2007.
Local: Adviser: Davis, Timothy A.

Record Information

Source Institution: UFRGP
Rights Management: Applicable rights reserved.
Classification: lcc - LD1780 2007
System ID: UFE0021171:00001








MULTI-LEVEL GRAPH PARTITIONING


By
PAWAN KUMAR AURORA



















A THESIS PRESENTED TO THE GRADUATE SCHOOL
OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT
OF THE REQUIREMENTS FOR THE DEGREE OF
MASTER OF SCIENCE

UNIVERSITY OF FLORIDA

2007




































© 2007 Pawan Kumar Aurora


































To my loving family










ACKNOWLEDGMENTS

I am deeply indebted to my supervisor Prof. Dr. Timothy Davis, whose stimulating suggestions and encouragement helped me. His expertise in writing robust software was very useful and a great learning experience for me. I have never written better software.

I would like to express my thanks to Dr. William Hager and Dr. Arunava Banerjee for agreeing to be on my committee.

My special thanks go to my parents for supporting my decision to pursue higher studies and providing all the moral support. My brother Jitender deserves a special mention here for being my best friend. Last but not the least, I thank my fiancée Sonal for all her love and encouragement.










TABLE OF CONTENTS

ACKNOWLEDGMENTS ..... 4
LIST OF TABLES ..... 6
LIST OF FIGURES ..... 7
ABSTRACT ..... 8

CHAPTER

1 INTRODUCTION

2 REVIEW OF SOME GRAPH PARTITIONING METHODS
  2.1 Kernighan and Lin
  2.2 Fiduccia and Mattheyses
  2.3 Quadratic Programming

3 METHODS
  3.1 Graph Compression
  3.2 Handling Disconnected Graphs
  3.3 Multi-level Algorithm
  3.4 Coarsening
    3.4.1 Random Matching
    3.4.2 Heavy Edge Matching
    3.4.3 Heaviest Edge Matching
    3.4.4 Zero Edge Matching
  3.5 Heuristic for Handling Dense Nodes
  3.6 Cutting the Coarsest Graph
  3.7 Uncoarsening and Refining

4 RESULTS
  4.1 Conclusion
  4.2 Future Work
    4.2.1 Interfacing with QP-Part
    4.2.2 Implementing Boundary KL

5 PSEUDO CODE FOR GP
  5.1 Top Level
  5.2 Handling Dense Nodes

REFERENCES

BIOGRAPHICAL SKETCH










LIST OF TABLES

Table                                                                      page

4-1 Some of the graphs used ..... 28
4-2 Common parameters for GP ..... 28
4-3 Results for pmetis, hmetis and GP ..... 28
4-4 Cut sizes for pmetis and GP ..... 29
4-5 Results for GP with simple and repetitive multi-level ..... 30
4-6 Cut sizes for GP with and without dense node heuristic (DNH) ..... 31
4-7 Average number of passes of FM ..... 32











LIST OF FIGURES

Figure                                                                     page

1-1 Multi-level ..... 10
2-1 Kernighan and Lin: subsets X and Y are swapped ..... 12
3-1 Graph compression ..... 14
3-2 Repetitive multi-level ..... 16
3-3 Graph coarsening ..... 18
3-4 Coarsening of dense nodes ..... 19
3-5 Dense nodes handling heuristic ..... 22
3-6 Bucket list structure ..... 23
3-7 The FM algorithm ..... 24
4-1 Performance profile definition ..... 27
4-2 Input Matrix: GHS_psdef/ford1 ..... 29
4-3 Permuted GHS_psdef/ford1 ..... 30
4-4 Edge cut quality comparisons among hmetis, metis and GP ..... 31
4-5 Run time comparisons among hmetis, metis and GP ..... 32
4-6 Edge cut quality comparison between metis and GP ..... 33
4-7 Edge cut quality comparisons among various GP ..... 34
4-8 Run time comparisons among various GP ..... 35










Abstract of Thesis Presented to the Graduate School
of the University of Florida in Partial Fulfillment of the
Requirements for the Degree of Master of Science

MULTI-LEVEL GRAPH PARTITIONING

By

Pawan Kumar Aurora

August 2007

Chair: Timothy Davis
Major: Computer Engineering

Graph partitioning is an important problem that has extensive applications in many areas, including scientific computing, VLSI design, and task scheduling. The multi-level graph partitioning algorithm reduces the size of the graph gradually by collapsing vertices and edges over various levels, partitions the smallest graph and then uncoarsens it to construct a partition for the original graph. Also, at each step of uncoarsening the partition is refined as the degree of freedom increases. In this thesis we have implemented the multi-level graph partitioning algorithm and used the Fiduccia Mattheyses algorithm for refining the partition at each level of uncoarsening. Along with the few published heuristics we have tried one of our own for handling dense nodes during the coarsening phase. We present our results and compare them to those of the Metis software that is the current state of the art package for graph partitioning.









CHAPTER 1
INTRODUCTION

Given an unweighted graph G with V vertices and E edges and given a number k,

the Graph Partitioning problem is to divide the V vertices into k parts such that the

number of edges connecting vertices in different parts is minimized given the condition

that each part contains roughly the same number of vertices. If the graph is weighted, i.e.

the vertices and edges have weights associated with them, the problem requires the sum

of the weights of the edges connecting vertices in different parts to be minimized given

the condition that the sum of the weights of the vertices in each part is roughly the same.

The problem can be reduced into that of bisection where the graph is split into two parts

and then each part is further bisected using the same procedure recursively. The problem

addressed in this thesis is that of bisecting the given graph according to a given ratio.

Also, the input graph is assumed to be un-weighted. However, this assumption is just at

the implementation level and does not in any way change the underlying algorithms.

It has been shown that the Graph Partitioning problem is NP-hard [1] and so

heuristic based methods have been employed to get sub-optimal solutions. The goal for

each heuristic method is to get the smallest possible cut in reasonable time. We discuss

some popular methods in the next chapter.

Graph Partitioning is an important problem since it finds extensive applications

in many areas, including scientific computing, VLSI design and task scheduling. One

important application is the reordering of sparse matrices prior to factorization. It has

been shown that the reordering of rows and columns of a sparse matrix can reduce the

amount of fill that is caused during factorization and thus result in a significant reduction

in the floating point operations required during factorization. Although the ideal thing

is to find a node separator rather than an edge separator, an edge separator can be

converted into a node separator using minimum cover methods.






The multi-level graph partitioning algorithm reduces the size of the graph gradually
by collapsing vertices and edges over various levels, partitions the smallest graph and
then uncoarsens it to construct a partition for the original graph. Also, at each step of
uncoarsening the partition is refined as the degree of freedom increases. In this thesis we
have implemented the multi-level graph partitioning algorithm and used the Fiduccia
Mattheyses algorithm for refining the partition at each level of un-coarsening.



Figure 1-1. Multi-level coarsening and uncoarsening. The smallest graph is cut and the
partition gets projected and refined as it moves up to the original top level
graph.
In the multi-level approach the coarsening phase is important. If a graph is folded in
such a way that the properties of the graph are preserved i.e. the coarse graph is a smaller
replica of the fine graph, a good cut of the coarse graph translates into a good cut of the
fine graph [2]. Using the general coarsening heuristics described in chapter 3, it is possible
that the properties of the graph are not preserved when handling dense nodes and may










result in a very unbalanced coarse graph. We present a heuristic for handling dense nodes

that preserves the properties of the graph during coarsening and gives a more balanced

coarse graph in fewer coarsening steps.










CHAPTER 2
REVIEW OF SOME GRAPH PARTITIONING METHODS

2.1 Kernighan and Lin

The KL algorithm [3] starts with an arbitrary partition and then tries to improve it

by finding and exchanging a set of nodes in one partition with a same size set in the other

partition such that the net effect is a reduction in the cut size. The set of nodes is found

incrementally starting with one node in each partition and adding one node in each step.

The algorithm starts by calculating the difference of external and internal costs for each

node and then selects that pair of nodes a and b, one in each partition, for which the gain

is maximum. Gain is defined as

G = D_a + D_b - 2c_ab, where D_a = E_a - I_a and D_b = E_b - I_b.

Nodes a and b are then kept aside and the D values are recalculated for the remaining

nodes assuming that nodes a and b have been swapped. The process is repeated and a

new pair of nodes is selected. This is repeated until all the nodes get set aside. Finally a

subset of k nodes is selected from each partition such that the net gain is the maximum positive value. These nodes are exchanged and the new graph is again improved using the same algorithm. The process stops when the maximum gain by exchanging any subset of nodes is not more than 0.







Figure 2-1. Kernighan and Lin: subsets X and Y are swapped. A* = A - X + Y and
B* = B - Y + X.
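The KL gain above can be sketched in code. This is a minimal illustration, assuming graphs are stored as adjacency dicts {node: {neighbor: weight}} (a hypothetical representation, not the data structures of the thesis software):

```python
# Sketch of the KL swap-gain computation. D = E - I is the difference of a
# node's external and internal costs; swapping a and b gains Da + Db - 2*c_ab.
def d_value(graph, part, node):
    """D value of `node`, assumed to lie in the vertex set `part`."""
    ext = sum(w for u, w in graph[node].items() if u not in part)
    internal = sum(w for u, w in graph[node].items() if u in part)
    return ext - internal

def swap_gain(graph, part_a, a, b):
    """Reduction in cut size if a (in part_a) and b (outside) are swapped."""
    part_b = set(graph) - set(part_a)
    return (d_value(graph, part_a, a) + d_value(graph, part_b, b)
            - 2 * graph[a].get(b, 0))
```

For a 4-cycle partitioned as {1, 2} versus {3, 4}, swapping 1 and 3 has gain -2: the swap would cut all four edges instead of two.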

2.2 Fiduccia and Mattheyses

The FM algorithm [4] is basically an efficient implementation of the KL algorithm using special data structures that bring down the complexity from O(n^2) to linear in terms









of the size of the graph. Also, unlike KL, the FM algorithm moves one node at a time

instead of exchanging a set of nodes between the partitions. However, it uses the concept

of node gain and moves nodes even if the gain is negative in order to climb out of local

minima. This algorithm works by inserting nodes into various buckets according to their

gains and uses a doubly-linked list for nodes within the same gain bucket. This makes the

deletion and insertion of nodes into buckets a constant time operation. More details about

this algorithm are presented in the next chapter where we discuss our implementation of

this algorithm.
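The bucket idea can be sketched as follows. This is a simplified stand-in for FM's doubly-linked bucket lists, using one set per gain value (which also gives constant-time insert and delete); the gain range and max-gain pointer follow the description above:

```python
from collections import defaultdict

class GainBuckets:
    """Sketch of the FM gain-bucket structure: gains lie in [-pmax, +pmax],
    and pop_max scans the max-gain pointer downward to the best nonempty bucket."""
    def __init__(self, pmax):
        self.pmax = pmax
        self.buckets = defaultdict(set)   # gain -> nodes currently at that gain
        self.gain = {}                    # node -> current gain
        self.max_gain = -pmax

    def insert(self, node, gain):
        self.buckets[gain].add(node)
        self.gain[node] = gain
        self.max_gain = max(self.max_gain, gain)

    def remove(self, node):               # O(1), as with the linked lists
        self.buckets[self.gain.pop(node)].discard(node)

    def pop_max(self):
        while self.max_gain >= -self.pmax:
            if self.buckets[self.max_gain]:
                node = self.buckets[self.max_gain].pop()
                del self.gain[node]
                return node
            self.max_gain -= 1            # bucket empty, look lower
        return None                       # structure exhausted
```

A real FM implementation keeps an explicit array indexed by gain and doubly-linked lists of nodes, but the operations and their costs are the same as in this sketch.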

2.3 Quadratic Programming

Hager et al [5] have shown that the Graph Partitioning problem can be formulated as

the following continuous quadratic programming problem:

Minimize f(x) := (1 - x)^T (A + D) x

subject to 0 ≤ x ≤ 1, ℓ ≤ 1^T x ≤ u,

where 1 is the vector whose entries are all 1. When x is a 0/1 vector, the cost function f(x) is the sum of those a_ij for which x_i = 0 and x_j = 1. Hence, when x is a 0/1 vector, f(x) is the sum of the weights of edges connecting the sets V_B and V_W defined by V_B = {i : x_i = 1} and V_W = {i : x_i = 0}.

They have shown that for an appropriate choice of the diagonal matrix D, the min-cut

is obtained by solving the above minimization problem.









CHAPTER 3
METHODS

3.1 Graph Compression

The sparse matrices resulting from some finite-element problems have many small groups of nodes that share the same adjacency structure. We compress such graphs into smaller graphs by coalescing the nodes with identical adjacency structures. As a result of compression, the graph partitioning algorithm must process a smaller graph, which, depending on the degree of compression achieved, can reduce the partitioning time.



Figure 3-1. Graph compression. An input graph having 13 edges is compressed by
merging nodes with identical adjacency structure; a hash value computed
from each node's adjacency list is used to find candidate nodes with the same






structure. All the nodes having the same adjacency structure are then merged into one node and the new node has a weight equal to the sum of the weights of the merged nodes. All the edges from the merged nodes to the common neighbors are collapsed together and their weights added up. In case the input graph is unweighted, each edge and node is assumed to have a weight of 1 and the output compressed graph is weighted. The time taken to compress the input graph is proportional to its size.
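The merging step can be sketched as below. This illustrative version assumes an adjacency-set representation and hashes each node's closed neighborhood (its neighbors plus itself), which groups together nodes that are mutually adjacent and share the same outside neighbors:

```python
from collections import defaultdict

def compress(adj):
    """adj: {node: set of neighbors}. Returns node -> supernode label,
    merging nodes whose closed neighborhoods are identical."""
    groups = defaultdict(list)
    for v, nbrs in adj.items():
        key = tuple(sorted(nbrs | {v}))   # closed neighborhood as the hash key
        groups[key].append(v)
    label = {}
    for snode, members in enumerate(groups.values()):
        for v in members:                 # all members collapse to one supernode
            label[v] = snode
    return label
```

Including the node itself in the key is what lets a clique of nodes with common outside neighbors hash identically; a production version would also accumulate node and edge weights as described above.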

3.2 Handling Disconnected Graphs

A lot of sparse matrices arising in practice have disconnected components; that is, the underlying graphs have more than one connected component. During coarsening, such graphs cannot be folded beyond a certain size since there are no edges left to be matched. We pre-process such graphs before passing them to the partitioning routine. For all input graphs, we determine if they have one or more connected components by doing a depth first search and marking all nodes that can be reached. If an input graph has more than one connected component, we pick a random node from each component and add zero weight edges joining these nodes. This results in a fully connected graph that can be coarsened down to two nodes if required. However, the coarsening heuristics tend to avoid folding these edges until there are no other edges left. This preserves the properties of the original graph during coarsening and the cut of the coarsest graph is closer to that of the input graph.
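This pre-processing step can be sketched as follows; an illustrative version using an iterative depth first search, with each component's representative taken as the DFS start node rather than a random node:

```python
def connect_components(adj):
    """adj: {node: {neighbor: weight}}, modified in place. Finds connected
    components by DFS and chains them together with zero-weight edges."""
    seen, reps = set(), []
    for start in adj:
        if start in seen:
            continue
        reps.append(start)             # one representative per component
        stack = [start]
        while stack:                   # iterative depth first search
            v = stack.pop()
            if v in seen:
                continue
            seen.add(v)
            stack.extend(adj[v])
    for a, b in zip(reps, reps[1:]):   # join components with weight-0 edges
        adj[a][b] = adj[b][a] = 0
    return reps
```

Because the added edges have weight zero, a heavy-edge style matching will prefer every real edge first, which is exactly the behavior the text relies on.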

3.3 Multi-level Algorithm

The multi-level algorithm as implemented by us coarsens the input graph to the required size but does not uncoarsen and refine it back to the original graph in one step. Rather it uncoarsens it up to a certain intermediate level and then coarsens it back to the lowest level. This process of uncoarsening and refining up to an intermediate level and coarsening again to the lowest level is repeated a number of times. Each time the partition vector is saved and the partition that gives the best cut is used when uncoarsening and refining proceeds past the intermediate level to the top level graph. Since the coarsening heuristics use randomization, by repeating them several times we can get significantly different results and use the best of them. Also, since this repetition is done only from an intermediate level when the graph is much smaller, it does not account for a substantial increase in the running cost.



Figure 3-2. Multi-level with repeated coarsening and uncoarsening plus refining. The
cycle is repeated a few times and the best partition selected.
3.4 Coarsening
The input graph is coarsened over several levels to get the coarsest graph that is not
larger than a user defined size in terms of the number of vertices. To go from a fine graph
to a coarse graph at the next level, two steps are necessary. During the first, a maximal
matching of vertices of the fine graph is found and during the second, these matched
vertices are collapsed to get the coarse graph.
Based on a heuristic, an unmatched vertex is matched to one of its unmatched
neighbors. Maximal matching is obtained when there are no edges that are incident on










two unmatched vertices. We use a total of four different heuristics to get the matchings. These heuristics are employed at different times, or sometimes a combination of two different heuristics is used to find the matchings for the same graph.

3.4.1 Random Matching

A random permutation of the vertices of the graph is generated. The vertices are then

visited in this order and for every unmatched vertex another random permutation of its

adjacent vertices is generated. The neighbors are then visited in this random order and

the first unmatched vertex is selected to be matched.
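This heuristic can be sketched directly; the adjacency-dict representation is an assumption for illustration, and unmatched leftovers point to themselves:

```python
import random

def random_matching(adj, rng=random):
    """Random maximal matching. adj: {v: {neighbor: weight}}.
    Returns {vertex: partner}, with match[v] == v for unmatched vertices."""
    match = {}
    order = list(adj)
    rng.shuffle(order)                 # random permutation of the vertices
    for v in order:
        if v in match:
            continue
        nbrs = [u for u in adj[v] if u not in match]
        rng.shuffle(nbrs)              # random permutation of the neighbors
        if nbrs:
            match[v] = nbrs[0]         # first unmatched neighbor wins
            match[nbrs[0]] = v
        else:
            match[v] = v               # no free neighbor; collapses alone
    return match
```

The result is maximal: no edge can have both endpoints left unmatched, since whichever endpoint was visited first would have claimed the other.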

3.4.2 Heavy Edge Matching

As with Random matching, first a random permutation of all vertices is generated.

The vertices are then visited in this order and for every unmatched vertex all its

unmatched neighbors are visited and the one that connects with the heaviest edge is

selected. The idea is to match the heavy edges so that they get collapsed and the resulting coarse graph has only light edges that can be cut.
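The only change from random matching is the neighbor-selection rule, as this sketch shows (same assumed adjacency-dict representation):

```python
import random

def heavy_edge_matching(adj, rng=random):
    """Heavy-edge maximal matching. adj: {v: {neighbor: weight}}.
    Among a vertex's unmatched neighbors, the heaviest incident edge wins."""
    match = {}
    order = list(adj)
    rng.shuffle(order)
    for v in order:
        if v in match:
            continue
        candidates = [(w, u) for u, w in adj[v].items() if u not in match]
        if candidates:
            _, u = max(candidates)     # heaviest edge to a free neighbor
            match[v] = u
            match[u] = v
        else:
            match[v] = v
    return match
```

Collapsing heavy edges first concentrates weight inside coarse vertices, so the edges that survive to be cut later tend to be light.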

3.4.3 Heaviest Edge Matching

As the name -II__- -- the edges are sorted according to their weights and matching

begins by selecting the heaviest edge. All the edges are visited in descending order and

edges with unmatched end points are selected. This heuristic is used when the graph size

has been reduced substantially so that not much work is done in sorting the edges.

3.4.4 Zero Edge Matching

This heuristic is used to match the zero weight edges that are added when handling

dense nodes. It is similar to the heavy edge matching heuristic except that instead of

matching the heaviest edge, we match an edge of weight zero. In order to differentiate

these zero weight edges from the ones added to connect the disconnected components, we match those vertices that have a common neighbor. Also, we match the second zero

weight edge instead of the first so that the last zero weight edge remaining after collapsing










all but two of the edges adjacent to a dense node is not selected for matching. The

heuristic for handling dense nodes is discussed in detail in the next section.

Once we have a maximal matching of the vertices of the fine graph, we create a

mapping vector that maps the vertices in the fine graph to those in the coarse graph.

Then using the matching and the mapping vectors, the coarse graph is constructed. For

every unmatched vertex, its .Illi Il:ency structure is copied over to the coarse graph whereas

for matched vertices, the new vertex in the coarse graph that is mapped to these vertices

has a union of their .Il11 Il:ency structures minus the edge that connected them, as its

.Il1i Il:ency structure. The edge weights are copied over except when the matched vertices

have a common neighbor. In that case the edges to the common neighbor get merged

into one and the new edge has a weight equal to the sum of the weights of the merged

edges. Similarly, the new vertex gets the sum of the weights of the merged vertices. Any

duplicate edges resulting from the process are merged together with their weights added.

Figure 3-3. Graph coarsening. A) The original graph showing a maximal matching. B)
Graph after one step of coarsening. C) A maximal matching of the coarse
graph. D) Graph after two steps of coarsening.










3.5 Heuristic for Handling Dense Nodes

Graphs of some matrices arising from linear programming problems have a star-like structure (Figure 3-5(A)). Also, some graphs may have a few dense nodes that resemble a star-like structure. The normal matching heuristics fail to preserve the properties of such graphs during coarsening and may produce a highly unbalanced coarse graph. Also, these heuristics can match only one of the star edges in one step, and if the whole graph has only a few of these star-like dense nodes the coarsening process can become extremely slow (Figure 3-4). This implies many more levels of coarsening, thus resulting in that many more graphs to be stored, with many consecutive graphs being roughly the same size. Expectedly, this can use up all the memory, resulting in a fatal error.

Figure 3-4. Coarsening of dense nodes. A) A dense node with degree 12. B) Node balance
after 11 steps of coarsening using random or heavy edge matching.

Whenever we detect that the coarsening process is not reducing the size of the graph

by a reasonable amount, we call this routine to handle the dense nodes. The process starts

by calculating the degrees for all nodes and then sorting them according to degree. The

nodes are then visited from the highest degree to the lowest degree. However, if we reach a node that has a degree less than the median degree, or a degree less than three (default value that can be overwritten by the user) times the average degree, or a degree less than one-fifth (default value that can be overwritten by the user) the maximum degree, the process is terminated and the routine returns the graph obtained so far. Here is how a dense node is handled. We add zero weight edges to connect the nodes adjacent to the dense node. However, these edges are initially added to a new intermediate graph of

dense node. However, these edges are initially added to a new intermediate graph of










the same size (Figure 3-5(B)). This process is repeated for all the dense nodes that are handled and all the edges are added to the same intermediate graph. So at the end of the first step we have an intermediate graph that has only zero weight edges. In the next step we run the random matching heuristic on the intermediate graph. Since all the edges have the same weight, the random matching heuristic is the obvious choice. Now we add this intermediate graph to the graph being handled for dense nodes. Since the added edges weigh zero, the internal degrees of the nodes are preserved in the original graph. Also, adding an edge over an existing edge makes no difference since the summed weight remains unchanged. Either the heavy edge matching or the heaviest edge matching heuristic is now applied to the resulting graph and it is coarsened (Figure 3-5(C)). The idea behind this heuristic is to pair-wise collapse the edges incident on a dense node so that coarsening proceeds faster and the resulting coarser graph preserves the structure and the properties of the original graph. Also, it results in a much more balanced coarsest graph, and a good cut of the coarsest graph transforms into a good cut of the original top level graph. Some more processing is required in order to collapse the edges that still remain, since the first round only reduces the maximum degree by half. For this we repeatedly do zero edge matching (Figure 3-5(C,D)) followed by heavy/heaviest edge matching until we reach a stage when the coarsening produces a graph with size not in desired proportion of the fine graph, thus signaling the presence of dense nodes, and a re-run of the handling routine is required. Zero edge matching, as the name suggests, matches only the zero weight edges, thus enabling more pair-wise collapsing of the edges incident on a dense node. This matching heuristic matches only the second neighbor that is connected via a zero weight edge to a randomly chosen unmatched vertex. In Figure 3-5(E) there is only one zero weight edge connecting the two nodes, hence it does not get matched and the matching shown is obtained using heavy/heaviest edge matching. Also, these edges are distinguished from the zero weight edges added to connect the disconnected components in the top level graph. By the nature of the edges added during dense node handling, these edges have endpoints










that share a common neighbor, namely the dense node, whereas the edges added to connect disconnected components do not share this property, because if they did, the disconnected components would not be disconnected. This logic is used to select the edges added for handling dense nodes while leaving out the ones that connect the disconnected components.

The idea behind the selection of the second zero weight edge is to avoid the situation shown in Figure 3-5(G). As shown in the figure, if the last zero weight edge is selected, the resulting coarse graph would not be correctly balanced. However, if one of the other edges is selected as shown in the figure, the resulting graph is more balanced (Figure 3-5(F)).
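The first step of the heuristic (tying the dense node's neighbors together with zero weight edges) can be sketched roughly as below. As a simplification, the random matching on the intermediate graph is collapsed into a direct random pairing of the neighbors, and the detection of dense nodes (the degree thresholds above) is assumed to have been done already:

```python
import random

def dense_node_zero_edges(adj, dense, rng=random):
    """adj: {v: {u: weight}}, modified in place; dense: detected dense nodes.
    Pairs each dense node's neighbors with zero-weight edges so that
    heavy-edge matching can then collapse the star's edges two at a time."""
    for d in dense:
        nbrs = list(adj[d])
        rng.shuffle(nbrs)                            # random pairing
        for a, b in zip(nbrs[0::2], nbrs[1::2]):
            adj.setdefault(a, {}).setdefault(b, 0)   # weight-0 edge; an
            adj.setdefault(b, {}).setdefault(a, 0)   # existing edge is kept
    return adj
```

Because `setdefault` never overwrites an existing edge and the added weight is zero, internal degrees in the original graph are preserved, as the text requires; each round of pairing roughly halves the dense node's degree.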
3.6 Cutting the Coarsest Graph

We use the greedy graph growing partitioning algorithm (GGGP) to cut the coarsest graph. This algorithm is similar to the FM algorithm except that the gains are computed incrementally, unlike the FM algorithm where the gains are precomputed. Also, in this implementation we need only one bucket array, which represents the frontier between the growing region and the rest of the graph. The graph is partitioned by selecting a node at random and moving it to the growing region. The cut is represented by the edges crossing over from the growing region to the other side of the graph. Based on this the gains for the vertices adjacent to the moved node are computed and the nodes inserted into the bucket array. Next, the node with the maximum gain that causes the smallest increase in the edge-cut without violating the required balance is moved to the growing region. As is done in the FM algorithm, this node is now deleted from the bucket array and the gains of its neighbors that are already in the bucket are updated. However, unlike FM, gains for those neighbors that are not already in the bucket or in the growing region are computed and the vertices added to the bucket array. The above process is repeated until no move maintains the required balance. Since this method is sensitive to the initial node selection, the whole process is repeated with four randomly chosen initial vertices. The one that gives the best cut-size is retained and the corresponding partition is returned. Since the







Figure 3-5. Dense nodes handling heuristic. A) A dense node with degree 12. B) The
intermediate graph showing the dense node and the zero weight edges with a
maximal matching. C) The original dense node after one step of coarsening,
also showing a maximal matching. D) After two steps of coarsening. E) After
three steps. F) The final node balance after four steps of coarsening. G) When
the zero weight edge in (E) is selected for matching.










coarsest graph is not more than a few hundred nodes in size, running this algorithm four

times does not add much to the total partitioning cost.
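A stripped-down sketch of the growing step follows. Gains are recomputed naively here instead of being maintained incrementally in a bucket array, and the balance and tie-breaking details are omitted, so this shows only the core greedy loop:

```python
def gggp(adj, node_w, seed, target):
    """Greedy graph growing sketch. adj: {v: {u: weight}}, node_w: {v: weight}.
    Grows a region from `seed` until it holds `target` total vertex weight."""
    region, weight = {seed}, node_w[seed]
    while weight < target:
        frontier = {u for v in region for u in adj[v]} - region
        if not frontier:
            break                          # nothing left to absorb
        def gain(u):                       # cut decrease if u joins the region
            return (sum(w for n, w in adj[u].items() if n in region)
                    - sum(w for n, w in adj[u].items() if n not in region))
        best = max(frontier, key=gain)     # best frontier vertex moves over
        region.add(best)
        weight += node_w[best]
    return region
```

In the real algorithm the frontier lives in a single bucket array keyed by gain, updates are incremental as each vertex moves, and the whole process is restarted from four random seeds as described above.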

3.7 Uncoarsening and Refining

During the uncoarsening and refining phase, the initial partition of the coarsest

graph is projected onto the next level fine graph and is subsequently refined using the FM

heuristic. This procedure is repeated until a partition is projected onto the top level graph

and is refined to obtain the final partition and cut-size for the graph. The mapping vector

is used to project the coarse graph partition onto the fine graph. During uncoarsening,

based on the initial partition of the fine graph, the gains for the vertices are computed and

the vertices inserted into the respective gain buckets. Two bucket arrays (Figure 3-6) are

used, one for the left partition vertices and the other for the right partition vertices.


Figure 3-6. Bucket list structure. Gains range from -pmax to +pmax, where pmax is the
max degree of a node across all levels; each of the N nodes is reachable from
its gain bucket.

The gains are computed using the following equation:

gain(i) = ed(i) - id(i),

where ed(i) is the external degree and id(i) is the internal degree for node i.

The internal degrees of the nodes are computed during the coarsening phase while

constructing a coarse graph. At this point the internal degree of a vertex is just the sum

of the edge weights of the edges incident on that vertex and the external degree is zero.










When projecting a partition onto the fine graph, for every vertex, its internal degree is
decreased by the weight of each edge connecting it to a neighbor on the other side of the
partition and its external degree is increased by the same amount. At this point the coarse
graph is deleted and the fine graph is passed to the FM algorithm routine for refining its
initial partition. The bucket arrays are also passed to the FM algorithm to be used during
refining.
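The gain computation and degree bookkeeping described above can be sketched as follows. This is a minimal illustration, not the thesis code: `adj` maps each node to a dictionary of neighbor-to-edge-weight, and `part` gives each node's side (0 or 1).

```python
def compute_gains(adj, part):
    """For each node i, gain(i) = ed(i) - id(i): external minus internal degree.

    adj:  dict mapping node -> {neighbor: edge_weight}
    part: dict mapping node -> 0 or 1 (its partition side)
    """
    gains = {}
    for i, nbrs in adj.items():
        ed = sum(w for j, w in nbrs.items() if part[j] != part[i])  # cut edges
        id_ = sum(w for j, w in nbrs.items() if part[j] == part[i])  # internal edges
        gains[i] = ed - id_  # cut-size reduction if i switched sides
    return gains
```

A node whose neighbors mostly lie across the cut gets a positive gain; moving it reduces the cut-size by exactly that amount.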













Figure 3-7. The FM algorithm. A) Graph showing a partition with three edges cut. Node
'a' has a gain value of +1. B) Graph after one step of FM. Node 'a' is moved
to the other side resulting in a reduction of 1 in the cut-size.

Refining of the initial cut is done by moving nodes from one partition to the other

such that it results in a decrease in the cut-size. A node to be moved is picked from

the bucket array that has a higher maximum gain bucket. However, if this move fails

to maintain the desired balance, the next node in the same gain bucket is tried. If none

of the nodes in that gain bucket could be moved while maintaining the desired balance,

the next lower gain bucket is tried. This could be from the other bucket array. If the

initial partition itself is not balanced, the first pass of the refining algorithm is used as

a balancing pass to get to the desired balance without caring for the cut-size. However,

nodes are still moved starting from the highest gain bucket of the heavier side. A slack

equal to the weight of the heaviest node (w) in the graph is provided such that each

partition is within r × (total weight) ± w, where r is the desired balance. This ensures
that a balance can always be found, especially in the case when r is 0.5. Once a balance
has been obtained, it is always maintained for that graph, since any node whose move
would upset the balance is not moved. However, later, when this graph is projected onto
the next fine graph, the balance may not hold, since the weight of the heaviest node in the
fine graph can be less and hence the slack would be less. Subsequently, valid moves are made and

the improvement in the cut-size is recorded. Any node that is moved is removed from

its gain bucket and added to a free list that is used to re-initialize the buckets for the

next pass. When a node is moved, the gains of its neighboring vertices are recomputed

and the vertices moved to the new gain buckets. Nodes are moved even when they make

the partition worse in an attempt to climb out of local minima. When the partition does

not improve as compared to its previous best even after making around 50-100 moves,

the pass is terminated and the moves are undone. If the current pass produced any

improvement in the cut-size, another pass is initiated. All the nodes in the free list are

added to their new gain buckets after the new gains are computed taking into account

their new partition. More passes of the refinement algorithm are made until a pass

produces no improvement in the cut-size. Our experiments have shown that on average

3-4 passes are required (Table 4-7).
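A single refinement pass as described can be sketched as follows. This is a simplified, hypothetical version: gains are recomputed by a linear scan instead of the bucket arrays, node weights are taken as one, and the balance constraint is omitted, so it only illustrates the move/undo hill-climbing structure of the pass.

```python
def fm_pass(adj, part, max_bad_moves=50):
    """One simplified FM-style pass: repeatedly move the free node with the
    best gain, allow negative-gain moves to climb out of local minima, and
    finally undo every move past the best cut-size seen."""
    def cut(p):
        return sum(w for i, nb in adj.items() for j, w in nb.items()
                   if i < j and p[i] != p[j])

    free = set(adj)                       # each node may move at most once
    best_cut = cur_cut = cut(part)
    moves, best_len, bad = [], 0, 0
    while free and bad < max_bad_moves:
        def gain(i):                      # ed(i) - id(i) for node i
            return sum(w if part[j] != part[i] else -w
                       for j, w in adj[i].items())
        v = max(free, key=gain)           # node with the highest gain
        cur_cut -= gain(v)
        part[v] ^= 1                      # move v to the other side
        free.discard(v)
        moves.append(v)
        if cur_cut < best_cut:
            best_cut, best_len, bad = cur_cut, len(moves), 0
        else:
            bad += 1                      # counted toward the move limit
    for v in moves[best_len:]:            # undo moves past the best prefix
        part[v] ^= 1
    return best_cut
```

In the thesis implementation the gain buckets make each "pick the best free node" step constant time rather than a scan, and moves that would violate the balance are skipped.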









CHAPTER 4
RESULTS

All the tests were run on a Sun Solaris SPARC server using MATLAB version 7.3.

Mex functions were written to interface MATLAB with the C libraries. Metis 4.0 [6]

and hmetis 1.5 [7] were used for comparisons. We refer to our graph partitioner as

GP when comparing with pmetis¹ and hmetis².

Graph matrices were taken from the University of Florida Sparse Matrix Collection

[8]. All the square matrices in the collection were sorted by the number of non-zeros and

the first 1250 were selected for the various experiments. These matrices ranged in size

from 15 to 1014951 in terms of the number of non-zeros and 5 to 325729 in terms of the

number of rows/columns. Each un-symmetric matrix was made symmetric by adding its

transpose to itself. The numerical values were ignored when doing the partitioning and the

graphs were treated as undirected with no self-edges. Table 4-1 lists 10 graphs that were

randomly chosen from the ones used for the experiments. These graphs are used to show a

sample set of some of the results.
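The preprocessing described above (symmetrize, ignore numerical values, drop self-edges) can be sketched as follows. This is a small illustration of the idea, not the MATLAB/mex code used in the experiments; the matrix is given as a list of (row, column) coordinates.

```python
def symmetrize_pattern(entries, n):
    """Build an undirected, self-loop-free graph from the pattern of A + A'.

    entries: iterable of (i, j) coordinate pairs (numerical values ignored)
    n:       matrix dimension
    Returns an adjacency list: node -> sorted list of neighbors.
    """
    adj = {i: set() for i in range(n)}
    for i, j in entries:
        if i != j:                 # drop diagonal entries (self-edges)
            adj[i].add(j)          # entry of A
            adj[j].add(i)          # matching entry of the transpose A'
    return {i: sorted(nbrs) for i, nbrs in adj.items()}
```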

Various GP parameters and their values used across all experiments are listed in

Table 4-2. For pmetis and hmetis, we used the default parameter values.

We used the performance profile plot (Figure 4-1) to compare the results. When

comparing the cut sizes, for each graph the edge cut obtained using the various

methods/applications was divided by the smallest of these values and the result saved as a

vector for each method/application. Finally these vectors were sorted and plotted against

each other. The same process was used for comparing the run times.
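The construction of the performance-profile vectors can be sketched as follows. A minimal version: graphs where the best value is zero (such as a zero edge cut) would need special handling and are assumed absent here.

```python
def performance_profile(results):
    """results: dict method -> list of metric values, one per test graph in
    the same order for every method (lower is better: cut size or run time).
    Returns method -> sorted list of ratios to the per-graph best value,
    ready to be plotted as a performance profile."""
    methods = list(results)
    ngraphs = len(results[methods[0]])
    profiles = {m: [] for m in methods}
    for g in range(ngraphs):
        best = min(results[m][g] for m in methods)   # per-graph winner
        for m in methods:
            profiles[m].append(results[m][g] / best)
    return {m: sorted(v) for m, v in profiles.items()}
```

The method whose sorted curve stays closest to 1 for the most graphs is the best performer, which is how Figures 4-4 through 4-8 are read.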




1 pmetis is the Metis interface used when not more than 8 partitions are required.

2 hmetis is the hypergraph partitioner that can be used for normal graphs.






























Figure 4-1.


Performance profile definition: run times for methods a, b, and c are being
compared. For 900 of the test cases the time taken by method a is at most 1.5
times the lowest run time obtained for methods a, b, and c; for 800 of the
test cases the time taken by method b is at most 1.5 times that lowest run
time; and the number of test cases for which method c takes at most 1.5 times
the lowest run time is only 200. Thus, here a is the best method, followed
closely by b, and c is the worst of the three methods being compared. The
x-axis is the performance ratio.










Table 4-1. Some of the graphs used

Name                   No. of    No. of     Kind
                       Vertices  Non-zeros
GHS_indef/bloweybl     30003     109999     materials problem
HB/bcsstm26            1922      1922       structural problem
GHS_indef/aug2d        29008     76832      2D/3D problem
Bomhof/circuit_3       12127     48137      circuit simulation problem
Zitney/radfr1          1048      13299      chemical process simulation problem
Gset/G29               2000      39980      undirected weighted random graph
Sandia/fpga_dcop_34    1220      5892       subsequent circuit simulation problem
GHS_psdef/ford1        18728     101576     structural problem
GHS_indef/tuma1        22967     87760      2D/3D problem
Schenk_IBMNA/c-38      8127      77689     optimization problem


Table 4-2. Common parameters for GP

Parameter Description                                                         Value
Ratio of number of nodes in fine graph to number of nodes in coarse graph     1.05
  when starting dense node detection
Level at which to resume repetitive multi-level                               floor(total levels/2)
Number of consecutive moves without improvement during FM refining            100
Number of times the degree of a node must exceed the average degree to        3
  be classified as a dense node
Ratio of the degree of a node to the maximum degree at which the handling     0.2
  of dense nodes stops
Ratio of the number of nodes in the current coarse graph to the number of     0.01
  nodes in the top-level input graph until which the heavy edge matching
  heuristic is used
Seed value for generating random numbers                                      6101975


Table 4-3. Results for pmetis, hmetis and GP (repetitive multi-level with 9 repetitions)

Matrix                 pmetis               hmetis               GP
                       Cut Size  Run Time   Cut Size  Run Time   Cut Size  Run Time
GHS_indef/bloweybl     4520      0.2673     4502      6.5873     4579      0.6432
HB/bcsstm26            0         0.0097     0         0.0035     0         0.0307
GHS_indef/aug2d        116       0.0831     98        4.5201     107       0.1505
Bomhof/circuit_3       1314      0.0541     1840      6.9645     2443      0.8656
Zitney/radfr1          554       0.0082     501       1.3391     501       0.0669
Gset/G29               6850      0.0274     6691      5.4964     6813      0.3033
Sandia/fpga_dcop_34    26        0.0023     20        0.2421     18        0.0232
GHS_psdef/ford1        131       0.0411     124       5.0167     133       0.1092
GHS_indef/tuma1        167       0.0566     160       3.8550     199       0.1356
Schenk_IBMNA/c-38      975       0.0362     898       5.1726     1030      0.2915






































Figure 4-2. Input matrix GHS_psdef/ford1 displayed using cspy.m.



Table 4-4. Cut sizes for pmetis and GP (repetitive multi-level with 9 repetitions)

Matrix                 pmetis      GP
                       r = 0.500   r = 0.499    r = 0.497   r = 0.495
GHS_indef/bloweybl     5024        5038         5014        4993
HB/bcsstm26            0           0            0           0
GHS_indef/aug2d        110         110          125         116
Bomhof/circuit_3       1377        2637         2453        2428
Zitney/radfr1          551         543          531         528
Gset/G29               6837        (illegible)  6852        6849
Sandia/fpga_dcop_34    20          18           18          18
GHS_psdef/ford1        147         127          127         131
GHS_indef/tuma1        169         202          199         202
Schenk_IBMNA/c-38      1068        1024         1008        1141






































Figure 4-3. GHS_psdef/ford1 permuted using the partition vector and displayed using
cspy.m; the entries in the second and fourth quadrants are the two partitions;
the entries in the first/third quadrants are the edges separating the two
partitions.

Table 4-5. Results for GP with simple and repetitive multi-level

Matrix                 No Repetition        4 Repetitions        9 Repetitions
                       Cut Size  Run Time   Cut Size  Run Time   Cut Size  Run Time
GHS_indef/bloweybl     5091      0.2334     5108      0.3903     5050      0.6183
HB/bcsstm26            0         0.0218     0         0.0263     0         0.0303
GHS_indef/aug2d        136       0.1055     113       0.1258     113       0.1499
Bomhof/circuit_3       2747      0.2741     2575      0.5673     2575      0.9604
Zitney/radfr1          690       0.0181     690       0.0440     690       0.0737
Gset/G29               7294      0.0585     7294      0.1492     7294      (illegible)
Sandia/fpga_dcop_34    62        0.0063     19        0.0157     19        0.0252
GHS_psdef/ford1        135       0.0699     135       0.0916     135       0.1206
GHS_indef/tuma1        208       0.0950     208       0.1224     208       0.1731
Schenk_IBMNA/c-38      1538      0.1398     1243      0.2144     1243      0.3400














Figure 4-4. Performance profile: comparing the edge cuts among hmetis, pmetis,
GP (repetitive multi-level with 9 repetitions) and GP (simple multi-level with
0 repetitions) at r = 0.45. hmetis does better than the other three, whereas GP
does better than pmetis.


Table 4-6. Cut sizes for GP (repetitive multi-level with 9 repetitions) with and without
dense node heuristic (DNH)

Matrix                No. of    No. of     Kind                       Cut Size
                      Vertices  Non-zeros                             DNH    Minus DNH
Bates/Chem97ZtZ       2541      7361       statistical/mathematical   2      9
Rajat/rajat22         39899     195429     circuit simulation         28     122
Rajat/rajat23         110355    555441     circuit simulation         186    533
Rajat/rajat19         1157      3699       circuit simulation         53     114
Bomhof/circuit_4      80209     307604     circuit simulation         2090   3576
Hamm/scircuit         170998    958936     circuit simulation         103    137
Grund/poli            4008      8188       economic problem           16     22
Schenk_IBMNA/c-43     11125     123659     optimization problem       3141   4334
Pajek/Wordnet3        82670     132964     directed weighted graph    4957   6927
Schenk_IBMNA/c-67b    57975     530583     subsequent optimization    1625   2323















Figure 4-5. Performance profile: comparing the run times among hmetis, pmetis,
GP (repetitive multi-level with 9 repetitions) and GP (simple multi-level with
0 repetitions) at r = 0.45. pmetis does better than the other three, whereas GP
does much better than hmetis.


Table 4-7. Average number of passes of FM
Matrix                 Passes
GHS_indef/bloweybl     2.9459
HB/bcsstm26            1.7778
GHS_indef/aug2d        2.4167
Bomhof/circuit_3       3.4333
Zitney/radfr1          2.5833
Gset/G29               4.5385
Sandia/fpga_dcop_34    2.5500
GHS_psdef/ford1        2.6667
GHS_indef/tuma1        2.8387
Schenk_IBMNA/c-38      2.1837














Figure 4-6. Performance profile: comparing the edge cuts among Metis at r = 0.50 and
GP (repetitive multi-level with 9 repetitions) at r = 0.499, 0.497, 0.495. GP
almost matches Metis at r = 0.499 and does better in the other two cases,
the best being at r = 0.495.


4.1 Conclusion

We have a robust software package that has been thoroughly tested for memory leaks.

Assertions were used at various points to assure the logical correctness of the software.

Our graph partitioning software can be run with the simple multi-level and with repetitive

multi-level options. These options give the user the flexibility to trade quality for time,

depending on which is more critical.

4.2 Future Work

4.2.1 Interfacing with QP-Part

As mentioned in chapter 2, QP-Part solves the graph partitioning problem by

formulating it as a quadratic programming optimization problem. Hager et al. have

shown [9] that their method gives cuts superior to those of METIS, though their method

is expensive when applied to the whole graph. We plan to interface QP-Part with the














Figure 4-7. Performance Profile: comparing the edge cuts among simple Multi-level,
Multi-level with four repetitions of coarsening and uncoarsening plus refining
(middle) and Multi-level with nine repetitions of coarsening and uncoarsening
plus refining (top).


multi-level approach such that the expensive QP-Part is used to refine the partition when

the graph is small and the cheap FM algorithm is used when the graph is sufficiently large.

This should give us better partitions at reasonable cost.

4.2.2 Implementing Boundary KL

In boundary KL, during uncoarsening only the nodes at the boundary of the partition

are inserted into the gain buckets. Since the number of nodes at the boundary is a small

fraction of the total number of nodes in the graph, this saves a lot of time. However, it

is possible that this may reduce the quality of the partition, since some interior nodes can

have higher gain values than some boundary nodes, and such interior nodes do not get

considered for a move because they are not in either bucket list.
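Selecting the boundary nodes can be sketched as follows. An illustrative fragment, assuming `adj` maps each node to a set of neighbors and `part` gives each node's side (0 or 1):

```python
def boundary_nodes(adj, part):
    """Return the nodes with at least one neighbor on the other side of the
    partition -- the only nodes boundary KL would insert into the gain
    buckets; interior nodes are never considered for a move."""
    return {i for i, nbrs in adj.items()
            if any(part[j] != part[i] for j in nbrs)}
```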






























Figure 4-8. Performance profile: comparing the run times among simple multi-level (top),
multi-level with four repetitions of coarsening and uncoarsening plus refining
(middle) and multi-level with nine repetitions of coarsening and uncoarsening
plus refining.









CHAPTER 5
PSEUDO CODE FOR GP

5.1 Top Level


Read (G_IN)
G_SYM_SRT = transpose (G_IN + transpose (G_IN))
G = G_SYM_SRT
G_0 = Compress_Graph (G)
CC = Find_Connected_Components (G_0)
G = G_0
if size(CC) > 1
    i = 0
    for each CC do
        rep (i++) = Find_Random_Node (CC)
    end
    E = New_Graph (size(G_0))
    for j = 0 to i - 1 do
        if j < i - 1
            Add_Zero_Wt_Edge (E, rep (j), rep (j + 1))
            Add_Zero_Wt_Edge (E, rep (j + 1), rep (j))
        end
        if j > 0
            Add_Zero_Wt_Edge (E, rep (j - 1), rep (j))
            Add_Zero_Wt_Edge (E, rep (j), rep (j - 1))
        end
    end
    G = G_0 + E
end
Read (DENSE_RATIO, MIN_SIZE, LEVEL, REPETITION_COUNT)
PREV (G) = NULL
Coarsen: matching = Get_Matchings (G)
G_COARSE = Apply_Matchings (G, matching)
PREV (G_COARSE) = G
NEXT (G) = G_COARSE
if size(G_COARSE) < MIN_SIZE
    G = G_COARSE
    GOTO Cut
end
if size(G)/size(G_COARSE) < DENSE_RATIO
    G = Handle_Dense_Nodes (G_COARSE)
    GOTO Coarsen
end
G = G_COARSE
GOTO Coarsen
Cut: partition = Cut_Graph (G)
if !G_MARK
    G_MARK = Get_Graph_From_List (G, LEVEL)
end
UnCoarsen: G_FINE = Uncoarsen (G, partition)
G_REFINE = Refine_FM (G_FINE)
G = G_REFINE
if G == G_MARK and REPETITION_COUNT > 0
    REPETITION_COUNT = REPETITION_COUNT - 1
    if partition(G) < Best_Partition
        Best_Partition = partition(G)
    end
    GOTO Coarsen
else if G == G_MARK
    partition(G) = Best_Partition
end
if PREV(G) != NULL
    GOTO UnCoarsen
end
Return (G)

5.2 Handling Dense Nodes
Read (G_IN)
E_PREV = NULL
n = size(G_IN)
avg_deg = nnz(G_IN)/n
degrees = Compute_Node_Degrees (G_IN)
perm = Quick_Sort (degrees)
med_deg = median(degrees)
max_deg = degrees (0)
for i = 0 to n - 1 do
    if degrees (i) < 3 * avg_deg or degrees (i) < 0.2 * max_deg or degrees (i) < med_deg
        break
    end
    node = perm(i)
    E = New_Graph (n)
    for each adj(node) do
        Add_Zero_Wt_Edge (E, adj(node), next(adj(node)))
        Add_Zero_Wt_Edge (E, adj(node), prev(adj(node)))
    end
    E_NEW = E + E_PREV
    E_PREV = E_NEW
end
Get_Random_Matchings (E_PREV)
Return (G_IN + E_PREV)










REFERENCES


[1] M. Yannakakis, "Computing the minimum fill-in is NP-complete," SIAM J. Alg. Disc.
Meth., vol. 2, pp. 77-79, 1981.

[2] G. Karypis and V. Kumar, "Analysis of multilevel graph partitioning," Tech. Rep.
TR-95-037, Computer Science Dept., Univ. of Minnesota, Minneapolis, MN, 1995.

[3] B. W. Kernighan and S. Lin, "An efficient heuristic procedure for partitioning graphs,"
Bell System Tech. J., vol. 49, pp. 291-307, 1970.

[4] C. M. Fiduccia and R. M. Mattheyses, "A linear-time heuristic for improving network
partitions," in Proc. 19th Design Automation Conf., Las Vegas, NV, 1982, pp. 175-181.

[5] W. W. Hager and Y. Krylyuk, "Graph partitioning and continuous quadratic
programming," SIAM J. Disc. Math., vol. 12, pp. 500-523, 1999.

[6] G. Karypis and V. Kumar, "Metis: A software package for partitioning unstructured
graphs, partitioning meshes, and computing fill-reducing orderings of sparse matrices,"
Tech. Rep., Computer Science Dept., Univ. of Minnesota, Minneapolis, MN, Sept.
1998.

[7] G. Karypis and V. Kumar, "hMetis: A hypergraph partitioning package," Tech. Rep.,
Computer Science Dept., Univ. of Minnesota, Minneapolis, MN, Nov. 1998.

[8] T. A. Davis, "University of Florida sparse matrix collection,"
www.cise.ufl.edu/research/sparse.

[9] W. W. Hager, S. C. Park, and T. A. Davis, "Block exchange in graph partitioning," in
Approximation and Complexity in Numerical Optimization: Continuous and Discrete
Problems, P. M. Pardalos, Ed., pp. 299-307. Kluwer Academic Publishers, 2000.









BIOGRAPHICAL SKETCH

Pawan Kumar Aurora was born in Kanpur, India, on October 6, 1975. He graduated

from the Birla Institute of Technology, Mesra, Ranchi in 1998 with a degree in Computer

Science. For seven years Pawan worked in the IT Industry initially for Tata Consultancy

Services in Mumbai, India and later for IBM Global Services in Pune, India before

relocating to Gainesville, Florida, USA to pursue graduate study in computer engineering.

After completing his M.S., Pawan will pursue his PhD in computer science.





PAGE 1

1

PAGE 2

2

PAGE 3

3

PAGE 4

IamdeeplyindebtedtomysupervisorProf.Dr.TimothyDaviswhosestimulatingsuggestionsandencouragementhelpedme.Hisexpertiseinwritingrobustsoftwarewasusefulandagreatlearningexperienceforme.Ihaveneverwrittenabettersoftware.IwouldliketoexpressmythankstoDr.WilliamHagerandDr.ArunavaBanerjeeforagreeingtobeonmycommittee.Myspecialthanksgotomyparentsforsupportingmydecisiontopursuehigherstudiesandprovidingallthemoralsupport.MybrotherJitenderdeservesaspecialmentionhereforbeingmybestfriend.Lastbutnottheleast,IthankmyanceSonalforallherloveandencouragement. 4

PAGE 5

page ACKNOWLEDGMENTS ................................. 4 LISTOFTABLES ..................................... 6 LISTOFFIGURES .................................... 7 ABSTRACT ........................................ 8 CHAPTER 1INTRODUCTION .................................. 9 2REVIEWOFSOMEGRAPHPARTITIONINGMETHODS ........... 12 2.1KernighanandLin ............................... 12 2.2FiducciaandMattheyses ............................ 12 2.3QuadraticProgramming ............................ 13 3METHODS ...................................... 14 3.1GraphCompression ............................... 14 3.2HandlingDisconnectedGraphs ........................ 15 3.3Multi-levelAlgorithm .............................. 15 3.4Coarsening .................................... 16 3.4.1RandomMatching ............................ 17 3.4.2HeavyEdgeMatching .......................... 17 3.4.3HeaviestEdgeMatching ........................ 17 3.4.4ZeroEdgeMatching ........................... 17 3.5HeuristicforHandlingDenseNodes ...................... 19 3.6CuttingtheCoarsestGraph .......................... 21 3.7UncoarseningandRening ........................... 23 4RESULTS ....................................... 26 4.1Conclusion .................................... 33 4.2FutureWork ................................... 33 4.2.1InterfacingwithQP-Part ........................ 33 4.2.2ImplementingBoundaryKL ...................... 34 5PSEUDOCODEFORGP .............................. 36 5.1TopLevel .................................... 36 5.2HandlingDenseNodes ............................. 37 REFERENCES ....................................... 38 BIOGRAPHICALSKETCH ................................ 39 5

PAGE 6

Table page 4-1Someofthegraphsused ............................... 28 4-2CommonparametersforGP ............................. 28 4-3Resultsforpmetis,hmetisandGP ......................... 28 4-4CutsizesforpmetisandGP ............................. 29 4-5ResultsforGPwithsimpleandrepetitivemulti-level ............... 30 4-6CutsizesforGPwithandwithoutdensenodeheuristic(DNH) ......... 31 4-7AveragenumberofpassesofFM .......................... 32 6

PAGE 7

Figure page 1-1Multi-level ....................................... 10 2-1KernighanandLin:subsetsXandYareswapped ................. 12 3-1Graphcompression .................................. 14 3-2Repetitivemulti-level ................................. 16 3-3Graphcoarsening ................................... 18 3-4Coarseningofdensenodes .............................. 19 3-5Densenodeshandlingheuristic ........................... 22 3-6Bucketliststructure ................................. 23 3-7TheFMalgorithm .................................. 24 4-1Performanceproledenition ............................ 27 4-2InputMatrix:GHS psdef/ford1 ........................... 29 4-3PermutedGHS psdef/ford1 ............................. 30 4-4Edgecutqualitycomparisonsamonghmetis,metisandGP ........... 31 4-5Runtimecomparisonsamonghmetis,metisandGP ............... 32 4-6EdgecutqualitycomparisonbetweenmetisandGP ................ 33 4-7EdgecutqualitycomparisonsamongvariousGP ................. 34 4-8RuntimecomparisonsamongvariousGP ..................... 35 7

PAGE 8

Graphpartitioningisanimportantproblemthathasextensiveapplicationsinmanyareas,includingscienticcomputing,VLSIdesign,andtaskscheduling.Themulti-levelgraphpartitioningalgorithmreducesthesizeofthegraphgraduallybycollapsingverticesandedgesovervariouslevels,partitionsthesmallestgraphandthenuncoarsensittoconstructapartitionfortheoriginalgraph.Also,ateachstepofuncoarseningthepartitionisrenedasthedegreeoffreedomincreases.Inthisthesiswehaveimplementedthemulti-levelgraphpartitioningalgorithmandusedtheFiducciaMattheysesalgorithmforreningthepartitionateachlevelofuncoarsening.Alongwiththefewpublishedheuristicswehavetriedoneofourownforhandlingdensenodesduringthecoarseningphase.WepresentourresultsandcomparethemtothoseoftheMetissoftwarethatisthecurrentstateoftheartpackageforgraphpartitioning. 8

PAGE 9

Givenanun-weightedgraphGwithVverticesandEedgesandgivenanumberk,theGraphPartitioningproblemistodividetheVverticesintokpartssuchthatthenumberofedgesconnectingverticesindierentpartsisminimizedgiventheconditionthateachpartcontainsroughlythesamenumberofvertices.Ifthegraphisweighted,i.e.theverticesandedgeshaveweightsassociatedwiththem;theproblemrequiresthesumoftheweightsoftheedgesconnectingverticesindierentpartstobeminimizedgiventheconditionthatthesumoftheweightsoftheverticesineachpartisroughlythesame.Theproblemcanbereducedintothatofbisectionwherethegraphissplitintotwopartsandtheneachpartisfurtherbisectedusingthesameprocedurerecursively.Theproblemaddressedinthisthesisisthatofbisectingthegivengraphaccordingtoagivenratio.Also,theinputgraphisassumedtobeun-weighted.However,thisassumptionisjustattheimplementationlevelanddoesnotinanywaychangetheunderlyingalgorithms. IthasbeenshownthattheGraphPartitioningproblemisNP-hard[ 1 ]andsoheuristicbasedmethodshavebeenemployedtogetsub-optimalsolutions.Thegoalforeachheuristicmethodistogetthesmallestpossiblecutinreasonabletime.Wediscusssomepopularmethodsinthenextchapter. GraphPartitioningisanimportantproblemsinceitndsextensiveapplicationsinmanyareas,includingscienticcomputing,VLSIdesignandtaskscheduling.Oneimportantapplicationisthereorderingofsparsematricespriortofactorization.Ithasbeenshownthatthereorderingofrowsandcolumnsofasparsematrixcanreducetheamountofllthatiscausedduringfactorizationandthusresultinasignicantreductionintheoatingpointoperationsrequiredduringfactorization.Althoughtheidealthingistondanodeseparatorratherthananedgeseparator,anedgeseparatorcanbeconvertedintoanodeseparatorusingminimumcovermethods. 9

PAGE 10

Figure1-1. Multi-levelCoarseningandUncoarsening.Thesmallestgraphiscutandthepartitiongetsprojectedandrenedasitmovesuptotheoriginalbiggestgraph. Inthemulti-levelapproachthecoarseningphaseisimportant.Ifagraphisfoldedinsuchawaythatthepropertiesofthegrapharepreservedi.e.thecoarsegraphisasmallerreplicaofthenegraph,agoodcutofthecoarsegraphtranslatesintoagoodcutofthenegraph[ 2 ].Usingthegeneralcoarseningheuristicsdescribedinchapter3,itispossiblethatthepropertiesofthegrapharenotpreservedwhenhandlingdensenodesandmay 10

PAGE 11

11

PAGE 12

3 ]startswithanarbitrarypartitionandthentriestoimproveitbyndingandexchangingasetofnodesinonepartitionwithasamesizesetintheotherpartitionsuchthattheneteectisareductioninthecutsize.Thesetofnodesisfoundincrementallystartingwithonenodeineachpartitionandaddingonenodeineachstep.Thealgorithmstartsbycalculatingthedierenceofexternalandinternalcostsforeachnodeandthenselectsthatpairofnodesaandb,oneineachpartition,forwhichthegainismaximum.Gainisdenedas B=BY+X A KernighanandLin:subsetsXandYareswapped 4 ]isbasicallyanecientimplementationoftheKLalgorithmusingspecialdatastructuresthatbringdownthecomplexityfromn2tolinearinterms 12

PAGE 13

5 ]haveshownthattheGraphPartitioningproblemcanbeformulatedasthefollowingcontinuousquadraticprogrammingproblem: Subjectto0x1;l1Txu; 13

PAGE 14

Figure3-1. Graphcompression.A)Inputgraphhaving13edgesand8nodes.Nodes1and5havethesameadjacencystructure.B)Compressedgraphwith6edgesand7nodes. Weusedahashvaluebasedtechniqueforcompression.Foreachnodeoftheinputgraph,ahashvalueiscalculatedasfollows:hash(i)=i+P(Adj(i)).Forexamplefornode2withadjacencystructure1,3,5,10,hash(2)=2+1+3+5+10=21.Thenthehashvalueofeachnodeiscomparedtothatofitsneighbors.Iftwonodeshavethesamehashvalue,thentheiradjacencystructuresarecompared.Havingthesamehashvaluedoesnotguaranteethatthenodeshavethesameadjacencystructure,althoughnothavingthesamehashvaluedoesguaranteethatthenodesdonothavethesameadjacency 14

PAGE 15

15

PAGE 16

Figure3-2. Multi-levelwithrepeatedcoarseninganduncoarseningplusrening. Basedonaheuristic,anunmatchedvertexismatchedtooneofitsunmatchedneighbors.Maximalmatchingisobtainedwhentherearenoedgesthatareincidenton 16

PAGE 17

17

PAGE 18

Oncewehaveamaximalmatchingoftheverticesofthenegraph,wecreateamappingvectorthatmapstheverticesinthenegraphtothoseinthecoarsegraph.Thenusingthematchingandthemappingvectors,thecoarsegraphisconstructed.Foreveryunmatchedvertex,itsadjacencystructureiscopiedovertothecoarsegraphwhereasformatchedvertices,thenewvertexinthecoarsegraphthatismappedtotheseverticeshasaunionoftheiradjacencystructuresminustheedgethatconnectedthem,asitsadjacencystructure.Theedgeweightsarecopiedoverexceptwhenthematchedverticeshaveacommonneighbor.Inthatcasetheedgestothecommonneighborgetmergedintooneandthenewedgehasaweightequaltothesumoftheweightsofthemergededges.Similarly,thenewvertexgetsthesumoftheweightsofthemergedvertices.Anyduplicateedgesresultingfromtheprocessaremergedtogetherwiththeirweightsadded. Figure3-3. Graphcoarsening.A)Theoriginalgraphshowingamaximalmatching.B)Graphafteronestepofcoarsening.C)Amaximalmatchingofthecoarsegraph.D)Graphaftertwostepsofcoarsening. 18

PAGE 19

3-5 .(A)).Also,somegraphsmayhaveafewdensenodesthatresembleastar-likestructure.Thenormalmatchingheuristicsfailtopreservethepropertiesofsuchgraphsduringcoarseningandmayproduceahighlyunbalancedcoarsegraph.Also,theseheuristicscanmatchonlyoneofthestaredgesinonestepandifthewholegraphhasonlyafewofthesestar-likedensenodesthecoarseningprocesscanbecomeextremelyslow(Figure 3-4 ).Thisimpliesalotmorelevelsofcoarseningthusresultinginthatmanymoregraphstobestoredwithmanyconsecutivegraphsbeingroughlythesamesize.Expectedly,thiscanuseupallthememoryresultinginafatalerror. Figure3-4. Coarseningofdensenodes.A)Adensenodewithdegree12.B)Nodebalanceafter11stepsofcoarseningusingrandomorheavyedgematching. Wheneverwedetectthatthecoarseningprocessisnotreducingthesizeofthegraphbyareasonableamount,wecallthisroutinetohandlethedensenodes.Theprocessstartsbycalculatingthedegreesforallnodesandthensortingthemaccordingtodegree.Thenodesarethenvisitedfromthehighestdegreetothelowestdegree.However,ifwereachanodethathasadegreelessthanthemediandegreeoradegreelessthanthree(defaultvaluethatcanbeoverwrittenbytheuser)timestheaveragedegreeoradegreelessthanone-fth(defaultvaluethatcanbeoverwrittenbytheuser)themaximumdegree,theprocessisterminatedandtheroutinereturnsthegraphobtainedsofar.Hereishowadensenodeishandled.Weaddzeroweightedgestoconnectthenodesadjacenttothedensenode.However,theseedgesareinitiallyaddedtoanewintermediategraphof 19

PAGE 20

3-5 .(B)).Thisprocessisrepeatedforallthedensenodesthatarehandledandalltheedgesareaddedtothesameintermediategraph.Soattheendoftherststepwehaveanintermediategraphthathasonlyzeroweightedges.Inthenextstepweruntherandommatchingheuristicontheintermediategraph.Sincealltheedgeshavethesameweight,randommatchingheuristicistheobviouschoice.Nowweaddthisintermediategraphtothegraphbeinghandledfordensenodes.Sincetheaddededgesweighzero,theinternaldegreesofthenodesarepreservedintheoriginalgraph.Also,addinganedgeoveranexistingedgemakesnodierencesincetheaddedsumremainsunchanged.Eitherheavyedgematchingorheaviestedgematchingheuristicisnowappliedtotheresultinggraphanditiscoarsened(Figure 3-5 .(C)).Theideabehindthisheuristicistopair-wisecollapsetheedgesincidentonadensenodesothatcoarseningproceedsfasterandtheresultingcoarsergraphpreservesthestructureandthepropertiesoftheoriginalgraph.Also,itresultsinamuchmorebalancedcoarsestgraphandagoodcutofthecoarsestgraphtransformsintoagoodcutoftheoriginaltoplevelgraph.Somemoreprocessingisrequiredinordertocollapsetheedgesthatstillremain,sincetherstroundonlyreducesthemaximumdegreebyhalf.Forthiswerepeatedlydozeroedgematching(Figure 3-5 .(C,D))followedbyheavy/heaviestedgematchinguntilwereachastagewhenthecoarseningproducesagraphwithsizenotindesiredproportionofthenegraphthussignalingthepresenceofdensenodesandare-runofthehandlingroutineisrequired.Zeroedgematching,asthenamesuggestsmatchesonlythezeroweightedges,thusenablingmorepair-wisecollapsingoftheedgesincidentonadensenode.Thismatchingheuristicmatchesonlythesecondneighborthatisconnectedviaazeroweightedgetoarandomlychosenunmatchedvertex.InFigure 3-5 .(E)thereisonlyonezeroweightedgeconnectingthetwonodes,henceitdoesnotgetmatchedandthematchingshownisobtainedusingheavy/heaviestedgematching.Also,itdistinguisheswiththezeroweightedgesaddedtoconnectthedisconnectedcomponentsinthetoplevelgraph.Bythenatureoftheedgesaddedduringdensenodehandling,theseedgeshaveendpoints 20

PAGE 21

Theideabehindtheselectionofsecondzeroweightnode,istoavoidthesituationshowninFigure 3-5 .(G).Asshowninthegure,ifthelastzeroweightedgeisselected,theresultingcoarsegraphwouldnotbecorrectlybalanced.However,ifoneoftheotheredgesisselectedasshowninthegure,theresultinggraphismorebalanced(Figure 3-5 .(F)). 21

PAGE 22

Densenodeshandlingheuristic.A)Adensenodewithdegree12.B)Theintermediategraphshowingthedensenodeandthezeroweightedgeswithamaximalmatching.C)Theoriginaldensenodeafteronestepofcoarseningalsoshowingamaximalmatching.D)Aftertwostepsofcoarsening.E)Afterthreesteps.F)Thenalnodebalanceafterfourstepsofcoarsening.G)Whenthezeroweightedgein(E)isselectedformatching. 22

PAGE 23

3-6 )areused,onefortheleftpartitionverticesandtheotherfortherightpartitionvertices. Figure3-6. Bucketliststructure Thegainsarecomputedusingthefollowingequation: 23

PAGE 24

Figure3-7. TheFMalgorithm.A)Graphshowingapartitionwiththreeedgescut.Node'a'hasagainvalueof+1.B)GraphafteronestepofFM.Node'a'ismovedtotheothersideresultinginareductionof1inthecut-size. Reningoftheinitialcutisdonebymovingnodesfromonepartitiontotheothersuchthatitresultsinadecreaseinthecut-size.Anodetobemovedispickedfromthebucketarraythathasahighermaximumgainbucket.However,ifthismovefailstomaintainthedesiredbalance,thenextnodeinthesamegainbucketistried.Ifnoneofthenodesinthatgainbucketcouldbemovedwhilemaintainingthedesiredbalance,thenextlowergainbucketistried.Thiscouldbefromtheotherbucketarray.Iftheinitialpartitionitselfisnotbalanced,therstpassofthereningalgorithmisusedasabalancingpasstogettothedesiredbalancewithoutcaringforthecut-size.However,nodesarestillmovedstartingfromthehighestgainbucketoftheheavierside.Aslackequaltotheweightoftheheaviestnode(w)inthegraphisprovidedsuchthateachpartitioniswithinrtotalweightw,whereristhedesiredbalance.Thisensuresthatabalancecanalwaysbefoundespeciallyinthecasewhenris0.5.Onceabalancehasbeenobtained,itisalwaysmaintainedforthatgraphsinceanynodeaectingthe 24


4-7).


All the tests were run on a Sun Solaris SPARC server using MATLAB version 7.3. Mex functions were written to interface MATLAB with the C libraries. Metis 4.0 [6] and hmetis 1.5 [7] were used for comparisons. We refer to our graph partitioner as GP when comparing with pmetis.

Graph matrices were taken from the University of Florida Sparse Matrix Collection [8]. All the square matrices in the collection were sorted by the number of non-zeros and the first 1250 were selected for the various experiments. These matrices ranged in size from 15 to 1014951 in terms of the number of non-zeros and from 5 to 325729 in terms of the number of rows/columns. Each un-symmetric matrix was made symmetric by adding its transpose to itself. The numerical values were ignored when doing the partitioning and the graphs were treated as undirected with no self-edges. Table 4-1 lists 10 graphs that were randomly chosen from the ones used for the experiments. These graphs are used to show a sample set of some of the results.

Various GP parameters and their values used across all experiments are listed in Table 4-2. For pmetis and hmetis, we used the default parameter values.

We used the performance profile plot (Figure 4-3) to compare the results. When comparing the cut sizes, for each graph the edge cut obtained using the various methods/applications was divided by the smallest of these values and the result saved as a vector for each method/application. Finally these vectors were sorted and plotted against each other. The same process was used for comparing the run times.
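The preprocessing step above (keeping only the pattern of A + A-transpose, with values and self-edges dropped) amounts to the following sketch; the list-of-index-pairs representation is an assumption for illustration:

```python
def graph_pattern(entries):
    """Undirected edge set of A + A^T: numerical values are ignored,
    self-edges are dropped, and (i, j)/(j, i) collapse to one edge."""
    return {(min(i, j), max(i, j)) for i, j in entries if i != j}
```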


Performance Profile Definition: suppose the run times for methods a, b and c are being compared. For 900 of the test cases the time taken by method a is at most 1.5 times the lowest run time obtained for the methods a, b and c, whereas for 800 of the test cases the time taken by method b is at most 1.5 times the lowest run time obtained for the methods a, b and c, and the number of test cases for which method c takes at most 1.5 times the lowest run time obtained for the three methods is only 200. Thus, here a is the best method, followed closely by b, and c is the worst of the three methods being compared.
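The profile construction described above (divide each method's metric by the per-graph best, then sort each method's ratio vector for plotting) can be sketched as follows; the names are illustrative:

```python
def performance_profile(results):
    """results[method][graph] -> positive metric (e.g. run time).

    For each graph, every method's value is divided by the best
    (smallest) value any method achieved on that graph; each method's
    ratio vector is then sorted, ready to be plotted against the others.
    A curve that stays near 1.0 longer belongs to a better method."""
    methods = list(results)
    graphs = results[methods[0]]
    ratios = {m: [] for m in methods}
    for g in graphs:
        best = min(results[m][g] for m in methods)
        for m in methods:
            ratios[m].append(results[m][g] / best)
    return {m: sorted(r) for m, r in ratios.items()}
```

Note the sketch assumes positive metrics; a cut size of zero (as for bcsstm26) would need a guard before dividing.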


Table 4-1. Some of the graphs used

Name | No. of Vertices | No. of Non-zeros | Kind
GHS_indef/bloweybl | 30003 | 109999 | materials problem
HB/bcsstm26 | 1922 | 1922 | structural problem
GHS_indef/aug2d | 29008 | 76832 | 2D/3D problem
Bomhof/circuit_3 | 12127 | 48137 | circuit simulation problem
Zitney/radfr1 | 1048 | 13299 | chemical process simulation problem
Gset/G29 | 2000 | 39980 | undirected weighted random graph
Sandia/fpga_dcop_34 | 1220 | 5892 | subsequent circuit simulation problem
GHS_psdef/ford1 | 18728 | 101576 | structural problem
GHS_indef/tuma1 | 22967 | 87760 | 2D/3D problem
Schenk_IBMNA/c-38 | 8127 | 77689 | optimization problem

Table 4-2. Common parameters for GP

Parameter Description | Value
Ratio of number of nodes in fine graph to number of nodes in coarse graph when starting dense node detection | 1.05
Level at which to resume repetitive Multi-level | floor(total levels / 2)
Number of consecutive moves without improvement during FM refining | 100
The number of times the degree of a node exceeds the average degree for it to be classified as a dense node | 3
Ratio of the degree of a node to the maximum degree when the handling of dense nodes stops | 0.2
The ratio of the number of nodes in the current coarse graph to the number of nodes in the top-level input graph until when the heavy edge matching heuristic is used | 0.01
Seed value for generating random numbers | 6101975

Table 4-3. Results for pmetis, hmetis and GP (Repetitive Multi-level with 9 repetitions)

Matrix | pmetis Cut Size | pmetis Run Time | hmetis Cut Size | hmetis Run Time | GP Cut Size | GP Run Time
GHS_indef/bloweybl | 4520 | 0.2673 | 4502 | 6.5873 | 4579 | 0.6432
HB/bcsstm26 | 0 | 0.0097 | 0 | 0.0035 | 0 | 0.0307
GHS_indef/aug2d | 116 | 0.0831 | 98 | 4.5201 | 107 | 0.1505
Bomhof/circuit_3 | 1314 | 0.0541 | 1840 | 6.9645 | 2443 | 0.8656
Zitney/radfr1 | 554 | 0.0082 | 501 | 1.3391 | 501 | 0.0669
Gset/G29 | 6850 | 0.0274 | 6691 | 5.4964 | 6813 | 0.3033
Sandia/fpga_dcop_34 | 26 | 0.0023 | 20 | 0.2421 | 18 | 0.0232
GHS_psdef/ford1 | 131 | 0.0411 | 124 | 5.0167 | 133 | 0.1092
GHS_indef/tuma1 | 167 | 0.0566 | 160 | 3.8550 | 199 | 0.1356
Schenk_IBMNA/c-38 | 975 | 0.0362 | 898 | 5.1726 | 1030 | 0.2915


Input Matrix: GHS_psdef/ford1 displayed using cspy.m.

Table 4-4. Cut sizes for pmetis and GP (Repetitive Multi-level with 9 repetitions)

Matrix | pmetis (r=0.500) | GP (r=0.499) | GP (r=0.497) | GP (r=0.495)
GHS_indef/bloweybl | 5024 | 5038 | 5014 | 4993
HB/bcsstm26 | 0 | 0 | 0 | 0
GHS_indef/aug2d | 110 | 110 | 125 | 116
Bomhof/circuit_3 | 1377 | 2637 | 2453 | 2428
Zitney/radfr1 | 551 | 543 | 531 | 528
Gset/G29 | 6837 | 6865 | 6852 | 6849
Sandia/fpga_dcop_34 | 20 | 18 | 18 | 18
GHS_psdef/ford1 | 147 | 127 | 127 | 131
GHS_indef/tuma1 | 169 | 202 | 199 | 202
Schenk_IBMNA/c-38 | 1068 | 1024 | 1008 | 1141


GHS_psdef/ford1 permuted using the partition vector and displayed using cspy.m; the entries in the second and fourth quadrants are the two partitions; the entries in the first/third quadrants are the edges separating the two partitions.

Table 4-5. Results for GP with simple and repetitive multi-level

Matrix | No Repetition Cut Size | Run Time | 4 Repetitions Cut Size | Run Time | 9 Repetitions Cut Size | Run Time
GHS_indef/bloweybl | 5091 | 0.2334 | 5108 | 0.3903 | 5050 | 0.6183
HB/bcsstm26 | 0 | 0.0218 | 0 | 0.0263 | 0 | 0.0303
GHS_indef/aug2d | 136 | 0.1055 | 113 | 0.1258 | 113 | 0.1499
Bomhof/circuit_3 | 2747 | 0.2741 | 2575 | 0.5673 | 2575 | 0.9604
Zitney/radfr1 | 690 | 0.0181 | 690 | 0.0440 | 690 | 0.0737
Gset/G29 | 7294 | 0.0585 | 7294 | 0.1492 | 7294 | 0.2579
Sandia/fpga_dcop_34 | 62 | 0.0063 | 19 | 0.0157 | 19 | 0.0252
GHS_psdef/ford1 | 135 | 0.0699 | 135 | 0.0916 | 135 | 0.1206
GHS_indef/tuma1 | 208 | 0.0950 | 208 | 0.1224 | 208 | 0.1731
Schenk_IBMNA/c-38 | 1538 | 0.1398 | 1243 | 0.2144 | 1243 | 0.3400


Performance Profile: comparing the edge cuts among hmetis, pmetis, GP (Repetitive Multi-level with 9 repetitions) and GP (Simple Multi-level with 0 repetitions) at r=0.45. hmetis does better than the other three, whereas GP does better than pmetis.

Table 4-6. Cut sizes for GP (Repetitive Multi-level with 9 repetitions) with and without dense node heuristic (DNH)

Matrix | No. of Vertices | No. of Non-zeros | Kind | Cut Size (DNH) | Cut Size (Minus DNH)
Bates/Chem97ZtZ | 2541 | 7361 | statistical/mathematical | 2 | 9
Rajat/rajat22 | 39899 | 195429 | circuit simulation | 28 | 122
Rajat/rajat23 | 110355 | 555441 | circuit simulation | 186 | 533
Rajat/rajat19 | 1157 | 3699 | circuit simulation | 53 | 114
Bomhof/circuit_4 | 80209 | 307604 | circuit simulation | 2090 | 3576
Hamm/scircuit | 170998 | 958936 | circuit simulation | 103 | 137
Grund/poli | 4008 | 8188 | economic problem | 16 | 22
Schenk_IBMNA/c-43 | 11125 | 123659 | optimization problem | 3141 | 4334
Pajek/Wordnet3 | 82670 | 132964 | directed weighted graph | 4957 | 6927
Schenk_IBMNA/c-67b | 57975 | 530583 | subsequent optimization | 1625 | 2323


Performance Profile: comparing the run times among hmetis, pmetis, GP (Repetitive Multi-level with 9 repetitions) and GP (Simple Multi-level with 0 repetitions) at r=0.45. pmetis does better than the other three, whereas GP does much better than hmetis.

Table 4-7. Average number of passes of FM

Matrix | Passes
GHS_indef/bloweybl | 2.9459
HB/bcsstm26 | 1.7778
GHS_indef/aug2d | 2.4167
Bomhof/circuit_3 | 3.4333
Zitney/radfr1 | 2.5833
Gset/G29 | 4.5385
Sandia/fpga_dcop_34 | 2.5500
GHS_psdef/ford1 | 2.6667
GHS_indef/tuma1 | 2.8387
Schenk_IBMNA/c-38 | 2.1837


Performance Profile: comparing the edge cuts among Metis at r=0.50 and GP (Repetitive Multi-level with 9 repetitions) at r=0.499, 0.497, 0.495. GP almost matches Metis at r=0.499 and does better in the other two cases, the best being at 0.495.

4.2.1 Interfacing with QP-Part

[9] that their method gives cuts superior to those of METIS, though their method is expensive when applied to the whole graph. We plan to interface QP-Part with the


Performance Profile: comparing the edge cuts among simple Multi-level, Multi-level with four repetitions of coarsening and uncoarsening plus refining (middle) and Multi-level with nine repetitions of coarsening and uncoarsening plus refining (top).

multi-level approach such that the expensive QP-Part is used to refine the partition when the graph is small and the cheap FM algorithm is used when the graph is sufficiently large. This should give us better partitions at reasonable cost.


Performance Profile: comparing the run times among simple Multi-level (top), Multi-level with four repetitions of coarsening and uncoarsening plus refining (middle) and Multi-level with nine repetitions of coarsening and uncoarsening plus refining.


Graph(G)
CC = Find_Connected_Components(GC)
G = GC
if size(CC) > 1
    i = 0
    for each CC do
        rep(i++) = Find_Random_Node(CC)
    end
    E = New_Graph(size(GC))
    for j = 0 to i-1 do
        if j != 0
            Add_Zero_Wt_Edge(E, rep(j-1), rep(j))
            Add_Zero_Wt_Edge(E, rep(j), rep(j-1))
        end
    end
    G = GC + E
end
Read(DENSE_RATIO, MIN_SIZE, LEVEL, REPETITION_COUNT)
PREV(G) = NULL
Coarsen:
    matchings = Get_Matchings(G)
    GCOARSE = Apply_Matchings(G, matchings)
    PREV(GCOARSE) = G
    NEXT(G) = GCOARSE
    if size(GCOARSE) <= MIN_SIZE
        G = GCOARSE
        GOTO Cut
    end
    if size(G) = size(GCOARSE)

Graph_From_List(G, LEVEL)
end
UnCoarsen:
    GFINE = Uncoarsen(G, partition)
    GREFINE = Refine_FM(GFINE)
    G = GREFINE
    if G == GMARK and REPETITION_COUNT > 0
        REPETITION_COUNT
        if partition(G)

[1] M. Yannakakis, "Computing the minimum fill-in is NP-complete," SIAM J. Alg. Disc. Meth., vol. 2, pp. 77-79, 1981.

[2] G. Karypis and V. Kumar, "Analysis of multilevel graph partitioning," Tech. Rep. TR-95-037, Computer Science Dept., Univ. of Minnesota, Minneapolis, MN, 1995.

[3] B. W. Kernighan and S. Lin, "An efficient heuristic procedure for partitioning graphs," Bell System Tech. J., vol. 49, pp. 291-307, 1970.

[4] C. M. Fiduccia and R. M. Mattheyses, "A linear-time heuristic for improving network partitions," in Proc. 19th Design Automation Conf., Las Vegas, NV, 1982, pp. 175-181.

[5] W. W. Hager and Y. Krylyuk, "Graph partitioning and continuous quadratic programming," SIAM J. Alg. Disc. Meth., vol. 12, pp. 500-523, 1999.

[6] G. Karypis and V. Kumar, "Metis: A software package for partitioning unstructured graphs, partitioning meshes, and computing fill-reducing orderings of sparse matrices," Tech. Rep., Computer Science Dept., Univ. of Minnesota, Minneapolis, MN, Sept. 1998.

[7] G. Karypis and V. Kumar, "hMetis: A hypergraph partitioning package," Tech. Rep., Computer Science Dept., Univ. of Minnesota, Minneapolis, MN, Nov. 1998.

[8] T. A. Davis, "University of Florida sparse matrix collection," www.cise.ufl.edu/research/sparse.

[9] W. W. Hager, S. C. Park, and T. A. Davis, "Block exchange in graph partitioning," in Approximation and Complexity in Numerical Optimization: Continuous and Discrete Problems, P. M. Pardalos, Ed., pp. 299-307, Kluwer Academic Publishers, 2000.


Pawan Kumar Aurora was born in Kanpur, India, on October 6, 1975. He graduated from the Birla Institute of Technology, Mesra, Ranchi, in 1998 with a degree in computer science. For seven years Pawan worked in the IT industry, initially for Tata Consultancy Services in Mumbai, India, and later for IBM Global Services in Pune, India, before relocating to Gainesville, Florida, USA, to pursue graduate study in computer engineering. After completing his M.S., Pawan will pursue his Ph.D. in computer science.


e4a7fe02dc4b1a008011981e019a4ffe
9f136e1ff04db1d9c25a11bd0ed0ae897dc2c44f
7852 F20101113_AACPJX aurora_p_Page_28thm.jpg
a322bff62d11d5fb60c560bb3036c222
4320339cf3b3458fa2ad889206fe01a39b504fc3
57831 F20101113_AACPCB aurora_p_Page_32.jpg
9e0b25a8f2969772bf1fca80b4afdba1
774b6d5273505769ca8797d7adeaea3f69221c8c
31991 F20101113_AACPJY aurora_p_Page_28.QC.jpg
3ff1505e13d24839767e55311d94db9d
47b079361fddba146d5c0a1eb99883f0d2ee62b5
74656 F20101113_AACPCC aurora_p_Page_33.jpg
32e545477d868219e3fdbdee32b75791
12d37c3caf1f57597a3576393187a23698d34970
5764 F20101113_AACPJZ aurora_p_Page_29thm.jpg
06135233d6d63d6cb2f4c976960cb645
3a444759a4b9294c70b879d07599ed1a06ca03f8
69147 F20101113_AACPCD aurora_p_Page_34.jpg
de3f4e516bd874ee8e30374123a13d28
f0637b10874516ea65e7739a00d052b9315b299a
1856 F20101113_AACPHA aurora_p_Page_12.txt
1313faad32b1ef3b0b56fdb65ed5df9f
34ba1ff27a8d3fea02a1bd54954cec8a8ded51c6
37956 F20101113_AACPCE aurora_p_Page_35.jpg
798315b77fc89420dc64c68893ff230a
63cfce3938b50e10388ebe5dd1e7affbc0b2a8ce
1433 F20101113_AACPHB aurora_p_Page_13.txt
8c481f38db7f423853cd57f10499e4a6
d9ce37a1e2ba1683646c871c597961379cfbe4e1
56470 F20101113_AACPCF aurora_p_Page_36.jpg
745318bd3b68d08132cc5c7d8b3c1cf8
2d0b70a3313c1072dfb017ccdac549b12f631be6
971 F20101113_AACPHC aurora_p_Page_14.txt
6343508f82de36c3081c1491657ab771
0cdf04246c1f4c412e85c484b43a53cc8ca7eb3d
56776 F20101113_AACPCG aurora_p_Page_37.jpg
6dd9dbb289c3480b828f98da7eb195cb
23e9a4d147b153a9833320f6935f3f8d4547cf59
2322 F20101113_AACPHD aurora_p_Page_15.txt
36d9d4148c69e20c2c691c40ed05e187
92b3e9220e697e63fe01990f833a34725be799ac
81424 F20101113_AACPCH aurora_p_Page_38.jpg
f5d26eedcdeb2c0cbfd3b69ec7c4dde4
f32e23917af0daf2d2e9a73c7f29c6998424794a
1393 F20101113_AACPHE aurora_p_Page_16.txt
da5779c4a48cbe36746cfdda05a93799
1bc1b5345ecec04bee69badcf43d308da17006dc
31841 F20101113_AACPCI aurora_p_Page_39.jpg
5049ca84f81b1dfeb3c44063db184d6e
4bc04082bbf9807d26f879e5c8e2b0bfe0af8c97
2042 F20101113_AACPHF aurora_p_Page_17.txt
a956aa3c85e9e53f5dce1628e08158ef
9990717495c625b7cd95af38f67d345b4d58b509
20218 F20101113_AACPCJ aurora_p_Page_01.jp2
1abc9142bd3dcb5d3953bde17ce43db0
1ca23999253d48f99a9e660e393a0e95d7958365
1670 F20101113_AACPHG aurora_p_Page_18.txt
c31265393db043f41d24a3bafd3c665c
bd590505d3fdc00fc7b368e4671a69b316ef6021
5657 F20101113_AACPCK aurora_p_Page_02.jp2
f6b27bae4c17d4c6dbfab441c87b2cf6
24d2c5e6db5f8dcb8040f63d3a2aee98e43dae59
2021 F20101113_AACPHH aurora_p_Page_19.txt
7804c98e6823ea47573d77ee25731cd4
2910e58ea18c45c3db02f6f775040ff9603b62e6
4761 F20101113_AACPCL aurora_p_Page_03.jp2
f795403d8de840639a9d317b7f4f51c3
6d0b41de9f2f221e24c9cbbb112df6afd95ccd84
2479 F20101113_AACPHI aurora_p_Page_20.txt
2d1e3587aa54c54230505caf88799d4a
7eee6d175a3ab5c497b1f962d09770d523776ad2
41594 F20101113_AACPCM aurora_p_Page_04.jp2
ef0bb164452df570cc8920a48601b5f5
77bb906544e690d896feb092997c3f0eb987e36a
2403 F20101113_AACPHJ aurora_p_Page_21.txt
34463af9e705266005019dea7278497a
9fa75d4971669fe936f8e836aabeba31a5040b16
1051985 F20101113_AACPCN aurora_p_Page_05.jp2
76d0b27f65140ec0c8e5c4c98aff3f08
cc482c371e6aee43f6e21aaa6895a52d35f5564d
468 F20101113_AACPHK aurora_p_Page_22.txt
00b63984038f656786832358d0bf242e
0040994c2bd369dcdaa1bf8d8543d91b53ca392b
472937 F20101113_AACPCO aurora_p_Page_06.jp2
68deff1744f5eaea955c86b1ff0ae346
b46dfab8a5eb1560064b9d9408515456f1e0918c
1627 F20101113_AACPHL aurora_p_Page_23.txt
b91993b0a565d410cc8972e1342507b7
81f8712270607688d8c9fc0ed170caa395daa2db
1011561 F20101113_AACPCP aurora_p_Page_07.jp2
39bd137b44e86408ccd8b12cfeadd711
4853c359af4421bda4e3fbeab2d9c56a89aca3bd
1965 F20101113_AACPHM aurora_p_Page_24.txt
7650e1d62b159811fc6b1521b5810ce2
b767052aae8dc749c607c2beaec3a8a7e90c7c43
69837 F20101113_AACPCQ aurora_p_Page_08.jp2
8e63dd4cfba12ddd7247a572ac6805c7
b867de82bb15a644a0abf7a281549a2f2d715f4b
1309 F20101113_AACPHN aurora_p_Page_25.txt
e35ec1e26566db343ee6525841b397ec
738949c79d480ff1072bbcd3457619aad112c98e
1051964 F20101113_AACPCR aurora_p_Page_09.jp2
1f9aa48fd074a29560b6ee8e8581e998
a166f69687f0504f735468004150687b869dc695
1962 F20101113_AACPHO aurora_p_Page_26.txt
66662f8f03732e4cdba8bd4bcebbae1a
bdac11155389f16ec6b2de23f988dac1853aef81
754710 F20101113_AACPCS aurora_p_Page_10.jp2
ac649c8c8e68c98a2f868ff4608cccaf
608d225775ba3e11c9088323e1f36d173dfc8897
701 F20101113_AACPHP aurora_p_Page_27.txt
cc4573417e84af445a7a8d55cffe4737
306bfa29ccefd043c4b8832da98b152312bf3e05
16368 F20101113_AACPCT aurora_p_Page_11.jp2
b48d44cbcacdd473241c99cbdb620315
bbfac199f1cd736a3b1917dee9e8d1e74c6ae307
2340 F20101113_AACPHQ aurora_p_Page_28.txt
e672449a17ca179ba68c0d7fb9b8486a
a4f62fdaac0f41f1b1f014c5bab0c447f47d633a
986127 F20101113_AACPCU aurora_p_Page_12.jp2
dc1b5682136de90bbecf76a60da1acf0
47305932348c646a45eb4bbbc5be30682ff8eef1
655 F20101113_AACPHR aurora_p_Page_29.txt
07c07693d0e6cdc3d0d311cc5bb43683
93672faa90f97f2be16365548f137723daebed10
759933 F20101113_AACPCV aurora_p_Page_13.jp2
050ff70a80b433cc9ef39e14879f6daa
e81ac9a1fd50ecdee3b60c14f2b8d327c74617e5
1096 F20101113_AACPHS aurora_p_Page_30.txt
de087df52de9c5cc066e1472ee217d71
fa21744827a1d93d1b0d4b94e92e10204ee8c364
90055 F20101113_AACPCW aurora_p_Page_14.jp2
745adf77e2593b2c10a1e420b8905aae
3122792baa3e9c113f0655d75bf38de205e9df71
1413 F20101113_AACPHT aurora_p_Page_31.txt
794140393967529c4c9d7207fb93d2d5
c42e8a7050cb93a45f261577dd25ec64f4d2156d
119772 F20101113_AACPCX aurora_p_Page_15.jp2
7a776bf7231307db70b65e8319b9340c
0554edbbc8f4ba0d208bdb513b4efb2655f8f082
754 F20101113_AACPHU aurora_p_Page_32.txt
ca43a485480ab74cce3b0bf8502bd2ef
3b48f43a12100422a1ef92bee2eb0ef81ea02b61
70371 F20101113_AACPCY aurora_p_Page_16.jp2
d92ed3f2d27d765daecb149180e784e3
ddf7765615542210aee2bb4be83d9b7505a2400f
1420 F20101113_AACPHV aurora_p_Page_33.txt
4e9954e8fb9421b2bfccaaa6b8a2f587
baafb68090f09e59f5b2661917ee2af9b155ff96
109171 F20101113_AACPCZ aurora_p_Page_17.jp2
158a6b8c7e9f09d0fd0935f311871396
28e9c350e3455bf5baa3ec23fcc0d505f71a0f16
1352 F20101113_AACPHW aurora_p_Page_34.txt
361378106eb20fb13d121c9a263940a2
af68316d1fcf553c6b4d4e99574700a1ce76c344
553 F20101113_AACPHX aurora_p_Page_35.txt
b92769adfe66a6582317f053e34efccf
e7aaaf5b2743408c277ef25a455f75fa17772570
36563 F20101113_AACPAB aurora_p_Page_18.pro
923c01ba831a077b07f5c2737ea772a7
52c7b22c54f0a7c66068b39cdaf4fc5b3241bace
1076 F20101113_AACPHY aurora_p_Page_36.txt
72fccd08e0652fa0e5a1ddc48e1a2a68
41dbad0b4725458ede77cf2dc68bbbe33a836514
5763 F20101113_AACPAC aurora_p_Page_33thm.jpg
2ff8548042c8bf447d10bd84a7888f77
5f003039e88297a460eeaabc765874ec28f3e2aa
F20101113_AACPFA aurora_p_Page_34.tif
779564571b36373fe53fa242344c4e69
a1cfbd5fa84c1d03225dcf15b5ced9f20a6eaf17
1639 F20101113_AACPHZ aurora_p_Page_38.txt
1f4eaadbb18a84148263e398783a58ac
5d7dac434d49bfb0b0d9bb53796e7638d9f1a45f
593 F20101113_AACPAD aurora_p_Page_39.txt
5cf3bf37ce760f6eee00d6ea6e23de9c
475bbe4dc8f0544b862c3990b35126c7dee42526
F20101113_AACPFB aurora_p_Page_35.tif
0dcff640f9dda79ef739709b26f45105
9e41cad16c09ce270b40d495acb9d8dd0b7a9c1c
F20101113_AACPAE aurora_p_Page_09.tif
4a426d5fd9243ba1c9a0dbbb9e28328b
510213f1b092900d6c59d52eb560e3de9fc196da
F20101113_AACPFC aurora_p_Page_36.tif
be62f4e8cc4c2cc92dcf897a5744b855
7e2020d06397aedb6387d9e254dfed511c0805e6
2725 F20101113_AACPAF aurora_p_Page_06thm.jpg
33490abc33c535849cbf3108d7b6e779
2d7de997aeb5e3de978bd5fb1187faa7ef272e5d
6314 F20101113_AACPKA aurora_p_Page_30thm.jpg
17b45f2adfa1d7027e96d19ca2a0d653
69e13ec7206359cf47103a640c06b4671695fa68
F20101113_AACPFD aurora_p_Page_37.tif
b545406604d0c56682810145a66646ac
8eb2f1ad1c607be9e9a1eadc5be8a8f59abbe2e4
90320 F20101113_AACPAG aurora_p_Page_12.jpg
66a112737e691e37efc17689616ab454
fef4474ea428a1339797ceca3a0a2aadfc284bdf
23977 F20101113_AACPKB aurora_p_Page_30.QC.jpg
7e84d0429594cbc77e1c6b5d96d2f36a
db4c027aae98077c0b69752402f329d39884b658
F20101113_AACPFE aurora_p_Page_38.tif
7ba168bed9c046b0191417714ce30ca5
df822f32b743b70df1e7e572a4ff8b0d9b59e3d9
3923 F20101113_AACPAH aurora_p_Page_27thm.jpg
02244c2b67ee964ded04d91cc4425248
90f06f53337ac9fb5b92ad741dad4f90e5a47e46
5623 F20101113_AACPKC aurora_p_Page_31thm.jpg
c811fdbee70e124ad1720b78d1d10ac1
9121da4c9e9d28f48707b24d36931894a4bd27cc
F20101113_AACPFF aurora_p_Page_39.tif
218ed32e155d5770e22d5565f3e3775d
4ab586970c948060e4d896e5cd3377e343a2027b
F20101113_AACPAI aurora_p_Page_10.tif
ccccea821f7c16bc6d1bd8dd5740a1ef
837703dd5b14df7c1b444afe8fbedbe5e1f1edf5