Group Title: Department of Computer and Information Science and Engineering Technical Reports
Title: An Objective function for vertically partitioning relations in distributed databases and its analysis
Series Title: Department of Computer and Information Science and Engineering Technical Report ; 92-045
Physical Description: Book
Language: English
Creator: Chakravarthy, Sharma
Muthuraj, Jaykumar
Varadarajan, Ravi
Navathe, Shamkant B.
Publisher: Department of Computer and Information Sciences, University of Florida
Place of Publication: Gainesville, Fla.
Copyright Date: 1992
 Record Information
Bibliographic ID: UF00095160
Volume ID: VID00001
Source Institution: University of Florida
Holding Location: University of Florida
Rights Management: All rights reserved by the source institution and holding location.



University of Florida
Department of Computer and Information Sciences
Computer Science Engineering Building
Gainesville, Florida 32611

An Objective Function For Vertically
Partitioning Relations in Distributed
Databases and its Analysis

Sharma Chakravarthy Jaykumar Muthuraj
Ravi Varadarajan Shamkant B. Navathe
email: sharma@snapper.cis.ufl.edu

(Submitted for publication)

(This work was, in part, supported by the National Science Foundation Research
Initiation Grant IRI-9011216 and a National Science Foundation Grant.)

Abstract
The design of distributed databases is an optimization problem requiring solutions
to several interrelated problems including: data fragmentation, allocation, and local
optimization. Each problem can be solved with several different approaches thereby
making the distributed database design a very difficult task.
Although there is a large body of work on the design of data fragmentation, most
of it consists of either ad hoc solutions or formal solutions for special cases (e.g., binary
vertical partitioning). In this paper, we address the general vertical partitioning problem
formally. We first provide a comparison of work in the areas of data clustering and
distributed databases to highlight the thrust of this work. We derive an objective func-
tion that generalizes and subsumes earlier work on vertical partitioning in databases.
The objective function developed in this paper provides a basis for developing heuristic
algorithms for vertical partitioning. The objective function also facilitates comparison
of previously developed algorithms for vertical partitioning. We also illustrate how the
objective function developed for distributed transaction processing can be extended to
include additional information, such as transaction types, different local and remote
accessing costs and replication. Finally, we indicate the current status of the implemen-
tation of a testbed.

Index Terms: data clustering, distributed database design, vertical partitioning,
objective function, attribute usage matrix

1 Introduction

The design of distributed databases is an optimization problem requiring solutions to several in-
terrelated problems: data fragmentation, allocation, and local optimization. Each problem phase
can be solved with several different approaches thereby making the distributed database design a
very difficult task. Traditionally, database design has been heuristic in nature. Although, in most
of the cases, the metric being optimized is not stated quantitatively, it is implicitly assumed to be
the processing cost for a given set of important transactions that constitute the bulk of transaction
load for the database being designed.
Figure 1 gives an outline of the overall distributed database design methodology [3]. Dis-
tributed database design deviates from conventional non-distributed database design only in the
distribution aspect which is highlighted by the box titled distribution design in Figure 1. Conceptu-
ally, the distribution design involves data acquisition, partitioning of the database, allocation and
replication of the partitions and local optimization. Partitioning of the database is done in several
ways: vertical, horizontal, and hybrid (also called mixed). Briefly, vertical partitioning partitions
attributes of a relation into groups; horizontal fragmentation partitions tuples of a relation into
groups; and mixed (or hybrid) fragmentation performs both horizontal and vertical fragmentation
(in one of the two possible orders) and generates fragments termed grids (or uniform regions).
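As a toy illustration of these fragmentation styles (the relation, attribute names, and predicate below are invented for this sketch, not taken from the paper; hybrid fragmentation would compose the two operations):

```python
# Vertical vs. horizontal fragmentation of a relation held as a list of
# dicts. The relation and helper names are illustrative only.

relation = [
    {"id": 1, "name": "a", "salary": 10, "dept": "x"},
    {"id": 2, "name": "b", "salary": 20, "dept": "y"},
]

def vertical_fragment(rel, attrs, key="id"):
    """Project each tuple onto the key plus the given attribute group."""
    return [{k: t[k] for k in (key, *attrs)} for t in rel]

def horizontal_fragment(rel, pred):
    """Select the tuples satisfying the predicate."""
    return [t for t in rel if pred(t)]

f1 = vertical_fragment(relation, ("name",))           # fragment (id, name)
f2 = vertical_fragment(relation, ("salary", "dept"))  # fragment (id, salary, dept)
h1 = horizontal_fragment(relation, lambda t: t["dept"] == "x")
print(f1[0])    # {'id': 1, 'name': 'a'}
print(len(h1))  # 1
```

Keeping the key (`id`) in every vertical fragment mirrors the reconstruction requirement discussed later in the paper.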
There are different alternatives for performing fragmentation and allocation. Figure 2 illustrates
some of the widely used alternatives. Apers [1] considers data fragmentation and allocation as a
single problem (alternative A in Figure 2). In this paper, we take the approach B, as our primary
objective of this work is to obtain an objective function that can be used to compare earlier
work/algorithms on data fragmentation. Other alternatives shown in Figure 2 have also been explored.

It should be noted that the problem of partitioning can be addressed at various levels of detail by
taking not only transaction frequency into account, but also access methods, page-level accesses,
transmission costs, replication and other detailed information into consideration. However, in this
paper, we are taking only transaction information as input to keep the derivation of the objective
function manageable. Also, as one of the motivations is to compare various algorithms, including
other information will make comparison difficult. However, we provide a brief description of how
additional information can be folded into this framework in a later section. Finally, the global
optimization problem (which includes a large number of parameters and a very complex metric) is
partitioned into several smaller optimization problems (e.g., fragmentation followed by allocation)
to reduce the search space and the complexity of each problem. Other detailed information needs
to be considered on the outcome of this stage in order to obtain a design at the physical level.
The long-term objective of this work is to develop a distributed database design testbed in
which different algorithms for various components of distribution design can be mixed and matched.
Although the current work is at the conceptual level, more detailed physical-level information (page
access, indexes etc.) will be considered as follow-on work. This work is only a first step in that
direction and addresses the partitioning (or fragmentation) problem.
In this paper, we delimit our discussion to one of the data fragmentation problems, namely
the vertical partitioning problem. Vertical Partitioning (also called attribute partitioning) is a
technique that is used during the design of a database to improve the performance of transactions
[18]. In vertical partitioning, attributes of a relation R¹ are clustered into non-overlapping² groups
and the relation R is projected into fragment relations according to these attribute groups. In
distributed database systems, these fragments are allocated among the different sites. Thus the
objective of vertical partitioning is to create vertical fragments of a relation so as to minimize
the cost of accessing data items during transaction processing. If the fragments closely match
the requirements of the set of transactions provided, then the transaction processing cost could
be minimized. Vertical partitioning also has its use in partitioning individual files in centralized
databases, and dividing data among different levels of memory hierarchies etc. [18, 21]. In the case
of distributed databases, transaction processing cost is minimized by increasing the local processing
of transactions (at a site) as well as by reducing the amount of accesses to data items that are not
local. The aim of the vertical partitioning technique (and, in general, of data partitioning techniques) is to
find a partitioning scheme which would satisfy the above objective.

Several vertical partitioning algorithms have been proposed in the literature. Hoffer and Sever-
ance [9] measure the affinity between pairs of attributes and try to cluster attributes according to
their pairwise affinity by using the bond energy algorithm (BEA) [14]. Hammer and Niamir [8] use a
file design cost estimator and a heuristic to arrive at a "bottom up" partitioning scheme. Navathe,
et al [18] extend the BEA approach and propose a two phase approach for vertical partitioning.
Cornell and Yu [6] apply the work of Navathe [18] to physical design of relational databases. Ceri,
Pernici and Wiederhold [4] extend the work of Navathe [18] by considering it as a 'divide' tool and
by adding a 'conquer' tool. Navathe and Ra [19] construct a graph-based algorithm to the vertical
partitioning problem where the heuristics used includes an intuitive objective function which is

¹For the work presented in this paper, it does not matter whether R is a Universal relation or a relation that is in
first normal form as long as key attributes are identified. In the section on extensions, we discuss how we can take
that information into account.
²Only those attributes designated as primary key are assumed to be present in each partition; overlapping partitions
(of non-primary key attributes) or replication of fragments may also be considered. This is briefly discussed in the
section on extensions.

[Figure 1 here: the design flow from requirement specification (enterprise schema definition,
transaction requirements, distribution requirements) through distribution of the enterprise
schema into local logical schemas, down to the implementation schemas.]

Figure 1: Distributed Database Design Methodology

[Figure 2 here: design alternatives combining fragmentation, grid optimization, allocation and
replication of fragments to sites, and local site optimization.]

Figure 2: Alternatives for Fragmentation and Allocation

not explicitly quantified. Recently, Chu [5] has proposed a transaction-oriented approach to
vertical partitioning in which the transaction is used as the decision variable. In addition
to these vertical partitioning algorithms, there are many data clustering techniques [11], tradition-
ally used in pattern recognition and statistics, some of which can be adapted to partitioning of a
database. These data clustering algorithms include Square-error clustering [11], Zahn's clustering
[25], Nearest-neighbor clustering [13] and Fuzzy clustering [11].

The partitioning algorithms mentioned above use some heuristics to create fragments of a relation.
The input to most of these algorithms is an Attribute Usage Matrix (AUM). An AUM is a matrix
which has attributes as columns, transactions as rows, and the access frequencies of the
transactions as the values in the matrix. Most of the earlier data fragmentation algorithms use an Attribute Affinity
Matrix (AAM) derived from the AUM provided as input. An AAM is a matrix in which for each
pair of attributes, the sum total of frequencies of transactions accessing that pair of attributes
together is stored. The results of the different algorithms are sometimes different even for the
same attribute affinity matrix indicating that the objective functions used by these algorithms are
different. Most of the proposed vertical partitioning algorithms do not have an objective function
to evaluate the "goodness" of partitions that they produce. Also, there is no common criterion or
objective function to compare and evaluate the results of these vertical partitioning algorithms.
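The AUM-to-AAM construction just described can be sketched as follows; the sample matrix and the helper name are invented for illustration:

```python
# Build an Attribute Affinity Matrix (AAM) from an Attribute Usage
# Matrix (AUM): rows are transactions, columns are attributes, and a
# non-zero entry is that transaction's access frequency.

def affinity_matrix(aum):
    """AAM[a][b] = total frequency of the transactions that access
    attributes a and b together."""
    n_attrs = len(aum[0])
    aam = [[0] * n_attrs for _ in range(n_attrs)]
    for row in aum:
        for a in range(n_attrs):
            if row[a] == 0:
                continue
            for b in range(n_attrs):
                if row[b] != 0:
                    aam[a][b] += row[a]  # row[a] == row[b] == q_t here
    return aam

# Example: 3 transactions over 4 attributes
aum = [
    [5, 5, 0, 0],  # T1 accesses A1, A2 with frequency 5
    [0, 3, 3, 0],  # T2 accesses A2, A3 with frequency 3
    [2, 0, 0, 2],  # T3 accesses A1, A4 with frequency 2
]
aam = affinity_matrix(aum)
print(aam[0][1])  # affinity of (A1, A2): only T1 -> 5
print(aam[1][2])  # affinity of (A2, A3): only T2 -> 3
```

Because the affinity accumulates only pairwise co-access, two different AUMs can yield the same AAM, which is exactly the information loss the paper's input model avoids.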

1.1 Contributions

This paper makes several contributions to the problem of data fragmentation in general and the
design of vertical partitioning in particular [16],[17]. Specifically:

1. We have, perhaps for the first time, studied the applicability of some data clustering algorithms,
proposed in areas such as pattern classification and statistics [11], [25], [13], to the data
fragmentation problem in distributed database design³. In fact, we start from one such objective
function proposed for data clustering and modify and extend it to the specific problem at hand.
2. We have formulated an objective function for n-ary partitions; it consists of two components
that provide the desirable behavior for minimizing transaction processing cost.
3. Finally, we are using the approach of formulating an objective function (termed Partition
Evaluator in this paper) before developing (heuristic) algorithms for the partitioning problem.
This approach enables us to study the properties of algorithms with respect to an agreed
upon objective function, and also to compare different algorithms for "goodness" using the
same criteria. The objective function formulated in this paper is a step in this direction.
Moreover, the objective function derived in this paper can be easily extended to include
additional information (e.g., query types (retrieval/update), allocation information about
the partitions, remote processing cost, and the transaction usage pattern at any particular
site). Some of these extensions are briefly discussed at the end of the paper.

The rest of the paper is structured as follows. Section 2 discusses previous related work on data
clustering and vertical partitioning leading to the need for a Partition Evaluator. In section 3, we
derive the Partition Evaluator (PE) appropriate for distributed database design at the conceptual
level. In section 4, we analyze the Partition Evaluator using our implementation and illustrate the
³Schkolnick [21] uses data clustering techniques for partitioning a hierarchical structure for an IMS database using
detailed cost information which is different from the problem addressed in this paper.

analysis with an example. We show the actual behavior of the PE and compare it with the expected
behavior. Section 5 concretely outlines how the PE derived in this paper is amenable to extensions
and includes the current status of implementation. Section 6 includes summary and future work.

2 Previous Work

In this section we briefly summarize previous relevant work in the area of data clustering and the
work done in data fragmentation in order to understand the similarities and differences between the two.
A number of data clustering algorithms have been developed in application areas such as statis-
tics and pattern classification and analysis which address the problem of grouping or clustering
data using various criteria. The most commonly used partitioning clustering strategy is based on
the square-error criterion [11]. The general objective is to obtain that partition which, for a fixed
number of clusters, minimizes the square-error. Minimizing square-error, or within-cluster varia-
tion, has been shown to be equivalent to maximizing the between-cluster variation. Clusters can
also be viewed as regions of the attribute pattern space in which the patterns are dense, separated
by regions of low attribute pattern density. In the mode-seeking partitioning algorithm due to Torn
[24], clusters are identified by searching for regions of high density, called modes, in the pattern
space. Each mode is associated with a cluster center and each pattern is assigned to the cluster
with the closest center.
Zahn [25] has demonstrated how the minimum spanning tree (MST) can be used to detect
clusters. His choice of MST was influenced by the Gestalt principle, which favors the grouping
of attribute patterns based on Euclidean distance measure. Shaffer et al [22] demonstrate the
similarity of the mode-seeking partitioning algorithm [12] to the graph algorithm of Zahn [25]
based on minimum spanning trees. Lu and Fu [13] used another graph-based approach called
"Nearest-Neighbor clustering algorithm" to cluster patterns during character recognition [11].
The concept of using fragmentation of data as a means of improving the performance of a
database management system has often appeared in the literature on file design and optimization.
Hoffer [10] developed a non-linear, zero-one program which minimizes a linear combination of
storage, retrieval and update costs, with capacity constraints for each file. Babad [2] formulated
a less restrictive vertical partitioning problem for variable length attributes as a non-linear zero-
one program. In the work of Eisner and Severance [7], a file can be partitioned into two subfiles:
a primary and secondary subfile. Two forms of cost function are used in this approach. The
first function is the sum of storage charges for subtuples in the primary subfile, and the cost of
accessing all the subtuples residing in the secondary subfile. The second function is nonlinear,
and measures the total costs of access, transfer, and storage for subtuples in both primary and
secondary subfiles. The limitation of this approach is that at most two subfiles are allowed [20].
March and Severance [15] extended this model to incorporate block factors for both primary and
secondary memories. Hoffer and Severance [9] grouped the attributes of a relation based on the
extent to which they were used together (measured as the "affinity" between pairs of attributes). This
clustering of attributes based on their pairwise affinity was done using the bond energy algorithm
(BEA). The BEA produced a matrix in which a cost function was minimized for the entire matrix
using the attribute affinity matrix. They left the creation of partitions to the subjective evaluation
of the designer. Schkolnick [21] has examined the problem of partitioning a hierarchical structure
(for a hierarchical database) in order to minimize the access time to it for a given access pattern.
Segment sizes and scan information are used to minimize the page faults.
Hammer and Niamir [8] developed two heuristics, grouping and regrouping, and used them to

perform the partitioning. The grouping heuristic starts by initially assigning each attribute to a
different partition. On each iteration, all possible groupings of these partitions are considered and
the one with maximum improvement is chosen as the candidate grouping for the next iteration.
During regrouping, attributes are moved between partitions to achieve any additional improvements
possible. Navathe et al [18] use a two step approach for vertical partitioning. In the first step,
they use the given input parameters in the form of an attribute usage matrix to construct the
attribute affinity matrix on which clustering is performed. After clustering, an empirical objective
function is used to perform iterative binary partitioning. In the second step, estimated cost factors
reflecting the physical environment of fragment storage are considered for further refinement of
the partitioning scheme. Cornell and Yu [6] propose an algorithm, as an extension of Navathe
et al [18] approach, which decreases the number of disk accesses to obtain an optimal binary
partitioning. This algorithm uses specific physical factors such as number of attributes, their length
and selectivity, cardinality of the relation etc. Navathe and Ra [19] present a graph-based
approach to the vertical partitioning problem. This approach is based on the observation that
all pairs of attributes in a fragment must have high "within fragment affinity" but low "between
fragment affinity". Reduction in complexity is claimed as the main advantage of their approach.
There are important differences in the criteria that are used in traditional clustering problems
and data fragmentation problem. In data clustering algorithms, the number of clusters is usually
fixed. Otherwise, the extreme case of only a single cluster in the partition will minimize the inter-
cluster variation. However in the database design application, there is a need to determine the
number of clusters as well and hence the objective function used in data clustering algorithms
cannot be borrowed without any changes to vertical partitioning in databases. Most importantly,
in the database design problem, the number of clusters is an important factor that influences the
trade-off between local and remote transaction processing costs.

2.1 Need for an objective function

Algorithms such as Bond Energy, Binary Vertical Partitioning, Ra's algorithm, and Zahn's algorithm
use the affinity matrix as the input. The attribute affinity is a measure of an imaginary bond
between a pair of attributes. Because only a pair of attributes is involved, this measure does not
reflect the closeness or affinity when more than two attributes are involved. Hence the algorithms
which use attribute affinity matrix are using a measure (that is an ad hoc extrapolation of pairwise
affinity to cluster affinity) that has no bearing on the affinity as measured with respect to the entire
cluster. As a consequence, we believe, it was difficult to show or even characterize affinity values
for the resulting clusters having more than two attributes.
As we wanted to obtain a general objective function and a criterion for describing affinity value
for clusters of different sizes, our approach does not assume an attribute affinity matrix. The input
model that we consider is a matrix which consists of attributes (columns) and the transactions
(rows) with the frequency of access to the attributes for each transaction, as the values in the
matrix. With this input model we overcome the limitations that are inherent to approaches based
on attribute affinity matrix.
As is evident from the discussion in the previous section, there are a number of partitioning
algorithms available both in the database design area and in other application areas. Many of these
algorithms use different criteria to arrive at a partitioning scheme. The objective function used
by one algorithm is not suitable for evaluating the "goodness" of other algorithms. Thus we do
not have a common objective function to compare and evaluate the results of these partitioning
algorithms, or in general evaluate the "goodness" of a particular partitioning scheme. Hence we

need a Partition Evaluator to compare and evaluate different algorithms that use the same input
in the database design process. Since the attribute usage matrix is the most commonly used input
available during the initial design stage, we first design an Evaluator which can be used to evaluate
the "goodness" of partitions arrived at using this input. This Partition Evaluator can be used as
a basis for developing algorithms to create fragments of a relation. With this approach, there is
hope that admissibility aspects of algorithms can be proved.
In addition, this Partition Evaluator has the flexibility to incorporate other information, such as
type of queries (retrieval/updates), allocation information about the partitions, remote processing
cost (transmission cost) and the transaction usage pattern at any particular site. In the next section
we will discuss the development of the Partition Evaluator in detail.

3 Partition Evaluator

In any practical database application, a transaction does not usually require all the attributes of
the tuples of a relation being retrieved during the processing of the transaction. When a relation is
vertically divided into data fragments, the attributes stored in a data fragment that are irrelevant
(i.e., not accessed by the transaction) with respect to a transaction, add to the retrieval and
processing cost, especially when the number of tuples involved in the relation is very large. In
a centralized database system with memory hierarchy, this will lead to too many accesses to the
secondary storage. In a distributed database management system, when the relevant attributes (i.e.,
attributes accessed by a transaction) are in different data fragments and allocated to different sites,
there is an additional cost due to remote access of data. Thus one of the desirable characteristics
of a distributed database management system that we wish to achieve through partitioning is the
local accessibility at any site. In other words, each site must be able to process the transactions
locally with minimal access to data located at remote sites.
Ideally, we would like any transaction to access only the attributes in a single data fragment with
no or minimal access of irrelevant attributes in that fragment. But this is impossible to achieve in the
general case since transactions access different and overlapping subsets of attributes of a relation.
Moreover, transactions are run at different sites and hence some of the data fragments that contain
relevant attributes of a transaction may reside in remote sites. The overall transaction processing
cost in a distributed environment thus consists of local transaction processing cost and the remote
transaction processing cost. Though it is possible to replicate the data to avoid remote processing
cost, as a first step we assume no data redundancy, to avoid modeling the overhead of ensuring data
integrity and consistency as well as additional storage costs. In this paper, we assume that during
the database design process, "partitioning" phase is followed by the "allocation" phase during
which the non-overlapping data fragments obtained during the partitioning phase are allocated to
different sites possibly with some replication. Hence the partition evaluator we propose will evaluate
vertical partitioning schemes wherein the data fragments are non-overlapping in the attributes. Here
non-overlapping refers only to non-primary key attributes. The primary key is preserved in each
partition. This is necessary to obtain the original relation without any loss or addition of spurious
tuples. Discussion of our approach to relations that are not in Boyce-Codd normal form is in a
later section.
The goal of partitioning the attributes and allocating to different sites, is to arrive at a minimum
processing cost for any transaction originating at any site. Since the transaction processing cost has
two components, one due to local processing and another due to remote processing, our Partition
Evaluator that measures the "goodness" of a vertical partitioning scheme also has two corresponding
component terms; we refer to these terms as the "irrelevant local attribute access cost" and the
"relevant remote attribute access cost" terms respectively.

[Figure 3 here: the ideal behavior of the two components, (I) irrelevant local attribute access
cost and (II) relevant remote attribute access cost, plotted against the number of partitions,
from 1 to n.]

Figure 3: Ideal characteristics of PE components

For simplicity, we assume that a single access of a
data fragment corresponds to a unit cost; this assumption can easily be relaxed if more information
is available regarding the access methods, network parameters etc. The irrelevant local attribute
cost term measures the local processing cost of transactions that is due to irrelevant attributes of
data fragments assuming that all the data fragments required by a transaction are available locally.
The relevant remote attribute access term measures the remote processing cost due to relevant
attributes of data fragments that are accessed remotely by transactions; note that the contribution
to remote access cost due to irrelevant attributes is already included in the first term. Since we do
not know during partitioning how the data fragments are allocated, we compute the second term
assuming that data fragments needed by a transaction are located at different sites. In the absence
of any information regarding the transaction execution strategies, we can compute the second term
either by determining the average of all the remote access costs obtained by running the transaction
at every site containing a fragment needed by the transaction or by assuming the transaction to
be run at one of the fragment sites. If more information is available regarding the transaction
strategies, we can incorporate it in the second term.
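The averaging option just described can be sketched as follows. This is our reading of the text, not the paper's final formula: each transaction is notionally run at every site holding a fragment it needs, the relevant attributes it must fetch from the other fragments are counted, and the costs are averaged. Weighting by the frequency q_t and the helper name are our assumptions.

```python
def remote_term(aum, fragments):
    """Illustrative sketch of the relevant remote attribute access term:
    for each transaction, average over every fragment site it could run
    at the count of relevant attributes fetched from the other fragments,
    weighted by the transaction frequency q_t. Not the paper's formula."""
    total = 0.0
    for row in aum:
        used = {j for j, f in enumerate(row) if f != 0}
        q_t = max(row)
        # candidate sites: fragments containing at least one needed attribute
        sites = [i for i, frag in enumerate(fragments) if used & set(frag)]
        if not sites:
            continue
        costs = []
        for i in sites:
            # relevant attributes |R_itk| in every other fragment k != i
            remote = sum(len(used & set(frag))
                         for k, frag in enumerate(fragments) if k != i)
            costs.append(remote)
        total += q_t * sum(costs) / len(costs)
    return total

# Transaction T2 of the sample AUM (q2 = 2) over partition (A1,A3),(A2,A4),(A5):
print(remote_term([[2, 2, 2, 0, 2]], [(0, 2), (1, 3), (4,)]))  # 2 * (2+3+3)/3
```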

The ideal characteristic of a Partition Evaluator is summarized in Figure 3 where the behavior
of the two components as a function of partition size (number of data fragments) is illustrated.
The first component should be maximum for a partition size of 1 and should be zero for a partition
size equal to the number of attributes in the relation. The second component, on the other hand, should
be zero and maximum respectively for these two extremes. In between the two extremes, the first
component should be more responsive to smaller partition sizes while the second component should
be more responsive to larger partition sizes. This is necessary to avoid unnecessary bias toward one
component or another since we know that in a distributed database design, it is undesirable to have
the two extreme cases of the partition size. In addition to being responsive to partition sizes, the
evaluator function should discriminate between different distributions of the attributes among the
fragments for a fixed partition size. Later, we will present experimental evidence to demonstrate
that our Partition Evaluator meets these criteria reasonably well.

3.1 Database and Transaction Specification

We derive a partition evaluator in this section without making any assumptions on the input. The
input assumed is a relation (consisting of a set of attributes) and an attribute usage matrix. We
really do not have to make any assumptions as to whether the input relation is a Universal relation
or whether it is in a particular normal form. The partitioning obtained is non-overlapping except
for the key attributes in the input relation. The key attributes are part of each partition. In a later
section we indicate how normalization and functional dependency information can be easily added
to our approach.
The input model that we use is an attribute usage matrix [AUM] which consists of the attributes
in a relation as columns and the transactions as rows with the frequency of access to the attributes
for each transaction as the values in the matrix.
A representative attribute usage matrix⁴ is shown below:

Trans\Attrs Al A2 A3 A4 A5

T1 0 ql 0 ql ql
T2 q2 q2 q2 0 q2
T3 q3 0 0 q3 q3
T4 0 q4 q4 0 0
T5 q5 q5 q5 0 0

3.2 Definitions and Notations

A partition (scheme) is a division of attributes of a relation into vertical fragments in which for any
two fragments, the set of attributes of one is non-overlapping with the set of attributes of another.
For example, the partition {(1, 3)(2, 4)(5)} defines a collection of fragments in which attributes 1
and 3 are in one fragment, 2 and 4 are in another and 5 is in a separate fragment. The following
are used in the derivation of the Partition Evaluator.

n : Total number of attributes in a relation that is being partitioned.
T : Total number of transactions that are under consideration.
q_t : Frequency of transaction t, for t = 1, 2, ..., T.
M : Total number of fragments of a partition.
n_i : Number of attributes in fragment i.
n_i^kt : Total number of attributes that are in fragment k, accessed remotely
with respect to fragment i by transaction t.
f_ij^t : Frequency of transaction t accessing attribute j in fragment i;
note that f_ij^t is either 0 or q_t.
A_ij : Attribute vector for attribute j in fragment i;
the t-th component of this vector is f_ij^t.
S_it : Set of attributes contained in fragment i that transaction t accesses;
it is empty if t does not need fragment i.
|S_it| : Number of attributes in fragment i that transaction t accesses.
R_itk : Set of relevant attributes in fragment k accessed remotely with respect
to fragment i by transaction t; these are attributes not in fragment i but needed by t.
|R_itk| : Number of relevant attributes in fragment k accessed remotely with
respect to fragment i by transaction t.

⁴In this section, no distinction is made between update and retrieval transactions; it is discussed as one of the
extensions in a later section.

The attribute vector A_11 (assuming attribute A1 is in fragment 1) for the example given above,
with components for transactions T1 through T5, is

A_11 = (0, q2, q3, 0, q5)^T

Consider the partitioning scheme (1,3)(2,4)(5) for the attribute usage matrix given above.
Fragment 1 is (1,3), fragment 2 is (2,4) and fragment 3 is (5). Assume that transaction T2 (i.e., t
= 2) is run at the site where fragment 1 is located. Then S_12 is {1,3} and |S_12| is 2; n_122 is 2 and
n_123 is 1; |R_122| is 1 and |R_123| is 1.
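These quantities can be computed mechanically. The following sketch (the helper names `aum`, `frags`, `S` and `R` are illustrative, not from the paper) derives S_it, n_itk and |R_itk| for the 5-attribute example and the partition (1,3)(2,4)(5):

```python
# aum[t] = set of attributes accessed by transaction t (attributes 1..5),
# read off the AUM of Section 3.1; frags encodes the partition (1,3)(2,4)(5).
aum = {
    "T1": {2, 4, 5},
    "T2": {1, 2, 3, 5},
    "T3": {1, 4, 5},
    "T4": {2, 3},
    "T5": {1, 2, 3},
}
frags = [{1, 3}, {2, 4}, {5}]

def S(i, t):
    """S_it: attributes of fragment i that transaction t accesses."""
    return frags[i] & aum[t]

def R(i, t, k):
    """R_itk: relevant attributes of remote fragment k needed by t."""
    return frags[k] & aum[t] if k != i else set()

# Transaction T2 run at the site of fragment 1 (index i = 0):
assert S(0, "T2") == {1, 3} and len(S(0, "T2")) == 2
assert len(frags[1]) == 2 and len(frags[2]) == 1            # n_122, n_123
assert len(R(0, "T2", 1)) == 1 and len(R(0, "T2", 2)) == 1  # |R_122|, |R_123|
```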
Next we explain how each of the two components of our Partition Evaluator is derived.

3.3 Irrelevant local attribute access cost

For the first component, we use the square-error criterion as given in [11] for data clustering.
The objective here is to obtain a partition which will minimize the square-error for a fixed number
of fragments. This criterion assigns a penalty factor whenever irrelevant attributes are accessed in
a particular fragment.

Let us assume that the n attributes have been partitioned into M fragments (P_1, P_2, ..., P_M) with
n_i attributes in fragment i. Thus \sum_{i=1}^{M} n_i = n. The mean vector V_i for fragment i is defined as

    V_i = (1/n_i) \sum_{j=1}^{n_i} A_{ij},   1 \le i \le M    (1)

This mean vector represents the average access pattern of the transactions over all attributes of
fragment i. For an attribute vector A_{ij}, (A_{ij} - V_i) is called the "difference vector" for attribute j
in fragment i. The square-error for fragment P_i is the sum of the squares of the lengths of the
difference vectors of all the attributes in fragment i. It is given by

    e_i^2 = \sum_{j=1}^{n_i} (A_{ij} - V_i)^T (A_{ij} - V_i),   1 \le i \le M    (2)

If A_{ij} = V_i for all j in fragment i, then e_i^2 will be zero. This occurs in the trivial case when there
is a single attribute in each fragment, and also when all the attributes in each fragment are relevant
to all the transactions that access that fragment. It is the latter case that we are interested in; to
avoid the former case, we will use the second component.
The square-error for the entire partition scheme containing M fragments is given by

    E_M^2 = \sum_{i=1}^{M} e_i^2    (3)

The smaller the value of E_M^2, the smaller the cost due to access of irrelevant attributes. E_M^2, however,
does not reflect the cost that might be incurred by accessing attributes remotely when the fragments
are at different sites. Hence, for distributed database applications, we cannot evaluate partitions
on the basis of E_M^2 alone. The behavior of E_M^2 is shown in figure 4 as curve I for a data set
consisting of 10 attributes and eight transactions (more details of this example are discussed in the
next section).
From this graph, we can see that the minimum value of E_M^2 is certainly achieved for n fragments
in an n-attribute system (the minimum value may be reached even for fewer than n fragments,
depending on the AUM). We would like a number of fragments that is typically much less
than n while still attaining the least E_M^2 value. In some data clustering techniques, the number of
clusters is minimized using an index called the Davies-Bouldin (DB) index [11], which is a measure
of the spread between the centers of the clusters. For a typical data set with small standard deviation
(less than 0.1), this index reaches a global minimum (highest spread) for a partition size that falls
between the extremes. From curve I of figure 4, we can see that the standard deviation of the
data set given by the attribute usage matrix is greater than 0.1, which does not meet the condition
for using the DB index. For this reason, we seek another quantity which will reflect the remote
access cost. In the next section, we discuss the development of a quantity for remote attribute
access cost.
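The square-error of equations (1)-(3) can be sketched as follows. The data is the Section 3.1 example, with all transaction frequencies set to 1; the unit frequencies and the helper names are assumptions made purely for illustration:

```python
# Minimal sketch of equations (1)-(3): square-error of a partition from mean
# vectors, for the 5-attribute example with q_t = 1 for all t (an assumption).
aum = {  # transaction -> set of attributes accessed (attributes 1..5)
    "T1": {2, 4, 5}, "T2": {1, 2, 3, 5}, "T3": {1, 4, 5},
    "T4": {2, 3}, "T5": {1, 2, 3},
}
q = {t: 1.0 for t in aum}               # q_t = frequency of transaction t

def attr_vector(j):                     # A_ij: t-th component is f_tj (0 or q_t)
    return [q[t] if j in aum[t] else 0.0 for t in sorted(aum)]

def square_error(fragment):             # e_i^2 of equation (2)
    vecs = [attr_vector(j) for j in fragment]
    mean = [sum(col) / len(vecs) for col in zip(*vecs)]   # V_i, equation (1)
    return sum(sum((a - v) ** 2 for a, v in zip(vec, mean)) for vec in vecs)

partition = [{1, 3}, {2, 4}, {5}]
em2 = sum(square_error(f) for f in partition)             # E_M^2, equation (3)
print(em2)                              # 3.0 for this data
```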

3.4 Relevant Remote Attribute Access Cost

Now we include the second component, which computes a penalty factor corresponding to
the function shown earlier in figure 3. Given a partition, for each transaction running on a
fragment, compute the ratio of the number of remote attributes to be accessed to the total number
of attributes in each of the remote fragments. This is summed over all fragments and over all
transactions, giving the following equation. The second term is given by

    E_R^2 = \sum_{t=1}^{T} A_i \left[ \sum_{k \ne i} q_t^2 \, |R_{itk}| \, \frac{|R_{itk}|}{n_{itk}} \right]    (4)

Here A5 is an operator that is either an average, minimum or maximum over all i. These
different choices of the operator give rise to average, optimistic and pessimistic estimates of the
remote access cost. If specific information is available regarding transaction execution strategies,
then we can determine for each transaction t, the remote fragments accessed by the transaction

5A is introduced to keep the formula general and applicable to a wide class of problems; also, a desired new
operator can be substituted for A without having to change the objective function.

and the remote access cost can be refined accordingly. In our experimental investigation, we use
the optimistic estimate for illustration.
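A minimal sketch of equation (4) with the optimistic (minimum over i) choice of the operator A, using the same illustrative 5-attribute data and unit frequencies as before (an assumption for illustration, not the paper's experiment):

```python
# Equation (4) with the optimistic estimate: for each transaction, take the
# minimum remote-access cost over the candidate sites (fragments it uses).
aum = {
    "T1": {2, 4, 5}, "T2": {1, 2, 3, 5}, "T3": {1, 4, 5},
    "T4": {2, 3}, "T5": {1, 2, 3},
}
q = {t: 1.0 for t in aum}               # unit frequencies, assumed
frags = [{1, 3}, {2, 4}, {5}]

def remote_cost(t, i):                  # inner sum of equation (4) for one i
    total = 0.0
    for k, fk in enumerate(frags):
        if k == i:
            continue
        r = len(fk & aum[t])            # |R_itk|
        total += q[t] ** 2 * r * r / len(fk)
    return total

er2 = sum(
    min(remote_cost(t, i)               # optimistic: minimum over candidate sites
        for i, fi in enumerate(frags) if fi & aum[t])
    for t in aum
)
print(er2)                              # 4.5 for this data
```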
We will show in the next section how we obtained the above form for the second term. Our
Partition Evaluator (PE) function is given by

    PE = E_M^2 + E_R^2

3.5 Compatibility of PE Components

As the second component has the frequency in quadratic form, we need to make sure that both
components are compatible in terms of the units they produce. Specifically, we need to see whether
the frequency appears in the same way in the first component. In order to do that, it is instructive
to look closely at the first term. In particular, we will rewrite E_M^2 so as to identify the
contributions of each transaction to the square-error term of each fragment in the partition.

    E_M^2 = \sum_{i=1}^{M} \sum_{j=1}^{n_i} (A_{ij} - V_i)^T (A_{ij} - V_i)    (5)

Now the mean vector V_i for fragment i can be written component-wise: since f_{tj}^i is q_t for each
of the |S_{it}| attributes of fragment i that transaction t accesses, and 0 otherwise, the t-th component
of V_i is

    (1/n_i) \sum_{j=1}^{n_i} f_{tj}^i = |S_{it}| q_t / n_i    (6)

The t-th component of the attribute vector A_{ij} is f_{tj}^i. Substituting these components into
equation 5 gives

    E_M^2 = \sum_{i=1}^{M} \sum_{j=1}^{n_i} \sum_{t=1}^{T} ( f_{tj}^i - |S_{it}| q_t / n_i )^2    (7)

The above formula can be rewritten as follows so as to allow us to identify the different components
that make up the irrelevant local attribute access term:

    E_M^2 = \sum_{i=1}^{M} \sum_{j=1}^{n_i} \sum_{t=1}^{T} [ \delta_{jt} ( q_t - |S_{it}| q_t / n_i )^2 + (1 - \delta_{jt}) ( |S_{it}| q_t / n_i )^2 ]    (8)

where \delta_{jt} = 1 if attribute j is accessed by transaction t, and \delta_{jt} = 0 otherwise.
The first term, (q_t - |S_{it}| q_t / n_i)^2, represents relevant attribute accesses and the second term
represents irrelevant attribute accesses: even when a component of A_{ij} is 0, we still incur the mean
squared quantity (|S_{it}| q_t / n_i)^2.
Since \sum_{j=1}^{n_i} \delta_{jt} = |S_{it}| and \sum_{j=1}^{n_i} (1 - \delta_{jt}) = n_i - |S_{it}|, summing over j gives

    E_M^2 = \sum_{i=1}^{M} \sum_{t=1}^{T} [ |S_{it}| q_t^2 ( 1 - |S_{it}|/n_i )^2 + ( n_i - |S_{it}| ) q_t^2 ( |S_{it}|/n_i )^2 ]    (9)

The contribution of a single transaction t to the square error of fragment i is thus

    E_{it} = q_t^2 |S_{it}| ( 1 - |S_{it}|/n_i )^2 + q_t^2 ( n_i - |S_{it}| ) ( |S_{it}|/n_i )^2    (10)

If n_i = |S_{it}| then E_{it} = 0. This implies that transaction t accesses all attributes in fragment
i whenever it accesses fragment i. We can still reduce the above equation by factoring out
q_t^2 |S_{it}| ( 1 - |S_{it}|/n_i ):

    E_M^2 = \sum_{i=1}^{M} \sum_{t=1}^{T} q_t^2 |S_{it}| ( 1 - |S_{it}|/n_i ) [ ( 1 - |S_{it}|/n_i ) + |S_{it}|/n_i ]    (11)

Simplifying the equation above we get,



[Figure 4 appears here: two curves plotted against the number of partitions, curve I showing the
irrelevant local attribute access cost and curve II the relevant remote attribute access cost.]

Figure 4: Behavior of PE components for an example

    E_M^2 = \sum_{i=1}^{M} \sum_{t=1}^{T} q_t^2 |S_{it}| ( 1 - |S_{it}|/n_i )    (12)


The equation above is equivalent to the square-error expression of equation 5, but in a much simpler
form. We can clearly see from this equation the contribution to E_M^2 by the irrelevant attributes;
(1 - |S_{it}|/n_i) is the fraction of irrelevant attributes in fragment i as far as transaction t is concerned.
As mentioned earlier, E_M^2 is the cost factor only for local transaction processing.
Now we can see that the E_R^2 term shown earlier is very similar in form to the E_M^2 term.
Also, the necessity for having a squared frequency term in E_R^2 is now clear. Hence the PE is given by

    PE = E_M^2 + E_R^2    (13)


Curve II of Figure 4 illustrates the behavior of E_R^2 for the same example of 10 attributes and
8 transactions.
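As a numerical sanity check, the simplified form of equation (12) can be verified against the mean-vector form of equation (3); the data and unit frequencies below are illustrative assumptions, as before:

```python
# Sketch: verify that the closed form q^2 |S|(1 - |S|/n) of equation (12)
# matches the mean-vector square error of equation (3) on sample data.
aum = {
    "T1": {2, 4, 5}, "T2": {1, 2, 3, 5}, "T3": {1, 4, 5},
    "T4": {2, 3}, "T5": {1, 2, 3},
}
q = {t: 1.0 for t in aum}               # unit frequencies, assumed
frags = [{1, 3}, {2, 4}, {5}]

def em2_direct():                       # equation (3) via mean vectors
    total = 0.0
    for f in frags:
        vecs = [[q[t] if j in aum[t] else 0.0 for t in sorted(aum)] for j in f]
        mean = [sum(c) / len(vecs) for c in zip(*vecs)]
        total += sum(sum((a - v) ** 2 for a, v in zip(vec, mean)) for vec in vecs)
    return total

def em2_closed():                       # equation (12)
    return sum(
        q[t] ** 2 * len(f & aum[t]) * (1 - len(f & aum[t]) / len(f))
        for f in frags for t in aum
    )

assert abs(em2_direct() - em2_closed()) < 1e-9   # both give 3.0 here
```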

4 Analysis of the Partition Evaluator

The final form of the Partition Evaluator is given in equation 13. It is easy to infer that both
components of the PE satisfy the ideal characteristics at the boundary values of 1 and n fragments. We also
suspect that the objective function derived in this paper has the "unique minimum" property across
the partition sizes. Also, within each partition size, we suspect a "unique minimum" for the distribution of
attributes in that partition. This is being investigated further.
In order to analyze and test the behavior of the PE for different numbers of partitions and
different distributions of attributes among those partitions, an exhaustive enumeration algorithm
was implemented. The input to the program is an attribute usage matrix. We used both examples
from previous work and randomly generated attribute usage matrices as input. We ran the exhaustive
enumeration algorithm on a large number of inputs to test the behavior of the partition evaluator.
Below we describe the computation of PE for the AUM shown below and also compare the outcomes
of other algorithms for this input.
Input: Attribute Usage matrix.

Trans.\Attrs. 1 2 3 4 5 6 7 8 9 10

T1 25 0 0 0 25 0 25 0 0 0
T2 0 50 50 0 0 0 0 50 50 0
T3 0 0 0 25 0 25 0 0 0 25
T4 0 35 0 0 0 0 35 35 0 0
T5 25 25 25 0 25 0 25 25 25 0
T6 25 0 0 0 25 0 0 0 0 0
T7 0 0 25 0 0 0 0 0 25 0
T8 0 0 15 15 0 15 0 0 15 15

For each number of fragments (from 1 to 10), the Partition Evaluator was computed.
An exhaustive enumeration program was written in C/C++ to produce all the possible combinations
of attributes, with the number of fragments varying from 1 to 10. For this experiment, we
assumed that the number of sites is the same as the number of elements in the partition scheme.
The Partition Evaluator was applied to each of these combinations. We execute a transaction on
a fragment (i.e., site) only if that fragment contains at least a single attribute accessed by that
transaction; otherwise that transaction is not run on that fragment. Each transaction is run
on each fragment and the minimum, maximum and average values of the Partition Evaluator are
calculated. The total number of partitioning schemes evaluated was 115975. The optimal (minimum) values along
with the partitioning scheme for each partition size are given below. The results are plotted in figure
5. Although not shown, the partition evaluator clearly discriminates among different distributions
of attributes within a given partition.
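The enumeration step can be sketched as follows: generating every set partition of the attribute set and counting them reproduces the figure of 115975 for ten attributes, which is the Bell number B(10). The function name is illustrative Python, not the testbed's C/C++ code:

```python
def set_partitions(items):
    """Yield every partition of `items` as a list of blocks."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for smaller in set_partitions(rest):
        # put `first` into each existing block in turn ...
        for i, block in enumerate(smaller):
            yield smaller[:i] + [[first] + block] + smaller[i + 1:]
        # ... or into a new block of its own
        yield [[first]] + smaller

count = sum(1 for _ in set_partitions(list(range(1, 11))))
print(count)   # 115975, the Bell number B(10)
```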



[Figure 5 appears here: the minimum Partition Evaluator value plotted against the number of
partitions, from 1 to 10.]

Figure 5: Behavior of partition evaluator for an example

Number Of Fragments   Min PE Value   Partition Scheme

 1                    15085          (1 2 3 4 5 6 7 8 9 10)
 2                     8457          (1 4 5 6 7 10)(2 3 8 9)
 3                     5820          (1 5 7)(2 3 8 9)(4 6 10)
 4                     6024          (1 5)(2 3 8 9)(4 6 10)(7)
 5                     6874          (1 5)(2 3 8 9)(4 6)(7)(10)
 6                     7724          (1 5)(2 3 8 9)(4)(6)(7)(10)
 7                     8976          (1)(2 3 8 9)(4)(5)(6)(7)(10)
 8                    11692          (1)(2 8 9)(3)(4)(5)(6)(7)(10)
 9                    14000          (1)(2 8)(3)(4)(5)(6)(7)(9)(10)
10                    18350          (1)(2)(3)(4)(5)(6)(7)(8)(9)(10)

As we can see from the above table, the optimum value (third row above) is found for a
partition of three fragments. The Partition Evaluator values above are the minima of the values
where the second component of the PE is calculated using the optimistic (minimum value) estimate.
For this particular example, other algorithms such as Ra's [19], Zahn's [25] and Binary Vertical
Partitioning [18] identify the above mentioned partition set (i.e., 3 fragments) as optimum. In
Zahn's approach, once the maximum spanning tree is obtained, two different conditions can be
used to determine the partitions [11]. For this example, when these two conditions are applied in
Zahn's algorithm, they produce two different partitioning schemes. One of them is the same as
the optimal partitioning scheme obtained above and the other one is not. Hence, one may want
to use another criterion (such as the PE) to guide the selection of the most appropriate condition
when faced with a number of alternatives. Since producing a partitioning scheme using the
Bond Energy Algorithm is subjective in nature, the Partition Evaluator can be used to guide the
partitioning once the Bond Energy Algorithm is applied. Ra's and the Binary Vertical Partitioning
algorithms were applied to another example with twenty attributes and fifteen transactions [18].
The results of these two algorithms were different: Ra's algorithm produced five fragments
and the Binary Vertical Partitioning algorithm produced four fragments. The Partition Evaluator was
applied to both results and it was found that the four-fragment result is indeed better than the
five-fragment result. This example highlights the usefulness of the Partition Evaluator to evaluate
the results of the different partitioning algorithms and select the appropriate one.
EXAMPLE: The following discussion explains how one of these figures is arrived at.
Number Of Fragments Min PE Value Partition Scheme

3 5820 (1 5 7)(2 3 8 9)(4 6 10)

Let us call (1 5 7) fragment I, (2 3 8 9) fragment II and (4 6 10) fragment III. The first
step is to calculate the square error for the given input.
The mean vector for each fragment is:

Trans.   Fragment I   Fragment II   Fragment III

T1            25            0             0
T2             0           50             0
T3             0            0            25
T4            12           18             0
T5            25           25             0
T6            17            0             0
T7             0           13             0
T8             0            8            15

1) The square error:

    E_M^2 = \sum_{i=1}^{M} \sum_{j=1}^{n_i} (A_{ij} - V_i)^T (A_{ij} - V_i)

    E_M^2 = 1234 + 2078 + 0 = 3312

2) Cost due to relevant remote attribute accesses of a transaction (T4):

                                  Value      Minimum of the values
T4 run @ Fragment I:    35^2 * 2 * 2/4 = 1225
T4 run @ Fragment II:   35^2 * 1 * 1/3 = 408        408
T4 run @ Fragment III:  cannot be run

The total cost of accessing relevant remote attributes by all transactions is
408 + 1875 + 225 = 2508.

PE = 3312 + 2508 = 5820.
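The worked example can be reproduced mechanically. The sketch below (illustrative Python, not the testbed code) computes E_M^2 via the closed form of equation (12) and E_R^2 via equation (4) with the optimistic estimate; exact arithmetic yields 5816.7 rather than 5820 because the hand computation rounds intermediate means:

```python
aum = {  # transaction -> (frequency, set of attributes accessed)
    "T1": (25, {1, 5, 7}),
    "T2": (50, {2, 3, 8, 9}),
    "T3": (25, {4, 6, 10}),
    "T4": (35, {2, 7, 8}),
    "T5": (25, {1, 2, 3, 5, 7, 8, 9}),
    "T6": (25, {1, 5}),
    "T7": (25, {3, 9}),
    "T8": (15, {3, 4, 6, 9, 10}),
}
frags = [{1, 5, 7}, {2, 3, 8, 9}, {4, 6, 10}]

# E_M^2 via the closed form of equation (12): q^2 |S| (1 - |S|/n_i)
em2 = sum(q * q * len(f & a) * (1 - len(f & a) / len(f))
          for f in frags for q, a in aum.values())

# E_R^2 via equation (4) with the optimistic (minimum over sites) estimate
def remote(attrs, q, i):
    return sum(q * q * len(fk & attrs) ** 2 / len(fk)
               for k, fk in enumerate(frags) if k != i)

er2 = sum(min(remote(a, q, i) for i, fi in enumerate(frags) if fi & a)
          for q, a in aum.values())

pe = em2 + er2
print(round(em2, 1), round(er2, 1), round(pe, 1))   # 3308.3 2508.3 5816.7
```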

The above example shows the utility of the Partition Evaluator developed in this paper to
evaluate the results of any partitioning scheme, and especially to compare previously developed
partitioning algorithms.

4.1 Implementation Status

A distributed database design testbed is under development at the Database Systems Research &
Development Center, Univ. of Florida, Gainesville. The testbed includes several different vertical
partitioning algorithms and modules that compare and evaluate the results of these algorithms.
The algorithms that have been developed as part of this effort are:
Bond Energy Algorithm [14].

Binary Vertical Partitioning Algorithm [18].

Minyoung Ra's Graphical Algorithm [19].

Exhaustive Enumeration Algorithm [16].

Figure 6 gives an overall outline of the design testbed. An attribute usage matrix (AUM)
can be provided explicitly by the user; it can also be generated by the testbed using a random
number generator for a given number of attributes and transactions. If necessary, the attribute
affinity matrix (AAM) is computed by the testbed from the AUM, depending on the algorithm chosen.
There are 3 choices for algorithms: i) bond energy algorithm followed by either binary vertical
partitioning or n-ary vertical partitioning (which is implemented as a recursive application of the
binary vertical partitioning algorithm on the outcome of an earlier step), ii) Ra's algorithm, and iii)
exhaustive algorithm for the approach presented in this paper. The result of each algorithm can be
viewed separately or can be compared with the exhaustive search algorithm using the PE derived
in this paper. Finally, the best partitioning scheme (using either MIN, MAX, or AVERAGE) can
be obtained and displayed.
The current implementation uses Motif on a SUN workstation to provide a menu-driven interface
to the user. It is relatively easy to change the interface to reflect additions and changes to
the suite of algorithms and other support provided. A graph drawing package (both 2D and 3D)
is being interfaced to view the PE along the two axes shown in this paper as well as along a third
dimension which corresponds to the distribution of attributes within each partition.

5 Potential for Extensions

In this section we concretely outline ways in which the objective function can be extended. So far,
we have discussed the formulation of an objective function (Partition Evaluator) assuming that

Figure 6: Distributed Database Design testbed prototype

the attribute usage matrix is the only information available to the designer (conceptual design) during
the partitioning phase of the design process. In this section we outline how to modify the objective
function in the presence of additional information. Note that this work is in a preliminary stage and
hence we provide only a brief description of the approach we plan to take.

Dependencies: If a universal relation or a relation which is not in the Boyce-Codd normal form
is used as input, several constraints need to be applied when a partition is generated. The
cost of maintaining functional and multi-valued dependencies is reduced if all the attributes
participating in a dependency are either in the same fragment or at least allocated to the same
site/node. Both cases can be easily accommodated without changing the objective function
by defining a dependency checker. The dependency checker evaluates a partition
with respect to the dependencies and either rejects the partition for cost computation or tags
the fragments so as to facilitate their allocation to the same site. In other words,
during the design process any fragment that does not include all the attributes of a dependency
could be rejected even though that fragment might give the minimum value for the partition
evaluator; such partitioning schemes are deemed not admissible. In the other alternative, the
information is passed on to the allocation phase.
Fixed partition size: In many cases, the number of sites is known a priori and the objective
function needs to minimize the cost for a given number of fragments. This, again, does not
require any changes, as one can choose the minimum cost among all the alternatives for the number
of fragments specified.
Update vs. retrieval transactions: Although no distinction was made between update and retrieval
transactions in the earlier discussion, from the processing cost point of view it is important
to distinguish between them. Typically, retrieval-only transactions access data items
only once, whereas updated data items incur the additional overhead of being written back
to secondary storage (sometimes more, if the data item has already been flushed to secondary
storage). A simplistic approach to this problem is to attach a weight factor to the transaction
(or its frequency) to distinguish update-type from retrieval-type transactions. Multiplying the
frequency by a weight factor has the advantage of distinguishing updates from accesses at the
attribute level (although this may not be useful at the physical record level in general, if the
attributes are stored as two different relations then it will make a difference). For example,
we can increase the frequency (double it) of update-type transactions to reflect their processing
cost. In our simulation, we can take into account the type of queries by increasing the
frequency of update-type queries in the attribute usage matrix.
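The weighting idea above can be sketched in a few lines; the factor 2.0 and the sample data are illustrative assumptions, not values from the paper:

```python
raw = {  # transaction -> (type, frequency); illustrative data only
    "T1": ("retrieval", 25),
    "T2": ("update", 50),
}
UPDATE_WEIGHT = 2.0   # assumed factor; "double it" per the discussion above
effective_q = {t: q * UPDATE_WEIGHT if kind == "update" else q
               for t, (kind, q) in raw.items()}
print(effective_q)   # {'T1': 25, 'T2': 100.0}
```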
Local vs. remote processing cost : In our formulation of PE, we have assumed a unit cost for
processing local as well as remote data once the remote data reaches the local node. We also
assumed a uniform transmission cost between any two sites in the network. Realistically, however,
there is a cost difference between accessing local data, possibly using access methods,
and remote data, for which there are not likely to be any access methods (unless dynamically
created at the local site). Also, the transmission costs between any two nodes are not likely to
be the same.
As a first cut, we can include another factor to reflect the ratio of the local processing cost to
the cost of processing data coming from remote sites. This factor can be included by giving different
weights to the two component terms in equation 13. If W1 is the factor for processing local
data and W2 is the factor for processing remote data, this can be accommodated by modifying
equation 13 as follows:

    PE = E_M^2 + (W2/W1) E_R^2    (14)

Note, however, that a better way would be to integrate the factors into the formula.
Non-uniform transmission cost between nodes can be taken into account by modifying equation
4 to include a multiplicative factor TR_ik (which reflects the actual transmission cost per unit data
between sites i and k) inside the inner summation. That is, the second term of the PE can be
modified to

    E_R^2 = \sum_{t=1}^{T} A_i \left[ \sum_{k \ne i} TR_{ik} \, q_t^2 \, |R_{itk}| \, \frac{|R_{itk}|}{n_{itk}} \right]    (15)
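Under illustrative data and unit frequencies (assumptions, as in earlier sketches), equation (15) amounts to scaling each remote term by a hypothetical TR matrix:

```python
frags = [{1, 3}, {2, 4}, {5}]
aum = {"T1": (1.0, {2, 4, 5}), "T2": (1.0, {1, 2, 3, 5})}   # illustrative
TR = [[0, 2, 1],   # TR[i][k]: hypothetical transmission cost per unit data
      [2, 0, 3],
      [1, 3, 0]]

def remote(attrs, q, i):
    # inner sum of equation (15): each term scaled by TR[i][k]
    return sum(TR[i][k] * q * q * len(fk & attrs) ** 2 / len(fk)
               for k, fk in enumerate(frags) if k != i)

er2 = sum(min(remote(a, q, i) for i, fi in enumerate(frags) if fi & a)
          for q, a in aum.values())
print(er2)   # 5.0 with these illustrative numbers
```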
Replication: Replication of data at two or more sites provides for availability and the reduced
cost of processing transactions using local data. This comes at the expense of maintaining the
consistency of replicated data by suitably propagating updates.
Without taking the update propagation costs into account, the Partition Evaluator can be
extended to accommodate replication of fragments as follows. We assume that attributes are
remotely accessed only if they are not available locally. Thus when computing the second
term, only those attributes that exist in a fragment remotely accessed with respect to the
site of another fragment (call it "local") but do not exist in the local fragment are taken
into account. Replication does not affect the value of the first term in its current formulation.
Hence, the form of the Partition Evaluator remains the same for this limited case. Then there
is the problem of determining the number of replicated sites and their allocation. The cost
contributed by relevant remote attribute accesses for the transactions under consideration can
be used as a heuristic to determine whether the fragment should be replicated and the site(s)
where it should be replicated. Again, this does not include the cost of update propagation.

6 Summary and Future Work

In this paper, we have presented a general approach to the vertical partitioning problem. Our
study brought out the problems associated with the use of attribute affinity matrix currently used
in almost all of the earlier data partitioning algorithms. We started with the objective function
used in clustering methods and extended it to suit the requirements of database design. As a result
we have brought together work in two isolated areas and established a correspondence between
them. Using the objective function derived in this paper and the approach proposed, one can
evaluate any vertical partitioning algorithm that uses the attribute usage matrix as input. We have
identified and expressed an objective function quantitatively before embarking on the development
of algorithms. It will now be easier to develop algorithms (heuristics-based or otherwise) exploiting
the properties of the function being optimized. Our Partition Evaluator satisfies the need for a
common criterion or objective function and can be used to compare and evaluate the extant vertical
partitioning algorithms. Finally, the PE developed in this paper has the flexibility to incorporate
additional design information such as type of queries (retrieval/updates), transmission cost, and
replication as briefly outlined in the previous section.
We are currently analyzing the objective function to characterize the "unique minimum" property.
We are developing heuristic algorithms for the objective function derived in this paper. We are also
in the process of developing an integrated objective function that incorporates other inputs to the database
design process (briefly outlined above), such as query types, constraints on allocation, transmission
cost, and transaction usage patterns at particular sites. We are also extending the approach taken
in this paper to horizontal partitioning and mixed/hybrid partitioning, to include most of the
alternatives shown in figure 2. We plan to compare different algorithms in more detail using the
partition evaluator. We hope to integrate the Partition Evaluator into a database design testbed
which will help designers choose the right algorithm for initial database design as well as for redesign.


References

[1] P. M. G. Apers. Data Allocation in Distributed Database Systems. ACM TODS, Vol. 13, No. 3, pp. 263-304, 1988.
[2] M. Babad. A record and file partitioning model. Commun. ACM 20, 1 (Jan. 1977).
[3] S. Ceri, S. B. Navathe, and G. Wiederhold. Distribution Design of Logical Database Schemas. IEEE Trans. on Software Engineering, Vol. SE-9, No. 4, pp. 487-503, July 1983.
[4] S. Ceri, S. Pernici, and G. Wiederhold. Optimization Problems and Solution Methods in the Design of Data Distribution. Information Sciences, Vol. 14, No. 3, pp. 261-272.
[5] P. Chu. A Transaction-Oriented Approach to Attribute Partitioning. Information Systems, Vol. 17, No. 4, pp. 329-342, 1992.
[6] D. Cornell and P. Yu. A Vertical Partitioning Algorithm for Relational Databases. Proc. Third International Conference on Data Engineering, Feb. 1987, pp. 30-35.
[7] M. Eisner and D. Severance. Mathematical techniques for efficient record segmentation in large shared databases. J. ACM 23, 4 (Oct. 1976).
[8] M. Hammer and B. Niamir. A heuristic approach to attribute partitioning. In Proceedings ACM SIGMOD Int. Conf. on Management of Data (Boston, Mass., 1979), ACM, New York.
[9] J. Hoffer and D. Severance. The Uses of Cluster Analysis in Physical Database Design. In Proc. 1st International Conference on VLDB, Framingham, MA, 1975, pp. 69-86.
[10] J. Hoffer. An integer programming formulation of computer database design problems. Inf. Sci. 11 (July 1976).
[11] A. Jain and R. Dubes. Algorithms for Clustering Data. Prentice Hall Advanced Reference Series, Englewood Cliffs, NJ, 1988.
[12] J. Kittler. A locally sensitive method for cluster analysis. Pattern Recognition 8, 22-33.
[13] S. Lu and K. Fu. A sentence-to-sentence clustering procedure for pattern analysis. IEEE Transactions on Systems, Man and Cybernetics SMC-8, 381-389.
[14] W. McCormick, P. Schweitzer, and T. White. Problem Decomposition and Data Reorganization by a Clustering Technique. Operations Research, 20, Sep. 1972.
[15] S. March and D. Severance. The determination of efficient record segmentation and blocking factors for shared data files. ACM Trans. Database Syst. 2, 3 (Sept. 1977).
[16] R. Muthuraj. A formal approach to the vertical partitioning problem in distributed database design. M.S. Thesis, Dept. of Computer Science, Univ. of Florida, Aug. 1992.
[17] R. Muthuraj, S. Chakravarthy, R. Varadarajan, and S. B. Navathe. A formal approach to the vertical partitioning problem in distributed database design. To appear in Proc. of PDIS-2, San Diego, Jan. 1993.
[18] S. Navathe, S. Ceri, G. Wiederhold, and J. Dou. Vertical Partitioning Algorithms for Database Design. ACM Transactions on Database Systems, Vol. 9, No. 4, Dec. 1984.
[19] S. Navathe and M. Ra. Vertical Partitioning for Database Design: A Graphical Algorithm. ACM SIGMOD, Portland, June 1989.
[20] B. Niamir. Attribute Partitioning in a Self-Adaptive Relational Database System. Ph.D. Dissertation, M.I.T. Lab. for Computer Science, Jan. 1978.
[21] M. Schkolnick. A Clustering Algorithm for Hierarchical Structures. ACM TODS, Vol. 2, No. 1, pp. 27-44, March 1977.
[22] E. Shaffer, R. Dubes, and A. Jain. Single-link characteristics of a mode-seeking algorithm. Pattern Recognition 11, 65-73.
[23] M. T. Özsu and P. Valduriez. Principles of Distributed Database Systems. Prentice Hall, Englewood Cliffs, New Jersey, 1991.
[24] A. Torn. Cluster analysis using seed points and density-determined hyperspheres as an aid to global optimization. IEEE Transactions on Systems, Man and Cybernetics SMC-7, 610-616.
[25] C. Zahn. Graph-theoretical methods for detecting and describing Gestalt Clusters. IEEE Transactions on Computers C-20, 68-86.
