Title: Integrated verification of constraints and event-and-action-oriented business rules
Permanent Link: http://ufdc.ufl.edu/UF00100852/00001
 Material Information
Title: Integrated verification of constraints and event-and-action-oriented business rules
Physical Description: Book
Language: English
Creator: Shi, Yuan
Publisher: University of Florida
Place of Publication: Gainesville, Fla.
Publication Date: 2001
Copyright Date: 2001
 Subjects
Subject: Rule-based programming   ( lcsh )
Electronic commerce -- Software   ( lcsh )
Computer and Information Science and Engineering thesis, M.S   ( lcsh )
Dissertations, Academic -- Computer and Information Science and Engineering -- UF   ( lcsh )
Genre: government publication (state, provincial, territorial, dependent)   ( marcgt )
bibliography   ( marcgt )
theses   ( marcgt )
non-fiction   ( marcgt )
 Notes
Summary: KEYWORDS: rule warehouse system, rule verification, business rule, constraints, event-and-action oriented rule
Thesis: Thesis (M.S.)--University of Florida, 2001.
Bibliography: Includes bibliographical references (p. 64-67).
System Details: System requirements: World Wide Web browser and PDF reader.
System Details: Mode of access: World Wide Web.
Statement of Responsibility: by Yuan Shi.
General Note: Title from first page of PDF file.
General Note: Document formatted into pages; contains ix, 68 p.; also contains graphics.
General Note: Vita.
 Record Information
Bibliographic ID: UF00100852
Volume ID: VID00001
Source Institution: University of Florida
Holding Location: University of Florida
Rights Management: All rights reserved by the source institution and holding location.
Resource Identifier: oclc - 49208000
alephbibnum - 002763031
notis - ANP1051


Full Text











INTEGRATED VERIFICATION OF CONSTRAINTS AND EVENT-AND-ACTION-
ORIENTED BUSINESS RULES


















By

YUAN SHI


A THESIS PRESENTED TO THE GRADUATE SCHOOL
OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT
OF THE REQUIREMENTS FOR THE DEGREE OF
MASTER OF SCIENCE

UNIVERSITY OF FLORIDA


2001




























Copyright 2001

by

Yuan Shi



























To my parents and wife















ACKNOWLEDGMENTS

This research would not be complete without the contributions of several people.

I would like to thank Dr. Stanley Y. W. Su and Dr. Herman Lam for their support and

encouragement throughout the entire project. Their guidance and wisdom have allowed

me to finish this work in a timely fashion. I would also like to express my thanks to Dr.

Steve Thebaut for serving on my committee and for his valuable and constructive

suggestions on my thesis.

I would also like to thank Ph.D. student Youzhong Liu, for his great and kind help

on this work. Thanks also go to all my friends for their constant support and

encouragement. Furthermore, I would like to express my thanks to all the members at

Database Systems D&R Center, especially to Sharon Grant, for their assistance and

cooperation.

Finally, I would like to thank my wife, Feng Li; my mom, Fuyun Dong; my dad,

Mingjin Shi; and my brother Qiang Shi for their encouragement and care during my

graduate work.
















TABLE OF CONTENTS

ACKNOWLEDGMENTS

LIST OF FIGURES

ABSTRACT

1. INTRODUCTION

2. SURVEY OF RELATED WORK
2.1 Active Object Model: A Neutral Knowledge Representation
2.2 Logic Rule Verification
2.3 Event-and-Action-Oriented Rule Verification

3. PROBLEM DEFINITIONS
3.1 Basic Definitions
3.1.1 Definitions for Event-and-action-oriented Rules
3.1.2 Rule Execution Model
3.1.3 Definitions for Objects and Object States
3.1.4 Modeling the Side-Effects of Methods
3.2 Non-Termination Problem
3.3 Inconsistency and Redundancy Problems
3.3.1 Definition for Inconsistency
3.3.2 Definition for Redundancy
3.4 Summary

4. VERIFICATION ALGORITHMS
4.1 General Approach
4.2 Assumptions
4.3 Non-Termination Algorithm
4.3.1 Triggering Graphs, Activation Graphs, and Deactivation Graphs
4.3.2 Algorithm for Non-Termination Detection
4.3.3 Algorithm Completeness and Soundness
4.4 Reduction Algorithm
4.5 Extension to an Existing Logic Rule Verification Algorithm
4.5.1 Inconsistency Algorithm
4.5.1.1 A Resolution-based Inconsistency Detection Algorithm
4.5.1.2 Inconsistency
4.5.1.3 Contradiction
4.5.1.4 Method Invocation Support
4.5.1.5 Putting Them Together
4.5.2 Redundancy Algorithm

5. SYSTEM IMPLEMENTATION
5.1 General Description
5.2 Non-Termination Detection
5.2.1 Triggering Graph Formation
5.2.2 Preliminary Cycles and Triggering Cycles Detection
5.2.3 Activation and Deactivation Graph Formation
5.2.4 Non-termination
5.3 Partitioning Formation
5.4 Inconsistency and Contradiction Detection
5.5 Redundancy Detection

6. CONCLUSION AND FUTURE WORK
6.1 Conclusion
6.2 Future Work

LIST OF REFERENCES

BIOGRAPHICAL SKETCH
















LIST OF FIGURES



Figure

1. Constructs of the Active Object Model

2. Integrated Verification

3. Examples of Triggering Graph, Active Graph and Deactive Graph

4. Examples for Lemma 4.2

5. Revised Examples of Triggering Graph, Active Graph and Deactive Graph

6. Rule Warehouse System Component Diagram

7. Rule Verification Graphical User Interface

8. Logic Rule Transformation Pseudo-code

9. Deactivated-termination Condition Detection Pseudo-code

10. Rule Partition Pseudo-code

11. Inconsistency Detection Pseudo-code

12. Redundancy Detection Pseudo-code















Abstract of Thesis Presented to the Graduate School
of the University of Florida in Partial Fulfillment of the
Requirements for the Degree of Master of Science

INTEGRATED VERIFICATION OF CONSTRAINTS AND EVENT-AND-ACTION-
ORIENTED BUSINESS RULES

By

Yuan Shi

August 2001

Chairman: Dr. Stanley Y. W. Su
Major Department: Computer and Information Science and Engineering

In collaborative e-Business, the business rules of different business partners need

to be shared electronically and be used to solve business problems collaboratively. To

achieve this, a neutral knowledge representation is needed to translate heterogeneous

rules into the neutral representation so that 1) pair-wise translations between rule

representations can be avoided and 2) the collection of rules can be verified to identify

inconsistencies, redundancies, and non-termination conditions. In this thesis, an active

object model (AOM) is used as a neutral knowledge representation for different types of

rules commonly recognized in the literature, and several algorithms are developed and

implemented for the detection of rule anomalies. Since methods may be invoked in

event-and-action-oriented rules, and their side effects cannot be determined, we abstract

their side effects for verification purposes. The verification process first converts all

types of rules into an event-and-action-oriented representation based on the AOM. An

algorithm is then applied on the set of transformed rules to detect non-termination









conditions. Triggering graphs, activation graphs, and deactivation graphs are used for

this detection. Next, the same set of rules is partitioned based on their associated events

before applying algorithms on each partitioned subset to detect inconsistencies and

redundancies. Rules in each partitioned group can be regarded as logic rules with the

extension of method invocation. An existing logic rule verification algorithm is tailored

to detect redundancies and inconsistencies in the partitioned rule groups.














CHAPTER 1
INTRODUCTION

The emergence of the Internet-based technologies is one of the most significant

technical achievements in history. More and more organizations and people are using the

Internet to share information, to work collaboratively, and to perform business

transactions [LAC99].

E-business is one of the fastest growing Internet-based distributed applications

that can greatly benefit from pervasive computing technologies. The next stage in the

evolution of e-business is collaborative e-business [PHI00], which is characterized by a

set of collaborative e-services that support the automation of e-business processes, which

require interaction and collaboration across enterprises. In order to automate e-business

collaboration, the business knowledge of different business partners needs to be captured

and shared electronically and be used to solve business problems collaboratively.

The business knowledge of individual companies is commonly expressed in terms

of business events and business rules, and managed by some rule processing systems

[LIU00]. The event and/or rule representation and the rule processing system of one

company may be different from those of others. It is important to have a way of

capturing and sharing these heterogeneous events and rules because they are important

resources just like data, application systems, hardware systems, etc. The sharing of these

resources among the participating companies is essential for conducting joint business.

Similar to the concept of a data warehouse system, a rule warehouse system (RWS) is









proposed and developed to allow heterogeneous business rules to be imported and

transformed into the neutral knowledge representation, verified to eliminate rule

anomalies, and exported for use in some existing rule systems [LIU00]. In this manner,

business partners can share business knowledge and collaboratively solve problems that

cannot be solved by using the rules of individual companies.

One of the main difficulties of developing such an RWS is the neutral rule

representation for the four general types of rules commonly recognized in the literature:

logic rules, production rules, constraints, and event-and-action-oriented rules [LIU00].

These heterogeneous rules imported into the RWS must be converted into the neutral

representation for the purpose of rule set verification. The representation must be

semantically rich enough to accept the importing request from any business rule source.

Existing works [GRE96, OUS96] extend object models to support a certain type of

business rules, such as constraints and ECA rules; but none of them is semantically rich

enough to accommodate all the known kinds of business rule specifications. An active

object model (AOM) is an object-based knowledge model capable of defining objects in

terms of attributes and methods (just like traditional object models), as well as events,

different types of rules, and triggers [LEE00]. Therefore, we use it as the neutral

representation in the RWS, and convert all types of rules into an event-and-action-

oriented representation based on AOM [LIU00].

Another major obstacle in developing an RWS is that the rules imported into the

rule warehouse may contain inconsistency, redundancy, and non-termination conditions,

which need to be verified by the RWS to detect and take proper actions to remove or

accommodate those anomalies. Rule base verification is an important area of research in









the expert system community [DAV76, SUW82, NGU85, CRA87, WU93a, ZHA94].

Techniques for verifying expert system rules are available: for example, the works

reported in TEIRESIAS [DAV76], CHECK [NGU85], COVER [PRE92], and EHLPN

[WU97]. There are some works on non-termination detection in active database areas

[KAR94, WEI95, BAR95, BAR98, BAR00]. However, the conditions and actions of the

rules in active databases are limited to database operations. There is little work on the

verification of event-and-action-oriented rules because this type of rules contains method

or procedure calls that may have side effects. Besides, this type of rules is triggered by

different events, and the execution of rules may post events to trigger other rules. As far

as we know, there is no work that has been done for non-termination detection of general

event-and-action-oriented rules. This thesis focuses on the design and implementation of

algorithms for the detection of inconsistency, redundancy, and non-termination anomalies

in a general event-and-action-oriented rule set.

The work reported in this thesis is a part of a larger R&D effort in building a rule

warehouse system, which is the research topic of a Ph.D. student, Mr. Youzhong Liu.

My contribution is in the design and implementation of verification algorithms and the

integration of the verification component with the other components of the rule

warehouse system. Specifically, I have implemented most of the programs for the non-

termination detection, and all the programs for inconsistency, contradiction, and

redundancy detections. The remainder of this thesis is organized as follows. Chapter 2

provides a survey of related work. It gives some details on AOM and reviews logic and

event-and-action-oriented rule verification. Chapter 3 formally defines the problems of

non-termination, inconsistency, and redundancy. All algorithms are discussed in Chapter








4. It first introduces a general integrated approach, and then separately presents

verification algorithms for detecting each of the anomalies that may exist in a rule set

stored in the rule warehouse. A novel approach to verify event-and-action-oriented rules

is presented. Chapter 5 focuses on the system implementation; pseudo codes are

presented. Finally, the conclusion and suggestions for future work are presented in

Chapter 6.














CHAPTER 2
SURVEY OF RELATED WORK

Our work on the design and development of an integrated verification involves

several emerging fields of research and technology. We first describe a neutral

knowledge representation of business rules in the rule warehouse system (RWS) in

Section 2.1. Logic rule verification and event-and-action-oriented rule verification are

reviewed in Sections 2.2 and 2.3, respectively.


2.1 Active Object Model: A Neutral Knowledge Representation

To achieve knowledge sharing, a neutral knowledge representation is needed to

translate heterogeneous knowledge rule representations into the neutral representation so

that 1) pair-wise translations between rule representations can be avoided, and 2) the

collection of knowledge rules can be verified. In this section, we describe an active

object model (AOM) that is used as the neutral knowledge representation of RWS.

Like the traditional object model, the AOM can be used to define business objects

in terms of attributes and methods. In addition, events, constraints, rules, and triggers

applicable to business objects can also be defined. The model is active in the sense that

rules, which capture business policies, regulations, constraints, strategies, and so on, can

be automatically triggered to perform some meaningful operations upon the occurrences

of events. AOM provides a neutral knowledge representation for all business objects

imported into a rule warehouse. Figure 1 shows an overview of the modeling constructs

of AOM. A detailed specification can be found in Lee [LEE00].













[Figure 1 (schematic): the AOM modeling constructs relate a Schema and its Classes to schema-level, class-level, and instance-level knowledge specifications (KS), including event-and-action-oriented KS.]

Figure 1. Constructs of the Active Object Model



Four types of rules are commonly recognized and distinguished in the literature:

logic rules [GON97], production rules [GON97], constraints [KIM98], or event-and-

action-oriented rules [WID96]. Production rules, logic rules, and constraints can be

processed by different types of logic-based rule engines using different inference

schemes (e.g., forward chaining vs. backward chaining) to solve various types of

problems. Although the rule engines work in different ways, the rules are the same. The

general form for production rules and logic rules is P → Q, which stands for "If P is true,

then Q is true," where P and Q are logical expressions, P is the antecedent, and Q is

the consequence. This is also true for constraints, although the syntax of one constraint

language may be different from another. Since the semantics captured by constraints can

be expressed as logic rules, we can treat constraints as logic rules and convert them into

the same representation. Because of their unified representation and semantics, the terms









"constraints" and "logic rules" will be used interchangeably in this work. To be

consistent with current literature, we will use "constraints" when discussing AOM and

will use "logic rules" in the verification section.

The semantics of an event-and-action-oriented rule is different in two ways from

that of a constraint. First, an event-and-action-oriented rule has an explicit event

specification, and the evaluation of the condition and action parts is triggered by the

occurrence of an event; for a constraint, the concept of a triggering event is implicit. The

second difference is that constraints are more declarative; whereas, event-and-action-

oriented rules are more procedural. In other words, an event-and-action-oriented rule

explicitly specifies the operations to be performed in order to enforce a business rule;

whereas, a constraint simply states the business rule, without specifying how to enforce it.

Since business rules commonly exist in both forms, both constraint-oriented and

event-and-action-oriented rules can be explicitly specified in AOM.


2.2 Logic Rule Verification

Existing works on rule verification deal mostly with verification of logic rules.

The approaches commonly used to verify a knowledge-based system (KBS) are as

follows [PRE98]:

1. Inspection: It essentially involves human proof-reading of the text of various

artifacts. Typically, a domain expert is asked to check the statements in a

knowledge base.

2. Static Verification: It consists of checking the knowledge base of a KBS for

logical anomalies. The most commonly identified anomalies are redundancy

and conflict.









3. Formal Proof: It is a more thorough form of logical analysis of the (formal)

artifacts in the development process than that provided by static verification.

Proof techniques can be employed to verify that the formal artifact meets the

specified requirements.

4. Cross-reference Verification: When descriptions of a KBS exist at different

"levels", it is desirable to perform cross-checking between them, to ensure

consistency and completeness. For example, we would expect the concepts

that are specified as being required at the conceptual level to be realized in

terms of concrete entities at the design level and also in terms of concrete data

structures in the implemented system. Therefore, the most appropriate use of

cross-reference verification is to check the correspondence between 1) the

conceptual model and the design model, and 2) the design model and the

implemented system.

5. Empirical Testing: It involves running the system with test cases designed

for both structure-based and function-based tests.

Early works [CRA87, DAV76, NGU85, SUW82] detected simple manifestations

of redundant, contradictory, and missing rules. Detection was based on rule connectivity

and pair-wise checking with respective time complexities of O(n) and O(n²). Pioneering

systems include TEIRESIAS [DAV76], RCP [SUW82] and CHECK [NGU85]. More

recent works [GIN88, PRE92, ROU88, STA87] extend the definitions of anomalies

beyond the simple pairs of rules. They detect redundancies and contradictions existing in

chains of rules as well as more subtle cases of missing rules. The time complexity of the

detection is usually O(b^d), where b stands for the "breadth" of the rule set, which is the









average number of literals in rule antecedents, and d is the "depth" of the rule set which is

the average number of rules in an inference chain. There is no correlation between the

number of rule declarations in the system and the cost of verification [PRE94]. Three

rule set verification systems are worth noting: COVER [PRE92], PREPARE [ZHA94]

and EHLPN [WU97]. In COVER, only relevant combinations of data items are

considered. Smaller environment cases are tested first, and any larger environment cases

that are subsumed by the smaller ones are not detected. Based on COVER, COVERAGE

[PRE99a] and KRAFT [PRE99b] extend verification to support two or more knowledge-

based systems to interoperate for the fusion of knowledge that comes from multiple,

distributed, and heterogeneous sources. COVERAGE does not change any of COVER's

original functionality. KRAFT is still under development. PREPARE applied Petri-net models

to study rule verification; however, it assumes that no variables appear in the rule bodies,

and it does not deal with relationships involving negative information in rules. EHLPN

utilized an enhanced high-level Petri-net, which requires the closed-world assumption,

conservation of known and unknown facts, and refraction.

Other works [WU93a, WU93b, ROS97a] detect anomalies based on the semantics

of rules instead of their syntactic representations alone. Wu and Su [WU93a] formally

defined the problem of inconsistency and redundancy of knowledge base and related it to

the concept of unsatisfiability in formal logic in order to make use of the solid theoretical

foundation established in logic for rule verification. A unified framework was developed

for both rule verification and rule refinement. Furthermore, a reversed subsumption

deletion strategy is used to improve the efficiency of rule base verification.









2.3 Event-and-Action-Oriented Rule Verification

Some work has been done to address the termination and confluence problem of a

rule set in active databases; however, this work is limited in that the conditions and the

actions of rules are restricted to database operations. Termination means that the rule

processing is guaranteed to terminate for any user-defined application [BAR98].

Confluence means that the execution of a rule set will reach the same final database states

for an application regardless of the order of rule execution [BAR98]. Karadimce and

Urban [KAR94] reduce rules to term rewriting systems and apply known algorithms

to attack the termination problem; however, the approach is very complicated even for a

small rule set. Weik and Heuer [WEI95] present techniques for the termination analysis

of rules with delta relations in the context of OSCAR, an object-oriented active database

system. The concept of a Triggering Graph is introduced by Baralis et al. [BAR93] to

detect the termination of a database rule set. The work is extended in Baralis and Widom

[BAR94] to support both termination and confluence by limiting the rule set to

Condition-Action (CA) rules. Aiken et al. [AIK95] try to determine the termination and

confluence of a database production rule set statically for both CA and Event-Condition-

Action (ECA) rules. Baralis et al. [BAR95] propose a technique to deploy the

complementary information provided by Triggering Graphs and Activation Graphs to

analyze the termination of ECA rule sets; however, techniques for constructing the

graphs are assumed. In Baralis and Widom [BAR00], an extended relational algebra is

presented and applied by a "propagation" algorithm to form the graphs. Baralis et al.

[BAR98] combine the static analysis of a rule set and the detection of endless loops

during rule processing at runtime.









Although some important research results have been achieved in the above works

on the verification of database rules, all of them require that the conditions and actions of

rules be database operations, that is, insertion, update, deletion, and selection. Moreover,

most of the works handle CA rules only. Even in those that can handle ECA rules, either

the solution is very restrictive (e.g., Baralis et al. [BAR95] require that there is no cycle

in a Triggering Graph) or some limitations are put on the ECA rules to reduce them to

CA rules (e.g., the Quasi-CA rules introduced in Baralis and Widom [BAR00]). Finally,

none of these works address the invocation of methods in the condition part and the

action part of a rule.














CHAPTER 3
PROBLEM DEFINITIONS

In a rule warehouse system (RWS), rules from heterogeneous sources are

imported into the rule warehouse (RW) to support knowledge sharing and business

collaboration. These rules can be broadly classified into two categories: logic (or

constraint-oriented) rules and event-and-action-oriented rules [LIU00]. In RWS, the

different types of rules will be translated into the active object model (AOM), a neutral

representation, so that they can be verified along with the rules that are already verified

and stored in RW.

Several types of anomalies have been identified by the existing works on the

verification of logic rules in expert systems: inconsistency, redundancy, subsumption,

unnecessary if, dead end, and unreachable rules [GON97]. In this work, we focus on the

integrated verification of business rules. Consequently, some of the anomalies defined

for expert systems in the literature are not applicable to this work. For example,

anomalies such as dead end and unreachable rules can cause problems for expert systems;

however, in an RWS, rules can be exported from the RW to a legacy rule system and be

applied together with some other rules. Thus, even if the rules are unapplied in the RW,

they may become active in conjunction with other rules in the legacy rule system. Hence,

from the point of view of the RWS, such rules are not regarded as anomalous.

This chapter first gives some definitions of terms and concepts related to our rule

verification approach in Section 3.1. Sections 3.2 and 3.3 formally define three types of









anomalies (i.e., non-termination, inconsistency, and redundancy) that may exist in rules

stored in a RW. Finally, a summary is provided in Section 3.4.


3.1 Basic Definitions

In the context of RWS, we will consider three types of anomalies: namely, non-

termination, inconsistency and redundancy. Since these anomalies have been extensively

studied for logic rules, we will focus on these anomalies in the context of event-and-

action-oriented rules. Before we formally define and present these anomalies, we shall

define the terms and concepts that will be used in the remainder of this chapter.

3.1.1 Definitions for Event-and-action-oriented Rules

Definition 3.1.1: A rule set R is a set of event-and-action-oriented rule

specifications under consideration.

Definition 3.1.2: An event-and-action-oriented rule r (from here on, we shall

call it rule for short) is a "condition-action" (CA) rule.

Definition 3.1.3: A rule r is fired if it is activated and triggered.

Definition 3.1.4: A rule r is activated if the condition part of the rule evaluates to

true. If the condition evaluates to false, r is deactivated.

Definition 3.1.5: A trigger t specifies a triggering event e and the rule r it

triggers. The rule r is triggered when the event e has been posted.

3.1.2 Rule Execution Model

When an event el is posted, it will cause all the triggers that have el as a

triggering event to trigger the corresponding rules. Each triggered rule r is a CA rule. If

the condition part of r is true (i.e., activated), then the action part of the rule is executed










(i.e., fired). In the execution of the action part of a rule, two types of actions can have

impact on rule verification:

1. An action may post another event e2, initiating another rule triggering cycle.

2. An action may change the state of some objects, which may affect the

condition part of another rule (i.e., activate or de-activate a rule).
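
The two effects above can be pictured with a minimal sketch of this execution model (illustrative only; the Python class and function names below, such as Rule, Trigger, and run, are hypothetical and are not part of the RWS implementation):

from collections import deque

class Rule:
    def __init__(self, name, condition, action):
        self.name = name
        self.condition = condition     # callable: object state -> bool (activated?)
        self.action = action           # callable: (state, post) -> None; may change state or post events

class Trigger:
    def __init__(self, event, rule):
        self.event = event             # triggering event
        self.rule = rule               # the rule it triggers

def run(triggers, state, external_event, max_steps=1000):
    queue = deque([external_event])             # an externally-generated event starts the cycle
    steps = 0
    while queue and steps < max_steps:          # the bound guards against non-termination
        event = queue.popleft()
        for t in triggers:
            if t.event == event:                # the trigger triggers its rule
                if t.rule.condition(state):     # activated: the condition part is true
                    t.rule.action(state, queue.append)   # fired: may post rule-generated events
        steps += 1
    return steps < max_steps                    # False hints at a non-terminating rule set

Each fired action in this loop may post further events or change the object state, which is exactly what the two cases above describe.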

3.1.3 Definitions for Objects and Object States

Definition 3.1.6: The property set P of an object O is a set containing all the

attributes defined in the class c to which O belongs. For example, in Example 3.1 shown

below, the property set of the object Oas of class Accident_statistics is P = {model,

accident_probability}.

Example 3.1
Class Accident_statistics {
    String model;
    float accident_probability DERIVED;
    Rule1 {
        Condition: model = "Honda Accord";
        Action: accident_probability = 0.2;
    }
    Rule2 {
        Condition: model = "Honda Accord";
        Action: accident_probability = 0.15;
    }
    Rule3 {
        Condition: model = "Honda Accord";
        Action: accident_probability = 0.1;
    }
}





Definition 3.1.7: An object state So of object O is an instantiation of the property

set P of object O. An example object state of object Oas is {"Honda Accord", 0.1}.

Definition 3.1.8: An object execution state SE is a pair (So, RA), where So is the

object state and RA ⊆ R is a set of activated rules of So. An example object execution

state is ({"Honda Accord", 0.1}, {R1, R2, R3}), where R1, R2 and R3 are activated rules.









3.1.4 Modeling the Side-Effects of Methods

One important issue concerning the event-and-action-oriented rule verification is

the invocation of methods in both the condition and the action part of a rule. The

execution of a method may have side effects that can cause rule execution anomalies.

Example 3.2 shows that events may be posted within a method, that operations in a

method may change the states of some objects, and that an evaluation result of a rule

condition part may depend on the data read by a method invoked in the condition part.

Thus, for rule verification purposes, it is important to capture the side effects of methods

that are invoked by a rule.

Definition 3.1.9: The side effect of a method m in a class is a tuple {RS, WS, ES,

CS, QS}, where

RS stands for read set, which is the set of attributes in this class or related classes

that are read by m;

WS stands for write set, which is the set of attributes in this or related classes

whose values are changed by m;

ES stands for event set, which is the set of events that are posted by m;

CS stands for contradictory set, which is the set of methods that perform

semantically opposite operations of m. Logically, contradiction is equivalent to m1 =

NOT(m2) and m2 = NOT(m1);

QS stands for equivalent set, which is the set of methods that perform

semantically equivalent operations to m. A method m1 is said to be equivalent to method

m2 if their executions will bring an initial object state to the same final object state. We

shall use m1 = m2 to denote their semantic equivalence.












We note here that a method of a class may make reference to an object of some


other class. It may read and write some attributes of that object, and/or post events


defined in that class. Our definition of a method side-effect includes the full path of


attributes, methods, and events that may be affected by the method execution. For


example, if method m1 in class A reads attribute X in class B, which is in schema S,

S.B.X is included in the read set of method m1.


Example 3.2
Class Insurance {
    int A;
    boolean B;
    boolean C;
}

Knowledge Specification:
METHOD        m1,
              ReadSet: X,
              WriteSet: A, Y,
              EventSet: post e1,
              Contradictory: m1 = NOT(m2)

METHOD        m2,
              ReadSet: Z,
              WriteSet: A,
              EventSet: post e2,

EVENT         e1 TYPE EXPLICIT;
EVENT         e2 TYPE EXPLICIT;

RULE          R1
CONDITION     A > 0
ACTION        A = -10;
              MethodSet(m1)

RULE          R2
CONDITION     A < 0
ACTION        B = false;
              MethodSet(m2)
              post e1;

TRIGGER       T1
TRIGGEREVENT  e1
RULESTRUC     R1

TRIGGER       T2
TRIGGEREVENT  e2
EVENTHISTORY  {e1, e3}
RULESTRUC     R2


Method side effects can be derived automatically by analyzing the method


implementation in a reverse engineering process or defined manually by using a high-









level specification language. Our work is independent of the method used to obtain the

side effect information. The side effect information is assumed to be available.
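
For illustration only, the side-effect description of Definition 3.1.9 could be recorded with a simple structure such as the following sketch (hypothetical names; the actual RWS representation may differ):

from dataclasses import dataclass, field

@dataclass
class SideEffect:
    read_set: set = field(default_factory=set)           # RS: attributes read by the method
    write_set: set = field(default_factory=set)          # WS: attributes whose values are changed
    event_set: set = field(default_factory=set)          # ES: events posted by the method
    contradictory_set: set = field(default_factory=set)  # CS: semantically opposite methods
    equivalent_set: set = field(default_factory=set)     # QS: semantically equivalent methods

# The side effect of method m1 from the knowledge specification in Example 3.2:
m1_effect = SideEffect(read_set={"X"}, write_set={"A", "Y"},
                       event_set={"e1"}, contradictory_set={"m2"})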


3.2 Non-Termination Problem

Event-and-action-oriented rules may interact in a complex and sometimes

unpredictable way. In particular, there may be cycles in the rule set in which rules trigger

and fire each other indefinitely, causing a non-termination problem [BAR98]. We will

give the definition of the non-termination anomaly in the context of an event-and-action-

oriented rule set and an example.

Definition 3.2.1: An externally-generated event is an event posted by a source

outside of a rule set. For example, an event can be posted by an application program or

within the implementation of a method.

Definition 3.2.2: A rule-generated event is an event posted by rules in a rule set.

For example, an event can be posted in the action part of a rule or in a method invoked in

the condition or action part of a rule.

Definition 3.2.3: A rule set R is non-terminating if there exists a rule r ∈ R

such that r is fired repeatedly and indefinitely, with or without an externally generated

event e being posted.

Recall from Definition 3.1.3 that a rule is fired only if it is triggered and activated

(i.e., the condition part of the rule is true). Thus, in order for a rule to be fired repeatedly

and indefinitely after an event is posted from outside a rule set (e.g., from an application

program), the rule must be triggered repeatedly and indefinitely by some action within

the rule set. Furthermore, the rule must be activated when it is triggered.









For instance, the rule set in Example 3.3 will not terminate. If e1 is posted by an

application, trigger t1 will trigger the firing of rule r1, which posts event e2. Event e2

will cause trigger t2 to fire rule r2, which in turn posts event e1 to repeat the entire

triggering process again. Thus, rules r1 and r2 will be fired repeatedly and indefinitely

after e1 is initially posted. This is also true if e2 is posted by an application.

Example 3.3
Class A {
    //attribute and methods definitions...

    Event e1, e2;
    Trigger t1 {TriggeringEvent e1; RuleStruct: r1;}
    Trigger t2 {TriggeringEvent e2; RuleStruct: r2;}

    Rule r1 {Condition: true; Action: post e2;}
    Rule r2 {Condition: true; Action: post e1;}
}



3.3 Inconsistency and Redundancy Problems

3.3.1 Definition for Inconsistency

Informally, a rule set is inconsistent if different activated rules set an object to

different states. Inconsistency can be either conflicting or contradictory. Conflicting

rules will set an object to different object states that are not contradictory. Whereas,

contradictory rules will set an object to different object states which are contradictory.

Rules 1, 2 and 3 of Example 3.1 are conflicting rules because they share the same

condition; however, their actions set the object to different (but not contradictory) object

states. Rules 1 and 2 of Example 3.4 are contradictory rules. Most of the current works

do not distinguish conflicting and contradictory rules; however, it is useful to distinguish

whether two rules are conflicting or contradictory. If we can define a proper resolution

rule to resolve the conflict, we can realize a resolution scheme "on demand" to take









different experts' opinions into consideration. On the other hand, it is not possible to

resolve contradictory rules. They must be reconciled at the verification time. We aim to

detect both conflicting and contradictory rules.

Definition 3.3.1: A rule set R is inconsistent if and only if there exists an object

execution state S, such that the action a1 of r1 and the action a2 of r2 set an object O to

different states So1 and So2, where r1 ∈ RA and r2 ∈ RA; RA ⊆ R is the set of activated rules

of S; So1 and So2 are object states of O.

Definition 3.3.2: The domain D of an attribute A ∈ P is the set of all legitimate values of

A, where P is a set of properties (or attributes) of O.

Definition 3.3.3: A value set V of an attribute A ∈ P is a subset of D, where D is the

domain of A.


Example 3.4
Class Supplier {
    boolean platinum;
    int credit;
    Rule1 {
        Condition: credit > 100;
        Action: platinum = true;
    }
    Rule2 {
        Condition: credit > 100;
        Action: platinum = false;
    }
}



Definition 3.3.4: A value set V1 is contradictory with value set V2 if their set

intersection is empty. For example, if V1 has all the integers that are larger than 2 and V2

has all the integers that are less than or equal to 2, V1 and V2 are contradictory. For a Boolean

attribute, if V1 is true and V2 is false, then V1 and V2 are contradictory. For an

enumerated attribute type, if V1 is "male" and V2 is "female," then V1 and V2 are

contradictory.

Definition 3.3.5: Two object states So1 and So2 of an object O are contradictory

if ∃ A ∈ P, such that V1 and V2 are contradictory. P is the property set of O, and V1 and

V2 are the value sets of attribute A in object states So1 and So2, respectively. For example,

for an object of class Employee, if an object state So1 says that the gender of an employee

is male and So2 says that the gender of the same employee is female, then the two states

are contradictory.

Definition 3.3.6: A rule set R is contradictory if and only if there exists an object

execution state S, such that the action a1 of r1 and the action a2 of r2 set an object O to

contradictory states So1 and So2, where r1 ∈ RA and r2 ∈ RA; RA ⊆ R is the set of activated

rules of S; So1 and So2 are object states of O.

Definition 3.3.7: A rule set R is said to be in conflict if it is inconsistent but not

contradictory.
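
As a small illustrative sketch (the helper names are hypothetical), the contradiction tests of Definitions 3.3.4 and 3.3.5 reduce to set-intersection checks carried out attribute by attribute:

def value_sets_contradictory(v1, v2):
    # Definition 3.3.4: two value sets are contradictory iff their intersection is empty.
    return len(v1 & v2) == 0

def object_states_contradictory(s1, s2):
    # Definition 3.3.5: two object states (attribute -> value set) are contradictory
    # iff some common attribute has contradictory value sets in the two states.
    return any(value_sets_contradictory(s1[a], s2[a]) for a in s1.keys() & s2.keys())

# Example 3.4: Rule1 sets platinum to true and Rule2 sets it to false, so the two
# resulting object states are contradictory.
print(object_states_contradictory({"platinum": {True}}, {"platinum": {False}}))   # True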

3.3.2 Definition for Redundancy

There are many types of redundancy anomalies defined in the existing works on

logic rules [WU93b], including subsumed rules, unnecessary-if, syntactical redundancy,

semantic redundancy, and so on.

For example, the unnecessary IF anomaly can be defined as follows [NGU87]:

Definition 3.3.8: Two rules contain unnecessary IF conditions [NGU87] if (1) the

rules have the same conclusion, (2) one of the IF conditions in one rule contradicts with

an IF condition of another rule, and (3) all the other IF conditions in the two rules are

equivalent.










Example 3.5
R1: Condition: A ∧ B
    Action: C
R2: Condition: A ∧ ¬B
    Action: C


In Example 3.5, the IF condition "B" in R1 and "¬B" in R2 are unnecessary and,

in a sense, redundant.

The following definition of the subsumption anomaly is taken from Wu [WU93b].

Definition 3.3.9: If two rules have the same consequence but one contains more

restrictive conditions to fire the rule, then it is said that the rule with more restrictive

conditions is subsumed by the rule with less restrictive conditions.

Example 3.6
R1: Condition: A ∧ B
    Action: C

R2: Condition: B
    Action: C


In Example 3.6, rule R1 is subsumed by rule R2. Intuitively, if a more restrictive

rule succeeds, the less restrictive rule must also succeed, but not vice versa. The

subsumed rule is not necessary and is redundant [WU93b].
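
Under the simplifying assumption that a condition is a set of literals and an action is a single consequent, the subsumption test of Definition 3.3.9 can be sketched as a subset check (an illustration only, not the algorithm of Chapter 4; the names are hypothetical):

def subsumed_by(rule1, rule2):
    # rule = (condition_literals, action); rule1 is subsumed by rule2 if both have the
    # same consequence and rule2's (less restrictive) condition is a proper subset of rule1's.
    cond1, action1 = rule1
    cond2, action2 = rule2
    return action1 == action2 and cond2 < cond1

# Example 3.6: R1 (A and B -> C) is subsumed by R2 (B -> C).
R1 = ({"A", "B"}, "C")
R2 = ({"B"}, "C")
print(subsumed_by(R1, R2))   # True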

Although there are many types of redundancy anomalies defined in the existing

work, Wu [WU93b] argued that a generalized semantics-based definition is necessary in

order to detect a variety of redundancies. Wu [WU93b] gave a definition of redundancy

based on the semantic properties of logic rules. We will adopt and extend this definition

to incorporate event-and-action-oriented rules.

Definition 3.3.10: The execution behavior of a set of consistent rules is the

transformation of some objects from their initial object states Si to their final object states









Sf by applying these rules. Two sets of rules have the same execution behavior if, given

the same initial object states of some objects, they reach the same final object states of

the same objects.

Definition 3.3.11: A rule set R is redundant if the removal of a rule and/or part of

a rule from the set will not change its execution behavior.

Example 3.7
R1: Condition: A ∧ B
    Action: C

R2: Condition: ¬B
    Action: C


Rule set redundancies can be divided into two general categories. The first type

of redundancies can be removed by reducing the number of rules in the rule set without

modifying its execution behavior. In Example 3.6, if we remove R1, the execution

behavior of the rule set will not be changed. In Example 3.5, we can rewrite the two

rules into one rule (condition: A, action: C) without changing the execution behavior.

The second kind of redundancies can be eliminated by removing a part of a rule in the

rule set without changing its execution behavior. In Example 3.7, it is easy to show that

if we refine R1 to be (condition: A, action: C), the execution behavior of the resulting

rule set will not be changed.


3.4 Summary

In this chapter, non-termination, inconsistency, and redundancy problems were

formally defined. The following two chapters will provide the algorithms for the

detection of these anomalies and the system implementation of the integrated verification,

respectively.














CHAPTER 4
VERIFICATION ALGORITHMS

Based on the definitions given in the previous chapter, a general approach to rule

base verification is introduced. Then, algorithms for the detection of non-termination,

inconsistency, and redundancy are provided.


4.1 General Approach

In a rule warehouse system, we have both logic (or constraint-oriented) and event-

and-action-oriented rules. These two types of rules may be used independently in some

applications; however, for collaborative problem solving, they may have to be used

together. Thus, there is a need for an integrated verification of logic and event-and-

action-oriented rules. Two approaches are possible:

1. Rewrite event-and-action-oriented rules into logic rules

2. Rewrite logic rules to event-and-action-oriented rules.

In the first approach, events can be considered as additional conditions of the rule;

whereas, in the latter approach, explicit events for each logic rule can be introduced. The

latter approach is used in our work because, as we shall point out in Section 4.4, events

can be used to partition the rule set into several related small groups and apply

verification algorithms to those small groups individually to achieve better performance.

The rewriting process is as follows: We assume in the class definition that the

only way to change the value of an attribute is through a corresponding Set function. We

shall introduce explicit method-associated events to some attributes. Each attribute X









involved in r is considered. We will introduce an event afterSetX, which is associated

with the method setX(), if the event does not already exist. We introduce a trigger to

allow this method-associated event to trigger the rule. In the trigger definition,

TRIGGERINGEVENT is afterSetX; EVENTHISTORY is always true; and

RULESTRUCT contains a single rule r, whose condition is the left-hand side (LHS) of the rule and its

action is the right-hand side (RHS) of the rule. This process guarantees that whenever an

attribute X of the LHS is changed by the Set method, the afterSetX event will be posted to

trigger r. For example, for the logic rule "A=3 → B=4," we will introduce an event afterSetA,

if it does not already exist. This event will be posted after the activation of method

SetA(). We also introduce a trigger afterSetATrigger to trigger rule Y. The condition of

Y is "A=3" and its action is "B=4."
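
A minimal sketch of this rewriting step is given below (illustrative only; the function name rewrite_logic_rule and the dictionary-based trigger representation are hypothetical, not the AOM syntax):

def rewrite_logic_rule(lhs_attributes, condition, action, events, triggers, rules):
    # Turn a logic rule into a CA rule plus method-associated events and triggers:
    # for each LHS attribute X, add event afterSetX (if absent) and a trigger so that
    # setX() posting afterSetX will trigger the new rule.
    rule_name = "Y"                      # the rewritten CA rule
    rules[rule_name] = {"CONDITION": condition, "ACTION": action}
    for x in lhs_attributes:
        event = "afterSet" + x
        events.add(event)
        triggers.append({"TRIGGERINGEVENT": event,
                         "EVENTHISTORY": True,    # always true
                         "RULESTRUCT": rule_name})
    return events, triggers, rules

# The logic rule "A=3 -> B=4": attribute A yields event afterSetA and a trigger
# (afterSetATrigger in the text) that triggers rule Y with condition A=3 and action B=4.
print(rewrite_logic_rule(["A"], "A = 3", "B = 4", set(), [], {}))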

[Figure 2 (schematic): constraint-oriented rules and event-and-action-oriented rules are converted into one set of event-and-action-oriented rules, which is then divided into partitioned rule subsets.]

Figure 2. Integrated Verification



Figure 2 illustrates our general approach to integrated verification of the rules of

RWS. The verification process first converts all types of rules into the event-and-action-









oriented representation. An algorithm is then applied on the set of transformed rules to

detect non-termination conditions. Next, the same set of rules is partitioned based on

their associated events before applying algorithms on each partitioned subset to detect

inconsistencies and redundancies. By rewriting logic rules into event-and-action-oriented

rules, the verification of rules in a rule warehouse becomes the verification of event-and-

action-oriented rules, which is the emphasis of this thesis.

The remainder of this chapter is organized as follows. Assumptions are presented

in Section 4.2. In Section 4.3, the non-termination detection algorithm is provided. The

algorithm will make use of triggering graphs (TGs), activation graphs (AGs) and

deactivation graphs (DGs) [BAR95, BAR98, LIU00]. In Section 4.4, our general

approach for detecting inconsistency and redundancy problems is presented. Section 4.5

provides the extension to an existing rule verification algorithm to support the new

requirements of method invocation.


4.2 Assumptions

In order to render verification of event-and-action-oriented rules a tractable

problem, we will first make some basic assumptions about this type of rules. Recall from

Section 2.1 that event-and-action-oriented rules defined in the active object model (AOM)

are represented in ETR (event-trigger-rule) constructs. The trigger specification has a set

of triggering events, an optional event history specification, and a structure of triggered

rules. For the current work on verification, we will assume the following:

1. There is a single rule in a rule structure.









2. The triggered rules are CA (condition-action) rules, not CAA

(condition-action-alternative action) rules. Note that a CAA rule can

be expressed by two CA rules.


4.3 Non-Termination Algorithm

4.3.1 Triggering Graphs, Activation Graphs, and Deactivation Graphs

Baralis et al. [BAR95, BAR98] proposed the use of TGs and AGs to detect the

non-termination anomaly in active databases. We will extend the idea to support the

detection of non-termination for general event-and-action-oriented rule sets. The basic

idea is as follows: TGs will be used to determine triggering cycles in a rule set, which are

subgraphs of the TG that form cycles. Upon the detection of a triggering cycle, we

will use AGs and DGs to determine if the triggering cycle may cause the non-termination

problem. In particular, if there exists one rule in the triggering cycle whose condition is

false (i.e., de-activated) when triggered, then the triggering cycle will be broken and thus

will not cause a non-termination problem. The triggering cycle will cause the non-

termination problem only if, for every rule in the cycle, its condition is true (i.e.,

activated) when triggered.

Example 4.1 shows several sample events, triggers, and rules. If we use a

pessimistic approach, by which we count each possible activation and only the guaranteed

deactivations, the corresponding TG, AG, and DG are shown in Figure 3. We shall

explain the pessimistic approach and the optimistic approach in detail in Section 4.3.3.










[Figure 3 (schematic): the TG, AG, and DG for the rules of Example 4.1; the TG contains the triggering cycle (R1, R2, R3).]

Figure 3. Examples of Triggering Graph, Active Graph and Deactive Graph


Example 4.1
Class Supplier {
    //Attributes... Methods ...
    int credit;
    boolean platinum;
    boolean discountable;
}

Knowledge Specification:
EVENT         creditPenalty        TYPE EXPLICIT;
EVENT         untrustableSupplier  TYPE EXPLICIT;
EVENT         specialService       TYPE EXPLICIT;

RULE          R1
CONDITION     credit < 0
ACTION        discountable = false; post untrustableSupplier event; ...

RULE          R2
CONDITION     platinum == false;
ACTION        ... post specialService; ...

RULE          R3
CONDITION     true
ACTION        ... post creditPenalty; ...

TRIGGER       T1
TRIGGEREVENT  creditPenalty
RULESTRUC     R1

TRIGGER       T2
TRIGGEREVENT  untrustableSupplier
RULESTRUC     R2

TRIGGER       T3
TRIGGEREVENT  specialService
EVENTHISTORY  {creditPenalty, untrustableSupplier, specialService}
RULESTRUC     R3
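
To make the TG formation concrete, the following sketch (hypothetical names; event history specifications and method side effects are ignored here) derives TG edges for Example 4.1 from the events each rule posts and from the trigger definitions:

def build_tg(posted_events, triggers):
    # posted_events: rule -> events posted by its action (or by invoked methods);
    # triggers: event -> rules triggered by that event.
    # A TG edge (r1, r2) means that firing r1 may trigger r2.
    return {(r1, r2)
            for r1, events in posted_events.items()
            for e in events
            for r2 in triggers.get(e, set())}

posted = {"R1": {"untrustableSupplier"}, "R2": {"specialService"}, "R3": {"creditPenalty"}}
trig = {"creditPenalty": {"R1"}, "untrustableSupplier": {"R2"}, "specialService": {"R3"}}
print(build_tg(posted, trig))   # edges R1->R2, R2->R3, R3->R1: the triggering cycle (R1, R2, R3)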








4.3.2 Algorithm for Non-Termination Detection

Before we provide an algorithm for non-termination detection, we shall present

three lemmas that are proved in Liu et al. [LIU00]. Based on these lemmas, we have a

theorem on the detection.

Lemma 4.1: A rule set R may not terminate if there is a triggering cycle in the

triggering graph. If there is no triggering cycle in the triggering graph, the rule set R will

terminate.

Lemma 4.2: A rule set will terminate if, for each triggering cycle in the TG,

∃ ri, rk in DG such that there is an edge from ri to rk, and

NOT ∃ rj, rk in AG such that there is an edge from rj to rk,

where ri, rj, and rk are in the triggering cycle and ri < rj < rk.

In other words, a rule set that contains a triggering cycle will terminate if some

rule (e.g., rk) in the triggering cycle is deactivated by the action of another rule (ri) in the

cycle and no other rule (rj) in the cycle activates it again after the deactivation.




[Figure 4 (schematic): a triggering cycle (R1, R2, R3) in which R3 deactivates R2 in the DG while R1 activates R2 in the AG.]

Figure 4. Examples for Lemma 4.2


The example in Figure 4 illustrates Lemma 4.2. In the triggering cycle consisting

of (R1, R2, R3), rule R2 is deactivated by rule R3, as shown in DG. AG shows that R2 is

activated by R1. Since R1 is triggered and fired after R3 in the cycle, its effect will









overwrite the effect of R3. Thus, R2 is activated every time it is triggered.

Consequently, the rule set will not terminate.

Lemma 4.3: A rule set will terminate if, for each triggering cycle TC in a TG,

∃ a rule set S, which is a subset of the rules in TC, such that the conjunction of

the conditions of the rules in S is unsatisfiable, and

each of the nodes that correspond to the rules in S has no incoming edge in AG.

Theorem 4.1: A rule set is guaranteed to terminate if one of the following three

conditions hold:

1. There is no triggering cycle in TG of the rule set. Otherwise, for each

triggering cycle in TG, either

2. ∃ ri, rk in DG such that there is an edge from ri to rk and NOT ∃ rj, rk in AG

such that there is an edge from rj to rk, where ri, rj, and rk are in the triggering

cycle and ri < rj < rk; or

3. ∃ a rule set S, which is a subset of the rules in TC, such that the conjunction of

the conditions of the rules in S is unsatisfiable, and each of the nodes that

correspond to the rules in S has no incoming edge in AG.

is satisfied.

Since condition (2) in Theorem 4.1 determines the termination of a rule set by

detecting whether a rule is deactivated by the other rules in a triggering cycle, we call this

condition the deactivated-termination condition (DTC). On the other hand, condition (3) in

Theorem 4.1 uses the unsatisfiability of the conditions of rules to decide the termination

of a rule set, so we call it the contradictory-condition-termination condition (CCTC).










Algorithm 4.1:
1. Form the TG based on the event posting relationships among the rules in the rule set
under verification. The side-effect descriptions of the methods invoked within the
rules will also be used to determine the relationships.
2. Determine all the triggering cycles in the TG. If there is no cycle, report that the rule
set will terminate and exit. Otherwise, continue.
3. Form the AG and DG based on the relationships among the conditions and actions
of the rules. Again, the side-effect descriptions of the methods invoked in the rules
will also be used to determine the relationships.
4. For each cycle in the TG,
Determine whether condition (2) or condition (3) in Theorem 4.1 is satisfied. If
neither condition is satisfied, report that the rule set may not terminate because
of this triggering cycle and quit.
Continue to examine the next cycle.
5. Report that the rule set will terminate.


Based on Theorem 4.1, an algorithm for non-termination detection in a rule set is

given in Algorithm 4.1. In Step 1, the TG for the rule set is formed based on the event

posting relationships among the rules in the rule set under verification. The event set

given in the side effect description (see Definition 3.1.9) of each method invoked in the

rules will also be used to determine the triggering relationships. In Step 2, all the

triggering cycles in the TG are determined. If there is no triggering cycle in the TG, the

rule set will terminate and the algorithm ends. Otherwise, it is possible that the rule set

will not terminate. The algorithm continues to Step 3 to form the AG and DG for the rule

set. The read and write sets of the side effect description of each method invoked in the

rules will also be used to determine the AG and DG. Step 4 tests DTC and CCTC

presented in Theorem 4.1 to determine whether a particular triggering cycle will cause a

problem. If neither condition can be satisfied, the algorithm will report that the rule set









may not terminate and identify the specific triggering cycle. If all the triggering cycles

can satisfy either condition, the algorithm will report that the rule set will terminate.

If the event posting relationship is known, the time complexity of forming the TG in Step 1 is O(|u|+|v|), where |u| is the number of rules in the rule set and |v| is the number of edges in the TG. In Step 2, the time complexity of the triggering cycle detection is O(|u|+|v|). If there are cycles in the rule set, we need to identify all of them. The time complexity for identifying cycles is O(|u|²)¹. The time complexity for both Step 3 and Step 4 is O(|u|+|v'|), where |v'| is the number of edges in AG or DG. Thus, if there is no cycle in the TG, the total complexity of this algorithm is O(|u|+|v|); otherwise, it is O(|u|²).

Let us apply the algorithm to Example 4.1. From the TG, AG and DG shown in

Figure 3, we can see there is a cycle (R1, R2, R3) in the TG. In this case, Condition 2 in

Theorem 4.1 is not satisfied because there is no edge between any of the three nodes (R1,

R2, R3) in DG. Let us assume that the conjunction of the conditions of the rules (R1, R2,

R3) is satisfiable, thus Condition 3 is not satisfied, resulting in the non-termination of the

rule set.

Let us change the rules in the example by letting the condition of rule R4 be

"discountable==true". The corresponding TG, AG, and DG are shown in Figure 5. The

change results in an edge from node R1 to node R3 in the DG. This means that the

execution of R1 will deactivate R3. In this case, Condition 2 in Theorem 4.1 will be

satisfied. Thus, the rule set will terminate.

If we change the condition of rule R3 to "platinum==true", the TG, AG and DG

will still be similar to the original example, as shown in Figure 3. The only difference is

the disappearance of all the links to R3 in AG. It seems that the rule set is still unable to









terminate. However, R2 and R3 have contradictory conditions (i.e., the conjunction of

the conditions of R2 and R3 is unsatisfiable), and there is no action in the cycle to

explicitly activate the rules (i.e., no incoming edge in AG for R2 or R3), thus, satisfying

condition 3 in Theorem 4.1. Consequently, the rule set will terminate.







Figure 5. Revised Examples of Triggering Graph, Activation Graph, and Deactivation Graph



4.3.3 Algorithm Completeness and Soundness

As we show in Theorem 4.1, the TG, AG and DG can be used to detect the

termination of a rule set. In this subsection, we will discuss some issues concerning the

construction of these graphs. The construction of these graphs requires the analysis of

the rule set under verification to capture the triggering, activation, and deactivation

information. In addition, since our work aims to verify general event-and-action-oriented

rules, the conditions and actions of which may include method invocations, side effects

of methods need to be taken into consideration in the construction of these graphs.

However, in general, the triggering, activation, and deactivation information cannot be

captured deterministically. For example, a rule r1 may conditionally post an event e1, which is the triggering event of rule r2. Since the triggering relationship is conditional and can only be known at runtime, we cannot construct the TG without making some assumptions. This is also true for the AG and DG. For example, some action in rule r1 may


1 The time complexity of the identification is O(|u|²) because one rule may be involved in multiple cycles. The justification is straightforward.









conditionally change the state of an object, which in turn activates or deactivates the

condition of rule r2. Again, because it is conditional we cannot construct AG or DG

without making some assumptions.

Two approaches can be taken to deal with the above problem: optimistic approach

and pessimistic approach. In the optimistic approach, we assume the best case and form

the graphs that will bias the verification toward termination. In other words, we will

ignore all conditional event postings when we form TG. In building DG, if the action of

one rule conditionally deactivates another rule, we will assume it will. In forming AG,

we ignore all conditional activations.

On the other hand, in the pessimistic approach, we assume the worst case and

form the graphs that will bias the verification toward non-termination. In making TG, we

will assume that all the conditional event postings will indeed post the events. In forming

DG, the conditional deactivations are ignored. In forming AG, we assume all conditional

activations.

The optimistic approach is sound in the sense that every reported non-termination

anomaly turns out to be an anomaly, but is not complete because the algorithm does not

detect all the non-termination anomalies in a rule set. In other words, since the optimistic

approach is a "best case" analysis, it will miss some non-termination anomalies; however,

there will not be any "false positives" in which non-anomalies are reported as anomalies.

The pessimistic approach is complete, but not sound. Since the pessimistic approach is a

"worst case" analysis, it will detect all the non-termination anomalies; however, the

algorithm may return some "false positives".









In conclusion, since the triggering, activation, and deactivation relationships may

be conditional, it is not possible to have a sound and complete detection algorithm. Even

if we model all the conditions in the graphs to determine whether the action of one rule

will activate or deactivate the condition of another rule, there is still a satisfiability

problem, which is a well-known undecidable problem.

The practical outcome of this project is that we can bias Algorithm 4.1 to take the

optimistic or pessimistic approach. This allows a rule warehouse administrator (RWA) to

select a proper approach based on the requirement of a specific application. In Algorithm

4.1, the decision of the RWA will determine what assumptions are made in forming the

TG in Step 1, and the AG and DG in Step 3.


4.4 Reduction Algorithm

Our objective for detecting inconsistency (or redundancy) is to answer the

following three questions concerning a rule set: (1) Is the rule set inconsistent (or

redundant)? (2) If Yes, which portion of the rule set is causing the problem? and (3)

under what situation (i.e., what object state) will the rule set cause a problem? The

answer to question (1) will inform the RWA whether such anomalies exist in the Rule

Warehouse. If so, the answer to question (2) will help the RWA locate and eliminate the

problem. If the problem cannot be eliminated, the answer to question (3) will give the

RWA the information concerning what object state can potentially lead to the anomalies.



































The approach used to detect both inconsistency and redundancy anomalies is the

same. There are many algorithms available for the detection of inconsistency and

redundancy in a set of logic rules [ROS97a, WU93a, and WU93b]. We observed that

both inconsistency and redundancy problems for event-and-action-oriented rules can be

reduced to their corresponding problems for logic rules. Thus, the existing results

obtained for logic rules can be applied with some modifications to account for the added

features of event-and-action-oriented rules, i.e., the existence of events and the invocation

of methods in the condition and action parts of a rule.

Recall from Section 3.1 the execution semantics of an event-and-action-oriented

rule. If an event el is posted, and if el is one of the triggering events of a rule rl, then

the condition of the rule rl will be evaluated. If the condition evaluates to true, the action

of rl will be performed. Note that only those rules that may be triggered by the same


Example 4.2
Class AccidentStatistics {
    String model;
    float accidentProbability;
    Event e1, e2;
    Trigger t1 {
        TRIGGERINGEVENT: e1;
        RULESTRUCT: Rule1;
    }

    Trigger t2 {
        TRIGGERINGEVENT: e2;
        RULESTRUCT: Rule2;
    }
    Rule1 {
        Condition: model = M1;
        Action: accidentProbability = 0.2;
    }
    Rule2 {
        Condition: model = M1;
        Action: accidentProbability != 0.2;
    }
}









event (either directly or indirectly) need to be verified for inconsistency and redundancy.

In Example 4.2, Rule1 and Rule2 are contradictory if they are put in the context of logic rule verification. However, in the context of event-and-action-oriented rule verification, Rule1 can only be triggered by e1, whereas Rule2 can only be triggered by e2. In this case it is not possible for Rule1 and Rule2 to be triggered together. Thus, they are not inconsistent in the context of event-and-action-oriented rule verification.

Algorithm 4.2 (Reduction algorithm):
1. Get the list of triggering events in the rule set.
2. Partition the rules into groups based on the triggering events. If one rule has
multiple triggering events, it will be put into multiple groups.
3. Rewrite the conditions and actions of the rules to include the side effect of
methods.
4. Combine the groups gi and gj if the action of a rule in gi may post the event
required for group gj. This is to take care of the indirect triggering.
5. Verify the CA rule sets in each group using the available logic-based
inconsistency detection algorithm F.


Since only the rules that can be triggered by the same event need to be verified for

inconsistency and redundancy, we can partition the rule set based on the triggering events

for the purpose of verification. Each partition contains rules which can be triggered by

the same event either directly or indirectly [LIU00]. Also, each rule in a partition is a CA (condition-action) rule, which can be represented by the logic expression C → A (read "C implies A"). Both C and A may contain method invocations. Most of the existing logic

rule verification algorithms cannot handle methods in C and A. They need to be

extended to deal with the side effects of methods. We shall revisit this issue in Section

4.5.1.4.









Algorithm 4.2 performs two main functions: (1) it uses triggering events (directly

and indirectly) to partition the rule set so that rules associated with a triggering event are

verified together; and (2) it reduces an inconsistency or redundancy problem for event-

and-action-oriented rules into the corresponding problem for logic rules so that some

existing algorithm F can be used. Step 1 gets a list of all the triggering events in the rule

set. Step 2 partitions the rules based on triggering events (direct triggering). Step 3

considers the side effects of methods if methods are involved. Step 4 combines the rules

that may be indirectly triggered into the same group. Step 5 applies the existing

algorithm for logic rules to finish the detection of inconsistency/contradiction and

redundancy. Following the partition and reduction process, the rules in each group are

verified for the appropriate type of anomalies using F. Note that F is some existing

algorithm for detecting inconsistency or redundancy anomalies for logic rules; however,

none of the existing algorithms can (1) distinguish inconsistency and contradiction and

(2) handle method invocations in the condition and consequence part of a rule. We need

to tailor an existing algorithm to serve the new requirements of a rule warehouse system.
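To illustrate the event-based partitioning performed by Algorithm 4.2, the following is a minimal Java sketch of steps 1, 2, and 4, using a hypothetical Rule record with triggering and posted event sets; the side-effect rewriting of step 3 and the logic-rule algorithm F of step 5 are omitted. It is only a sketch under these assumptions, not the implemented partitioner.

import java.util.*;

class PartitionSketch {
    record Rule(String name, Set<String> triggeringEvents, Set<String> postedEvents) {}

    static Map<String, Set<Rule>> partition(List<Rule> rules) {
        Map<String, Set<Rule>> groups = new HashMap<>();
        // Steps 1-2: one group per triggering event; a rule with several
        // triggering events is placed in several groups.
        for (Rule r : rules)
            for (String e : r.triggeringEvents())
                groups.computeIfAbsent(e, k -> new HashSet<>()).add(r);
        // Step 4: if a rule in group g_i may post the event of group g_j,
        // merge g_j into g_i to account for indirect triggering.
        boolean changed = true;
        while (changed) {
            changed = false;
            for (Set<Rule> gi : groups.values())
                for (Rule r : List.copyOf(gi))
                    for (String posted : r.postedEvents()) {
                        Set<Rule> gj = groups.get(posted);
                        if (gj != null && !gi.containsAll(gj)) {
                            gi.addAll(gj);
                            changed = true;
                        }
                    }
        }
        return groups;
    }

    public static void main(String[] args) {
        Rule r1 = new Rule("R1", Set.of("e1"), Set.of("e2"));
        Rule r2 = new Rule("R2", Set.of("e2"), Set.of());
        // R2 ends up in R1's group because R1 may post e2 (indirect triggering).
        System.out.println(partition(List.of(r1, r2)));
    }
}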


4.5 Extension to an Existing Logic Rule Verification Algorithm

In the previous section, we assume that F in Algorithm 4.2 is an existing

algorithm for detecting inconsistency or redundancy anomalies for logic rules. The

algorithm presented in Wu 1993 [WU93b] is an example of such an algorithm. Wu

[WU93b] detects inconsistencies and redundancies in a set of logic rules by using a

mechanical theorem-proving technique. In the algorithm, a level-saturation resolution is

performed if a certain condition is satisfied. The resolution results are used to determine

the redundancy and inconsistency in a rule set. In the previous subsection, we have









introduced a reduction algorithm to partition rules into groups based on events. In each

group, there are many CA rules. In this subsection, we will show how to tailor Wu 1993

[WU93b] to verify each group's CA rules.

4.5.1 Inconsistency Algorithm

During the course of this project, two additional challenges in the development of

an inconsistency detection algorithm are faced. One is to distinguish two types of

anomalies: contradictory and conflicting rules. The other is the possible use of method

invocations in CA rules, both on the left and the right hand side (LHS and RHS). In

Section 3.3, we have pointed out the importance of distinguishing inconsistency and

contradiction and formally defined them. A separate algorithm for detecting conflicting rules is not needed because, under a condition C, if a rule set is inconsistent but not contradictory, it is considered to be conflicting.

4.5.1.1 A Resolution-based Inconsistency Detection Algorithm

The resolution principle [ROB65] is an inference rule that is suitable for machine-oriented deduction. We assume that the reader has some basic understanding of

automatic theorem-proving [MAN74, CHA73]. For details about the resolution principle

and the linear resolution (including input resolution and unit resolution), the reader

should refer to Chang et al., Ginsberg, and Robinson [CHA73, GIN88, ROB65], and to

Lloyd [LLO87] for Ordered Linear Deduction resolution.

To detect inconsistency, Wu [WU93b] proved a theorem about the relationship

between the inconsistency of a rule set and unsatisfiability in formal logic. The

definition of inconsistency given in Wu 1993 [WU93b] is slightly different compared

with ours because it does not distinguish between conflict and contradiction. A rule set is

inconsistent with respect to some input facts {a} if and only if this set of rules together










with the input facts {a} comprises an unsatisfiable set of clauses. Based on the presented

theorem by Wu 1993 [WU93b], the following Algorithm 4.3 is proposed to detect

inconsistency in a rule set.


Algorithm 4.3 (Original Inconsistency Detection Algorithm):
The input rules are written into clausal form.
A partial detection set (PDS) is a set of clauses which are the deduced intermediate
results, i.e., the resolvents of other parent clauses. The rule verification algorithm for
detecting the inconsistency of a set of rules {ri} is as follows:
1. PDS = ∅.
2. Perform level-saturation resolution to the set of rules {ri} using the
tautology and subsumption deletion strategy until no further resolvent can
be generated; at the same time, record the list of the parent clauses for
each resolvent.
3. If an empty clause is generated, then {ri} is inconsistent by itself, even
without input, stop; otherwise do the next step.
4. Put all the unit clauses whose literals appear in the LHS of some rules in
{ri} into the PDS.
5. Put every multi-literal clause that satisfies the following conditions into
the PDS as well:
i. All literals in the clause appear in the LHS of some rules in {ri};
ii. The clause is not subsumed by any clause kept in the PDS.
6. For each clause in the PDS, construct a "resolution complement formula"
which is a formula in conjunctive form and can resolve to an empty clause
with the clause selected from the PDS. Put all the resolution complement
formulas into a set called "potentially contradiction-causing input set."



Using this algorithm, a partial detection set (PDS) and a potentially contradiction-

causing input set can be obtained. If an empty clause is generated in step 3, the rule set is

inconsistent by itself; otherwise, if the potentially contradiction-causing set is empty, the

rule set is consistent. Each clause in the PDS represents a potential contradiction, which

can be deduced from the set of rules {ri}. The resolution complement formula

corresponding to a clause in the PDS represents the input fact combination that will cause









the contradiction. By looking into the list of parent clauses which derive a clause in the

PDS, we can determine which rules in the {ri} contribute to the contradiction.

The conditions under which the algorithm performs a resolution can be abstracted and justified as follows. To simplify the explanation, we assume that each rule has a single predicate in its LHS and a single predicate in its RHS.

1. The two LHSs are contradictory predicates. That is to say, one and only one of the two conditions is true. Consequently, either RHSr1 or RHSr2 holds; i.e., RHSr1 ∨ RHSr2 is true, which is exactly the result of performing a resolution between r1 and r2.

2. The two RHSs are contradictory predicates. That is to say, one and only one of the RHSs is true. Since each RHS is the logical consequence of its LHS, at least one LHS must be false; i.e., ¬LHSr1 ∨ ¬LHSr2 is true, which is exactly the result of performing a resolution between r1 and r2.

3. One rule's LHS is the same as another rule's RHS. Without loss of generality, consider the case in which the LHS of rule r1 and the RHS of rule r2 are the same. From LHSr2 → RHSr2 and LHSr1 → RHSr1 with RHSr2 = LHSr1, we can derive LHSr2 → RHSr1. Written in clausal form, this is ¬LHSr2 ∨ RHSr1, which is exactly the outcome of performing a resolution.

Example 4.3 is given in Wu 1993 [WU93b] to illustrate the procedure used to

check a rule set composed of propositions for contradiction.

From the potential contradiction-causing input set, we know that when the input

fact A or B is given, some contradiction will be deduced from the set of rules. We can

trace back to determine which rules are involved to deduce the contradiction. The first









clause in the PDS is ¬A, i.e., resolvent (7) in the partial deduction. Tracing back the partial deduction, we know that

(7) ← (1) + (6) ← (1) + (2) + (3)

Example 4.3
(1). A → B
(2). B → C
(3). C → ¬B

1. PDS = ∅
2. Perform level-saturation resolution
   (1). ¬A ∨ B
   (2). ¬B ∨ C
   (3). ¬C ∨ ¬B                                 S0

   (4). ¬A ∨ C        (1) + (2)
   (5). ¬A ∨ ¬C       (1) + (3)
   (6). ¬B            (2) + (3)                 S1

   (7). ¬A            (1) + (6)
   (8). ¬A ∨ ¬B       (2) + (5), subsumed by (7)
   (9). ¬A ∨ ¬B       (3) + (4), subsumed by (7)
   (10). ¬A           (4) + (5), subsumed by (7) S2

   Terminated (here, Si represents the ith level of the resolutions)

3. No empty clause is generated, so go to the next step.
4. PDS = {¬A, ¬B}
5. No multi-literal clause can be put into the PDS; therefore, PDS = {¬A, ¬B}
6. Potentially contradiction-causing input set = {A, B}

where "-" is read as "resolved from". Thus, we see that when fact A is given, a

contradiction can be derived using all three rules in the rule set. Let us verify this result:

when fact A is given, the following facts will be deduced.

A given input fact

B from (1)

C from (2)

B -B from (3)









Therefore, we have deduced the contradiction B and --B.

The second clause in the PDS is ¬B, i.e., resolvent (6) in the partial deduction. Tracing back the partial deduction, we have

(6) ← (2) + (3)

It shows that when the fact B is given, a contradiction can be deduced from only the last two rules of the rule set. This result is verified as follows:

B     given input fact

C     from (2)

¬B    from (3)

Therefore, the contradiction, B and ¬B, has been concluded. In summary, we have proven that the rule set {A→B, B→C, C→¬B} is contradictory with respect to fact A; moreover, its subset {B→C, C→¬B} is contradictory with respect to fact B.
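The pairwise resolution step that drives the level-saturation process can be illustrated with a small, self-contained Java sketch. The string encoding of literals is purely illustrative, and tautology and subsumption deletion are not shown; applied to the clausal forms of Example 4.3, it reproduces the first resolution level S1.

import java.util.*;

class ResolutionSketch {
    // Resolve c1 and c2 on the first complementary literal pair found, or return null.
    static Set<String> resolve(Set<String> c1, Set<String> c2) {
        for (String lit : c1) {
            String comp = lit.startsWith("-") ? lit.substring(1) : "-" + lit;
            if (c2.contains(comp)) {
                Set<String> resolvent = new TreeSet<>(c1);
                resolvent.addAll(c2);
                resolvent.remove(lit);
                resolvent.remove(comp);
                return resolvent;
            }
        }
        return null;
    }

    public static void main(String[] args) {
        // A -> B, B -> C, C -> -B in clausal form (level S0).
        List<Set<String>> s0 = List.of(
                Set.of("-A", "B"), Set.of("-B", "C"), Set.of("-C", "-B"));
        List<Set<String>> s1 = new ArrayList<>();
        for (int i = 0; i < s0.size(); i++)
            for (int j = i + 1; j < s0.size(); j++) {
                Set<String> r = resolve(s0.get(i), s0.get(j));
                if (r != null) s1.add(r);
            }
        System.out.println(s1);   // [[-A, C], [-A, -C], [-B]], i.e., clauses (4)-(6)
    }
}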

4.5.1.2 Inconsistency

Inconsistency includes both conflict and contradiction. To detect an inconsistency, a resolution should be performed for any two predicates that are unsatisfiable together. For example, since "Y = 9" and "Y = 7" are unsatisfiable together, we will perform a resolution for them; whereas we do not perform a resolution for them in contradiction detection, which we will explain later. This can be justified as follows: in contradiction detection, whenever two predicates are contradictory, there is a chance for us to obtain contradictory facts through derivation. Similarly, in inconsistency detection, whenever two predicates are unsatisfiable, there is a possibility for us to obtain conflicting facts through derivation.









A subsumption simplification is performed when two predicates in a clause generated by resolution are on the same attribute and one subsumes the other. For example, we can simplify ¬(Y = 9) ∨ (Y = 7) to ¬(Y = 9) because, for any predicate p, if p and (¬(Y = 9) ∨ (Y = 7)) are unsatisfiable, p will also be unsatisfiable with ¬(Y = 9). The simplification does not modify the condition for performing a resolution.

Example 4.4
(1). X = 3 → Y = 9
(2). Y = 9 → C = 5
(3). C = 5 → Y = 7

1. PDS = ∅
2. Perform level-saturation resolution
   (1). ¬(X = 3) ∨ (Y = 9)
   (2). ¬(Y = 9) ∨ (C = 5)
   (3). ¬(C = 5) ∨ (Y = 7)                        S0

   (4). ¬(X = 3) ∨ (C = 5)     (1) + (2)
   (5). ¬(X = 3) ∨ ¬(C = 5)    (1) + (3)
   (6). ¬(Y = 9) ∨ (Y = 7)     (2) + (3), subsumption simplification: ¬(Y = 9)   S1

   (7). ¬(X = 3)               (1) + (6)
   (8). ¬(Y = 9) ∨ ¬(X = 3)    (2) + (5), subsumed by (7)
   (9). (Y = 7) ∨ ¬(X = 3)     (3) + (4), subsumed by (7)
   (10). ¬(X = 3)              (4) + (5), subsumed by (7)   S2

   Terminated

3. No empty clause is generated, so go to the next step.
4. PDS = {¬(Y = 9), ¬(X = 3)}.
5. No multi-literal clause can be put into the PDS; therefore, PDS = {¬(Y = 9), ¬(X = 3)}.
6. Potentially inconsistency-causing input set = {Y = 9, X = 3}.

Example 4.4 shows a case by applying our extensions to Algorithm 4.3. The

conclusion is that the rule set may be inconsistent with the input set {Y=9} or {X=3},

which can be verified as follows: Given {Y=9}, rule (2) and (3) will deduce {Y=7},

which is inconsistent with the input fact {Y=9}. Given {X=3}, rule (1) will derive









{Y=9}; whereas, rules (2) and (3) will infer {Y=7}, which is inconsistent with the rule (1)

derivation.

4.5.1.3 Contradiction

We will extend the existing inconsistency detection algorithm in two ways to detect contradiction in a rule set as defined in Definition 3.3.6. First of all, we perform a resolution only when two predicates are contradictory. For example, a resolution will be performed for predicates "Y = 9" and "Y != 9," whereas we will not perform a resolution for predicates "Y = 9" and "Y = 7." Recall that we performed a resolution for these predicates in the inconsistency detection. Meanwhile, if one predicate subsumes another predicate, subsumption simplification is not performed, in order to keep every possibility of performing a resolution during the resolution process. Suppose A ∨ B were simplified to A, where A subsumes B; if ¬B became available in a further resolution, we would not perform a resolution because A is not the complement of ¬B. On the other hand, if we keep the clause as A ∨ B, when we see ¬B, we know we should perform a resolution because of B and ¬B. For example, if we have ¬(Y = 9) ∨ (Y = 7), the existing work will simplify it to ¬(Y = 9). In our work, we leave it unmodified because we do not know whether "Y = 9" or "¬(Y = 7)" will be used in a future resolution. As shown in Example 4.5, these rules are not contradictory. However, as we have shown in Example 4.4, these rules are inconsistent when {Y=9} or {X=3} is given. Consequently, they are in conflict when {Y=9} or {X=3} is provided.





































4.5.1.4 Method Invocation Support

In a P → Q logic rule, the LHS (the P part) specifies the condition to be checked against the current object state, whereas the RHS (the Q part) specifies that the values of the attributes involved in Q should be reset to the values specified in Q. That is to say, we need to read the attributes in P and write the attributes in Q. A method may have its own read set and write set; if method invocation is involved, it is possible for both the LHS and the RHS to have read and write operations. With the use of method invocation in a predicate, it is very difficult, if at all possible, to determine whether a predicate is contradictory with another predicate. However, if the read or write operations satisfy certain conditions, we know that the predicates might be contradictory. To deal with this problem, a read set (RS) and a write set (WS) are defined for each rule. We then extend Algorithm 4.3 by adjusting the conditions for performing a resolution based on the intersections of the RSs and WSs of the rules in a rule set.


Example 4.5
(1). X = 3 → Y = 9
(2). Y = 9 → C = 5
(3). C = 5 → Y = 7

1. PDS = ∅
2. Perform level-saturation resolution.
   (1). ¬(X = 3) ∨ (Y = 9)
   (2). ¬(Y = 9) ∨ (C = 5)
   (3). ¬(C = 5) ∨ (Y = 7)                        S0

   (4). ¬(X = 3) ∨ (C = 5)     (1) + (2)
   (5). ¬(Y = 9) ∨ (Y = 7)     (2) + (3)

   (6). ¬(X = 3) ∨ (Y = 7)     (1) + (5)
   (7). (Y = 7) ∨ ¬(X = 3)     (3) + (4), subsumed by (6)

   Terminated

3. No empty clause is generated.
4. No unit clause can be put into the PDS.
5. No multi-literal clause can be put into the PDS; therefore, PDS = ∅.
6. So the rule set is not contradictory.









Definition 4.5.1: The read set (RS) for a predicate a is the set of attributes read by

a, denoted by RS (a). The read set (RS) of rule r, denoted by RS(r), is defined as RS

(r.LHS); i.e., the read set of the LHS of rule r.

Definition 4.5.2: The write set (WS) for a predicate a is the set of attributes written by a, denoted by WS(a). The write set (WS) of rule r, denoted by WS(r), is defined as WS(r.LHS) ∪ WS(r.RHS); i.e., the union of the write sets of the LHS and RHS of rule r.

Based on the above definitions, we can further extend Algorithm 4.3 to verify a rule set that contains rules with methods in their conditions and actions. We can again apply the theorem-proving approach used in Algorithm 4.3 to tackle the problem by altering the condition for performing a resolution. As shown previously, in the contradiction detection a rule verifier performs a resolution when two predicates are contradictory, whereas in the inconsistency detection the verifier carries out a resolution when two predicates are unsatisfiable. In this case, the intersections of the rules' RSs and WSs are used to establish whether or not a resolution should be performed. Specifically, we perform a resolution on rules R1 and R2 when

1. RS(R1) ∩ RS(R2) ≠ ∅, or

2. WS(R1) ∩ WS(R2) ≠ ∅, or

3. RS(R1) ∩ WS(R2) ≠ ∅, or

4. RS(R2) ∩ WS(R1) ≠ ∅.

The given conditions can be justified as follows. For condition (1), if RS(R1) ∩ RS(R2) ≠ ∅, then based on Definition 4.5.1, RS(R1.LHS) ∩ RS(R2.LHS) ≠ ∅. Thus, the LHSs of R1 and R2 may be evaluated based on some common attributes, and it is possible that these two LHSs are contradictory. If this is the case, at least one of the RHSs of these two rules will hold true. Therefore, a resolution should be performed because of this nonempty intersection. The other three cases can be justified in the same manner.
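The four intersection tests can be coded directly. The following is a minimal Java sketch with a hypothetical Rule record holding a rule's RS and WS; it only decides whether a resolution should be attempted, not how the resolution itself is carried out.

import java.util.HashSet;
import java.util.Set;

class RsWsSketch {
    record Rule(String name, Set<String> rs, Set<String> ws) {}

    static boolean intersects(Set<String> a, Set<String> b) {
        Set<String> tmp = new HashSet<>(a);
        tmp.retainAll(b);
        return !tmp.isEmpty();
    }

    // Attempt a resolution if any of conditions (1)-(4) above holds.
    static boolean shouldResolve(Rule r1, Rule r2) {
        return intersects(r1.rs(), r2.rs())      // (1) RS(R1) ∩ RS(R2) ≠ ∅
            || intersects(r1.ws(), r2.ws())      // (2) WS(R1) ∩ WS(R2) ≠ ∅
            || intersects(r1.rs(), r2.ws())      // (3) RS(R1) ∩ WS(R2) ≠ ∅
            || intersects(r2.rs(), r1.ws());     // (4) RS(R2) ∩ WS(R1) ≠ ∅
    }

    public static void main(String[] args) {
        // Hypothetical read/write sets: both rules read attribute A, so condition (1) applies.
        Rule r1 = new Rule("R1", Set.of("A"), Set.of("B"));
        Rule r2 = new Rule("R2", Set.of("A"), Set.of("C"));
        System.out.println(shouldResolve(r1, r2));   // prints true
    }
}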

Example 4.6
Assume the following read set and write set information for the following methods:
For method m1(), RS = {A}, WS = {B}
For method m2(), RS = {C}, WS = {B}
For method m3(), RS = {B}, WS = {E}
Case 1:
Rule R1: m1() > 0 → consequence
Rule R2: A < 5 → consequence
Since RS(R1) ∩ RS(R2) = {A} ≠ ∅, we will perform a resolution because of R1 and R2.
Case 2:
Rule R1: X > 0 → m1();
Rule R2: Y < 0 → m2();
Since WS(R1) ∩ WS(R2) = {B} ≠ ∅, we will perform a resolution because of R1 and R2.
Case 3:
Rule R1: X > 0 → m1();
Rule R2: m3() > 0 → Y = 5;
Since WS(R1) ∩ RS(R2) = {B} ≠ ∅, we will perform a resolution because of them.
Case 4:
Rule R1: A = 5 → X = 9;
Rule R2: Y = 7 → m1();
Although RS(R2.RHS) ∩ RS(R1) = {A} ≠ ∅, we will not perform a resolution
because the read operation in the RHS of a rule will not contribute to any derivation.


Example 4.6 illustrates the above four conditions. In case 1, since RS(R1) ∩ RS(R2) = {A} ≠ ∅, it is possible for both conditions to be true. Naturally, it is also possible for both consequences to be true. In case 2, since WS(R1) ∩ WS(R2) = {B} ≠ ∅, the two consequences might be contradictory, which leads us to say that at least one of the conditions is false. In case 3, WS(R1) ∩ RS(R2) = {B} ≠ ∅, so the consequence of R1 renders the condition of R2 true, making a derivation possible. Thus, we should perform a resolution for these two rules.

The reader may have noticed that the read set of a rule is defined as the read set of its LHS. We intentionally leave out the read set of its RHS because when the RS of the RHS intersects with the RS or WS of another rule, no conclusion can be drawn from it. This is shown in case 4 of Example 4.6. Although RS(R2.RHS) ∩ RS(R1) = {A} ≠ ∅, we only know that the RHS of rule R2 reads A and generates a new result for B, which does not affect the condition of rule R1. No derivation is possible because of these two rules. Hence, no resolution should be performed.

4.5.1.5 Putting Them Together

We call a predicate without methods a primitive predicate. For primitive predicates, an automated rule verifier has enough information to judge the contradiction and unsatisfiability relationships between two predicates. In the inconsistency detection, it checks the unsatisfiability of predicates to decide whether to perform a resolution; whereas, in the contradiction detection, the contradiction of predicates is used. If methods are involved, the verifier cannot achieve such a fine determination. It has to make a decision based on the four identified types of RS and WS intersections of the rules being considered. Given an intersection, the verifier can only say that it is possible for the corresponding rules to produce a contradiction or an inconsistency; it does not know for sure. The best it can do is to report the potential problem in both contradiction and inconsistency detections.

4.5.2 Redundancy Algorithm

Wu [WU93b] also proves that the deductive behavior of a set of rules will not

change if (1) a subsumed rule is removed from the set, or (2) a rule in the set is replaced










by its subsuming rule which is derivable from the set of rules. Case (1) describes the

situation when some redundancy exists in a set of rules: a rule in the set is subsumed by

another rule in the same set; whereas, case (2) says that the redundancy exists when a rule

in the set can be subsumed by a derived rule. Algorithm 4.4 is presented to detect these

two cases.

This algorithm uniformly handles different types of redundancies in the literature.

Wu [WU93b] has several examples to show how "Subsumed rules," "Unnecessary IF

conditions" and other forms of redundancy are handled by this algorithm. The readers

are referred to Wu 1993 [WU93b].


Algorithm 4.4 (Original Redundancy Detection Algorithm):
1. Perform deductions based on the level-saturation resolution on a given set
of rules S.
2. If a clause ri in S is subsumed by another clause rj in S, delete the
subsumed clause ri.
3. If a clause ri in S is subsumed by a clause deduced from a subset S' of S,
and S' includes ri, then replace ri with the deduced clause. If a clause ri in
S is subsumed by a clause deduced from a subset S' of S, and S' does not
include ri, then delete ri (i.e., delete derivable clauses).














CHAPTER 5
SYSTEM IMPLEMENTATION

This chapter presents the design and implementation of a rule verifier using the

methodologies and algorithms presented in the previous chapters. Java is used as the

programming language to increase the system's portability. Web programming

techniques (Applet and Servlets) are used to make the rule warehouse system (RWS)

accessible through the Internet without client software installation and configuration.

This chapter begins with a general description of the system. The implementation

of the non-termination detection algorithm is described in Section 5.2. Section 5.3 describes how to

handle rule partition. The implementations of algorithms for inconsistency and

contradiction detections are presented in Section 5.4. Following that, how to carry out

the redundancy detection is discussed in Section 5.5.


5.1 General Description

Figure 6 shows the component structure of RWS. At the front-end, we have a

rule warehouse interface (RWI), a knowledge definer (KD), and a rule verifier interface (RVI). The rule warehouse manager (RWM), a rule verifier (RV), a metadata manager (MDM), and a

persistent object manager (POM) are at the back end. Logically, RWI, KD, RVI, RWM,

MDM, and POM work together to provide the functions of the RWM specified in Liu

2001 [LIU01]. Please refer to Liu 2001 [LIU01] for detailed explanation of the

components in Figure 6.









RWM is a set of servlets. It accepts requests from RWI, KD, and RVI and

interacts with RV and MDM to serve the requests. The rule verifier verifies the input

rules and returns the verification result to RWM. This will be explained in detail in the

following sections.










Figure 6. Rule Warehouse System Component Diagram



RVI, an interactive user interface, is provided for the user or warehouse manager

to verify the knowledge stored in the warehouse. It is implemented as an Applet (see

Figure 7). A list of authorized schemas is obtained from MDM and listed for selection

after a user logs in. The knowledge specification for the selected schema can be retrieved

from the database through MDM and RWM. It is displayed on the left pane in the

Applet. Four buttons are provided to trigger the detection of non-termination,

inconsistency, contradiction, and redundancy anomalies. The verification results and

some key information about the verification procedures are displayed on the right pane.











The rule verifier (RV) is implemented as a class named Verifier. It verifies the


given rule set and returns the result. It receives two parameters. One is an object of


KnowledgeSpec, which contains the rules to be verified. The other is a flag, which


indicates whether the optimistic or pessimistic approach will be used in the non-


termination detection.


Figure 7. Rule Verification Graphical User Interface


A KnowledgeSpec object consists of the lists of attributes, methods, events,


triggers, event-and-action-oriented rules, and constraints associated with the given


schema. The side effects of methods, the read set, and the write set of event-and-action-


oriented rules and constraints are included as well.


Upon receiving the input KnowledgeSpec object, the Verifier converts all types of


rules into an event-and-action-oriented representation using the pseudo-code shown in










Figure 8. An algorithm is then applied on the set of transformed rules to detect non-

termination conditions. Next, the same set of rules is partitioned based on their

associated events before applying algorithms on each partitioned subset to detect

inconsistencies and redundancies.


Figure 8. Logic Rule Transformation Pseudo-code



A parser is used to parse the condition and action of an event-and-action-oriented

rule, mainly for the unsatisfiability checking of two conditions, which is needed for both non-termination and contradiction/inconsistency detection. Arbitrary satisfiability checking is an NP-complete problem. We detect unsatisfiability of two predicates by

detecting conflicts on the same attribute. For example, predicates "salary > 50K" and

"salary <= 50K" are unsatisfiable. The parser is used to parse rules' conditions and/or


//Purpose: Transform constraints into event-and-action-oriented rules.
//Input: A knowledge specification
//Output: A transformed knowledge specification

for each constraint c in the knowledge specification{
//Find the attributes to work on.
define an attrList al to contain the attributes to work on for c;
put every attribute involved in c into al;

//Introduce the explicit events if needed.
for ( each attribute X in al)
if ( NOT exists afterSetX event)
add event afterSetX into the event set for the object being evaluated;

//Transform the constraints into event-and-action-oriented rules.
Transform c into an event-and-action-oriented rule newC by letting the
LHS as the condition of the new rule and the RHS as the action;

Define a trigger to link the events and the rule newC by setting the
triggering event to be the combination of all the events associated with the
attributes in al, event history to be true, and the rule structure to be a
single rule newC;
}









actions. If two predicates are on the same attribute and each of them is in the form A θ B, where A and B are variables with integer or Boolean values and θ represents =, !=, ==, >, >=, <, <=, -, -=, +, or +=, the parser can determine whether the two predicates can be satisfied together.
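As a rough illustration of the conflict check the parser enables, the following Java sketch reports two comparison predicates on the same attribute as unsatisfiable together when no integer value near their constants satisfies both; the Predicate record and the operator coverage are simplifications for this sketch, not the actual parser.

class UnsatSketch {
    record Predicate(String attribute, String op, int value) {}

    static boolean satisfies(int x, Predicate p) {
        return switch (p.op()) {
            case "==" -> x == p.value();
            case "!=" -> x != p.value();
            case ">"  -> x >  p.value();
            case ">=" -> x >= p.value();
            case "<"  -> x <  p.value();
            case "<=" -> x <= p.value();
            default   -> throw new IllegalArgumentException(p.op());
        };
    }

    // Unsatisfiable together: same attribute, and no integer value near the two
    // constants satisfies both predicates.
    static boolean unsatisfiableTogether(Predicate p, Predicate q) {
        if (!p.attribute().equals(q.attribute())) return false;
        for (int x : new int[]{p.value() - 1, p.value(), p.value() + 1,
                               q.value() - 1, q.value(), q.value() + 1})
            if (satisfies(x, p) && satisfies(x, q)) return false;
        return true;
    }

    public static void main(String[] args) {
        Predicate a = new Predicate("salary", ">",  50000);
        Predicate b = new Predicate("salary", "<=", 50000);
        System.out.println(unsatisfiableTogether(a, b));   // prints true
    }
}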


5.2 Non-Termination Detection

Based on Algorithm 4.1, there are four key steps in the non-termination detection

algorithm:

Form the triggering graph (TG) of a rule set.

Detect the preliminary cycles (PCs) and triggering cycles (TCs) in the TG.

Form the activation graph (AG) and de-activation graph (DG) for the rule set.

Detect the conditions under which the rule set may not terminate.

We shall explain the implementation for these four steps in detail in this section.

5.2.1 Triggering Graph Formation

To form TG, first of all, the condition and action of each rule r are analyzed to

identify all the events, which may be posted by r. The side effects of the methods

invoked by the condition or action of r are considered as well. Since a method may

invoke another method, this process runs recursively until all the methods that may be

invoked are considered.

As we pointed out in Chapter 4, the events to be posted by the condition and

action of a rule cannot be determined statically. Pessimistic or optimistic approach can

be applied. In the pessimistic approach, all the events that may be posted are considered;

whereas, in the optimistic approach, only those guaranteed postings are assumed.

This list of events is compared with the triggering events of the triggers. If one of

them is among the triggering events of a trigger t, rule r may trigger t. Then r is









associated with t. The combination of r and t becomes a node n in the triggering graph.

If there exists a node m, such that the rule associated with m belongs to the rules defined

in the rule structure of t, there is an edge from n to m in TG.

5.2.2 Preliminary Cycles and Triggering Cycles Detection

It is straightforward to detect the preliminary cycles in a TG by using a depth first

search (DFS) algorithm. All the preliminary cycles are identified first. Then they are

checked one by one to identify the triggering cycles in the TG by considering the event

history (EH) specifications of the triggers in each preliminary cycle. The following steps are used to determine whether a given preliminary cycle pc is a triggering cycle.

1. Start from any element in pc; get the list of events posted by the rules in pc.

2. Verify whether the EH requirement of each trigger in pc is satisfied by comparing the EH requirement with the list of events obtained in step 1. If the EH of every trigger in pc is satisfied, then pc is a TC.
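A minimal Java sketch of the DFS-based detection of preliminary cycles is shown below. Nodes are rule/trigger indices over a hypothetical adjacency list; each simple cycle is reported once, rooted at its smallest node; and the event-history check that promotes a preliminary cycle to a triggering cycle is not included.

import java.util.*;

class CycleSketch {
    static List<List<Integer>> findCycles(List<List<Integer>> adj) {
        List<List<Integer>> cycles = new ArrayList<>();
        int n = adj.size();
        for (int start = 0; start < n; start++)
            dfs(start, start, adj, new boolean[n], new ArrayDeque<>(), cycles);
        return cycles;
    }

    // Explore simple paths from 'node' using only nodes >= 'start', so each simple
    // cycle is reported exactly once, rooted at its smallest node.
    static void dfs(int start, int node, List<List<Integer>> adj, boolean[] onPath,
                    Deque<Integer> path, List<List<Integer>> cycles) {
        onPath[node] = true;
        path.addLast(node);
        for (int next : adj.get(node)) {
            if (next == start) cycles.add(new ArrayList<>(path));            // closed a cycle
            else if (next > start && !onPath[next]) dfs(start, next, adj, onPath, path, cycles);
        }
        path.removeLast();
        onPath[node] = false;
    }

    public static void main(String[] args) {
        // R1 triggers R2, R2 triggers R3, R3 triggers R1: one preliminary cycle.
        List<List<Integer>> adj = List.of(List.of(1), List.of(2), List.of(0));
        System.out.println(findCycles(adj));   // prints [[0, 1, 2]]
    }
}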

5.2.3 Activation and Deactivation Graph Formation

If there are triggering cycles in the TG, we need to consider the AG and DG to

determine if the input rule set can terminate. When forming the AG and DG, the

relationship between the action of one rule and the condition of another rule is

considered. Pairwise comparison is performed for each pair of rules, and the

corresponding activation or deactivation relationship is determined and added to the AG

and DG.

Again, depending on which approach (optimistic or pessimistic) is used, the same condition and action can lead to a different conclusion. For example, if the condition of rule r1 is "price > 500" and the action of rule r2 is "price -= 10", in the pessimistic approach we will consider that there is an activation relationship from r2 to r1; whereas, in the optimistic approach, a deactivation relationship from r2 to r1 is assumed because it is possible for r1 to be deactivated by the action of r2.

5.2.4 Non-termination


//Purpose: To check if the deactivated-termination condition (DTC) holds for
//         a triggering cycle.
//Input:   A triggering cycle
//Output:  A flag indicating whether condition 2 of Theorem 4.1 holds or not.

//Assume the rules involved in this cycle are in a Vector ruleList.

boolean flag[];   //flag[i] == true means rule i ends up activated; false means it
                  //stays deactivated after the last (de)activation in the cycle.
Initialize every element in flag[] to true.

//Keep track of the last (de)activation applied to each rule.
for (int i = 0; i < ruleList.size(); i++)
{
    if (there is an edge in AG from the current rule to another rule j)
        flag[j] = true;
    if (there is an edge in DG from the current rule to another rule j)
        flag[j] = false;
}

//The DTC holds if some rule in the cycle remains deactivated.
for (int i = 0; i < ruleList.size(); i++)
{
    if (!flag[i]) return true;
}

return false;
Figure 9. Deactivated-termination Condition Detection Pseudo-code


As stated in Theorem 4.1, if there are no triggering cycles in TG, the rule set will

terminate; otherwise, AG and DG are considered. Condition 2 in Theorem 4.1,

deactivated-termination condition (DTC), considers the effect of AG and DG. The

pseudo-code shown in Figure 9 is used to check if the DTC holds for a triggering cycle.

If the DTC is not satisfied, we will move on to check Condition 3 in Theorem 4.1, the

contradictory-condition-terminate condition (CCTC), which checks the unsatisfiability of









the conditions of the rules in a triggering cycle. Some heuristics are used in our

implementation. Two conditions are considered to be unsatisfiable when they are on the

same attribute, and it is impossible for both of them to be true.


5.3 Partitioning Formation

The rule set can be partitioned using the events to achieve better performance for

inconsistency, contradiction, and redundancy detections. The pseudo-code shown in

Figure 10 illustrates the partition procedure. The input parameter is a knowledge

specification. The return is a list such that each element in the list represents a group of

rules that may be triggered by the same event either directly or indirectly.

First of all, we associate each event e with a list of rules that may be triggered by e. Then we check the condition and action of the rules to see if a rule r belongs to the rule list associated with an event e1 and r posts event e2. The rule list associated with e2 is then combined into the rule list associated with e1. By the end of this process, it is

impossible for an event to trigger two rules in different groups. After getting the

partitioned groups, we can detect inconsistency, contradiction, and redundancy in each

group.




































Figure 10. Rule Partition Pseudo-code




5.4 Inconsistency and Contradiction Detection

Every group of rules obtained in the partitioning process goes through the

inconsistency and contradiction detection processes, which are performed by two

different methods in class Verifier. Since their internal logic is very similar, we only use

inconsistency as an example to explain their implementations.

Before we move to the implementation of the inconsistency verification, let us go

through the data structures that are used to record the level-saturation process. An

ArrayList is used to record the resolution process. Each element in this ArrayList

corresponds to a resolution space, which is all the clauses generated in one level of

resolution. Each resolution space is in turn an ArrayList containing all the clauses in the


//Purpose: To partition the rules into groups, such that rules in each
//         group may be triggered by the same event.
//Input:   A knowledge specification
//Output:  The partitioned groups

For each event e {
    for each trigger t {
        if (e belongs to the triggering events of t) {
            add the rules included in the rule struct specification of t into the
            rule list of e;
        }
    }
}

For each rule r {
    For every event e posted by rule r {
        Combine the rule group associated with e and the rule group that r
        belongs to;
    }
}

Return the groups;









resolution space. Each clause is represented by a list of predicates and the clauses that

generate this clause through resolution.
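A small sketch of this bookkeeping, with illustrative (not actual) class and field names, might look as follows.

import java.util.ArrayList;
import java.util.List;

class Clause {
    final List<String> predicates;      // e.g. ["-(Y = 9)", "(Y = 7)"]
    final List<Clause> parents;         // empty for the clausal form of an input rule

    Clause(List<String> predicates, List<Clause> parents) {
        this.predicates = predicates;
        this.parents = parents;
    }
}

class ResolutionLog {
    // Element i holds all clauses generated at resolution level i;
    // level 0 holds the clausal forms of the input rules.
    final List<List<Clause>> resolutionSpaces = new ArrayList<>();

    void addLevel(List<Clause> clauses) {
        resolutionSpaces.add(clauses);
    }
}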

Figure 11 shows the pseudo-code for inconsistency detection. First, all the

predicates that are in the LHS of some rule are kept in an ArrayList lhsList. They will be

used to determine whether a clause should be kept in the PDS or not. Second, every

input rule is converted into the clausal form. The result for each rule is a clause. These

sets of clauses become the first-level resolution space. Then, the level-saturation

resolution is performed. In each round of resolution, a subsumption checking is first

performed within the new resolution space. This is followed by another subsumption

checking across the resolution spaces. Subsumed clauses are eliminated from the resolution spaces

to improve the verification efficiency. Then each of the clauses in the new resolution

space is compared with the clauses in the other resolution spaces to perform resolutions.

The newly generated clauses are put into the new level resolution space. The resolution

will not stop until there is no new resolution space generated. After the resolution

process, all the remaining clauses are compared against the lhsList. If all the predicates

in a clause are in lhsList, the clause is moved into PDS. The clauses in PDS are rewritten

to get the inconsistency-causing input set.

When methods are involved, the verification procedure is the same. The only

change is that we will apply the conditions presented in Section 4.5.1.4 to decide whether to

perform a resolution between two predicates or not.

As we pointed out in Chapter 4, the contradiction detection is similar to the

inconsistency detection. The only difference lies in the conditions under which to

perform a resolution.














































Figure 11. Inconsistency Detection Pseudo-code




5.5 Redundancy Detection


Redundancy detection is slightly different from inconsistency detection. The

pseudo-code is shown in Figure 12. The resolution process is still the same. The rules to

be refined are identified during the resolution process. The algorithm terminates when

the resolution process finishes. It is not necessary to identify the PDS. The method used

to identify redundancy is a direct implementation of Algorithm 4.4. After converting the

input rules into their clausal forms, they are compared to check for subsumptions. If one


//Purpose: To detect inconsistency in the input rules
//Input:   An object of ArrayList containing the rules partitioned in a group
//Output:  The evaluation result

Record all the predicates that are in some rule's LHS into a list lhsList;

Convert the rules into the clausal form;

While (exists new resolution space) {
    For (every resolvent in an existing resolution space)
        For (every rule in the new resolution space) {
            perform level-saturation resolution;
            record the resolution action into the log;
            put the newly-generated resolvent in a new resolution space;
            if (empty resolvent is generated) return;
        }

    Subsumption detection in the new resolution space;
    Subsumption detection across resolution spaces;
}

For (every resolvent in a resolution space)
    if every predicate in the resolvent appears in lhsList, add it to the PDS;

Generate the inconsistency-causing input set based on the PDS;

Generate the inconsistency-causing process based on the log;

Return the inconsistency-causing input set and the deduction process;








subsumes another, the subsumed one is recorded and removed from the resolution space.

In each level of resolution, the newly generated clauses are compared against the input

rules, which correspond to the clauses in the first level resolution space S. If a clause ri

in S is subsumed by a clause deduced from a subset S' of S, and S' includes ri, then

replace ri with the deduced clause. If a clause ri in S is subsumed by a clause deduced

from a subset S' of S, and S' does not include ri, then delete ri (i.e., delete derivable

clauses).


Figure 12. Redundancy Detection Pseudo-code


//Purpose: To detect redundancy in the input rules
//Input:   An object of ArrayList containing the rules partitioned in a group
//Output:  The refined rule set and the refinement actions

Convert the input rules into clausal form;
//Compare the clauses pair-wise; if one subsumes another, delete the
//subsumed clause; record the resulting clause set in S and the refinement
//actions;

While (exists new resolution space) {
    For (every resolvent in an existing resolution space) {
        Subsumption detection in the new resolution space;
        Subsumption detection across resolution spaces;

        For (every rule in the new resolution space) {
            perform level-saturation resolution;
            record the resolution action into the log;
            put the newly-generated clause in a new resolution space;
            if (empty clause is generated) return the results and quit;
        }
    }

    For (every clause x in the new resolution space) {
        if (x subsumes a clause y in S) {
            If (y is not in the deduction path of x) {
                delete y; record the refinement actions;
            }
            Else {
                replace y with x;
                record the refinement actions;
            }
        }
    }
}

Return the results;














CHAPTER 6
CONCLUSION AND FUTURE WORK

There is little work on the verification of event-and-action-oriented rules. The

goal of this thesis is to provide a solution to detect inconsistency, redundancy and non-

termination anomalies of event-and-action-oriented rules.


6.1 Conclusion

In this thesis, we have formally defined the three types of anomalies. Algorithms

for detecting these types of anomalies have been implemented. Our work is different

from the existing work in the following ways:

Non-termination, inconsistency, contradiction, and redundancy rule anomalies

are defined based on the semantics instead of the syntaxes of these rules.

A uniform verification approach is used to verify both constraints and event-

and-action-oriented rules in a rule warehouse.

We provide a way to abstract the side effects of methods and apply them in

the rule verification.

We take event history into consideration in non-termination anomaly

detection.

A graphical user interface (GUI) is provided to allow the user to track the

verification process.









6.2 Future Work

Research on business rule sharing is still in its infancy. In this work, we have

presented our solutions to some key problems related to business rule sharing and

verification. There are many additional research problems for further investigation. We

list some of them below:

Currently, every time a new rule is imported into the rule warehouse, we

must combine it with the existing rules and verify them together. This is very

inefficient. Incremental verification is an important future research topic for a

rule warehouse that tends to change frequently.

Currently, we only check the event history that is the conjunction of multiple

events. The algorithm can be extended to support more complicated logical

expressions in EVENTHISTORY specifications.

The partition algorithm partitions rules into groups based on their triggering

relationships. The triggering sequences of the rules in a partitioned group are

ignored when we apply the mechanical theorem-proving-based algorithms to

detect inconsistency, contradiction, and redundancy anomalies.

Consequently, the algorithms may report more anomalies than actually exist.

Our algorithms can be refined to derive a more accurate detection conclusion

by taking the event triggering sequence into consideration.

The side effects of methods are abstracted and assumed in this work.

Reverse engineering techniques can be applied to derive such information

from method implementations.















LIST OF REFERENCES


[AIK95] Aiken, A., Hellerstein, J., and Widom, J., "Static Analysis Techniques for
Predicting the Behavior of Database Production Rules," in ACM TODS,
March 1995, pp. 3-41.

[BAR93] Baralis, E., Ceri, S., and Widom, J., "Better Termination Analysis for Active
Databases," in Proceedings of the First International Workshop on Rules in
Database Systems, Edinburgh, Scotland, August 1993, pp. 163-179.

[BAR94] Baralis, E., and Widom, J., "An Algebraic Approach to Rule Analysis in
Expert Database Systems," in Proceedings of the Twentieth International
Conference on Very Large Data Bases, Santiago, Chile, September 1994, pp.
475-486.

[BAR95] Baralis, E., Ceri, S., and Paraboschi, S., "Improved Rule Analysis by Means
of Triggering and Activation Graphs," in Proceedings of the Second
International Workshop on Rules in Database Systems, Athens, Greece,
September 1995, pp. 165-181.

[BAR98] Baralis, E., Ceri, S., and Paraboschi, S., "Compile-Time and Runtime
Analysis of Active Behaviors," IEEE Transactions on Knowledge and Data
Engineering, Vol. 10, No. 3, May/June 1998, pp. 353-370.

[BAR00] Baralis, E., and Widom, J., "Better Static Rule Analysis for Active Database
Systems," http://dbpubs.stanford.edu/pub/2000-17

[CHA73] Chang, C.L.,. and Lee, R.C., Symbolic Logic and Mechanical Theorem
Proving, Academic Press, Inc., New York, 1973.

[CRA87] Cragun, B.J., and Steudel, H.J., "A Decision-table-based Processor for
Checking Completeness and Consistency in Rule-based Expert Systems,"
International Journal of Man-Machine Studies, Vol. 26, 1987, pp. 633-648.

[DAV76] Davis, R., "Applications of Meta-level Knowledge to the Construction,
Maintenance, and Use of Large Knowledge Bases," Ph.D. dissertation,
Dept. of Computer Science, Stanford University, 1976.

[GIN88] Ginsberg, A., "Knowledge-base Reduction: A New Approach to Checking
Knowledge Bases for Inconsistency and Redundancy," 7th National
Conference on Artificial Intelligence (AAAI 88), St, Paul, MN, Vol(2), 1988,
pp. 585-589.









[GON97] Gonzalez, A., and Dankel, D.D., The Engineering of Knowledge-based
Systems Theory and Practice, Prentice Hall, Englewood Cliffs, NJ, 1997.

[GRE96] Grefen, P., and Widom, J., "Integrity Constraint Checking in Federated
Databases," in Proceedings of the First IFCIS International Conference on
Cooperative Information Systems (CoopIS'96), Brussels, Belgium, pp.38-47.

[KAR94] Karadimce, A.P., and Urban, S.D., "Conditional Term Rewriting as a Formal
Basis for Analysis of Active Database Rules," in Fourth International
Workshop on Research Issues in Data Engineering (RIDE-ADS' 94),
Houston, Texas, February 1994, pp. 156-162.

[KIM98] Marriott, K., and Stuckey, P.J., Programming with Constraints: An Introduction,
MIT Press, Cambridge, MA, 1998.

[LAC99] Lacerra, S., Benson, R., and Wong, K., "e-Commerce Services, Business
Services for the New Economy," eBusiness & eCommerce Services Industry
Report, Jefferies & Company, Inc., Wilkommen bei Multex Investor
Deutschland, Fall 1999.

[LEE00] Lee, M., "Event and Rule Services for Achieving a Web-based Knowledge
Network," Ph.D. Dissertation, Department of Computer and Information
Science and Engineering, University of Florida, April 2000.

[LIUOO] Liu, Y., Pluempitiwiriyawej, C., Shi, Y., Lam, H., Su, S. Y. W., and Chan, H.,
A Rule Warehouse System for Knowledge Sharing and Business
Collaboration, Technical Report, UF CISE TR01-006,
http://www.cise.ufl.edu/tech-reports/tech-reports/tr01-abstracts. shtml,
University of Florida, 2001.

[LIU01] Liu, Y., "A Rule Warehouse System for Knowledge Sharing and Business
Collaboration," Ph.D. Dissertation, Department of Computer and Information
Science and Engineering, University of Florida, August 2001.

[LLO87] Lloyd, J.W., Foundations of Logic Programming, Springer-Verlag, New
York, second, extended edition, 1987.

[MAN74] Manna, Z., Mathematical Theory of Computation, Computer Science Series,
McGraw-Hill Inc., New York, 1974.

[MUR97] Murrell, S., and Plant, R.T., "A Survey of Tools for the Validation and
Verification of Knowledge-based Systems: 1985-1995," Decision Support
Systems, vol. 21, 1997, pp. 307-323.

[NGU85] Nguyen, T.A., Perkins, W.A., Laffey, T.J., and Pecora, D., "Checking an
Expert System Knowledge Base for Consistency and Completeness," in
Proceedings of the 9th International Joint Conference on Artificial
Intelligence (IJCAI 85), Los Angeles, Vol. 1, 1985, pp. 375-378.

[NGU87] Nguyen, T.A., Perkins, W.A., Laffey, T.J., and Pecora, D., "Knowledge Base
Verification," AIMagzine, 8(2), summer 1987, pp. 69-75.

[OUS96] Oussalah, C., and Puig, V., "Integrating Constraints in Complex Objects,"
CIKM'96, Rockville, MD, 1996, pp. 189-196.

[PHI00] Phillips, C., and Meeker, M., "The B2B Internet Report: Collaborative
Commerce," Morgan Stanley Dean Witter, April 2000.
http://www.morganstanley.com/techresearch/

[PRE92] Preece, A., Shinghal, R., and Batarekh, A., "Verifying Expert Systems: A
Logical Framework and a Practical Tool," Expert Systems with Applications,
vol. 5, nos. 2-3, 1992, pp. 421-436.

[PRE94] Preece, A., and Shinghal, R., "Foundation and Application of Knowledge
Base Verification," International Journal ofIntelligent Systems, vol. 9, no. 8,
1994, pp. 683-702.

[PRE98] Preece, A., "Building the Right System Right," AAAI-98 Workshop on
Verification and Validation of Knowledge-Based Systems, Technical Report
WS-98-11, AAAI Press, 1998.
http://www.csd.abdn.ac.uk/~apreece/Pubs/banff98.html

[PRE99a] Preece, A., "COVERAGE: Verifying Multiple-Agent Knowledge-Based
Systems," Knowledge-Based Systems, vol. 12, 1999, pp. 37-42.

[PRE99b] Preece, A., Hui, K., Gray, A., Marti, P., Bench-Capon, T., Jones, D., and Cui,
Z., "The KRAFT Architecture for Knowledge Fusion and Transformation," in
M. Bramer, A. Macintosh, and F. Coenen (eds.), Research and Development in
Intelligent Systems XVI (Proceedings of ES99), Springer, New York, 1999, pp. 23-38.

[ROB65] Robinson, J.A., "A Machine-oriented Logic Based on the Resolution
Principle," JACM, vol. 12, no. 1, 1965, pp. 23-41.

[ROS97a] Rosenwald, G.W., and Liu, C.C., "Rule-based System Validation through
Automatic Identification of Equivalence Classes," IEEE Transactions on
Knowledge and Data Engineering, vol. 9, no. 1, Jan./Feb., 1997, pp. 24-31.

[ROS97b] Ross, R., "The Business Rule Book: Classifying, Defining and Modeling
Rules," Second Edition, Business Rule Solutions, LLC, Houston, TX, 1997.

[ROU88] Rousset, M.C., "On the Consistency of Knowledge Bases: The COVADIS
System," Computational Intelligence, vol. 4, 1988, pp. 166-170.

[STA87] Stachowitz, R.A., Combs, J.B., and Chang, C.L., "Validation of Knowledge-
based Systems," in Proc. 2nd AIAA/NASA/USAF Symposium on Automation,
Robotics and Advanced Computing for the National Space Program,
Arlington, VA (Report No. AIAA-87-1685), 1987, pp. 1-10.

[SUW82] Suwa, M., Scott, A.C., and Shortliffe, E.H., "An Approach to Verifying
Completeness and Consistency in a Rule-Based Expert System," AI
Magazine, 3(4), 1982, pp. 16-21.

[WEI95] Weik, T., and Heuer, A., "An Algorithm for the Analysis of Termination of
Large Trigger Sets in an OODBMS," in Proceedings of the International
Workshop on Active and Real-Time Database Systems, Skövde, Sweden, June
1995.

[WID96] Widom, J., and Ceri, S., Active Database Systems: Triggers and Rules for
Advanced Database Processing, Morgan Kaufmann Publishers, Inc., San
Mateo, CA, 1996.

[WU93a] Wu, P., and Su, S., "Rule Validation Based on Logical Deduction," in
Proceedings of the 2nd ACM International Conference on Information and
Knowledge Management, Arlington, VA, 1993, pp. 164-173.

[WU93b] Wu, P., "Rule Validation in Object-oriented Knowledge Base," Ph.D.
Dissertation, Department of Electrical Engineering, University of Florida,
August 1993.

[WU97] Wu, C., and Lee, S., "Knowledge Verification with an Enhanced High-level
Petri-Net Model," IEEE Expert, September/October, 1997, pp. 73-80.

[ZHA94] Zhang, D., and Nguyen, D., "PREPARE: A Tool for Knowledge Base
Verification," IEEE Transactions on Knowledge and Data Engineering, vol. 6,
no. 6, 1994, pp. 983-989.

BIOGRAPHICAL SKETCH

Yuan Shi was born in Yinchuan, China. He received his medical degree from
Shanghai Medical University, Shanghai, in July 1990. He then worked as a clinical
researcher and physician at Huaren Hospital in Shanghai, while developing a strong
interest in computer science.

He joined the University of Florida in spring 1999 and, during 2000-2001, studied
in the Database Systems Research and Development Center of the CISE Department,
where he conducted research under Dr. Stanley Y. W. Su.

He graduates in August 2001 with a Master of Science degree. His research
interests include rule warehouse systems and bioinformatics database systems.



