Title: Event and rule services for achieving a web-based knowledge network
Permanent Link: http://ufdc.ufl.edu/UF00101373/00001
 Material Information
Title: Event and rule services for achieving a web-based knowledge network
Physical Description: Book
Language: English
Creator: Lee, Minsoo
Su, Stanley Y. W.
Lam, Herman
Publisher: Department of Computer and Information Science and Engineering, University of Florida
Place of Publication: Gainesville, Fla.
Publication Date: 2000
Copyright Date: 2000
General Note: UF CISE TR 00-002
 Record Information
Bibliographic ID: UF00101373
Volume ID: VID00001
Source Institution: University of Florida
Holding Location: University of Florida
Rights Management: All rights reserved by the source institution and holding location.




UF CISE TR 00-002



The Internet and the World Wide Web technologies have gained a tremendous

amount of popularity among people and organizations because they provide a

powerful means for people and organizations to share multi-media data, to do

collaborative work, and to perform business transactions. At present, the Internet is

no more than a multi-media data network, which provides tools and services for

people to browse and search for data. It does not provide the facilities for

automatically delivering the relevant information that is useful for decision making

to people or applications. Nor does it provide the means for users to enter and share

their "knowledge" that is useful for making the right decisions. This dissertation

introduces the concept of a Web-based knowledge network which allows users and

organizations not only to publish their multi-media data but also to specify their

knowledge in terms of events, rules, and triggers that are associated with their data

and application systems. Operations on data and application systems may post events

to trigger the processing of rules. The knowledge network is constructed by a number

of replicable software components, which can be installed at various network sites

together with existing Web servers to form the knowledge Web servers. The

knowledge Web servers provide several build-time facilities such as a graphical user

interface that allows data providers to easily publish their knowledge based on the

Active Object Model (AOM), a registration facility that dynamically creates forms

which enable users to selectively subscribe to event notifications and connect them to

remotely executable rules on the provider site, and a tool to manage the knowledge

elements defined by users as providers and/or subscribers. Run-time facilities that

carry out the event filtering, event delivery, and trigger and rule processing are also

included. The knowledge of both data providers and consumers can be captured and applied

to benefit all Internet users in the knowledge network. A prototype knowledge

network with the above build-time and run-time features and facilities has been

implemented. We have used a number of e-commerce applications to demonstrate the

utility of the Web-based knowledge network.


In recent years, Internet use has become widely popular among people and

organizations that are interconnected through the Internet. The great impact of the

Internet can be seen in the changing life styles of many people. The way many people

obtain information has now changed to performing just a few mouse clicks on a computer

which is connected to the Internet. People can communicate quickly and easily with

almost anyone who has access to a computer and the Internet by using e-mail. Many

kinds of businesses are aggressively putting their home pages on the Internet; and buying

and selling things over the Internet has become common. Entertainment is also being

provided through the Internet in the form of games, movies, and music.

The Internet is now a vast sea of information where people surf on its waves in

search of their desired information. The amount of information being provided and the

number of users and businesses being connected on the Internet are constantly increasing.

Approximately 1 million hosts were connected to the Internet in 1993, but now more than

40 million hosts exist on the Internet [NW]. The Internet has made it possible for people

and organizations to easily share all kinds of data. It has also provided a basic

infrastructure to deploy applications built on distributed technologies.

As described above, it is evident that we are experiencing an explosive growth in

the use of the Internet. But, at the same time, we are recognizing a fundamental problem

related to the current Internet technology. This problem is preventing further growth and

the development of new applications on the Internet. The problem with the current

Internet is that it is merely composed of a network of data. There is an abundance of data

stored all over the Internet, but no form of knowledge exists on the Internet to help in the

extraction of meaningful data, timely delivery of data, activation of the correct

application systems to process the data, or the timely notification of event occurrences.

Here, we define "data" as all facts that are recorded in the Internet and can potentially be

used to aid human decision-making, "information" as data of value used in decision-

making, and "knowledge" as data, events, rules and triggers useful for making decisions,

not just any decision but the right decisions which produce correct results.

As we look into this problem, we point out several specific limitations of the

Internet technology resulting from this problem. Then we describe our knowledge

network framework as a solution to the problem and explain how our proposed

knowledge network framework alleviates each of these specific limitations.

The current Internet can be characterized as a network of data. Data are stored all

over the Internet with a physical connection provided by networks. The current

architectural framework of this data network is composed of Web servers, browsers,

HTML/script language, server side applications, and the HTTP protocol. Web servers

contain data in the form of HTML or scripts, which focus only on the display format of

data rather than the semantics of the data. Web servers can provide data residing on the

servers through server side applications that are developed by programmers. Web servers

are deployed worldwide and interconnected through physical networks. Browsers act as

clients to the Web servers and access data via the HTTP protocol. Data is requested from

a client site and provided by the web server on the remote site via the client-server

paradigm. This architectural framework has been successful so far in terms of enabling

people with access to the Internet to share data. The current limitations resulting from this

architecture are the following:

First, most of the data transfers on the Internet are based on a pull model. This is

mainly due to the client-server paradigm of computing that is widely adopted over the

Internet. In the pull model, a client system will perform the pull by initiating a request to

a server, and the server will respond to the request by transferring the needed data to the

client system. Servers on the Internet are initially isolated from each other and only

respond to requests that are given to them. The pull model of data transfer requires a

system that is doing the pull to know what to pull and when to pull. Therefore, systems

that want to collaborate and receive data from other systems need to have precise

knowledge about what interfaces are provided to them by the other systems and may also

need to periodically poll those systems to see if new data is available. The

way people do the browsing on the Internet is also based on the same pull model. The

pull model can currently work only with one-to-one interaction. This pull model is highly

inefficient because of the one-to-one interaction limitation and may waste processing

power and bandwidth from polling, especially when performing collaboration among

multiple systems. This mode of interaction cannot scale up to the millions of servers

being deployed on the Internet.
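The scalability problem described above can be made concrete with a back-of-the-envelope message count. The sketch below is purely illustrative; the class name and the numbers are hypothetical, not taken from the dissertation.

```java
// Hypothetical sketch: message cost of periodic polling versus push delivery.
public class PollVsPush {
    // Polling: every subscriber asks every provider once per interval,
    // whether or not new data exists.
    static long pollingMessages(long subscribers, long providers, long intervals) {
        return subscribers * providers * intervals;
    }
    // Push: a provider sends one notification per actual event
    // to each interested subscriber.
    static long pushMessages(long subscribers, long events) {
        return subscribers * events;
    }
    public static void main(String[] args) {
        // 1,000 subscribers polling 100 providers once a minute for a day,
        // versus providers pushing 10 real events that day.
        System.out.println("polling: " + pollingMessages(1000, 100, 1440));
        System.out.println("push:    " + pushMessages(1000, 10));
    }
}
```

Even with modest numbers, the polling cost grows with the polling frequency regardless of how much data actually changed, while the push cost grows only with the number of real events.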

Second, the current Internet is built upon stateless technologies. Stateless

technologies do not store any information about the state of the system, and thus cannot

remember anything about previous states of the system. HTTP [Fil97] is a stateless

protocol which does not remember anything from previous connections to even the same

site. Therefore, every client request is independent of every other request. This requires redundant

information to be passed to the web server every time a request is made. This makes HTTP

highly inefficient for continued interactions with a web server. On top of this stateless

protocol, home pages are being designed with the same approach, where each access to

the data on the Web server is independent and not very much is remembered about the

surfer. The surfer knows what he is interested in when he visits a home page, and it is

most likely that he would want the same or similar data when he visits the home page

again. There should be a way for the surfer to explicitly request and also specify what he

is interested in and enable the Web servers to capture this knowledge. This knowledge is

sensitive information of the surfer and thus must be carefully stored on the Web servers.

Some Web servers dealing with a huge amount of data tend to let surfers provide some

information about their interest and keep them as user profiles in order to change the

display or contents of the home page when visited again. A similar function is provided

by a file called cookie that is stored on the surfer's computer. Cookies store the

connection information about clients. But all of these solutions for storing the interests of

surfers are proprietary solutions developed individually without relying on any kind of

general framework, thus resulting in a lot of programming effort that is neither reusable

nor able to be integrated with other sites on the Internet. Storing this kind of knowledge about

the clients within the Web server will enable the Web server to automatically identify

those clients that are interested in the data when new data is available on the Internet.

Moreover, if the Web server is also equipped with an appropriate communication

infrastructure, it can notify the client about the new data. Therefore, a general framework

that can store this kind of knowledge in the Web server is needed.
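As a rough illustration of what such a general framework might store, the following sketch keeps subscriber interests on the server side, so the server itself can identify which clients to notify when new data arrives. The class name and API are hypothetical, not part of the dissertation's design.

```java
import java.util.*;

// Hypothetical sketch of server-side interest profiles: instead of a
// proprietary cookie scheme, the Web server keeps each subscriber's
// declared interests and can look up who to notify when new data appears.
public class ProfileStore {
    private final Map<String, Set<String>> interests = new HashMap<>();

    public void register(String subscriber, String topic) {
        interests.computeIfAbsent(subscriber, k -> new HashSet<>()).add(topic);
    }

    // When new data tagged with a topic appears, find interested subscribers.
    public List<String> interestedIn(String topic) {
        List<String> out = new ArrayList<>();
        for (Map.Entry<String, Set<String>> e : interests.entrySet())
            if (e.getValue().contains(topic)) out.add(e.getKey());
        Collections.sort(out);
        return out;
    }
}
```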

Third, the format of the data published and transmitted over the current Internet is

only suitable for the human eye. The information on the Internet is currently provided

with a focus on how it should be displayed. This means that humans are needed to

interact with the web servers to retrieve information. This severely limits the possible

applications that could be developed. If machine processing of data on the Internet were

possible, a variety of applications could be developed. By enabling machine processing

of the data, knowledge could be easily built into applications for carrying out some tasks

instead of requiring human intervention. Examples of such applications are search engines and

intelligent agents that can carry knowledge about the tasks with them and travel over the

Internet and communicate with each other by exchanging data formats appropriate for

machine processing. Also, collaboration among applications on several servers can be

more easily supported by making the exchange format suitable for machine processing.
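A hypothetical event payload illustrates the difference between display-oriented and machine-processable formats: the tags below convey what each value means, so a program can act on the data without human interpretation. The tag names and values are invented for illustration.

```xml
<!-- Hypothetical event payload: the tags convey semantics
     (what the values mean), not display formatting. -->
<event name="PriceChange">
  <product>Laptop XX-100</product>
  <oldPrice currency="USD">1499</oldPrice>
  <newPrice currency="USD">1299</newPrice>
  <timestamp>2000-03-15T10:30:00</timestamp>
</event>
```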

Fourth, time-critical notification and processing is impossible on the current

Internet. There is no framework for a person (or system) to be notified about new or

updated data available on a Web server, and also no method for automatically performing

several operations as a response to the new data. Currently, a person can be informed of

newly available data on a Web server in the form of e-mail. However, immediately

performing an action as a response to the e-mail is not always possible. It would be

impossible for a person or group of people to manually react to the huge amount of data

being generated on the Internet. Therefore, a mechanism is needed not only to

automatically notify a person (or system) of newly available data but also to immediately

react to this notification and intelligently perform certain operations without human intervention.


Fifth, embedding executables into a Web server is currently done by low-level

program codes, which is not what a typical Web surfer can do. In most cases, developing

the executables requires expertise. Typical Web surfers now want to do something with

the data that they obtain, but cannot easily specify nor install their knowledge into a

system that can automatically perform operations using the obtained data. This is due to

the lack of a high-level language and framework that can be easily used by anyone to

support this functionality. As the use of the Internet is rapidly spreading through people

with limited technical expertise, a way to easily specify and install their decision-

making and action-performing procedures as knowledge in a high-level fashion is

increasingly in demand.

The specific limitations stated above result from the problem that the current

Internet architecture is a data network. We propose a novel framework, a knowledge

network architecture, to solve the problem. The knowledge network allows both

consumers and providers of data to express their knowledge in forms of events, rules, and

triggers that are associated with data and data processing. The contributed knowledge can

be incorporated into the current data network. Events are things of interest (e.g., data

states, software system operations, signals from external devices) that occur in the

knowledge network. The occurrence of an event will cause the notification of users or

software systems which have registered for the event. An event can carry data over the

Internet to its subscriber. Rules represent a granule of control and logic using a high-level

language. Each rule specifies some condition that needs to be evaluated in order to

determine whether or not to execute a structure of operations or an alternative structure of

operations. Triggers are specifications that relate events with rules or rule structures,

making it possible to fire rules upon the occurrences of events. The trigger specifications

provide a very flexible way of linking events with potentially complex structures of rules

to capture semantically rich and useful knowledge.
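The event-trigger-rule relationship described above can be sketched in Java, the implementation language of the prototype. The names and structures below are hypothetical simplifications for illustration, not the dissertation's actual model: a trigger maps an event name to rules, and each rule evaluates a condition to choose between an action and an alternative action.

```java
import java.util.*;
import java.util.function.*;

// Hypothetical sketch of the event-trigger-rule idea: a trigger links an
// event name to rules; each rule has a condition and alternative actions.
public class EtrSketch {
    static class Rule {
        final Predicate<Map<String, Object>> condition;
        final Function<Map<String, Object>, String> action, altAction;
        Rule(Predicate<Map<String, Object>> c,
             Function<Map<String, Object>, String> a,
             Function<Map<String, Object>, String> alt) {
            condition = c; action = a; altAction = alt;
        }
        // Evaluate the condition on the event's data, then run one
        // of the two operation structures.
        String fire(Map<String, Object> eventData) {
            return condition.test(eventData) ? action.apply(eventData)
                                             : altAction.apply(eventData);
        }
    }

    // Triggers: event name -> rules to fire when that event is posted.
    final Map<String, List<Rule>> triggers = new HashMap<>();

    // Posting an event fires every rule linked to it by a trigger.
    List<String> post(String eventName, Map<String, Object> data) {
        List<String> results = new ArrayList<>();
        for (Rule r : triggers.getOrDefault(eventName, List.of()))
            results.add(r.fire(data));
        return results;
    }
}
```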

The knowledge network is composed of several key concepts: publishing events

and rules, event filters, push-based event delivery, knowledge profile, and processing of

triggers and rules. These key features are incorporated into an extension of the current

Web server, namely, the knowledge Web server. Information providers can use the

publishing mechanism to put events and rules on their Web pages, which allows Web

surfers to register themselves as subscribers to certain event notifications and also

connect the event notifications to those rules that are published. Event filters are used to

support the personalized subscription of the events, where meaningless event instances

will be filtered out and only the specific subset of event instances of interest will be

notified. In the knowledge network, events are delivered via a push-based mechanism to

subscribers' knowledge Web servers to provide a more active and scalable mode of

communication among the web servers. Also, the providers and subscribers of

information can specify, store, and manage their knowledge (i.e., events, triggers, and

rules) in knowledge profiles. Triggers and rules are executed within the knowledge Web

servers to validate complex relationships among events, perform scheduling

among rules, and finally execute various operations via the rules.
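The event-filter concept can be sketched as a per-subscriber set of attribute constraints: an event instance is delivered only if it satisfies them, so irrelevant instances never leave the provider's knowledge Web server. The class name and API below are hypothetical illustrations, assuming equality constraints only.

```java
import java.util.*;

// Hypothetical event-filter sketch: each subscriber states per-attribute
// constraints; an event instance is delivered only if it passes the filter.
public class EventFilter {
    private final Map<String, Object> equalsConstraints = new HashMap<>();

    // Add a constraint: the event attribute must equal the given value.
    public EventFilter require(String attribute, Object value) {
        equalsConstraints.put(attribute, value);
        return this;
    }

    // An event instance passes only if every constraint is satisfied.
    public boolean accepts(Map<String, Object> eventInstance) {
        for (Map.Entry<String, Object> c : equalsConstraints.entrySet())
            if (!c.getValue().equals(eventInstance.get(c.getKey())))
                return false;
        return true;
    }
}
```

A real system would support richer predicates (ranges, disjunctions), but the principle is the same: filtering happens at the provider, before any bandwidth is spent.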

The proposed knowledge network remedies the previously identified limitations

of the current Internet technology. The following describes the advantages of a

knowledge network in contrast to a data network. First, a knowledge network employs

the push technology for information dissemination instead of the inefficient pull model of

interactions among current web servers. However, pulling information is still permitted.

The push approach supports scalability by allowing information providers to efficiently

serve a large number of subscribers over the Internet. It also enables a large number of

web servers to collaborate. Second, knowledge Web servers store detailed information

about the subscriber's interest in the form of event filters. This saves bandwidth by not

clogging up the network with irrelevant event notifications, and makes it possible to push

events and their associated data to the subscribers only if they are relevant to them.

Third, knowledge specifications are represented in XML [Bray98] format, which includes

the semantics of the information rather than just the display formats. Knowledge

specifications can be interchanged among knowledge Web servers, and machine

processing of data carried by the event becomes possible. Fourth, timely and automatic

reaction to events is possible because events are linked to rules by triggers; and rules are

automatically executed when an event of interest occurs. Fifth, rules are specified in a

high-level language, which makes it easy for a typical web surfer or an information

provider to add his/her knowledge into the knowledge Web server.

A knowledge network can be used for a variety of applications and also provide

the essential framework for future applications based on the Internet. Some examples are:

virtual enterprises, intelligent supply-chain management systems, intelligent agents used

in e-commerce, military command/control systems, web-based workflow systems,

cooperative information systems, replication servers, web data integration systems,

intelligent information dissemination systems. It is possible to support these

collaborative and distributed applications over the Internet by designing and

implementing the knowledge network.

The main contributions of this work can be summarized as follows. First, an

event, trigger, and rule model and language are designed to provide a high-level specification

of event notification, parallel rule execution, and control/logic representation for Internet

applications. Second, a graphical user interface (GUI) editor and code generator are

implemented to assist the application developer in inputting and editing high-level events,

triggers, and rules and eventually generating the low-level code required for execution.

Third, event filtering and push-based event delivery mechanisms are provided to enhance

the performance of event notification. These mechanisms address future trends of Internet

technology because bandwidth usage and scalability are important. Fourth, an Event-

Trigger-Rule (ETR) server, which can automatically execute provider side or subscriber

side rules upon receiving an event and can also perform scheduling of various rule

execution sequences--including parallel execution sequences--is implemented. The

replicable ETR server eliminates human intervention in processing massive amounts of

data, and provides scalability in processing a large number of rules. Fifth, mechanisms to

support dynamic changes of events, triggers, and rules are developed. These mechanisms

help the management of events, triggers, and rules on the Internet where they are subject

to frequent changes. The mechanisms allow the changes to be done without bringing

down the system or interfering with the tasks being carried out at the time of change. The

GUI editor and code generator were improved to support this capability, and a dynamic

class loader was additionally implemented. Sixth, a specification and efficient processing

mechanism for event history is developed in order to allow complex relationships among

distributed servers to be modeled and efficiently evaluated during trigger processing.

Lastly, a platform-independent and integrated component developed with Java, which can

be added into any standard web server, is implemented in order to support the rapidly

emerging collaborative applications that interconnect web servers via events and rules.
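The dynamic-change mechanism mentioned above relies on loading rule code by name at run time, so rules can be replaced without restarting the server. The sketch below shows the underlying Java mechanism only; for a self-contained demo it loads a JDK class, whereas a real system would point a class loader at newly generated rule classes.

```java
// Hypothetical sketch of dynamic rule (re)loading: rule logic is compiled
// to a class and resolved by name at run time, so rules can change without
// bringing the server down.
public class DynamicRuleLoader {
    public static Object loadRule(String className) throws Exception {
        Class<?> ruleClass = Class.forName(className);           // resolve at run time
        return ruleClass.getDeclaredConstructor().newInstance(); // fresh instance
    }
    public static void main(String[] args) throws Exception {
        // Demo: load any class by its name; a rule engine would load
        // a freshly generated rule class here instead.
        Object rule = loadRule("java.util.ArrayList");
        System.out.println(rule.getClass().getName());
    }
}
```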

The organization of the remainder of this dissertation is as follows. In Chapter 2,

some related research on events and rules on the Internet is surveyed. In Chapter 3, the

basic concept of the knowledge network is explained to provide an overview of the

framework. In Chapter 4, an Active Object Model (AOM), which is the basis for

providing event and rule service on the Internet, is presented. Chapter 5 describes the

overall architecture of the integrated component that is to be added into the web servers.

Chapter 6 discusses the design of the system related to the knowledge network

construction (i.e., defining events, triggers, and rules), the processing of those events,

triggers, and rules, and their management. Chapter 7 gives the implementation details about the

component modules. In Chapter 8, example scenarios that demonstrate the usefulness of

the knowledge network are provided. Finally, Chapter 9 gives a summary of the work

with suggestions for future work.


Our work on the design and development of a knowledge network involves

several emerging fields of research and technology. The key technologies that have

motivated us to pursue this research are the publishing and sharing of data on the Internet,

notification services via the Internet, and the rule technology.

2.1. Publishing on the Internet

The advance in Internet technology has been enormous in a short period of time.

The Internet now connects millions of Web servers world-wide. Anyone who has access

to the Internet can easily look for whatever they need on the Internet. The concept of

client-server computing is now widely accepted, even to computer novices, in the form of

browsers and Web servers. The Internet is indeed changing the life style of people and

the way companies perform their business globally.

The biggest reason that the Internet has become so popular is its capability to

provide anybody in the world having access to the Internet with virtually any information

they need. The amount of data on the Internet is tremendous and is still rapidly

increasing. The technology to allow people and companies to publish this kind of data

started out as HTML (HyperText Markup Language) [HTML]. The HTML language

allows a person to display text, pictures, animation, and even embed sound files into a

home page in any way he or she designs it. However, HTML is a static display format,

which cannot perform interactive or sophisticated operations the way even a simple program can.

Therefore, CGI [CGI], JavaScript [JavaS], applets [Applet], and servlets [Serv] have been


introduced to provide additional capabilities to HTML. They give the information

provider a means not only to display his/her data to the users but also to interact with the

surfers on the Internet. The information providers in this case need to develop codes or

scripts that can be embedded into the home page or Web server. Currently, the Java

language has gained quite a lot of interest in the distributed computing area, and its

platform independence feature makes the concept of applets a very powerful alternative

to developing downloadable client side programs. Applets, together with servlets, which

are server side programs, can form a very flexible and powerful client-server application.

All of these technologies that are related to publishing data and developing

programs on the Internet have proven to be very useful and successful thus far. The main

problem that we now face is that these technologies are based on only human

interactions, such as displaying data on a screen or pushing a button to initiate an

operation. This is because HTML can specify how to display data, but it does not give

any information about what is being displayed. Therefore, a machine (or program)

reading the HTML file cannot find out what the content of the home page is. This creates

a major obstacle to applications such as search engines, and data extraction/gathering

utilities. To this end, XML (Extensible Markup Language) [Bray98] has been proposed

and is currently one of the hottest subjects in research. XML allows users to create their

own DTD (Document Type Definition), which is a template that contains a set of tags

defined by the user. An XML document uses these tags to wrap specific parts of the

documents. These tags specify the semantics of the data that is contained within each part

of a document. By knowing the semantics of the tags, the XML document can be

processed by a machine (or program), which can easily extract information from the

document. As XML only specifies the content of the document and not the display

format, another document which specifies the rules for displaying each tagged part of the

XML document is needed, namely, XSL (Extensible Stylesheet Language) [Oasis].
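A small invented example makes the DTD mechanism concrete: the internal subset below defines a user's own tags, and the document that follows conforms to it. The element names and values are hypothetical.

```xml
<!-- Hypothetical DTD: a user-defined set of tags for product data -->
<!DOCTYPE product [
  <!ELEMENT product (name, price)>
  <!ELEMENT name  (#PCDATA)>
  <!ELEMENT price (#PCDATA)>
  <!ATTLIST price currency CDATA "USD">
]>
<!-- A conforming document: the tags describe what the data means,
     leaving display rules to a separate XSL stylesheet -->
<product>
  <name>Laptop XX-100</name>
  <price currency="USD">1299</price>
</product>
```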

XML relies on DOM (Document Object Model) [DOM99], which models a

document as a tree of objects. These objects altogether form a semi-structured document,

which can be easily parsed and also navigated. By giving the document a structure, the

browser can easily identify specific parts of the document and handle simple user

interactions with the document without going all the way to the Web server just to

perform a simple interaction. This is the concept of DHTML (Dynamic HTML)

[DHTML]. DHTML makes a home page more active, and most of the user interactions

can be handled on the client site rather than the server site.
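Because DOM models the document as a tree of objects, a program can navigate directly to a named part instead of treating the markup as opaque display text. The following sketch uses the standard Java XML parsing API (JAXP); the class and method names in the sketch itself are hypothetical.

```java
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import java.io.ByteArrayInputStream;

// Hypothetical sketch: parse an XML string into a DOM tree and
// extract the text content of the first element with a given tag.
public class DomSketch {
    public static String extract(String xml, String tag) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes("UTF-8")));
        return doc.getElementsByTagName(tag).item(0).getTextContent();
    }
    public static void main(String[] args) throws Exception {
        String xml = "<product><name>Laptop</name><price>1299</price></product>";
        System.out.println(extract(xml, "price"));
    }
}
```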

RDF (Resource Description Framework) [RDF98] is an effort to use meta-data to

describe data on the Web. The term meta-data in this case means data about data, such as

a library catalog, which is meta-data for the books in the library. RDF is basically a

framework to describe resources. Anything that has a URI (Universal Resource

Identifier) can be a resource, and the resources are described by a set of property types

and values, where a property type may be "Author" and the value may be "John." The

format used is similar to XML, making it possible to be processed by a machine (or

program). It supports interoperability by allowing applications to describe and

interchange machine-understandable information on the Web.
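The "Author"/"John" example above might look as follows in RDF's XML syntax. The resource URI and property values are invented; the `dc:creator` property type is borrowed from the Dublin Core vocabulary for illustration.

```xml
<!-- Hypothetical RDF description: the resource identified by the URI
     is described by property types and values -->
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:dc="http://purl.org/dc/elements/1.1/">
  <rdf:Description rdf:about="http://example.org/reports/tr-00-002">
    <dc:creator>John</dc:creator>
    <dc:title>A Sample Report</dc:title>
  </rdf:Description>
</rdf:RDF>
```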

The past and current research in the Internet area has mostly focused on

publishing data. As discussed above, publishing data for displaying to the human eye

started the Internet revolution, and then came the interactive scripts and programs to

make the data more alive. The next issue was to make published data more machine-

understandable. In our work, we would like to introduce another dimension to publishing

on the Web: publishing knowledge. In this work, knowledge is represented by events

and rules specified by Internet users and embedded into the network system. Knowledge

enhances the active capability of the Internet. Knowledge is not just data sitting on the

system, but data that can be timely shared through events and associated rules to create a

very active system. Therefore, the publishing of knowledge not only involves displaying

the events and rules (either in a human-readable format or a machine-understandable

format), but also provides a mechanism to subscribe to events, deliver events, and process the associated triggers and rules.


2.2. Notification on the Internet

The basic communication paradigm on the Internet so far has been based on the

pull model of interaction. Browsers pull data from a Web server at the request of a user.

The pull model of interaction is basically the client-server paradigm, which is very simple

to implement. Although the pull model of interaction is currently prevalent on the Web,

there are serious limitations. The pull model is a passive approach to obtaining data. No

data other than the data requested at the time of the request will be provided. Therefore,

in the Internet environment where an enormous amount of data exists, there is a

limitation on the amount of data that a user can access and process because every

access must be initiated by the user.

To remedy this problem, a data delivery model based on the push model began to

gain interest in the research community. The push model of interaction allows subscribers

to specify their interest in certain data, and these preferences are kept on the server. The

server will then push the data of interest to the subscribers based on their preferences.

This model of interaction makes it possible for subscribers to receive data in a timely

fashion without additional effort. This approach has an additional advantage of scalability

when the same data needs to be disseminated to a large number of subscribers, because it

can avoid the point-to-point request/response connection overhead.
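The publish/subscribe pattern behind the push model can be sketched as follows: subscribers register interest once, and each publish fans out to every registered callback, with no per-subscriber request/response round trip. The class name and API are hypothetical.

```java
import java.util.*;
import java.util.function.Consumer;

// Hypothetical publish/subscribe sketch: subscribers register interest once;
// the provider then pushes each new item to all interested subscribers.
public class PushChannel {
    private final Map<String, List<Consumer<String>>> subscribers = new HashMap<>();

    public void subscribe(String topic, Consumer<String> callback) {
        subscribers.computeIfAbsent(topic, k -> new ArrayList<>()).add(callback);
    }

    // One publish fans out to every subscriber of the topic.
    public int publish(String topic, String data) {
        List<Consumer<String>> subs = subscribers.getOrDefault(topic, List.of());
        for (Consumer<String> s : subs) s.accept(data);
        return subs.size(); // number of deliveries
    }
}
```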

Early work on push-based systems includes Teletext [Amma85, Wong85],

Datacycle at Bellcore [Herm87, Bowe92], and Boston Community Information System

(BCIS) at MIT [Giff90]. These early efforts focused on information broadcasting. The

Teletext system provided results on one-way, two-way, and hybrid broadcasts. The

Datacycle project used a repetitive broadcast medium. BCIS broadcast information

over FM channels.

The first widespread push-based system was the Pointcast system [Point], which

created a huge worldwide interest in the push technology in 1996. Although Pointcast

looks as if it is performing its communications using the push model, the actual

implementation of how it works is based on the pull model. A large number of Pointcast

data centers, which are servers that have updated data, are geographically dispersed

around the U.S. Each Pointcast client running on a subscriber's

machine actually polls the data centers periodically to download the data that the

subscriber is interested in.

Marimba's Castanet [Marim99], Netscape's Netcaster [Nets99], and BackWeb's

Polite Agent [Back99] are also push-technology-based products. These products

work similarly to Pointcast and therefore have their own techniques to reduce the

overhead of downloading by making use of users' idle time. Also, a standard for

the push technology called Channel Definition Format (CDF) [CDF97] has been

developed by Microsoft and submitted to the W3 Consortium. It allows publishers to

specify channels, the contents, and the update schedule for pushing the data.

Event notifications are also a form of the push-based technology. The concept of

events is now commonly accepted in many areas. An event represents, in a high-level

format, something that happens at some point in time. The difference between the data push

described above and the event push is that the event push systems are actually

implemented in accordance with the push model. Also, event data are usually much

smaller than huge chunks of data.

Several efforts related to designing a protocol to support event notification on the

Internet have recently gained a considerable amount of interest. The Basic Lightweight

Information Protocol (BLIP) [BLIP98] provides real-time, reliable, transactional,

message queuing services based on the publish/subscribe model of communication. It

can be used for both notification services and delivery of MIME content. Microsoft's General

Event Notification Architecture Base (GENA) [Coh98] defines a notification architecture

that transmits notifications between HTTP (HyperText Transfer Protocol) resources.

Products that support event notification services have been introduced, each with its own proprietary solution. The Keryx Notification Service [Keryx] by KeryxSoft, a

group in Hewlett Packard Laboratories, Bristol, provides a language and platform

independent infrastructure implemented in Java to distribute notifications on the Internet.

Notifications are structured pieces of information describing the events. The target

applications are distributed agents, workflow, World-Wide-Web (WWW) site

management, personal communication services, and distributed virtual environments.

Vitria's Businessware Communicator [Vitria] allows applications to publish business

events to multiple information channels, while other applications can subscribe to the

business events of interest. Multiple Quality-of-Service (QOS) levels and security

protocols are also supported. WebLogic Events [WebL] allows any WebLogic application on the network to register interest in an event and to install action code that is to be executed when the event occurs. WebLogic applications generate event messages, which the WebLogic Events server receives; if an application has registered for the event, the server executes the installed action. The user-coded actions can be things such as sending an e-mail, paging someone, or updating a database.
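The register-and-act pattern just described can be sketched in a few lines of Python. This is an illustrative simplification, not WebLogic's actual API; the class, method, and event names are all hypothetical.

```python
from collections import defaultdict

class EventServer:
    """Minimal sketch of a WebLogic-Events-style server: applications
    register interest in a named event and install an action callback
    to run when the event occurs (hypothetical API)."""

    def __init__(self):
        self._actions = defaultdict(list)   # event name -> installed actions

    def register(self, event_name, action):
        """Register interest in an event and install an action callback."""
        self._actions[event_name].append(action)

    def post(self, event_name, message):
        """Receive an event message and run every installed action."""
        return [action(message) for action in self._actions[event_name]]

# Usage: install an "action" that would, e.g., send an e-mail.
server = EventServer()
server.register("stock.drop", lambda msg: f"e-mail sent: {msg}")
print(server.post("stock.drop", "ACME below $10"))
```

The action is arbitrary user code, which is exactly why the text later argues for high-level rule specifications instead of raw program code.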

Because the Internet is an open community that connects people and companies all over the world, it is desirable to work with standards rather than with proprietary solutions. Future event notification systems may be built upon communication infrastructures that are more compliant with standards. Some of the following infrastructures strongly encourage the use of XML and HTTP, which may provide a good foundation for future event notification systems.

WebBroker [WebB98] developed by DataChannel is a distributed object

communication framework extended to the web, and it adopts some of the features of

OMG's CORBA and Microsoft's COM+. It uses HTTP as the transport protocol, XML as

the syntax for specifying object interfaces and message formats, and URIs as addresses

for software objects. It is implemented as a servlet that can be embedded into Web

servers. The interface-based communication paradigm makes communications

transparent by having a client side proxy and a server side skeleton, where the proxy and

skeleton are similar to stubs in CORBA. The main advantage of WebBroker is that it has

blended the distributed object communication framework into Web standards.

webMethods' B2B [WebM] is another product that uses XML and HTTP as a solution to inter-company integration for supporting scalable business-to-business applications. Veo Systems [Meltz98] uses XML as an exchange format, currently targeted at trading in e-commerce over the Internet.

In order for push technology to work, one of the essential problems to solve is establishing a good open standard. The standard should not only specify how to communicate but also address the important issues of security. But, as shown above,

even for the underlying communication framework, a variety of mechanisms are being

proposed and implemented. It may take some time for a standard of the push technology

to be developed and be widely accepted.

All of the efforts for push technology seemed promising initially, but soon afterwards a serious problem was recognized: the pushed data is not only poorly organized, but too much of it is being pushed to subscribers. In many cases, much of the pushed data is of no interest to the subscriber.

The excitement over push technology has calmed down, but research in this area continues. The technology is still an ongoing effort, yet it seems to be the only way to deal with the massive amount of data on the Internet. We therefore

have undertaken our research on the basis of push-based concepts with the anticipation of

a better solution and an improved implementation in the future.

One approach to solving the problem of abundant data being pushed to subscribers is to employ filtering techniques. If subscribers can provide more specific

information to the server about their interest, irrelevant data can be filtered out before

being sent to the subscribers. Some work in this area has been performed with respect to

text documents within SIFT (Stanford Information Filtering Tool) [Yan93a, Yan93b].

The SIFT server keeps client profiles, which consist of keywords and weights. The SIFT

server will then use a highly efficient indexing technique to perform the filtering of the

large amount of text against the keywords in the client profiles. The connections from the document source to the SIFT server, and again to the SIFT client, are all push-based, point-to-point, and initiated periodically (i.e., they do not follow a pre-arranged schedule).
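SIFT's profile-based filtering can be illustrated with a small sketch. The scoring here (summing the weights of matched profile keywords and comparing against a threshold) is a deliberate simplification of SIFT's actual indexing and ranking, and all names and values below are hypothetical.

```python
def score(document, profile):
    """Score a text document against a client profile mapping
    keywords to weights (SIFT-style, greatly simplified)."""
    words = document.lower().split()
    return sum(weight for keyword, weight in profile.items() if keyword in words)

def filter_for(document, profiles, threshold=1.0):
    """Return the clients whose profile score passes the threshold;
    only they would receive the pushed document."""
    return [client for client, profile in profiles.items()
            if score(document, profile) >= threshold]

profiles = {
    "alice": {"database": 0.8, "rules": 0.6},
    "bob":   {"soccer": 1.0},
}
print(filter_for("new active database rules engine released", profiles))
```

In SIFT itself, an inverted index over all client profiles makes this matching efficient even for very large numbers of subscribers.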

WebLogic also incorporates a form of event filtering by using a topic tree. The topic tree

is a tree of events where each subscriber can register for interest in a specific event node.

Events close to the root are more general, whereas events toward the leaves are more specific. When an event occurs, it flows through the branch toward the root, activating the event nodes of interest to the subscribers. This allows subscribers to register for events at a finer granularity. WebLogic requires the subscribers to perform some coding to do the registration.
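The topic-tree flow from a specific event node toward the root can be sketched as follows. The tree structure and subscriber names are hypothetical illustrations, not WebLogic's actual API.

```python
class TopicNode:
    """Node in a topic tree: general events near the root, specific
    events at the leaves. Subscribers may register at any node."""

    def __init__(self, name, parent=None):
        self.name, self.parent, self.subscribers = name, parent, []

    def notify(self):
        """An event at this node flows up toward the root, activating
        the subscribers registered at every node on the path."""
        reached, node = [], self
        while node is not None:
            reached.extend(node.subscribers)
            node = node.parent
        return reached

# Hypothetical tree: sports -> soccer -> worldcup
sports = TopicNode("sports")
soccer = TopicNode("soccer", parent=sports)
worldcup = TopicNode("worldcup", parent=soccer)

sports.subscribers.append("general-fan")     # registered at a general node
worldcup.subscribers.append("cup-fan")       # registered at a specific node
print(worldcup.notify())
```

A specific event thus reaches both its own subscribers and those who registered for the more general ancestor topics.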

Our work emphasizes the use of event filtering for enhancing the usefulness of

pushing events. We focus on developing a more efficient and powerful event filtering method that can filter events based on their attributes using a number of powerful operators. Moreover, we propose an approach that is immediately deployable on the Internet and does not require any user programming for event registration.

2.3. Rules on the Internet

Thus far, the Internet has connected the world in a very passive way by using the

pull model of interaction. All information requests and operations need to be explicitly

initiated by somebody. Employing the event push model with event filtering on the

Internet is one step towards making the Web more active. Using the push model, events

can be delivered to each Web site. But what do we do with the events? If a person needs

to be sitting in front of a computer to do something with the events, we are not making

sufficient use of the advantages the technology provides. The next step is to design executable programs that can react to these events without any need for human intervention. These executable programs should also be easy to specify using a high-level language. This is why rules are needed.

The concept of rules originally emerged in the research areas of artificial intelligence and expert systems. The simple, declarative form of rules was appropriate for modeling knowledge, starting out as the condition-action (CA) type of rule. A condition-action rule has the semantics of "when a condition is true, perform the action." Expert systems such as OPS5 [Brown85] and CLIPS [Giarr91] use this type of rule. Rules were soon incorporated into databases to create a new

category of databases, namely, active databases. Some examples of these active database

systems are HiPAC [Day88], Ode [Geh91], Sentinel [Chak94c], Ariel [Hans96],

OSAM*.KBMS [Su93], Postgres [Sto91], and Starburst [Hass90]. ECA (Event-

Condition-Action) rules have been used in many of these systems. ECA rules are

composed of three parts: event, condition, and action. The semantics of an ECA rule is,

"When an event occurs, check the condition. If the condition is true then execute the

action." The event provides a finer control as to when to evaluate the condition and gives

more active capabilities to the database systems. Rules can automatically perform security and integrity constraint checking, alert people to important situations, and enforce business policies and regulations.
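The ECA semantics ("when an event occurs, check the condition; if it is true, execute the action") can be sketched directly. The integrity-checking rule below is a hypothetical example constructed for illustration, not taken from any of the cited systems.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ECARule:
    """An Event-Condition-Action rule: when the event occurs, check
    the condition; if it is true, execute the action."""
    event: str
    condition: Callable[[dict], bool]
    action: Callable[[dict], str]

def post_event(rules, event, data):
    """Fire every rule tied to the event whose condition holds."""
    return [r.action(data) for r in rules if r.event == event and r.condition(data)]

# Hypothetical integrity-checking rule on a salary update.
rules = [ECARule(
    event="update_salary",
    condition=lambda d: d["new"] > 2 * d["old"],       # suspiciously large raise?
    action=lambda d: f"alert: salary more than doubled for {d['emp']}",
)]
print(post_event(rules, "update_salary", {"emp": "smith", "old": 50, "new": 120}))
```

The event part gives the finer control the text mentions: the condition is evaluated only when the triggering operation actually occurs, rather than being polled.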

Using rules in the distributed environment has been researched in the context of

providing autonomy for sites and also enhancing the performance of processing rules

[Ceri92]. Cooperative information systems have also recently started to make use of

rules as a glue-component for putting together heterogeneous systems [Bern97, Su95].

Some initial approaches toward using rules to integrate servers on the Internet have been tried. WebRules [Ben97] is a framework developed at the Israel

Institute of Technology. The WebRules server has a set of built-in events that can notify

remote systems, and has a library of system calls that can be used in a rule to hook up

Web servers; however, the approach requires a skilled programmer to be able to create

the rules. The WebRules server uses an S2S/HTTP protocol, which is an extended version of HTTP. The framework focuses on connecting companies that own Web servers rather than individuals surfing the Internet. It considers neither the push concept nor event filters. Moreover, composite events are not considered either; composite events are an important tool for interconnecting several servers that have complex relationships.

Nevertheless, it is the most influential work that motivated our research in knowledge

networks. WebLogic also includes a basic form of rules, which are called actions. These

actions need to be provided to the WebLogic server at the time when an application is

registering for an event. These actions are actually specified as program code rather than as a high-level specification. Thus, the developer of the action needs to deal with system-level issues.

Agents being deployed on the Internet also incorporate rules as their basic form of knowledge. The SIMAGENT toolkit [Slom96] is one example; it employs a

condition-action rule-based programming style. IBM's Agent Builder Environment

(ABE) [ABE] provides an open architecture where additional functions can easily be

added. ABE includes a rule-based reasoning system.

Our approach for embedding knowledge, more specifically events, rules and

triggers, into the Web servers can make the Web more active. We focus on how to make

this infrastructure easier to deploy and use, scalable enough to connect millions of

servers, and powerful enough to incorporate complex reactions. Dynamic change of rules

during the run-time operation of Web servers is also investigated to ensure uninterrupted

operation of the servers.


The main goal of the knowledge network is to share the knowledge available on

the Internet among the users of the Internet. This would promote the efficient exchange of knowledge and the development of more organized and interconnected knowledge out of individual expertise that is currently isolated at separate Web sites. To explain

the concept of the knowledge network more clearly, we start with a few preliminary

definitions. After these preliminary definitions, the requirements for the design of the

framework, the key features, and the steps for constructing the knowledge network are

presented to provide a comprehensive understanding of the concept.

3.1. Preliminaries for the Knowledge Network

We start by defining the concept of knowledge, which is frequently used across a broad area of research and technology. Then, the data network, which characterizes the current Internet architecture, and our definition of the knowledge network are discussed. These concepts form the basis of our work.

* Definition 1. Knowledge: In the cognitive sciences, knowledge is referred to as a

permanent structure of information stored in memory [Rob99]. Various definitions

exist based on the different viewpoints on knowledge. Here, we define knowledge

based on the nature of the content rather than the representation. There are two types

of knowledge: procedural and declarative. The procedural type of knowledge consists

of skills acquired through interacting with the environment and is what we call

"know-how". Declarative knowledge is based on facts and is concerned with the

properties of objects, persons, and events and their relationships. Although the mental

processing and representation of knowledge are complex activities and the research

community's understanding of this field is on-going, our approach is to model both

the declarative and procedural knowledge via events, triggers, and rules.

* Definition 2. Data Network: The data network is composed of all kinds of data (i.e., text, formatted data, graphs, audio, video) that are recorded on the Internet. These data may or may not be involved in some form of human decision-making and reasoning process. Those that are (i.e., data of value for decision making) become "information" to the users of the data. The data network itself has neither the capability to actively respond to new data nor the intelligence to reason or make decisions using the data. The main goal of the data network is to share data among users on the Internet, but it always requires human intervention for reasoning or decision-making tasks to create new and more valuable data.

* Definition 3. Knowledge Network: The knowledge network is composed not only of data elements but also of knowledge elements, which can be used to perform automatic reasoning and decision-making tasks. In this work, knowledge elements are represented by events, triggers, and rules. The events encapsulate timely information about what is happening on the Internet and make the knowledge network actively responsive without human intervention. The rules express decision-making factors, allowing human intelligence to be embedded into the knowledge network. The triggers model complex relationships among events and rules, checking histories of events and enabling various reasoning or activation sequences that reflect complex decision-making processes. The goal of the knowledge network is not just to share data but also to share knowledge, making the Internet a more active, collaborative, and intelligent infrastructure.

To summarize our view, "data" is defined as all forms of media that are recorded on the Internet and can potentially be used to aid human decision-making, while "information" is data of value used in decision-making, and "knowledge" is data, events, rules, and triggers useful for making the right decisions.

3.2. Requirements for Designing the Knowledge Network Framework

The knowledge network should satisfy the following requirements. First, the

knowledge network should allow publishing not only data but also events and rules, as well as the specification of triggers that link events to rules. Second, the knowledge network should support an event notification mechanism that is push-based, in order to be efficient and scalable. Third, the knowledge network should support event filtering to reduce irrelevant data delivery. Fourth, the knowledge network should support automatic execution of triggers and rules connected to the event notification mechanism. Fifth, the

knowledge network should provide a mechanism to easily define and manage the events,

triggers, and rules for each user on the Internet. Sixth, the knowledge network should

dynamically adapt to changes such as adding, deleting, and modifying the rules or events

in order not to affect the standard server operations. Last, the knowledge network

components should be easily incorporated into standard Web servers and provide

platform independence.

3.3. Key features of the Knowledge Network

The knowledge network can enhance active collaboration of Web servers and

users on the Internet by providing a framework to (1) publish data, applications,

constraints, events, and rules, (2) register for subscription of events and deliver events to

subscribers, (3) define rules on subscribed events.

The framework to accomplish these tasks is based on the idea of providing a

component that can be plugged into any standard Web server. This should allow the Web

servers that need to collaborate to have a symmetric architecture. Another technology needed to support this framework is an event, trigger, and rule processing capability built into the component. This is the major part that should be developed in order to

provide any type of intelligent, distributed and collaborative infrastructure. The idea of

using user profiles is also adopted to support a wide variety of users on the Internet who

wish to have their own individually customized applications.

The architectural framework of the knowledge network shown in Figure 1 is used

to explain the key features of the knowledge network: publishing events and rules, event

filters, push-based event delivery, knowledge profile, and processing of triggers and

rules. In Figure 1, several Web servers are interconnected through the Internet. Each

server is extended with several components that form the basis of the knowledge

network. Only the extensions to the Web server are shown in the figure for simplicity.

We refer to a Web server with these extensions as a knowledge Web server (KWS).

Assume that the knowledge Web server A takes the role of a data provider who is user A

and knowledge Web servers B and C are maintained by two different users, namely, user

B and user C, who need information from knowledge Web server A. Users A, B, and C reside at the sites of KWS A, KWS B, and KWS C, respectively, with a browser interface to the systems and the Internet.

Figure 1. Architectural framework of the knowledge network.

Data providers can provide data and define events and rules, and publish them on

web pages. Publishing of events will enable Web surfers to know what kind of data can

be delivered to them in a timely manner. Interested Web surfers can register for the

events and become subscribers of the event. Rules published by data providers can

perform several operations on the knowledge Web server of the data provider.

Subscribers of events can conveniently select these rules that will be executed remotely

on the data provider's knowledge Web server when the subscribed event occurs. Figure

1 shows that the knowledge Web server A has published two events E1 and E2, and two rules R1 and R2. User B has subscribed to event E1 and linked it to rule R1, while user C has subscribed to event E2 and linked it to rule R2.

Event filter templates are provided by data providers to allow event subscribers to

more precisely specify the subset of the event occurrences in which they are interested.

The subscribers can give various conditions on the values that the event carries. Only

those event instances that satisfy the condition will be delivered to the subscriber. By

using event filters, only meaningful data will be provided to the subscribers. Thus,

network traffic can be significantly reduced. Figure 1 shows that the event filter F1 is installed on event E1 by user B, while the event filter F2 is installed on event E2 by user C.
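A subscriber-specified filter of this kind reduces to checking attribute conditions against each event instance before delivery. The operators, attribute names, and values below are hypothetical illustrations of a filter such as F1; the actual filter language is described later in the text.

```python
# Hypothetical comparison operators a filter template might support.
OPERATORS = {
    ">":  lambda a, b: a > b,
    "<":  lambda a, b: a < b,
    "=":  lambda a, b: a == b,
    ">=": lambda a, b: a >= b,
}

def passes(event_attrs, filter_spec):
    """Check one event instance against a subscriber's filter: every
    (attribute, operator, value) condition must hold before the event
    is delivered to that subscriber."""
    return all(OPERATORS[op](event_attrs[attr], value)
               for attr, op, value in filter_spec)

# User B's hypothetical filter F1 on an airfare event E1.
f1 = [("destination", "=", "Europe"), ("price", "<", 500)]
e1 = {"destination": "Europe", "price": 450}
print(passes(e1, f1))
```

Events failing any condition are dropped at the provider, which is what reduces network traffic.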

In a knowledge network, events are delivered via a push-based mechanism to

subscribers' knowledge Web servers. When the event occurs, the push mechanism is

activated in order to deliver the event to a large number of knowledge Web servers in a

timely fashion. This push-based mechanism can radically change the paradigm of how

interactions on the Internet are performed. Moreover, the combination of event pushing

with the event filtering creates a more powerful communication infrastructure for the

knowledge network. Figure 1 shows the extension related to the push-based event

delivery combined with the event filtering in each knowledge Web server.

The providers and subscribers of knowledge can specify and store their

knowledge (i.e., events, triggers, and rules) in knowledge profiles. Each knowledge Web

server is extended with a component that can provide a web-based graphical user

interface to the provider or subscriber of knowledge to edit their knowledge profile. The

knowledge profile is persistently stored. The events, triggers, and rules stored in the

knowledge profile are provided to other run-time components of the knowledge Web

server. Figure 1 shows the knowledge profiles existing on different knowledge Web servers.


Triggers and rules are executed within the knowledge Web server when an event linked to them has occurred. Processing the triggers involves the checking of complex

relationships among event occurrences and also the scheduling of several rules. Rules

can activate various operations on the Web server. The execution of a rule may again

cause new events to occur, resulting in a chained execution of rules. Figure 1 shows the

processing components for triggers and rules residing within each knowledge Web

server. Knowledge Web server B will execute rule R3 upon receiving filtered event E1, and knowledge Web server C will execute rule R4 upon receiving filtered event E2.


Figure 2. Steps for constructing the knowledge network.

3.4. Steps for Constructing the Knowledge Network

The knowledge network is constructed through a process involving a series of

steps that need to be followed by the providers and subscribers participating in the

knowledge network. This section explains each of the steps in the order they occur.

Figure 2 shows an example for a single provider and single subscriber

participating in the construction of the knowledge network. This simplified view of the

construction process is used as the example to be explained throughout this section.

3.4.1. Publishing Data, Applications, Events, and Rules

Currently, a user (or organization), say A, that has data and applications (i.e.,

methods that may be connected to a database in his/her home directory) can publish this

data and application on his/her home page. Using the knowledge network concept, user

A can also publish the events that can be raised from his/her own data and applications

and allow other Web surfers to subscribe to those events. The definitions of the events are entered into the knowledge profile to enable the knowledge Web server to process the events. All other knowledge elements described in this section are also entered into the knowledge profile. User A can easily hook the event up to his/her home page afterwards.

An event filtering mechanism may also be provided by user A. Subscribers will later on

give some value ranges for the filters during the event registration step (to be explained in

Section 3.4.2), and when the event is being posted, the system checks if the event

attribute values satisfy these given value ranges prior to sending out the event to the

subscriber. Rules that are applied to user A's data can also be published for use by various applications that require meta-data (e.g., in e-commerce applications). Several parameterized rules that can be triggered by user A's own events may also be published. The subscriber of user A's event can link the event to the parameterized rules during event registration so that automatic rule processing can be conducted on the provider site (i.e., user A's site) with the guarantee that these operations are authorized and safe for user A's own Web server. These publishing steps are shown in Figure 2.

3.4.2. Event Registration

Another user, say B, is surfing on the web and discovers the homepage of user A

and finds an event of interest. User B then accesses the event registration form and

registers for an event that user A has published on his/her home page. User B may

subscribe to the event to be sent out either as an e-mail notification or a pushed event to

his/her knowledge Web server. At the time of registration, user B may also provide

values that are to be used later on for filtering out irrelevant events. If some

parameterized rules linked to the subscribed event are supported by the event provider, user B may select some rules to be executed on the event provider's site. An example of such a rule could be automatically changing user B's subscription information (e.g., discontinuing the subscription to an event after some specified number of postings) after sending the event. The event registration steps are also shown in Figure 2.

After user B performs this registration, the event that occurs later on will be filtered and

then either be sent out as an e-mail notification or be posted to the knowledge Web server

on which user B has his/her own knowledge profile. The knowledge profile should contain the events that user B has subscribed to as well as the triggers and rules that are defined for the events. User B can also define additional triggers and rules that are to be processed at his/her own knowledge Web server when an event notification has reached it. This is further described in the following subsection.

3.4.3. Trigger and Rule Specification

After subscribing to an event, user B may then access the Knowledge Profile

Manager--a module which manages the user's knowledge profile (event subscription,

trigger and rule definition information)--of his/her own knowledge Web server and

specify the additional triggers and rules that should be executed upon the occurrences of

the events he/she has subscribed to. Several events that user B has subscribed to may be

linked to a set of rules, forming composite events and structures of rules. These steps are also shown in Figure 2.

3.4.4. Event Posting, Filtering, and Rule Execution

Service providers will later generate events that first go through a filtering process

to identify the relevant subscribers of the event. Once the subscribers are identified, rules

on the provider's site can be executed. These rules are remote executable rules, which

are intended to allow remote users to have a limited capability to execute units of code on

the provider's site. The event is then posted either as an e-mail message to the subscriber

or an event notification to the subscriber's knowledge Web server. If the subscriber has

some triggers and rules defined on his/her own knowledge Web server linked to the

event, the event will trigger the execution of these rules which may perform some

operations within the subscriber's web server and/or generate another event that can be

again posted to another site. This step is also shown in Figure 2.
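The posting sequence just described (filter the event per subscriber, run any provider-side rule for matching subscribers, then deliver the notification) can be sketched as follows. The subscriber records, filter predicates, and remote-executable rule below are all hypothetical.

```python
def post_to_subscribers(event, subscriptions):
    """Sketch of event posting: for each subscriber, apply the filter,
    run the provider-side rule if one was selected at registration,
    and record the delivery (names and structures are hypothetical)."""
    delivered = []
    for sub in subscriptions:
        if not sub["filter"](event):
            continue                        # filtered out: not delivered
        if sub.get("provider_rule"):
            sub["provider_rule"](event)     # limited remote-executable rule
        delivered.append((sub["name"], event["type"]))
    return delivered

log = []
subs = [
    {"name": "userB", "filter": lambda e: e["price"] < 500,
     "provider_rule": lambda e: log.append("decrement userB's subscription count")},
    {"name": "userC", "filter": lambda e: e["price"] < 100,
     "provider_rule": None},
]
print(post_to_subscribers({"type": "TicketSpecialOffer", "price": 450}, subs))
```

Delivery itself would then be an e-mail or a push to the subscriber's knowledge Web server, where that server's own triggers and rules take over.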

3.5. An Example of a Simple Web Page

An example web page of the Gator Travel Agency, which publishes data,

applications, events, constraints, and rules, is shown in Figure 3. An example event

registration form is also shown.

The basic concept is that each page is an object. An object on the Internet can be

referred to by its URL. The example shows that each object can have data, methods,

events, and rules (along with triggers).

Figure 3. Example of Gator Travel Agency Web page and event registration form.

3.5.1. Data

All of the items on the homepage are regarded as data. These are attribute values

of the object. There is a title attribute with the value "Gator Travel Agency" and a notice attribute with the value "Special Offer! Airfares are reduced to Europe." Links to other pages may be regarded as a special type of attribute of the object, and can make the currently displayed object a composite object.

3.5.2. Methods

Each object (page) can embed methods that can be exported. The methods can be

thought of as applications that can be used by whoever accesses the object.

[Figure 3 content: the page lists [Data] Gator Travel Agency, "Special Offer! Airfares are reduced to Europe.", Other Links; [Methods] MakeReservations; [Events] TicketSpecialOfferEvent; [Rules] StopSubscriptionOfEvent; and the FlightCancelled Event Registration Form with fields Flight Number, Subscriber E-mail, Subscriber URL, and the rule RefundCancelledFlight.]

The methods


may be actually implemented as applets or servlets. If the method needs parameters, a

message box may pop up within an applet when the method name is clicked. If an input

parameter is a primitive type (i.e., int, char, boolean, String), then the value may be

directly typed in (such as 1000, 'g', true, "hello"). URLs may also be given as

parameters for the method. For servlets, parameters may be provided through the HTTP request.


3.5.3. Events

The object can post events, and those events are published together on the Web

page. By clicking on the event name, an implicit method (which is not explicitly

exported) of the event registration object (which can be implemented as a servlet) is

invoked and takes the registration information from a subscriber of the event. Figure 3

shows an example event registration form for the FlightCancelled event.

3.5.4. Rules

Rules that are defined by the object are exposed. These rules may be tied to events

by triggers when a subscriber goes through the event registration process. The rules are

predefined but have the flexibility of customization by allowing some parameter values to

be defined by the event subscriber at the registration time. In other words, the rules can

be tailored to fit the individual user's needs. In the following example, the customizable

parameters in the rule RefundCancelledFlight are UserTimeLimit and User.

Rule: RefundCancelledFlight

Condition: NextAvailableFlightTime(cancelled_flight_no) > UserTimeLimit

Action: MakeRefund(User, flight_no)

The remote user can define these customizable parameter values and install the

rules. The rules may also receive some parameter values from events. In this case, the

event parameter to rule parameter mapping needs to be specified in the trigger. Figure 3

shows an example of how the RefundCancelledFlight rule is included in the event

registration form of the FlightCancelled event. Note that the parameters UserTimeLimit

and User are renamed in the form as TimeLimitToNextFlight and UserName in order to

give the user a better understanding of the parameters. A description of the rule or the

rule definition itself can also be provided in the registration form, if necessary.
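The customizable rule above can be sketched as a parameterized closure whose parameters are bound at registration time. NextAvailableFlightTime and MakeRefund are stand-ins supplied as functions, since the text does not define their implementations.

```python
def make_refund_rule(next_available_flight_time, make_refund):
    """Build the RefundCancelledFlight rule from the text. The two
    helper functions are hypothetical stand-ins for the provider's
    methods; UserTimeLimit and User are the customizable parameters
    the subscriber supplies at registration time."""
    def rule(user_time_limit, user, flight_no):
        # Condition: NextAvailableFlightTime(cancelled_flight_no) > UserTimeLimit
        if next_available_flight_time(flight_no) > user_time_limit:
            # Action: MakeRefund(User, flight_no)
            return make_refund(user, flight_no)
        return None
    return rule

rule = make_refund_rule(
    next_available_flight_time=lambda fno: 6,   # hours until the next flight
    make_refund=lambda user, fno: f"refund issued to {user} for flight {fno}",
)
print(rule(user_time_limit=4, user="userB", flight_no="UF101"))
```

Binding the customizable parameters at registration, while taking flight_no from the triggering event, mirrors the event-parameter-to-rule-parameter mapping the trigger must specify.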


The Active Object Model (AOM) is an extension of the traditional object model

and is intended to model various resources (i.e., data entities, events, constraints, action-

oriented rules, triggers, and component systems) of distributed objects in a

heterogeneous, distributed network. Unlike the traditional object model, which captures the structural properties of objects in terms of their attributes (or properties) and their behavioral properties in terms of methods, AOM extends these properties to include events (similar to the object model of JavaBeans), constraints (which are declarative forms of assertions), condition-action-alternative-action (CAA) rules, and triggers (which relate events and rules). Furthermore, AOM allows the nesting of schemas to encapsulate

the structural and behavioral properties and knowledge specifications of a set of classes

into a hierarchical component architecture.

[Figure 4 content: a schema-level Knowledge Spec encloses classes; each Class has Attributes, Methods, and a class-level Knowledge Spec consisting of Triggers, Rules, and Constraints.]

Figure 4. Overview of the Active Object Model.

The basic building block of AOM is the object class. As shown in Figure 4, an

object class defines the properties of a set of like objects in terms of attributes, methods

and an optional knowledge specification. The knowledge specification consists of

events, constraints, CAA rules and triggers. A trigger specifies what events and event

history would cause the firing of a structure of rules. The knowledge specification of a

class is used to define the constraints and rules that govern the operations on the objects

of the class. In other words, it specifies knowledge that is tightly coupled to the class.

A set of object class specifications forms a schema. Additionally, a schema may

optionally contain a knowledge specification, which defines events, constraints, CAA

rules and triggers that govern the interoperation of the objects of the classes defined in a

schema (see Figure 4). Thus, a schema can be defined to capture the structural and

behavioral properties and knowledge specifications of a component in a component

architecture. Since components in a component architecture may constitute a higher level

component, AOM allows the nesting of schemas in a schema to any number of levels. At

any level, a knowledge specification can be given to define the events, constraints, rules

and triggers that are applicable to the objects of the classes defined in the schemas of the

same and lower levels. A component can be visualized as a set of classes and their

knowledge specifications defined and used within a single system, or systems on a sub-

domain of the network, or an arbitrary group of systems on the network.

In the following sections, we shall present the event, rule and trigger

specifications of AOM. Constraints are declarative assertions that are applied on

attributes or method executions within or among classes. Their specifications are similar

to the condition specifications of CAA rules. We, therefore, will not separate them from

CAA rules in the following discussion. Attribute and method specifications are similar

to those of the traditional object model and they are not presented in the following

sections. In fact, it may be desirable to de-couple the knowledge specification from the

traditional object specification in terms of attributes and methods. The knowledge

specification language or GUI developed for event, rule and trigger specifications can

then be coupled with some existing object modeling tools to implement the full

capabilities of the AOM.

4.1. Event Specification

AOM distinguishes three types of events with respect to how they are raised:

namely, events associated with methods, explicitly posted events and timer events. The

general syntax of the event specification language is given below.

[IN schemaname / schemaname :: classname]
EVENT eventname (type par1, type par2, ..., type parN)
[DESCRIPTION text description]
[OPERATION methodname]
[AT [ '[' starttime ']' ]
    MM/dd/yyyy:hh:mm:ss:nnn (,MM/dd/yyyy:hh:mm:ss:nnn)*
    [ '[' endtime ']' ] ]
[EVERY [ '[' starttime ']' ]
    nn timemeasure
    [ '[' endtime ']' ] ]
[RETURNS returntype]

Not all the clauses shown in the aggregated general syntax are applicable to an

event type. We shall explain the three event types and provide examples to show their

syntactic structures in the following subsections.

4.1.1. Events Associated with User/System Defined Methods

An event can be defined and associated with a method execution. Once defined,

an event posting statement (which is a method call to an event manager) can be

automatically generated to post the event at a proper place of program execution relative

to the activation of the method. An event can be posted "before" or "after" the method is

executed, or at the "commit" time of the transaction in which the method is activated.

The associated transaction manager will post the events related to the commit time. These

different ways of posting an event with respect to a method execution are referred to

by the database community as "coupling modes." Events with the above coupling modes

are always posted synchronously, which means that the program code, which posts an

event of this type, will wait for a response from the event and rule server that handles the

event. Another coupling mode, which is called de-coupled, allows a program to post an

event and continue its execution without waiting for a response from a server. An event

with this coupling mode is posted asynchronously. Thus, the coupling mode of an event

specification implicitly determines whether the event should be posted synchronously or

asynchronously. In addition to the above modes, an event can have a mode called

"instead of." This mode allows an event to trigger the execution of some rules) instead

of the original method; however, the rules) may include the original method in addition

to other operations. This mode is very useful for "customizing" the behavior of a

component system modeled by a class.
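The coupling modes described above can be sketched as a wrapper around a method call. The following Python sketch is our own illustration, not the ETR server API: the names `EventManager`, `post_sync` and `call_with_coupling` are hypothetical, and the "commit" mode (which involves a transaction manager) is omitted.

```python
# Illustrative sketch of coupling modes; EventManager and all names are
# hypothetical, not the actual ETR server interface.
class EventManager:
    def __init__(self):
        self.log = []                        # record of (event, mode) postings

    def post_sync(self, event, mode):
        self.log.append((event, mode))       # synchronous: caller waits for rules

def call_with_coupling(em, event_name, method, mode, *args):
    """Run 'method', posting its associated event per the coupling mode."""
    if mode == "before":
        em.post_sync(event_name, "before")   # event posted, then method runs
        return method(*args)
    if mode == "after":
        result = method(*args)               # method runs, then event posted
        em.post_sync(event_name, "after")
        return result
    if mode == "instead_of":
        # rules run instead of the method (they may invoke it themselves)
        em.post_sync(event_name, "instead_of")
        return None
    raise ValueError("unknown coupling mode: " + mode)
```

A call such as `call_with_coupling(em, "update_qty", f, "after", 1)` runs `f(1)` first and then posts the event synchronously, mirroring the "after" mode above.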

An event can have parameters, which are the parameters given to the associated

method. When an event is posted, parameter values will be passed to rules that are

triggered by the event. This allows the object that posts the event to pass some data for

use in rule processing.

For an event that is posted synchronously and needs a return type, the return type

of the associated method is assumed.

The syntax of a method-associated event is shown below:

IN schemaname :: classname
EVENT eventname
[DESCRIPTION text description]
OPERATION methodname

4.1.2. Explicitly Posted Event

An event can be defined without being tied to a method. This type of event can be

posted within an application program or a method body of an object class. The posting of

an event would create an instance of the event type. An explicit event can be posted

synchronously or asynchronously by a method call to an event manager. This type of

event can also carry parameters. This provides a way for a program to pass data to rules

for use in rule processing. If the event is posted asynchronously, data is not expected to

return and the return type is ignored. If it is posted synchronously, the RETURNS

statement defines the return type.

This type of event can be defined at a schema level or within a specific class. If

this type of event is defined within a specific class, it is posted only within the methods of

the class. Otherwise, the event may be posted within any class of the schema. The syntax

of this type of event specification is given below:

IN schemaname / schemaname :: classname
EVENT eventname (type par1, type par2, ..., type parN)
[DESCRIPTION description text]
[RETURNS returntype]

4.1.3. Timer Event

The timer event is an event posted by a Timer at some pre-specified time point or

in some time interval. The Timer can be a dedicated server that keeps track of the global

time, or if this facility does not exist, just a local module that keeps track of the local

time. Three types of timer events are distinguished. The first type is an absolute timer

event, which means that an event is posted by the Timer at an explicitly specified time

point. The time is given in the MM/dd/yyyy:hh:mm:ss:nnn format, where MM is the

month, dd is the day, yyyy is the year, hh is the hour, mm is the minute, ss is the second,

and nnn is the millisecond. If hh:mm:ss:nnn is not given, the default value of

00:00:00:000 is assumed. A blank space in a field defaults to 0. The syntax of this

absolute time event is given below with an example.

EVENT eventname
[DESCRIPTION description text]
AT MM/dd/yyyy:hh:mm:ss:nnn (,MM/dd/yyyy:hh:mm:ss:nnn)*

EVENT my_important_times
DESCRIPTION this event is for some important times of interest
AT 01/22/1999, 02/23/2000:02:45:30::

The second type of timer event is the absolute recurring timer event, which is an

event that is repeatedly posted every month or every day or every time interval. This type

of event uses a "*" to express the repeating time points. As an example, if "*" is given in

the month field, it means that the event is posted every month. This type of event can also

have a starting time and an ending time. Together, they specify the time window in which

the event should be actively posted. The keyword "NOW" can be used as the starting

time to indicate that the event should be posted immediately. The following gives the

syntax and an example stating that the event should be posted every 30 minutes of every

hour on the 10th day of every month in 1998 starting from May 2, 1998, until November

1, 1998.

EVENT eventname
[DESCRIPTION description text]
AT [ '[' starttime ']' ]
    MM/dd/yyyy:hh:mm:ss:nnn (,MM/dd/yyyy:hh:mm:ss:nnn)*
    [ '[' endtime ']' ]

EVENT my_repeating_event
DESCRIPTION posted every 30 minutes of every hour on the 10th day of every month in 1998
AT [5/2/1998] */10/1998:*:30:: [11/1/1998]

The third type of timer event is the relative recurring timer event, which specifies

that an event should be posted by the Timer every several seconds, minutes or hours. This

event specification uses a separate keyword EVERY, and also allows for the specification

of a start time and an end time. The allowable time units for timemeasure are

milliseconds, seconds, minutes, hours, and days. The syntax is as follows:

EVENT eventname
[DESCRIPTION description text]
EVERY [ '[' starttime ']' ]
    nn timemeasure
    [ '[' endtime ']' ]

An example for posting an event every 2 minutes starting from March 20, 1998, is

given below:

EVENT every_2_minutes_event
DESCRIPTION this event is posted every 2 minutes and continues forever
EVERY [3/20/1998] 2 minutes

All of the above timer events are asynchronously posted.
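The firing times implied by a relative recurring timer event can be sketched by simple date arithmetic. The function below is a minimal illustration of the EVERY semantics (start time, interval, end time), not the actual Timer module; the name `relative_recurring_times` is our own.

```python
from datetime import datetime, timedelta

def relative_recurring_times(start, every, end):
    """Enumerate the posting times of an EVERY-style timer event:
    fire at 'start' and every 'every' (a timedelta) thereafter,
    up to and including 'end'."""
    times, t = [], start
    while t <= end:
        times.append(t)     # Timer would post the event asynchronously here
        t = t + every
    return times
```

For instance, an event defined as `EVERY [3/20/1998] 2 minutes` with a ten-minute window would yield firings at 0:00, 0:02, ..., 0:10 on March 20, 1998.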

4.2. Rule Specification

CAA (Condition-Action-Alternative Action) rules provide a very general way for

specifying integrity and security constraints, business rules and policies, and regulations

that are relevant to the operation of a real or virtual enterprise. Each CAA rule represents a

small granule of control and logic needed to enforce a constraint, business rule, or policy.

A number of these rules, when executed in a certain order or structure, can represent a

larger granule of control and logic. By defining and processing rules, it is possible to

automatically enforce various constraints on objects of a class, objects among classes in

the same schema, or objects among classes in different schemas.

A rule can have a number of parameters just like a procedure call and perform

some desired operations. When the rule is invoked, it first checks the condition part of

the rule. If the condition is true, the operations specified in the ACTION clause are

executed. Otherwise, the operations specified in the ALTACTION clause are executed.

The overall syntax of a rule is as follows.

IN schemaname / schemaname :: classname
RULE rulename (parameter list)
[RETURNS returntype]
[DESCRIPTION description text]
[TYPE DYNAMIC | STATIC]
[STATE ACTIVE | SUSPENDED]
[RULEVAR rule variable declarations]
[CONDITION guarded expression]
[ACTION operation block]
[ALTACTION operation block]
[EXCEPTION exception & exception handler block]

The optional clauses are surrounded by brackets. The IN clause specifies where

the rule is defined (i.e., in which class or schema level). The rulename is a unique

identifier for the rule. The list of parameters is specified together with the rulename. The

DESCRIPTION clause contains a text string describing what the rule does. The TYPE

clause is used to specify if the rule is to be frequently modified (i.e., DYNAMIC) or is

not likely to change once defined (i.e., STATIC). This information can be used to

internally generate the most efficient code for the rule. The STATE clause specifies if

the rule should initially be active or suspended. Suspended rules can be activated at run-

time by other rules or under a program control. Active rules can be deactivated (or

suspended) at run-time. Note that the state specification of a rule does not reflect the

current state of the rule in the system, as the current state can be changed after the rule is

defined. The current state information is managed as an internal attribute of the rule and

is separated from the rule definition. The RULEVAR clause allows the rule to have

variables defined. Also, variables, which need to be persistent, are declared within this

clause. The CONDITION clause is specified using a guarded expression. A guarded

expression can be divided into two parts: the guard part and the condition expression

part. The guard part is composed of a sequence of expressions all of which must be true

in order to continue the processing of the rule. If any of the expressions within the guard

evaluates to false, the processing of the rule is discontinued and the rule is skipped. The

expressions within the guard are evaluated in a sequential order, and if all of the

expressions evaluate to true, the condition expression part of the CONDITION clause

will be processed. The condition expression is the final condition that is to be checked in

order to decide whether to go to the ACTION clause or the ALTACTION clause. The

reason for employing the guarded expression is that it allows for efficient and ordered

processing of pre-requisite conditions, which must be satisfied in order for the rule

processing to be meaningful. A simple example of a CONDITION clause with a guarded

expression is given below.

CONDITION [ flag = TRUE, count > 0 ] count * 2 < quantity

The guard part is surrounded by the brackets, and the condition expression part is

the expression 'count * 2 < quantity'. A rule processing system first checks if the

expression 'flag = TRUE' evaluates to True. If not, the entire rule is skipped. Otherwise,

it checks if 'count > 0' evaluates to True. If not, the rule is skipped. Otherwise, the

condition expression is evaluated. If the condition expression is True, the entire guarded

expression is True. Otherwise, the guarded expression is False. Therefore, a guarded

expression returns one of these three values: Skip, True or False. Note that the guard

part is optional. Without it, the condition expression returns True or False.
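The three-valued evaluation of a guarded expression can be sketched directly. The function below is an illustration of the semantics just described (guards checked in order, then the condition), not part of the ETR implementation; the name `eval_guarded` is our own.

```python
def eval_guarded(guards, condition):
    """Evaluate a guarded expression.
    guards: list of zero-argument callables checked in sequential order;
    condition: zero-argument callable for the condition expression.
    Returns 'Skip', 'True' or 'False'."""
    for g in guards:
        if not g():
            return "Skip"                  # rule is skipped entirely
    return "True" if condition() else "False"

# Mirrors CONDITION [ flag = TRUE, count > 0 ] count * 2 < quantity
flag, count, quantity = True, 3, 10
result = eval_guarded([lambda: flag, lambda: count > 0],
                      lambda: count * 2 < quantity)
# 'True' selects ACTION, 'False' selects ALTACTION, 'Skip' selects neither
```
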

The ACTION clause and ALTACTION clause consist of operations to be carried

out. The EXCEPTION clause specifies what to do when an exception, such as a division

by zero or the failure of an operation, occurs within the rule. The exception model is

similar to the exception model of the programming language Java. The type of exception

and the method to handle it are specified as pairs in the EXCEPTION

clause. The exception types are defined as classes and the exception handlers may exist

within a specific exception handler class or a user-defined class.

A rule can also return a value, which can be a primitive type or a user-defined

type. The return type is specified by the RETURNS clause. The value to be returned is

specified in return statements in the ACTION and ALTACTION clauses.

In a rule specification, the CONDITION clause can be omitted. In that case, the

ACTION clause must be given and will be unconditionally processed. When the

CONDITION clause is given, either the ACTION clause or the ALTACTION clause may

be omitted, but not both. An omitted ACTION clause or ALTACTION clause means that

no operation is to be executed by the clause.

4.2.1. Rule Variables

Variables for rules can be declared using the RULEVAR clause. There are three

types of variables that are considered useful for rules. First, temporary variables for a

rule may be defined. These temporary variables are the same as local variables defined

within the scope of the rule. No special keywords are needed to declare this type of

variable. These variables can have data types similar to those provided by the Java

language (in the java.lang or java.util packages), such as int and String. Also, user-defined

types are allowed. Second, persistence is often required for storing rule information, so

that the rule can have a state that can be passed on from one instantiation of the rule to

another instantiation of the same rule. A persistent rule variable is similar to the static

variable as defined in the Java or C++ programming language, i.e., a single instance of

this variable is shared among all the instances of the same rule. The keyword 'persistent'

is used for this type of variable. Third, existing objects such as CORBA objects may

need to be referenced in a rule. This is to support a special case where a CORBA

infrastructure may be additionally attached to the Web server and the ETR server (our

implementation of the rule server) may at the same time be acting as a CORBA client

and/or server to this CORBA infrastructure. In order to allow this type of variable to be

declared and used, we define a third type of variable, which uses the keyword 'existing'.

Parameters needed to identify an existing object are specified using a constructor style

call. An example of a simple declaration for three variables in the RULEVAR clause is

given below:

RULEVAR int i; // temporary
    persistent int limit; // persistent and static
    existing CORBAobject Cobj("Server1"); // existing CORBA object

It is possible to specify the desired initial value of a persistent variable, when the

variable is first created and stored. This can be done by specifying the parameters in

parentheses beside the variable name, similar to a constructor call. A simple example is

shown below:

RULEVAR persistent int limit (0);

This declaration will initialize the value of the limit variable to 0 only when this

variable is first created and stored. Note that this initialization of the value will only

occur "once" when the variable is created in the persistent store. Subsequent executions

of the same rule will only read the variable from the persistent store, as the variable is

already created and stored in the persistent store. An example for initializing a persistent

variable, which is an instance of a composite class, is given below. Assume that "Part"

is a class, which needs a "String" and a "Project" instance to initialize its values. And

"Project" is again a class, which needs a "String" and an "Integer" to initialize its values.

RULEVAR persistent Part p1 ("CylinderPart", ("EnginePjt", 5020) );

The parameters needed for p1 are the String "CylinderPart" and a "Project"

object, which is initialized by the parameters surrounded by the nested parentheses. The

"Project" initialization parameters are "EnginePjt" and 5020. As a result, the needed

Project instance is first created, and then this instance and the "CylinderPart" String are

used to initialize the Part instance p1.

Afterwards, this variable can be used in the same way as an ordinary variable

would be used in the CONDITION, ACTION or ALTACTION part.

Note that persistent rule data is data shared by different instantiations of the same

rule type. Data that is shared among different types of rules can be stored into a global

persistent repository, which could be a database providing querying capability. Since this

level of sharing data can be easily done by a global repository, we do not focus on this

level of sharing in the rule language.
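The create-once initialization of persistent rule variables can be sketched with a keyed store. The sketch below is our own illustration of the semantics described above (initialize only on first creation, share one instance across all instantiations of the same rule); `PersistentStore` and its method names are hypothetical.

```python
class PersistentStore:
    """Sketch of persistent rule variables: the initializer runs only the
    first time the variable is created; later rule executions read the
    stored value."""
    def __init__(self):
        self._vars = {}

    def get_or_create(self, rulename, varname, init):
        key = (rulename, varname)
        if key not in self._vars:
            self._vars[key] = init()     # runs exactly once, on first creation
        return self._vars[key]

    def put(self, rulename, varname, value):
        self._vars[(rulename, varname)] = value
```

For example, `RULEVAR persistent int limit (0)` corresponds to `get_or_create("rule", "limit", lambda: 0)`: the first execution stores 0, and subsequent executions see whatever value earlier instantiations left behind.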

4.2.2. Method Calls to the ETR Server

Rule commands can be used in the ACTION or ALTACTION clauses. Rule

commands are method calls to the ETR server. Commands which enable, disable, or

delete rules and rule groups are shown below:

Enable_rule ( String rulename )
Disable_rule ( String rulename )
Delete_rule ( String rulename )
Enable_group ( String groupname )
Disable_group ( String groupname )
Delete_group ( String groupname )

The concept of rule groups will be explained later. All of the above method calls

will be directed to the local ETR server that processes the rules.

4.2.3. Posting an Event

A rule can also post an event synchronously or asynchronously in the ACTION

and ALTACTION parts by using the following statements:

PostSync ( event variable )
PostAsync ( event variable )

The event variable is declared in the RULEVAR clause, and the event parameters are set

via assignment statements in the action/alternative action part of the rule.

4.2.4. Example Rule

A sample rule is given below to show how a rule can be defined. The rule is

evaluates the programmer salaries of companies located in San Jose only, and

it returns the result of the evaluation. During the evaluation, different criteria of

evaluating the salaries are applied based on the revenue (or size) of the company. It

additionally posts an event containing the evaluation results when it finds a large size

company. The BNF for clarifying the rule syntax is given in the Appendix.

RULE salary_rule (int company_revenue, String location, Employee emp)
DESCRIPTION "evaluate programmer salaries of companies in SanJose,
    applying different criteria based on company revenue"
RULEVAR String result; // declare temporary object
    NotifyingEvent found_large_company_event;
    persistent int large_count; // declare persistent object
    existing MyCORBAServer my_server("m012"); // declare existing object
CONDITION [ company_revenue > 0, location = "SanJose",
    emp.job = "programmer" ] company_revenue > $100,000,000
ACTION large_count = large_count + 1; // update persistent value
    result = my_server.EvaluateLargeCompany( company_revenue,
        emp.salary ); // CORBA call
    found_large_company_event.revenue = company_revenue;
    found_large_company_event.Evaluation = result;
    PostAsync( found_large_company_event );
    Return result; // optional return value
ALTACTION result = my_server.EvaluateSmallCompany( company_revenue,
        emp.salary ); // CORBA call
    Return result; // optional return value

4.3. Trigger Specification

Triggers relate events with rules. A trigger specifies an event structure that would

fire a structure of rules. An event structure has two parts, namely, a TRIGGEREVENT

part and an EVENTHISTORY part. The TRIGGEREVENT part specifies a number of

alternative events each of which, when posted, would trigger the evaluation of the event

history specified in the EVENTHISTORY part. If the event history is evaluated to True,

the structure of rules specified by the trigger is processed. Otherwise, the structure of

rules will not be processed. The TRIGGEREVENT part is purposely kept very simple.

It allows the logical OR of a number of simple events (that is, any one of the events

specified in the list, when posted, will trigger the evaluation of the EVENTHISTORY).

The EVENTHISTORY part can be a complex event expression stating the inter-

relationship of a number of events that have been posted. For example, "E1 and E2 but

not E3" have been posted, or "E5 occurred before E4 within a specified time window".

The EVENTHISTORY part expresses a "composite event", a term used in the active

database literature. This separation of TRIGGEREVENT and EVENTHISTORY

provides a way to more specifically name the events that will trigger the evaluation of a

more complex event expression, thus avoiding the repeated evaluation of the complex

event expression. This is different from the event specification of some existing ECA

rule systems in which, when a composite event is specified, all the events mentioned in

the composite event implicitly become the trigger events. In the above two composite

event examples, E1, E2, E3, E4 and E5 are implicitly the trigger events in more

traditional ECA rule systems, i.e., the posting of any one of the events will trigger the

evaluation of its corresponding composite event. In some applications, one may want to

specify that only the posting of E2 should trigger the evaluation of "E1 and E2 but not

E3", and only the posting of E4 should trigger the evaluation of "E5 occurred before E4

within a specified time window". The separation of TRIGGEREVENT and

EVENTHISTORY allows more explicit specification of what triggers the evaluation of

an event history.

The structure of rules, given in the RULESTRUC clause, can be a linear structure

or a general graph structure. In a linear structure, the rules are processed sequentially,

following the rule order. In a graph structure, rules can be executed sequentially, in

parallel, or with synchronization points. The parameters of the TRIGGEREVENT can be

passed to the rules in a RULESTRUC and the trigger would include the specification of

their mappings.

The overall syntax of the trigger is as follows:

TRIGGER triggername ( trigger parameter list )
TRIGGEREVENT events connected by OR
EVENTHISTORY event expression
RULESTRUC structure of rules using subsets of the trigger parameter list
RETURNS returntype : rule in rulestruc

The TRIGGEREVENT or EVENTHISTORY can be omitted, but not both. If the

TRIGGEREVENT is omitted, the default is the OR of all the events referenced in an

EVENTHISTORY expression.

4.3.1. Events in Trigger Specification

The event specification of a trigger consists of two parts, the TRIGGEREVENT part

and the EVENTHISTORY part, which are described in more detail below.


The TRIGGEREVENT part specifies which event triggers the actual processing of the

EVENTHISTORY expression. Only the parameters of the events specified in the

TRIGGEREVENT part are passed. Recall that when the TRIGGEREVENT is omitted,

the default mode is ORing all of the events in the EVENTHISTORY. In this case, the

parameters of the events are not passed because there is no event specified in the

TRIGGEREVENT part. The rules that are triggered in this case do not depend on event

parameters passed. If the parameters are to be passed, the parameters which correspond

to each of the trigger event parameters should be specified in the parameter list of the

TRIGGER clause, and the data types of these parameters must match with those of each

trigger event as shown below.

Assume that E1 and E2 have the following matching parameter lists (i.e.,

parameters with the common data types exist between the two events).

E1(int j1, classX j2, String j3, int j4)
E2(String k1, int k2, classX k3)

The trigger event part defined on these two events can be specified as the

following. The parameters passed by the trigger events are specified as parameters of the

trigger. Here, v1, v2 and v3 correspond to j1, j2 and j3 of E1, and to k2, k3 and k1

of E2, respectively.

TRIGGER sample_trigger (int v1, classX v2, String v3)
TRIGGEREVENT E1 (v1, v2, v3, j4) OR E2 (v3, v1, v2)

The trigger events can only be connected with OR, as in the example above.



The EVENTHISTORY part allows events of the past to be tested, enabling composite event processing. The

history of events can be checked, but parameters of the historical events cannot be passed

to the rules. If we allow for historical events to pass parameters, we must be able to

differentiate between the various occurrences of the event history instances and decide

which event instances compose the event history instance. This is called parameter

contexts in other research systems [Chak94b] that focus on the processing of composite

events. This can be very complicated, and still an easier semantics of parameter contexts

is needed to actually apply them to real world applications. We shall consider the

inclusion of this capability in a future version of the knowledge specification language

and its implementation. In this version, we focus on keeping the semantics of the event

history as simple as possible. Therefore, only the truth value of the event history will be

checked. A simple way to access actual event parameters in an event history is by

accessing them within the rule via a method call to the event history processor which logs

all of the event instances.

Historical events can be expressed using logical or sequence operators to combine

simple events into a composite event. The syntax of a historical event is given below.

Historical_event := [ '[' starttime ']' ] Composite_event [ '[' endtime ']' ]
Composite_event := Composite_event Ev_op Composite_event
Ev_op := Logical_op | Sequence_op
Logical_op := AND '[' timewindow ']' | AND | OR | NOT
Sequence_op := '>' '[' timewindow ']' | '>'

A time window for processing the EVENTHISTORY can be specified. The

following example specifies that only the event history from 20 to 10 days before the

posting of a trigger event is considered in the EVENTHISTORY clause. If during that

time window both El and E2 have occurred, the EVENTHISTORY will return True.

EVENTHISTORY [-20 day] E1 AND E2 [-10 day]

The time window can be specified relative to the trigger event using a '-' sign

which indicates time points "before" the trigger event occurs. Or, a calendar format time

can be given to specify the absolute time points of the window independent of the time

the trigger event occurs. The keyword 'NOW' can be used to denote the time of the

triggering event.

The logical operators are AND, OR and NOT. The AND operator can have an

optional time window specified within brackets (i.e., [ ] ) following the operator. Two

events connected by the AND[time window] operator means that, if the two operand

events occur within the given time window, the composite event has occurred. If the

time window is not specified, it defaults to the time window specified for the whole

EVENTHISTORY clause. The point of time at which a composite event defined by an

AND operator occurs is the time when the later event of the two operands occurs. This

point of time is important for evaluating an AND expression that has a time window

specification. The OR operator means that if any of the operand events has occurred, the

composite event is regarded as having occurred. The NOT operator is used to assure that

a specific event did not occur during the time window given by the EVENTHISTORY


The following examples show some historical events that use AND, OR and NOT

logical operators.

E1 AND[5 min] E2
E4 OR E6

The sequence operator denoted by '>' can have an optional time limit for the

sequence. For example, assume we want to check the occurrence of some historical

events "El occurs before E2 and the time between these two events does not exceed 2

minutes". We can use the following expression:

E1 >[2 min] E2

The sequence may also be cascaded:

E2 >[30 sec] E4 >[2 min] E5

The time that a composite event defined by a sequence is considered to have

occurred is the time the last event in the sequence occurs.

A historical event can be expressed with a mixture of logical operators and

sequence operators. A couple of examples are given below.

(E1 AND[2 sec] E2) >[3 sec] (E3 OR E5)
(E1 >[2 min] E4) AND (E5 >[3 sec] E6)
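The evaluation of the AND[time window] and sequence operators over a log of timestamped occurrences can be sketched as follows. The function names and the event-log representation (event name mapped to its occurrence time in seconds) are our own illustration, not the event history processor's actual interface.

```python
def and_within(log, e1, e2, window):
    """E1 AND[window] E2: both events occurred, and their occurrence
    times differ by at most 'window' seconds.
    'log' maps event name -> occurrence time in seconds."""
    return e1 in log and e2 in log and abs(log[e1] - log[e2]) <= window

def seq_within(log, e1, e2, window=None):
    """E1 >[window] E2: E1 occurred strictly before E2, optionally
    within 'window' seconds; the composite is timed at the last event."""
    if e1 not in log or e2 not in log or log[e1] >= log[e2]:
        return False
    return window is None or log[e2] - log[e1] <= window
```

Consistent with the text, the occurrence time of the AND composite would be the later operand's time, and that of a sequence the last event's time; these sketches only return the truth value, which is all the current event history semantics requires.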

4.3.2. Rule Structure in Trigger Specification

Recall that a CAA rule represents a small granule of logic and control. A number

of CAA rules together can implement a larger granule of logic and control. The

RULESTRUC clause of a trigger specification allows a structure of CAA rules to be

triggered when the event specification is satisfied. It specifies the rule execution order

and maps the parameters of the events to the individual rules.

There are two basic operators for specifying the execution order. The first

operator '>' is used to specify a sequential order of rule execution, and the second

operator ',' is used to specify a parallel execution. For example, the following expression

means that rules R1, R2, R3, and R4 are to be executed sequentially following the

specified order.

R1 > R2 > R3 > R4

Note that the '>' operator can be cascaded. It is also straightforward to specify a

parallel execution of rules. The following example shows that rules R1, R2, R3, and R4

are to be executed in parallel:

(R1, R2, R3, R4)

For expressing a more complicated rule structure, the structure can be broken into

pieces, which are divided by ';' and each piece is specified using the concept of fan-in

and fan-out.


[Figure: R1 leads to R7; R2 leads to R4; R3 fans out to R5 and R6; R4 and R5 fan in to R8; R5 and R6 fan in to R9.]

Figure 5. An example of a rule structure.

Assume that the rule execution structure shown in Figure 5 is desired. The

semantics of the rule execution in the graph is the AND semantics, which means that

each rule must wait for all of its predecessors to finish before it can execute. Rule R8

must wait for both R4 and R5 to finish before it can execute. The above structure can be

specified as follows:

R1 > R7;
R2 > R4;
R3 > (R5, R6);
AND (R4, R5) > R8;
AND (R5, R6) > R9

The sequential executions such as R1 > R7 and R2 > R4 are specified as before.

A fan-out is used to specify that, after the execution of R3, rules R5 and R6 can start their

execution independently. The following two fan-in sequences are also specified. After

both R4 and R5 finish their execution, R8 can then start. And after both R5 and R6 finish

their execution, R9 can then start its execution.

Any kind of complex rule structure can be decomposed using this fan-in and fan-

out mechanism. The fan-out is only used when the destination rules (R5, R6 in the above

example) of the fan-out have a common single originating rule (R3 in the above

example). The fan-out may be specified in various ways (i.e., in the above structure it

may be specified as R3>R5 ; R3>R6); whereas, specifying the fan-in exactly as it is

shown in the structure is mandatory for assuring the correctness of the specification.
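To make the execution semantics concrete, the fan-in bookkeeping can be sketched as a small dependency-counting scheduler: each rule records the number of unfinished predecessors and becomes ready when that count reaches zero. The following Java fragment is only an illustration of the AND semantics; the class and method names are ours, not those of the actual ETR Server.

```java
import java.util.*;

// Minimal AND-semantics scheduler for a RULESTRUC graph (illustrative only).
class AndScheduler {
    private final Map<String, Integer> pending = new HashMap<>();     // rule -> unfinished predecessors
    private final Map<String, List<String>> successors = new HashMap<>();
    final List<String> executed = new ArrayList<>();                  // execution order, for inspection

    // edge "from > to": rule 'to' must wait for rule 'from'
    void addEdge(String from, String to) {
        successors.computeIfAbsent(from, k -> new ArrayList<>()).add(to);
        pending.merge(to, 1, Integer::sum);
        pending.putIfAbsent(from, 0);
    }

    // execute every rule whose predecessors have all finished (roots first)
    void run() {
        Deque<String> ready = new ArrayDeque<>();
        for (Map.Entry<String, Integer> e : pending.entrySet())
            if (e.getValue() == 0) ready.add(e.getKey());
        while (!ready.isEmpty()) {
            String r = ready.poll();
            executed.add(r);                                          // "execute" the rule
            for (String s : successors.getOrDefault(r, List.of()))
                if (pending.merge(s, -1, Integer::sum) == 0) ready.add(s);
        }
    }
}
```

For the structure of Figure 5, adding the eight edges and calling run() executes all nine rules in an order consistent with the graph; in particular, R8 runs only after both R4 and R5.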

In the above example, only the AND semantics are used for fan-in. To allow

more flexibility in defining a rule execution structure, we allow the OR semantics to be

included into the fan-in construct. The OR semantics means that when a subset of the

predecessors of a rule are finished, the rule can start its execution. In the previous

example, to say R8 needs to wait for only one of the two rules R4 and R5 to finish, we

can use the following expression:

OR[1] (R4, R5) > R8

The 'OR[1]' means that the fan-in needs only to wait for one of the predecessors

to finish. The number of rules to wait for can be specified in the brackets.

Some other examples are:

OR[2] (R1,R2,R3,R4,R5,R6) > R7
OR[3] (R1,R2,R3,R4,R5,R6,R7,R8) > R10

The 'OR[2]' means 'wait for two out of R1, R2, ..., R5, and R6', and 'OR[3]' means 'wait for three out of R1, R2, ..., R7, and R8'.
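The OR[n] semantics can be sketched as a counter that fires its target rule exactly once, when the n-th predecessor finishes; later completions are ignored. This is an illustrative fragment with names of our own choosing:

```java
// Illustrative OR[n] fan-in: the target rule fires once n of its predecessors finish.
class OrFanIn {
    private final int needed;        // the n in OR[n]
    private int finished = 0;
    private boolean fired = false;

    OrFanIn(int n) { this.needed = n; }

    // Called when one predecessor rule completes; returns true exactly once,
    // when the n-th predecessor arrives, so the target rule can be scheduled.
    boolean predecessorFinished() {
        finished++;
        if (!fired && finished >= needed) { fired = true; return true; }
        return false;
    }
}
```

For OR[2] (R1, R2, R3) > R7, the second call to predecessorFinished() returns true and R7 is scheduled; the third call returns false, so R7 is not scheduled twice.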

These AND and OR operators can be nested to specify a more complicated

execution structure, as shown in Figure 6. The textual representation of the graph is

given below:

AND (R4, R5, R6, OR[2] (R1,R2,R3) ) > R7

[Figure 6: R1, R2, and R3 feed an OR[2] node, which joins R4, R5, and R6 in an AND fan-in to R7.]

Figure 6. Example rule execution structure with nested AND and OR operators.

The same rule may appear more than once in a single RULESTRUC. In that case,

a rule alias mechanism is needed to differentiate between these different occurrences of

the same rule. Rule R1 may be executed at the beginning of the RULESTRUC and also at the end of the RULESTRUC. In this case, a simple naming method is to use R1 and R1-1 to differentiate them in a RULESTRUC. Additional occurrences of R1 can be denoted as R1-2, R1-3, R1-4, and so on.
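Under this naming convention, resolving an alias back to its base rule is a simple string operation; the helper below is hypothetical and only illustrates the convention:

```java
// Resolve a rule alias such as "R1-2" back to its base rule name "R1"
// (hypothetical helper illustrating the alias naming convention).
class RuleAlias {
    static String baseRule(String alias) {
        int dash = alias.lastIndexOf('-');
        // a trailing "-<number>" marks an additional occurrence of the same rule
        if (dash > 0 && dash + 1 < alias.length()
                && alias.substring(dash + 1).chars().allMatch(Character::isDigit))
            return alias.substring(0, dash);
        return alias;               // no alias suffix: the name is the base rule
    }
}
```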

One thing that needs to be considered in the execution of a RULESTRUC is when

a rule is disabled or deleted from the rule system. When a rule is disabled at run-time, the

execution of the rule is skipped within the RULESTRUC. The successor of the disabled

rule will inherit all of the relationships that the disabled rule has with its predecessors.

As an example, if rule R5 is disabled in Figure 5, the rule server will bypass the processing of R5 when R3 is completed. As soon as R6 is completed, it will then process rule R9 without considering R5 (similarly, R8 then waits only for R4 and for the inherited predecessor R3). If a rule is deleted, then every occurrence of the rule in

all the RULESTRUCs will be bypassed.
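The inheritance of relationships by the successors of a disabled rule can be sketched as a graph rewrite that removes the disabled node and connects each of its predecessors directly to each of its successors. The fragment below is an illustration with our own names, not the rule server's code:

```java
import java.util.*;

// Illustrative bypass of a disabled rule: its successors inherit its predecessors.
class RuleGraph {
    final Map<String, Set<String>> succ = new HashMap<>();   // rule -> direct successors

    void addEdge(String from, String to) {
        succ.computeIfAbsent(from, k -> new LinkedHashSet<>()).add(to);
    }

    // Remove 'disabled' from the graph; every rule that pointed at it
    // now points directly at the disabled rule's successors.
    void bypass(String disabled) {
        Set<String> out = succ.remove(disabled);
        if (out == null) out = Set.of();
        for (Set<String> targets : succ.values())
            if (targets.remove(disabled)) targets.addAll(out);
    }
}
```

Bypassing R5 in the Figure 5 graph gives R3 the direct successors R8 and R9, so neither R8 nor R9 waits for R5 any longer.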

Since event parameters can be passed to rules, the mapping of event parameters to

the individual rules within a RULESTRUC needs to be specified. The following example

shows how the mapping is done:

TRIGGER sample_trigger ( int v1, int v2, classX v3 )
TRIGGEREVENT E1(v1,v2,v3) OR E2(v3,v2,v1)
EVENTHISTORY [03/10/1998::::] E5 > [2min] (E3 AND E6) [-10 hours]
RULESTRUC R1(v1,v2) > R2(v3,v1) > R3(v1,v2,v3)

In the example, R1 uses event parameters v1 and v2, R2 uses v3 and v1, and R3 uses v1, v2, and v3. The original values of event parameters that are passed to the trigger

are passed to the individual rules. Of course, the mapped parameters of the rules must be

type compatible with the event parameters.
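The type-compatibility requirement can be sketched as a check of each mapped rule argument against the declared event parameter types. For simplicity, the illustrative fragment below treats compatibility as exact type-name equality; the class and method names are ours:

```java
import java.util.*;

// Illustrative check that a rule's mapped parameters are type-compatible
// with the trigger's event parameters. Compatibility is simplified here
// to exact type-name equality.
class ParamMapping {
    // eventParams: parameter name -> type, e.g. the trigger (int v1, int v2, classX v3)
    static boolean compatible(Map<String, String> eventParams,
                              List<String> ruleArgs, List<String> ruleTypes) {
        if (ruleArgs.size() != ruleTypes.size()) return false;
        for (int i = 0; i < ruleArgs.size(); i++) {
            String actual = eventParams.get(ruleArgs.get(i));   // type of the event parameter
            if (actual == null || !actual.equals(ruleTypes.get(i))) return false;
        }
        return true;
    }
}
```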

A trigger can return a value after executing the RULESTRUC. This returned

value is also the value returned to the event that was posted synchronously to cause the

processing of the trigger. The value can be a return value from one of the rules

participating in the RULESTRUC. If the rule that generates the return value is disabled,

the default return value is null. The above trigger returns an integer value that is

generated by R1.

4.4. Rule Group Specification

In a complex application environment, many rules may be required to capture

various constraints, business rules, policies and regulations. However, not all the rules

are useful for a particular situation. It would be ideal to provide a mechanism to activate

or deactivate some subsets of rules dynamically. For this purpose, the concept of rule

groups is introduced in our rule specification language. Rules can be grouped to allow

for easy activation or deactivation of a set of rules. A rule can participate in one or

multiple groups, or it may not belong to any group; in this case, it belongs to the default

group. When an application activates a rule group, all the rules in that group become

active (i.e., can be triggered by some events if they participate in the rule structures of

some triggers). However, within an activated group of rules, individual rules could have

been deactivated. When an event and rule server is in the process of processing the rules

specified in a rule structure in response to the posting of an event, it will check if each

rule in the rule structure should be processed or not. The algorithm for deciding the

execution status of a rule is as follows:

(1) If the rule is active (i.e., the rule has not been explicitly deactivated), then go

to (2). Otherwise, the processing of the rule is bypassed.

(2) If there exists a group that the rule is a member of and the group is active, then

the rule is processed. Otherwise (i.e., all the groups that the rule is a member

of have been deactivated), the rule is bypassed. Note that if the rule is only a

member of the default group, it is processed because the default group is

always active.
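The two-step algorithm can be transcribed almost directly into code. The sketch below is ours; it represents membership in only the default group by an empty group set, since the default group is always active:

```java
import java.util.*;

// Illustrative transcription of the two-step execution-status algorithm.
class RuleStatus {
    static boolean shouldExecute(boolean ruleActive,
                                 Set<String> memberGroups,    // groups the rule belongs to
                                 Set<String> activeGroups) {  // currently activated groups
        if (!ruleActive) return false;        // step (1): explicitly deactivated rule
        if (memberGroups.isEmpty()) return true;  // default group only: always active
        for (String g : memberGroups)         // step (2): at least one active group
            if (activeGroups.contains(g)) return true;
        return false;                         // all of the rule's groups are deactivated
    }
}
```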

An example of defining a rule group is:

RULEGROUP rule_group1

The STATUS clause specifies whether the group is initially active or suspended at definition time.



The infrastructure to support the knowledge network concept is composed of a

large number of knowledge Web servers on the Internet. Each knowledge Web server

has the capability to interact with users and other knowledge Web servers. The

knowledge Web server includes additional modules to extend the capability of the current

Web servers. We will discuss the architecture and the detailed components of the

knowledge Web server in the following sections.

5.1. Architecture of the Knowledge Web Server

The general architecture of the knowledge Web server is shown in Figure 7.


Figure 7. Overview of architectural components in the knowledge Web server.

Each knowledge Web server has an Event Manager, an ETR Server, and a

Knowledge Profile Manager, which are additional components installed on a typical Web

server. Because these components are installed on each Web server, the whole

infrastructure based on the Internet will have a symmetric architecture.

5.1.1. Event Manager

The Event Manager handles the incoming and outgoing events to and from the

Web server. When a new event is defined through the Knowledge Profile Manager (to be

explained in subsection 5.1.3.), the meta-data for the event is given to the Event Manager

in order to enable it to recognize and handle the event. The Event Manager provides an

interface to allow the local applications to connect to itself and generate an event. Also,

the Event Managers can communicate with each other for the purpose of sending and

receiving events. The Event Manager is also responsible for performing event filtering

before it sends out events to the subscribers in order to support a selective subscription of

events. It also includes the event registration capability for remote clients to register their

interest in subscribing to certain events provided by the knowledge Web server. During

the registration, the Event Manager may contact the ETR Server to install parameterized

rules. Also, when the Event Manager receives an event from a remote web server, it

passes it to the local ETR Server to initiate the processing of triggers and rules.
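The filtering step performed before events are sent out can be sketched as follows: the Event Manager evaluates each subscriber's filter against the event instance and delivers the event only to the subscribers whose filters accept it. The subscriber and filter representations below are our own simplifications, not the Event Manager's actual interfaces:

```java
import java.util.*;
import java.util.function.Predicate;

// Illustrative selective distribution: apply each subscriber's filter
// to the event instance before sending it out.
class EventDistributor {
    record Subscriber(String url, Predicate<Map<String, Object>> filter) {}

    // returns the subscriber URLs that should actually receive this event instance
    static List<String> recipients(Map<String, Object> event, List<Subscriber> subs) {
        List<String> out = new ArrayList<>();
        for (Subscriber s : subs)
            if (s.filter().test(event)) out.add(s.url());
        return out;
    }
}
```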

5.1.2. ETR Server

The ETR Server processes the triggers and rules in the knowledge Web server.

Triggers and rules are defined by users who are authorized to login to the local

Knowledge Profile Manager as a knowledge provider or subscriber. The trigger and rule

definitions that are input through the Knowledge Profile Manager are provided to the

ETR Server and are transformed into internal data structures used for executing the

triggers and rules. The Event Manager also gives information about parameters for

parameterized rules provided by subscribers of events during the event registration

process. Note that, for security reasons, the ability of remote users to install rules must be limited. The ETR Server receives events from the local Event

Manager and performs the trigger and rule processing. On receiving an event, the ETR

Server can immediately identify the trigger related to the event, efficiently process the

event history, and schedule the rules specified in the trigger. The ETR Server executes

rules that are composed of method calls, which may execute local or remote applications,

or invoke methods of distributed objects. The rule can also generate an event to trigger

other rules.

5.1.3. Knowledge Profile Manager

Each user that has data and applications on the web server has a knowledge

profile that is maintained by the Knowledge Profile Manager. The knowledge profile

stores information about events, triggers, and rules. The knowledge profile for a specific

data (or service) provider contains the events, triggers and rules that were defined by the

data (or service) provider. Also, a user can think of the knowledge Web server as his/her

agent server. A knowledge profile for the user will show what events the user has

subscribed to, and also the trigger and rules that were defined on the subscribed event. A

Meta-data Manager module within the Knowledge Profile Manager provides persistence

for storing the user knowledge profiles.

The knowledge profile is updated when new events, triggers, and rules are

defined. When a new event is defined, its definition is passed on to both the ETR Server

and the Event Manager to enable the components to recognize the event and perform

installation operations. For trigger and rule definitions, they only need to be passed to the

ETR Server.

A special user of the system may be regarded as the super-user of the knowledge

Web server. For the purpose of maintaining system level information about the

knowledge Web server, a super-user knowledge profile may exist to deal with the

management of system-level events, triggers and rules. Some of this system-level

knowledge may also be published on a specific web page. The super-user also manages

the user accounts on the knowledge Web servers.

[Figure 8: two interacting knowledge Web servers; on the left, a data provider (who owns a profile on server A) installs events and rules over its data and applications, and the provider's server exchanges registrations and events with a subscriber's server on the right.]

Figure 8. Component interactions between two knowledge Web servers.

5.2. Component Interaction Sequences

The architectural components described above are replicated within each Web

server on the Internet, which makes the infrastructure symmetric. But during the

interaction between the knowledge Web servers, although the components are identical,

they can have different functional responsibilities based on their roles in the interaction.

Figure 8 shows two interacting knowledge Web servers: the left one as the provider of

events and data, and the right one as a client to events and data.

The activities that occur between two interacting knowledge Web servers are

divided into two phases: knowledge network construction (or build-time) activities and

knowledge network processing (or run-time) activities. The construction activities are

denoted by solid arrows and the processing activities are denoted by dotted arrows.

Circled numbers indicate the sequence of the activities that occur.

The knowledge network construction activities include:

(1) The data provider installs events, triggers, and provider-side rules. The provider-side rules can be executed on the provider's server if desired by an event subscriber.

(2) The event and event filter template installation is carried out by the Event Manager. Event filter templates enable subscribers to specify event filter instances during the registration process performed in step (4).

(3) Provider-side parameterized rules are given to the ETR Server. The event and trigger information is also provided to the ETR Server.

(4) A client accesses the web page of the data provider, registers for the subscription of an event, and specifies an event filter. Provider-side rules are also selected. After the registration has been successfully performed, the event subscription information is forwarded to the subscriber's site.

(5) The client accesses his own knowledge profile and installs additional triggers and rules related to the subscribed event to be processed at the client site.

The knowledge network processing activities include:

(1) The data provider generates an event to be posted to the subscribers, which is first filtered by the Event Manager.

(2) If a subscriber of the event had tied the event to a provider-side rule during the registration, the relevant rule is now executed on the provider's knowledge Web server.

(3) The event is posted over the Internet to the subscribers of the event, and the Event Manager on the subscriber site receives it.

(4) The event received is forwarded to the Knowledge Profile Manager and can be kept in the subscriber's profile for the purpose of logging the events to be viewed later on.

(5) The event received is given to the ETR Server to execute any relevant subscriber-specified triggers and rules.


The knowledge network concept can be realized with the architecture composed

of knowledge Web servers described in the previous section. The issues and details of our

approach regarding the construction of the knowledge network, the processing that occurs

within the framework, and the management of the knowledge network are discussed in

this section.

6.1. Constructing the Knowledge Network

Constructing the knowledge network includes several stages that add some type

of knowledge into the framework. The stages are identified as: (1) defining and

publishing new events, triggers, and rules in the knowledge network by the provider,

(2) performing event registration by the subscriber, and (3) defining triggers and rules by

the subscriber. The issues regarding these stages are important, as they are the starting point for incorporating knowledge into the Internet.

6.1.1. Defining and Publishing Events and Rules

A knowledge provider can define new events that he/she would like to post over

the Internet. The detailed description about these events will also be provided on the

provider's web page. By publishing these events on a web page, users on the Internet can

browse the event description and subscribe to the event by registering themselves as

subscribers to the event. Provider-side rules are intended to allow subscribers of events

to automatically execute rules on the provider's knowledge Web server when the

subscribed event occurs. Rules can be defined by the provider and displayed within event

registration forms. During the registration process, after the subscriber enters all

information needed for subscribing to an event in the event registration form, he/she can

also select a subset of the rules given by the provider. In order to support the definition

and publishing of events and provider-side rules, the following issues need to be

considered. First, there should exist authorization levels for publishing knowledge in the

knowledge Web server. Some users should have the authority to publish on the

knowledge Web server while others should not. Second, defining events, filters, and

provider-side rules should be easy for even novice users. In other words, low-level

program coding should be avoided. Third, mechanisms to prevent name conflicts and

type mismatches for event names and event parameters must be devised within the

Internet community. Fourth, registration form generation should be handled

automatically by the underlying system instead of the user needing to create separate

HTML pages and scripts for it. Fifth, the underlying system needs to be able to

effectively support provider-side parameterized rules without creating a huge number of

almost identical rules customized for each subscriber. Considering these issues, our

approach to support the definition and publishing of events and provider-side rules is

described below in the order that the publishing is performed.

(1) Log into the knowledge profile in provider mode

When logging into the knowledge profile, the authorization level is checked to

determine if the user can work in the subscriber mode and/or the provider mode. The

authorization levels are set by the administrator of the knowledge Web server. Provider

levels are given to users that provide data and knowledge on the knowledge Web server.

Once logged in as a provider, the user can now define events, event filter templates (i.e.,

templates describing a filter defined by a provider), provider-side rules, and event

registration forms.

(2) Define events

In order to define an event, the event name, event parameters, and generating

mechanism need to be specified. These can be done using the implemented Knowledge

Profile Manager GUI, which is easy to use for even novice users. The definitions will be

stored in persistent store and also be used for installing run-time components such as

generating event Java classes within the system.

The event name may be considered as any unique string that could identify it on

the knowledge Web server. Different providers on the same system may want to use the

same event name. This would create a conflict in the event name space of the knowledge

Web server. One way to isolate providers from one another on the same knowledge Web server is to add the provider name to the event name as a prefix.

Another issue that may arise in the Internet community is related to guidelines in defining

the event name to prevent conflicts over the whole Internet. Because different knowledge

Web servers may possibly use the same event name, the event name space would be hard to manage and conflicts would result. Thus, a prefix such as the web server address can be used

to make the event names unique over the whole Internet. Therefore, a prefix combining

the web server address and the provider id would be appended to the desired event name

to create the final Internet-unique event name. Another approach would be to establish an

ontology on the event names to be universally used over the Internet. This requires the

cooperation of the whole Internet community, which would be difficult to achieve but would provide the greatest benefit.
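Under the prefixing approach, constructing the Internet-unique event name is a simple concatenation of the web server address, the provider id, and the local event name; the separator character in the sketch below is our choice, not part of the proposal:

```java
// Illustrative construction of an Internet-unique event name by prefixing
// the web server address and provider id (the '/' separator is our choice).
class EventNaming {
    static String globalName(String serverAddress, String providerId, String eventName) {
        return serverAddress + "/" + providerId + "/" + eventName;
    }
}
```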

The event parameters can have any names. The parameter types, however, need to be considered more carefully.

Primitive data types can be used for the parameter types. The primitive data types are

considered as those built in by the Java language. User-defined classes can also be used

for event parameter types. If user-defined classes are used, they must also be passed over

to other knowledge Web servers in order to properly receive and interpret the events

containing these classes. Again, conflicts may occur among the class names between

different knowledge Web servers. There are a couple of approaches for this issue. First,

only events containing well-defined classes may be passed, as these classes may be

defined in a commonly accepted ontology and can have an identical class used over the

whole Internet. This makes things cleaner and results in less redundant class definitions.

But complex types that are not defined in the ontology cannot be used as event

parameters, or the complex type must be somehow disassembled into several parameters

of the event if needed to be passed. Second, all classes defined on each knowledge Web

server should include a prefix of the server address and the provider id, similar to event

names discussed above. This will make class names unique over the Internet and make it

easier to pass them to other web servers without creating any conflict among existing

events. However, this approach can create many redundancies and complicated class

names. For our purpose of showing the concept of Internet event and rule services, we

assume that the first approach is taken due to its simplicity.

The event generating mechanism can be specified in several ways. The events can

be generated by several different components attached to the Web server or within the

Web server. Some examples are: (1) the Web server can generate an event when a

specific Web page is being accessed. (2) An active DBMS attached to the Web server can

be used to dynamically create Web pages on the fly. This DBMS can generate events

when a method is executed on its data. These methods can be update operations on the

Web page data, making it possible to generate events for Web page updates. (3) A

CORBA server may be attached to the Web server and can relay events that are generated

within the ORB. The ETR Server may also be a CORBA server that can interact with

other CORBA servers on the ORB. Events that are coupled with methods of CORBA

servers can be defined and installed into the ORB or CORBA servers. (4) A daemon

program may be monitoring some data (i.e., a file) that is not stored in a database, and

also can generate events when the data is changed. This type of daemon program may be

provided for Web pages stored as files. (5) Any general program may generate an event

and provide it to the Event Manager on the provider's knowledge Web server. For this

case, the event generation mechanism need not perform any task, as the event generation

is solely the responsibility of external programs. For all the other event generation

mechanisms (1) to (4) specified above, there are specific tasks to be carried out, which

are related to the event generating modules, such as making a request to the web server,

coupling events to methods within the active DBMS, coupling events to methods within

the CORBA servers, and informing the daemon to monitor data.
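Mechanism (4) can be sketched as a polling monitor that compares a data source's last-modification time against the value seen at the previous check and reports a change, at which point the daemon would hand an event to the Event Manager. The clock source is abstracted as a supplier so the sketch stays self-contained; all names are ours:

```java
import java.util.function.LongSupplier;

// Sketch of event generation mechanism (4): a daemon polls a data source's
// modification time (e.g. () -> file.lastModified()) and reports a change,
// at which point an event would be handed to the Event Manager.
class ChangeMonitor {
    private final LongSupplier lastModified;   // abstracted clock/data source
    private long lastSeen;

    ChangeMonitor(LongSupplier lastModified) {
        this.lastModified = lastModified;
        this.lastSeen = lastModified.getAsLong();
    }

    // one polling step: true if the data changed since the last check
    boolean poll() {
        long now = lastModified.getAsLong();
        if (now != lastSeen) { lastSeen = now; return true; }
        return false;
    }
}
```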

Once the event is defined, the Event Manager will install all of the relevant code

for processing the newly defined event. Each of these event specifications will

additionally be stored into persistent storage and used by the Knowledge Profile Manager

for defining event filter templates in the next step.

(3) Define the event filter template

When publishing an event, a subscriber may not be interested in subscribing to all

of the event instances that will be posted by the provider. For example, assume that there

is an Airfare Special Offer event being posted, but the subscriber is actually only

interested in airfares for flights that depart from Orlando, FL. This is a subset of the

Airfare Special Offer event. If the provider allows the subscriber to give some values to

indicate which subset of events he/she is interested in, a filter which screens out all

irrelevant events for that particular subscriber can be established. In order to support this,

the provider must give to the subscriber a parameterized filter. In other words, a filter

with some undefined values is created and given by the provider. For the Airfare Special

Offer event, a parameterized filter on the departure place attribute of the event can be

created. This lets the subscriber input (or choose from a list) the actual value of the

departure place. This kind of parameterized filter on the event can be displayed on the

event registration Web page by printing out the departure place attribute with a blank box

beside it. The subscriber can input the value Orlando into the blank box. The

parameterized filter created by the provider is also called an event filter template, which

maps directly to an actual input form.

The provider creates an event filter template as follows. First, he/she picks the event on which to provide an event filter template. Second, one or more

attributes of the event are selected. Third, for each of the attributes, an operator is

specified by the provider, such as equal, range, greater (or less) than, single-value selection, or multiple-value selection. The operator decides how the event filter template

will be displayed to the subscriber. The equal operator means that the subscriber should

specify the exact value of the event attribute, which is of interest to the subscriber. This

will be displayed as a single blank box to the subscriber requesting input. The range

operator means that the subscriber should specify the minimum and maximum values for

the event attribute which are of interest to the subscriber. Two blank boxes indicating the

maximum and minimum value will be displayed to the subscriber for the range operator.

The greater (or less) than operator requires the subscriber to provide a lower bound (or

upper bound) of the event attribute. This results in a single blank box being displayed to

the subscriber requesting a lower bound or upper bound value to be input. The single

selection operator allows the subscriber to select one of the values that the provider has

pre-defined. Thus, the subscriber can select only one value among the multiple

candidates. This results in a drop-down box, which includes all of the candidate values.

The multiple selection operator is similar to the single selection operator except that it

allows the subscriber to select multiple values rather than just a single value, meaning

that the subscriber can receive events that have attribute values falling into any of the

multiple selected values. The multiple selection operator is displayed as a set of checkboxes that can be individually selected and unselected.
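The operators described above can be evaluated by a single routine that compares an event attribute value against the value(s) the subscriber supplied on the registration form. The operator names and value encodings below are our own illustration, not the system's actual representation:

```java
import java.util.*;

// Illustrative evaluation of the filter operators against one event attribute
// value; 'chosen' holds the value(s) the subscriber supplied on the form.
class FilterOperator {
    @SuppressWarnings({"rawtypes", "unchecked"})
    static boolean matches(String op, Comparable value, List<? extends Comparable> chosen) {
        switch (op) {
            case "equal":    return value.equals(chosen.get(0));
            case "range":    return value.compareTo(chosen.get(0)) >= 0
                                 && value.compareTo(chosen.get(1)) <= 0;
            case "greater":  return value.compareTo(chosen.get(0)) > 0;  // lower bound
            case "less":     return value.compareTo(chosen.get(0)) < 0;  // upper bound
            case "single":   return value.equals(chosen.get(0));  // one pre-defined choice
            case "multiple": return chosen.contains(value);       // any of several choices
            default:         return false;
        }
    }
}
```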

The event filter template specification results in an XML file, which is used by

the Knowledge Profile Manager to define the event registration form, and by the Event

Manager during event registration to interpret the file and display the parameterized

filters defined by the provider. Figure 9 shows an example of an event filter template for

the CheapAirplaneTicket event, defined in XML (shown on the left side of the figure), as

well as the corresponding HTML form automatically generated during registration time

from the XML file (shown on the right). For the attribute Departing, a single selection

operator filter is specified, with a predefined list of values: New York, Los Angeles,

Atlanta, San Francisco, and Seattle. The user can select a single value from the combo-box of the corresponding HTML form. Also, Figure 9 shows an example of the multiple-

selection-operator filter on the attribute Destination. The user can check several choices

from the predefined list of cities.

[Figure 9 content: the XML event filter template (left) defines a single-selection filter on Departing, a multiple-selection filter on Destination (Orlando, Miami, Tampa), and an upper-bound filter on Price; the generated HTML registration form (right) shows the corresponding combo box, checkboxes, and a "Price <" input box.]

Figure 9. Example of automatically generating event filter registration form.

(4) Define the provider-side rules

In addition to events, provider-side rules can be defined and the specification of

the rules can be published on the event registration form. Therefore, when a user


subscribes to an event, he can also select rules to be automatically executed on the

provider's knowledge Web server at the time the event is posted.

The definition of rules can be easily done through the Knowledge Profile

Manager GUI. Provider-side rules, the same as subscriber-side rules, have the format of

the rules defined in our rule model having a rule name, rule parameters, rule variables,

condition clause, action clause, alternate action clause, and return type. The classes and

methods used in the rule body must be accessible by the ETR Server by storing those

classes under a designated directory of the ETR Server. Therefore, not only standard

classes provided in Java but also user-defined classes can be used within the rules.

The provider also needs to define triggers, each of which links the published

event with a single published rule. Although the subscriber may see only the rule, the

trigger that links the event to the rule is what is actually used in the underlying system

during registration. If a provider-side rule is selected by the subscriber during the event

registration, the subscriber's id is attached to the relevant trigger.

These provider-side rules are not only installed in the ETR Server but also kept in

an XML file, which is used by the Knowledge Profile Manager for managing the rules

and the Event Manager to display the rules within the event registration form.

(5) Define skeleton of event registration form

Each web page can contain a link to an event registration form, which publishes

the event information that can be subscribed to by remote users. The event registration

form also contains a form to input event subscription information (i.e., user id, password,

notification method as either e-mail or event push, subscribing Internet address, e-mail

address), and filter specifications, along with a set of rules that can be automatically

executed on the provider side when the event occurs. Each event registration form only

deals with a single event. The event registration form is dynamically generated by the

knowledge Web server by parsing a document called the skeleton of the event

registration form.

The skeleton of the event registration form is defined by, first, picking the event

and event filter template to be displayed; second, provider-side rules to be displayed are

also selected and stored into an XML file. All other things are automatically taken care

of by the underlying system. Thus, two files in XML will be used; one to display the

event part and the other for the rule part. These two files are given to the Registration

Servlet within the Event Manager, which dynamically creates the event registration form.

These two files are interpreted at run-time and transformed into two HTML pages and

then merged to create a Web page displaying the contents of the event registration form.

6.1.2. Event Registration

As described in the previous section, providers can publish events on their Web

pages. A user on the Internet can then view a Web page and subscribe to the event by

accessing the event registration form and providing the filtering information that

determines the receipt of future notifications of the event. Also, provider-side rules may

be selected. The following issues are relevant to the process of event registration. First,

the information needed to allow the subscriber to receive an event should be decided. It

should be more than just an e-mail address when using event notifications. Second,

security issues about personal information exposure should be considered carefully.

Third, the display format of the provider-side rules should be easily understandable to a

subscriber. Fourth, the subscribed information should not only be kept in the provider

site but also forwarded to the subscriber site to allow the subscriber site to be prepared for

event notifications.

Our approach to performing the event registration is described below:

(1) Access the event registration form

While surfing on the web, the user comes to a Web page with an interesting event

published. He/she can click on the link that leads him/her to the event registration form.

Each event has its own event registration form. This form is dynamically created by the

Event Manager of the knowledge Web server.

(2) Install an event filter

The subscriber will insert values that are required for installing an event filter.

Some of the values may be input as text, selected from a list, or selected by checkboxes.

These values will be used to filter out irrelevant events for the subscriber.
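The filtering semantics of these values can be sketched as follows. Only exact-match and upper-bound constraints are shown, and the class and method names are illustrative, not those of the actual implementation.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of a subscriber-installed event filter: discrete values must
// match exactly, and a numeric ceiling screens out events whose value
// is too high. Class and field names are illustrative only.
public class EventFilterSketch {
    private final Map<String, String> equalsConstraints = new HashMap<>();
    private final Map<String, Double> maxConstraints = new HashMap<>();

    public EventFilterSketch requireEquals(String param, String value) {
        equalsConstraints.put(param, value); return this;
    }
    public EventFilterSketch requireAtMost(String param, double ceiling) {
        maxConstraints.put(param, ceiling); return this;
    }

    /** true if the event instance passes every installed constraint */
    public boolean matches(Map<String, Object> event) {
        for (Map.Entry<String, String> c : equalsConstraints.entrySet())
            if (!c.getValue().equals(event.get(c.getKey()))) return false;
        for (Map.Entry<String, Double> c : maxConstraints.entrySet()) {
            Object v = event.get(c.getKey());
            if (!(v instanceof Number)
                    || ((Number) v).doubleValue() > c.getValue()) return false;
        }
        return true;
    }
}
```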

(3) Input the subscriber information

The subscriber must input information relevant to receiving the event. A user id

must be provided that can be identified on the subscriber's system, and a password is

provided for security reasons. This user id and password may also be used on the

provider's system for later identifying the subscriber that registered and for pulling up his/her registration information so that the subscriber can modify it.

The event can be delivered as an e-mail or an event object notification. The

method of event delivery is selected by the subscriber. If the e-mail method is selected, a

valid e-mail address must be provided. Otherwise, the URL of the Event Manager on the

subscriber's site must be specified.

Another interesting feature that may be added is to allow the subscriber to specify

a period in which he/she wishes to subscribe to the event. The events may be immediately

subscribed to or may be delayed until a specific time.

(4) Select provider-side rules

In addition to being notified of the occurrence of an event, provider-side rules can

be executed when the relevant event is posted. The rules that are eligible to be executed

when the event occurs are displayed together on the event registration form. Multiple

rules can be selected. But how can the semantics of provider-side rules be easily understood, and the rules easily installed, by a subscriber? Basically, the

complexity of the rules should not be revealed to the subscribers, but what the rule is

capable of should be clearly exposed. This could be in the form of a good natural

language description of the rule for novice users or the actual rule specification for

advanced users. The installation process should mostly be a simple selection operation, if

the input of any rule parameter values is not required. If the rule requires a value to be

input, a simple combo box or text box is displayed for each customizable parameter of

the parameterized rule. These values are stored in a table within the ETR Server. The

selected rules will form a parallel trigger on the provider's site and be entered into the

ETR Server.

(5) Forward the event registration information

Following the whole process of event registration, the subscribed event

information must be forwarded to the Event Manager on the subscriber side. The

information should include the event specification, the URL of the event registration

form, and the user id and password used for event registration.

6.1.3. Subscriber-side Trigger and Rule Definition

After registering for subscription of several events, the subscriber can now log

into his knowledge profile as a subscriber and define triggers and rules related to the

subscribed events. These triggers and rules will be installed in the ETR Server on the

subscriber's site. The steps that are generally taken are: (1) Log into knowledge profile as

subscriber. (2) Define subscriber-side rules. (3) Define subscriber-side triggers.

(1) Log into knowledge profile as subscriber

Once the subscriber has logged into his profile as a subscriber, several tasks can

be carried out. The events that the subscriber has registered for will show up on the

knowledge profile. The event names and parameters, along with the event registration

URL, the user id and password that were used during event registration are all displayed.

The event instances that were received so far by the subscriber's knowledge Web server

may also be viewed. An event alias may also be displayed for each event in order to

differentiate among events of the same type that were registered for multiple times, each

with different event filters installed.

(2) Define subscriber-side rules

The subscriber-side rules can be defined in the same way as provider-side rules

are defined. The rule will show up in the relevant user's knowledge profile when logged

in as a subscriber. The rules are then generated into code and installed into the ETR Server.


(3) Define subscriber-side triggers

The subscriber-side triggers are defined using the subscribed events and the

subscriber-side rules. A trigger can relate a set of these events with a set of the rules. The

triggering events, event history, and rule structure are specified within the trigger. The

rule structure can be a sequential structure, a parallel structure, or an AND-OR

synchronized structure. The parameter mapping from events to rules is also specified in

the trigger. After the trigger is defined, it is given to the ETR Server.


6.2. Processing Events, Triggers and Rules

Once the construction phase of the knowledge network is finished and the

knowledge elements are populated into the framework, the run-time processing of events,

triggers, and rules takes place. The event is generated at the provider's knowledge Web

server. This event is posted to the subscribers and also triggers the provider-side rules. Upon receiving the event, the subscriber-side rules are executed.

6.2.1. Posting Events

The events are generated on the provider's knowledge Web server in several

ways: (1) Web server generated, (2) Active DBMS generated, (3) CORBA server

generated, (4) Daemon program generated, or (5) External application generated.

Regardless of how an event is generated, the Event Manager on the provider side

will receive an event instance. The provider's Event Manager will then use a special data

structure to efficiently look up the subscribers who installed filters that are relevant to the

generated event instance. The data structures used here are the widely used Inverted

Index for discrete value filtering and the Range Index Table, which is used to efficiently

match a value against multiple range conditions.
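The two lookup structures can be sketched as follows. The linear scan used for the range table here is a simplification of the actual Range Index Table, and all names are illustrative.

```java
import java.util.*;

// Sketch of the two lookup structures on the provider side: an inverted
// index mapping (parameter, discrete value) -> subscriber ids, and a
// range table holding [low, high] entries, each owned by a subscriber,
// for one numeric parameter. Names are illustrative.
public class FilterIndexSketch {
    private final Map<String, Map<Object, Set<String>>> inverted = new HashMap<>();
    private final List<double[]> ranges = new ArrayList<>();   // {low, high}
    private final List<String> rangeOwner = new ArrayList<>();

    public void addDiscrete(String param, Object value, String subscriber) {
        inverted.computeIfAbsent(param, k -> new HashMap<>())
                .computeIfAbsent(value, k -> new HashSet<>()).add(subscriber);
    }
    public void addRange(double low, double high, String subscriber) {
        ranges.add(new double[]{low, high});
        rangeOwner.add(subscriber);
    }

    /** subscribers whose discrete filter matches this (param, value) */
    public Set<String> lookupDiscrete(String param, Object value) {
        return inverted.getOrDefault(param, Collections.emptyMap())
                       .getOrDefault(value, Collections.emptySet());
    }
    /** subscribers whose range condition contains the given value */
    public Set<String> lookupRange(double value) {
        Set<String> hits = new HashSet<>();
        for (int i = 0; i < ranges.size(); i++)
            if (ranges.get(i)[0] <= value && value <= ranges.get(i)[1])
                hits.add(rangeOwner.get(i));
        return hits;
    }
}
```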

The event is then posted from the provider to each of the subscribers. If the

subscriber specified the delivery mechanism as e-mail, it is posted by e-mail. Otherwise,

it is delivered by an event notification from the provider's Event Manager to the

subscriber's Event Manager.

If the subscriber selected a provider-side rule, the event is also posted to the

provider site itself. This event notification fires the rules that were selected by the

subscriber on the provider's site.

6.2.2. Trigger and Rule Processing by the Subscriber

When the Event Manager on the subscriber's knowledge Web server receives the

event notification, it will automatically forward it to the ETR Server on the subscriber's

web server. The ETR Server will then check who the subscriber of the event is, and

execute the triggers and rules defined by the subscriber.

The trigger execution involves evaluating the event history, mapping the

parameters from the events to the individual rules specified in the trigger, and scheduling

of the rules according to the structure specified.
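The handling of the two simplest rule structures can be sketched as follows. AND-OR synchronization and parameter mapping are omitted, and all names are illustrative.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the two simplest rule structures a trigger can specify:
// sequential execution in the declared order, and parallel execution
// with a join before the trigger completes. The AND-OR synchronized
// structure is omitted. Names are illustrative.
public class RuleSchedulerSketch {
    /** run rules one after another, in the order declared in the trigger */
    public static void runSequential(List<Runnable> rules) {
        for (Runnable r : rules) r.run();
    }
    /** start all rules at once and wait until every one has finished */
    public static void runParallel(List<Runnable> rules) throws InterruptedException {
        List<Thread> threads = new ArrayList<>();
        for (Runnable r : rules) {
            Thread t = new Thread(r);
            t.start();
            threads.add(t);
        }
        for (Thread t : threads) t.join();
    }
}
```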

The execution of a rule may in turn generate another event, which will create a

chain reaction among Web servers. The rule may also activate local applications, depending on how it is defined.

6.3. Event, Trigger and Rule Management

Events, triggers, and rules that have been defined may not exist permanently or remain valid forever. Thus, they need to be managed after they are defined. The techniques

introduced in this chapter are extensions to our framework to support the management of

events, triggers and rules.

6.3.1. Expiring and Deleting Events

An event log can be kept on the subscriber's knowledge Web server to enable the

subscriber to view the instances of the subscribed events. The problem of having an event

log is that it will continue to grow and eventually take up a large amount of storage space.

In order to alleviate this problem, an event expiration mechanism may be used. Each event may contain an expiration date, after which the event is no longer meaningful. The expiration date can be used to automatically purge

events from the event log. A daemon can periodically go through the event log and

perform the purging based on the expiration date.
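The core step of such a purging daemon can be sketched as follows; the log-entry class is an illustrative stand-in for the real event log record.

```java
import java.time.LocalDate;
import java.util.List;

// Sketch of the purging daemon's core step: drop every logged event
// whose expiration date has passed. LoggedEvent is an illustrative
// stand-in for the real event log entry.
public class EventLogPurgeSketch {
    public static class LoggedEvent {
        public final String name;
        public final LocalDate expires;
        public LoggedEvent(String name, LocalDate expires) {
            this.name = name;
            this.expires = expires;
        }
    }

    /** remove expired entries in place; returns how many were purged */
    public static int purge(List<LoggedEvent> log, LocalDate today) {
        int before = log.size();
        log.removeIf(e -> e.expires.isBefore(today));
        return before - log.size();
    }
}
```

A daemon thread would invoke purge periodically, e.g., once a day, against the persistent event log.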

Another problem with subscribing to events is the fact that the provider may

delete the event from his/her knowledge Web server and no longer post it. In this

case, the subscriber does not know that the event will no longer be posted and will keep

the triggers related to the subscribed event. A mechanism to inform the subscriber of the

deletion of the event on the provider site should be devised. One approach is to make use of the same infrastructure to support this capability: the only addition needed is a special, system-defined event called a management event. This event can carry the name of the event that has been deleted from the provider's

site. When a provider deletes an event from his system, this special type of event will be

posted to all of the subscribers of the deleted event. Then each of the subscribers can

perform some operations to clean up the unnecessary data structures related to the deleted

event, such as the event table and trigger tables. Triggers and rules, namely management

triggers and management rules, can be tied to these events to perform such operations.

Additional operations can be carried out by specifying them in the management rules.

6.3.2. Editing Triggers and Rules

Triggers and rules defined by a subscriber may need to be changed afterwards.

The editing of triggers involves removing rules from, or adding new rules to, the structure, or

modifying the execution sequence of the rules. This can be done through the Knowledge

Profile Manager. The ETR Server is informed about the changes and modifies its internal


data structures. The editing of rules involves not only changing the knowledge profile but

also the generation of the new rule code. The new rule code must also be dynamically

loaded into the ETR Server. The ETR Server is capable of reloading the new rules without being brought down, and the currently running rule instances are not affected by the change. In other words, dynamic reloading of rule code is carried out.
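The visible effect of this dynamic reloading can be sketched as follows. The real ETR Server achieves it by loading recompiled rule classes through a custom class loader; this simplified registry only illustrates that an in-flight rule instance keeps the old code while new invocations see the new version. All names are illustrative.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of hot-swapping rule code: new invocations see the newly
// installed version, while a rule instance already looked up before
// the swap keeps running the old code. The real system does this by
// loading recompiled rule classes through a custom class loader; here
// the versions are plain objects. Names are illustrative.
public class RuleRegistrySketch {
    public interface Rule {
        String fire();
    }

    private final Map<String, Rule> current = new ConcurrentHashMap<>();

    public void install(String name, Rule code) {
        current.put(name, code);
    }

    /** each invocation binds to whatever version is installed right now */
    public Rule lookup(String name) {
        return current.get(name);
    }
}
```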


In Chapter 5, an overview of the architecture of the knowledge Web server was

given. This chapter elaborates in more detail on how each of the components is

implemented and how these components interact.

7.1. Detailed Architecture

The detailed architecture of the knowledge Web server is shown in Figure 10.

There are three key components as described in the previous sections: Event Manager,

Knowledge Profile Manager, and ETR Server.

Figure 10. Detailed architecture of the components of the knowledge Web server.

7.1.1. Event Manager

The Event Manager [Gru99] deals with accepting event registrations and

delivering the events to subscribers of the event. It is composed of an Event Registration

Servlet, which can generate an HTML form for the registration of events and provider-

side rules and also stores the subscription information in persistent storage. The

internal format of the registration form is XML. The Event Registration Servlet

will read the XML files and create an HTML form, which displays the event, event filter

input, and provider-side rule selection. The Event Manager also includes a Filter

Processor, which allows certain undesired events to be screened out before being sent to a

subscriber. The Filter Processor uses a special data structure to perform event filtering

based on specific event parameter values provided by the subscribers. The Event

Distributor activates the Filter Processor and carries out the process of delivering the

events to subscribers. The Event Listener will accept events from other systems and

forward the events to the ETR Server. Java classes that are needed for sending/receiving

the events are maintained in a persistent store.

7.1.2. Knowledge Profile Manager

The Knowledge Profile Manager [Par99] consists of an applet and a servlet,

which allow both provider-side and subscriber-side events, triggers, and rules to be displayed and defined through a browser interface. The Knowledge Profile Manager stores

the event, trigger, and rule information into a Metadata Manager, which manages the

persistent storage of this information. The Knowledge Profile Manager also creates XML

files used for the event registration forms. These XML files are stored in a special

directory that is also accessible by the Event Registration Servlet of the Event Manager.

When the provider defines an event, the related event generation mechanism can

be identified and the code for generating the event can be automatically installed.

Examples are method-associated events for CORBA infrastructures, events generated from active databases, and events generated from Web servers. The event delivery

mechanism is isolated from the definition mechanism. Therefore, the Knowledge Profile

Manager will notify the event installer within the Event Manager about the event class, so

that the related code for delivering the events can be generated.

7.1.3. ETR Server

The ETR Server executes triggers and rules at run-time. It can schedule complex

rule structures based on a sequential, parallel, or AND-OR synchronized structure

specified in a trigger. A block diagram of the ETR Server is shown in Figure 11. The

ETR Server has an Event Hash Table, Trigger Hash Table, Dispatcher, Rule Group

Manager, Rule Code Loader, dynamically created Rule Schedulers, and an Event History Processor. The Event Hash Table stores information about which events map to which

trigger, and the Trigger Hash Table stores information about the parameter mapping and

rule execution sequence for each trigger. The Rule Group Manager maintains the

information about which rule is currently activated, and the Rule Code Loader can load

the rule code dynamically and execute it at run-time. When an event is notified to the

ETR Server, the dispatcher looks up the Event Hash Table and finds the triggers that are

related to the event. A Rule Scheduler is dispatched to start processing a trigger. The

Rule Scheduler, which is created at run-time for each invoked trigger, performs the

scheduling and parameter mapping tasks while interacting with the Rule Group Manager,

and whenever rule code is changed and needs to be reloaded or executed, it calls the Rule Code Loader.
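The dispatch path can be sketched as follows. The two hash maps here are simplified stand-ins for the Event Hash Table and Trigger Hash Table, and all names are illustrative.

```java
import java.util.*;

// Sketch of the dispatch path: the event table maps an event name to
// the triggers it participates in, and the trigger table maps a
// trigger name to the rule names it schedules. These stand in for the
// Event Hash Table and Trigger Hash Table. Names are illustrative.
public class DispatcherSketch {
    private final Map<String, List<String>> eventTable = new HashMap<>();   // event -> triggers
    private final Map<String, List<String>> triggerTable = new HashMap<>(); // trigger -> rules

    public void defineTrigger(String trigger, List<String> events, List<String> rules) {
        triggerTable.put(trigger, rules);
        for (String e : events)
            eventTable.computeIfAbsent(e, k -> new ArrayList<>()).add(trigger);
    }

    /** returns, per fired trigger, the rule list to hand to a Rule Scheduler */
    public Map<String, List<String>> dispatch(String eventName) {
        Map<String, List<String>> toSchedule = new LinkedHashMap<>();
        for (String t : eventTable.getOrDefault(eventName, Collections.emptyList()))
            toSchedule.put(t, triggerTable.get(t));
        return toSchedule;
    }
}
```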

The ETR Server has the capability of grouping rules so that they can be more easily enabled or disabled. The grouping of rules is managed by the Rule Group Manager. A

Dynamic rule change is also possible at run-time without bringing down the server. A

rule can be changed and immediately installed without affecting the previously running

rules. Additional code for the Java class loader was written to support this capability,

resulting in our own Rule Code Loader. The rule code is stored in a persistent store as

Java classes. The internal data structures of the ETR Server are also stored in a persistent store.


The ETR Server can also process complex relationships among the event

occurrences using the Event History Processor. The Event History Processor can

accumulate event occurrences and evaluate an event history expression to check if a

certain relationship among events exists within the past history. This evaluation is

invoked during the trigger processing before the rule structure is executed.
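A minimal version of such a check can be sketched as follows. Real history expressions also cover ordering and time windows among occurrences, which are omitted here, and all names are illustrative.

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;

// Sketch of an event history check: accumulate occurrences and test a
// conjunctive history expression such as "E1 and E2 have both
// occurred". Ordering and time-window conditions are omitted.
// Names are illustrative.
public class EventHistorySketch {
    private final List<String> occurrences = new ArrayList<>();

    public void record(String eventName) {
        occurrences.add(eventName);
    }

    /** true if every required event appears somewhere in the history */
    public boolean allOccurred(Collection<String> required) {
        return occurrences.containsAll(required);
    }
}
```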

Figure 11. Architectural components of the ETR Server.


7.2. Detailed Component Interactions and Interfaces

This section explains how the components interact with each other through their

interfaces during the various stages of event, trigger, and rule publishing and processing.

7.2.1. Component Interactions for Publishing Events, Triggers, and Rules

Figure 12 shows the interactions among the components that are carried out when

a provider publishes events, triggers and rules. When the provider wants to publish

events, triggers and rules, he first accesses the Knowledge Profile Manager. Through the

Knowledge Profile Manager, a provider can define events, triggers and rules, which are

then transformed into an XML file format. The XML file format is later used to

dynamically create and display an event registration form. The event, trigger, and rule

information are all stored into the Metadata Manager. The event definitions are given to

the Event Manager so that the appropriate code needed for event delivery (e.g., event

classes in Java) can be generated. The ETR Server also installs the triggers and rules into

its system and is ready to fire the provider-side rules.

Figure 12. Interaction among components for publishing events, triggers, and rules.

7.2.2. Component Interactions for Event Registration on Provider Site

The interactions among the components on a provider site when a subscriber

registers for an event are shown in Figure 13. The subscriber will access the event

registration form via the Event Registration Servlet, which reads the event, trigger, and

rule information that is in XML format. The subscriber will input the event filter

information and subscriber information and also select certain provider-side rules. The

event filter information, along with the subscriber information, is given to the Event Manager, which stores it in persistent store and updates the data structures used for

efficient filter processing. The information about the event that is needed for event delivery, such as the event class, is passed to the remote subscriber site by the Event Manager.

The provider-side rules that were selected by the subscriber are installed within the ETR

Server along with the subscriber id.

Figure 13. Interaction among components for event registration on provider site.

7.2.3. Component Interactions during Event Registration on Subscriber Site

The interactions among the components on a subscriber site while a subscriber

registers for an event are shown in Figure 14. The Event Manager on the provider site of

the event will pass to the subscriber site the event information needed for receiving the

event. The subscriber's id is also passed along at this time. The Event Manager on the

subscriber site will store the event information and perform additional code generation if

needed for receiving the event. The subscriber's id and the event that was subscribed to

will be added to the persistent store, which contains all the event subscription

information. The subscriber can later define triggers and rules on those subscribed events.


Figure 14. Interaction among components for event registration on subscriber site.


7.2.4. Component Interactions during Subscriber-Side Trigger and Rule Definition

The interactions among the components on a subscriber site while a subscriber

defines triggers and rules on a subscribed event are shown in Figure 15. The subscriber

first accesses its Knowledge Profile Manager. The Knowledge Profile Manager displays

the events to which he/she has subscribed. The subscriber can then define triggers and

rules related to these subscribed events. Once these are defined, they are stored into the

Metadata Manager and also installed in the ETR Server on the subscriber site.

Figure 15. Interaction among components for defining trigger and rule on subscriber site.

7.2.5. Component Interactions for Posting an Event on the Provider Site

The interactions among the components on a provider site while posting an event

are shown in Figure 16. The event is generated by any mechanism that is supported by

the system. In the figure, the CORBA event service is shown. Once the event is

generated, it is passed to the Event Manager. The Event Manager will then perform the

filtering and post the event to the subscribers of the event. During this process, the Event

Manager uses the subscription information and the event classes. The event is also given

to the ETR Server that resides in the provider site in order to fire provider-side rules that

were selected by the subscribers.

Figure 16. Interaction among components for posting event on provider site.

7.2.6. Component Interactions for Receiving an Event on the Subscriber Site

The interactions among the components on a subscriber site when receiving an

event are shown in Figure 17. The event is delivered to the Event Manager on the

subscriber site. The Event Manager may then log the event into a persistent store. It will

then pass the event to the ETR Server. The ETR Server looks up triggers and rules that

are defined by the subscriber and executes the rules.

Figure 17. Interaction among components for receiving event on subscriber site.

7.2.7. Component Interactions within the Knowledge Web Server

Figure 18 shows all of the interactions that take place within the knowledge Web

server. As described in detail in the previous sections, the interactions take place at

different stages of the whole process.

Figure 18. Component interactions within the knowledge Web server.


To illustrate the usefulness of knowledge networks, we have developed two

example e-commerce applications to demonstrate the presented concepts and

technologies: a business-to-customer scenario based on a travel agency named "Gator

Travel Agency", and a business-to-business e-commerce scenario based on a company

named "IntelliBiz". Due to the space limitation, the scenarios will be kept simple, but

will be sufficient to explain the possible applications of all the key components and

features of a knowledge network. The scenarios assume that there exists a knowledge

Web server hosting the home pages of the Gator Travel Agency and of IntelliBiz, respectively.


8.1. Business-to-Customer E-Commerce Scenario

Business-to-customer e-commerce is a very popular type of e-commerce. The

Gator Travel Agency scenario described in this section illustrates how the knowledge

network technologies can enhance this type of e-commerce.

8.1.1. The Gator Travel Agency

The events that are published on the home page of the Gator Travel Agency are as

follows. The parameters of the events are shown within parentheses.

* AirfareSpecialOffer ( departure_city, destination_city, price, period): This event is

posted when a special offer on airfares is announced. The event notification includes

the departure city, destination city, price, and the time period in which this special

offer is valid.

* FlightCancelled ( flightno, reason): This event notifies that a flight has been

cancelled. The flight number and the reason for the cancellation are provided through

the event parameters.

* FlightDelayed (flightno, reason, delay_time): This event notifies that a flight has been delayed. The flight number, the reason for the delay, and the length of the delay are provided through the event parameters.

The event filters provided for each of the events are as follows:

* AirfareSpecialOffer (departure_city, destination_city, price, period): The

AirfareSpecialOffer event can be filtered based on the departure city, destination city,

price, and the period. Using filters, the subscriber of the event can specify which

specific subset of the AirfareSpecialOffer events he/she would like to receive.

* FlightCancelled (flightno): The FlightCancelled event can be filtered based on the flight number. For example, a subscriber may be interested only in the flights for which he/she has made reservations.

* FlightDelayed (flightno): The FlightDelayed event can be filtered based on

the flight number. As with the FlightCancelled event, a subscriber would only

be interested in the flights for which he/she has made reservations.

The provider-side rules for each event are given as follows.

* AirfareSpecialOffer Book (names, departure_date_time, return_date_time):

This is a rule provided by the travel agency which can automatically book a

ticket for the special offer.

* FlightCancelled Refund, Rebook( nextnearestflag, nextdirectflag): Refund is a

rule which automatically refunds the airfare to the subscriber if the traveler considers the flight not worth rebooking. Rebook is a rule which performs an automatic

rebooking based on the traveler's preference, such as the next earliest flight or the next direct flight without connections.

* FlightDelayed Refund: This rule automatically refunds the airfare to the traveler.

8.1.2. The Subscribers

In the scenario, there are several subscribers. Each subscriber demonstrates a

different usage of the knowledge Web server.

Subscriber C1 maintains a knowledge Web server that publishes the lowest airfares available on the Internet. It also maintains a separate mirror site S1, which has the same information as C1's site. C1 subscribes to the event AirfareSpecialOffer. C1 compares the price information that is received through the event with the prices stored on its own knowledge Web server using the rule R1. If C1 finds that the newly

announced ticket prices are lower than the ones stored in its knowledge Web server, it

immediately updates its prices. It also notifies the mirror site S1 about this fact by posting

an event to S1. S1 will then automatically update its prices using the rule RS. This

scenario shows the advantage of our approach by chaining the events and rules to

propagate information to several servers in a timely and intelligent fashion.
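The behavior of rule R1 can be sketched as follows. The callback standing in for the posting of the update event to the mirror site S1, and all other names, are illustrative.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.BiConsumer;

// Sketch of subscriber C1's rule R1: compare the announced airfare with
// the locally stored price and, when it is lower, update the local copy
// and post an update event toward the mirror site S1. The poster is a
// pluggable callback so the chaining step can be observed in a test;
// all names are illustrative.
public class PriceUpdateRuleSketch {
    private final Map<String, Double> localPrices = new HashMap<>(); // route -> price
    private final BiConsumer<String, Double> postToMirror;           // stands in for posting to S1

    public PriceUpdateRuleSketch(Map<String, Double> initial,
                                 BiConsumer<String, Double> postToMirror) {
        localPrices.putAll(initial);
        this.postToMirror = postToMirror;
    }

    /** body of rule R1, fired on an AirfareSpecialOffer notification */
    public void onAirfareSpecialOffer(String route, double announcedPrice) {
        Double stored = localPrices.get(route);
        if (stored == null || announcedPrice < stored) {
            localPrices.put(route, announcedPrice);     // update own server
            postToMirror.accept(route, announcedPrice); // chain the event to S1
        }
    }

    public double priceOf(String route) {
        return localPrices.get(route);
    }
}
```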

Subscriber C2 wants to be notified via e-mail, but he wants to immediately book

the ticket when the airfare and seats are available. Therefore, he also selects the Book

rule provided by the Gator Travel Agency. This scenario shows how the rules on the
