KBMS-based evolutionary prototyping of object-oriented software systems


Material Information

Title:
KBMS-based evolutionary prototyping of object-oriented software systems
Physical Description:
ix, 118 leaves : ; 29 cm.
Language:
English
Creator:
Chatterjee, Raja, 1971-
Publication Date:

Subjects

Subjects / Keywords:
Computer software -- Development   ( lcsh )
Rule-based programming   ( lcsh )
Object-oriented databases   ( lcsh )
Computer and Information Science and Engineering thesis, Ph.D   ( lcsh )
Dissertations, Academic -- Computer and Information Science and Engineering -- UF   ( lcsh )
Genre:
bibliography   ( marcgt )
non-fiction   ( marcgt )

Notes

Thesis:
Thesis (Ph. D.)--University of Florida, 1998.
Bibliography:
Includes bibliographical references (leaves 115-118).
Additional Physical Form:
Full text also available from UMI Current Research @ database; Adobe Acrobat Reader required to display text; see LINKS to connect
Statement of Responsibility:
by Raja Chatterjee.
General Note:
Typescript.
General Note:
Vita.

Record Information

Source Institution:
University of Florida
Rights Management:
All applicable rights reserved by the source institution and holding location.
Resource Identifier:
aleph - 029224058
oclc - 39540434
System ID:
AA00012950:00001














KBMS-BASED EVOLUTIONARY PROTOTYPING OF
OBJECT-ORIENTED SOFTWARE SYSTEMS








By

RAJA CHATTERJEE


A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL
OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT
OF THE REQUIREMENTS FOR THE DEGREE OF
DOCTOR OF PHILOSOPHY


UNIVERSITY OF FLORIDA

































To
My Parents,
and My Sister

















ACKNOWLEDGMENTS


I wish to express my immense gratitude to my advisor, Dr. Stanley Y. W. Su,

for his excellent guidance and support in the past four years. Without his help, this

research work could not have been done.

I would like to thank Dr. Eric Hanson, Dr. Herman Lam, Dr. Steve The-

baut, and Dr. Douglas Cenzer for serving on my committee and providing valuable

feedback.

Many thanks are due to our secretary, Ms. Sharon Grant, who provided great

support and assistance on my research during this long period of time. I would like

to thank my friends in the Database Center, for all their efforts in supporting my

work. I would also like to thank all the students at the Database Systems Research

and Development Center for being such great fellows.

Last, but not least, I want to express my gratitude to my parents and my sister,

Anusua, for their constant encouragement and inspiration during this long period of

time.

This research was supported by a grant from the National Science Foundation

(grant # CCR-9200756).













TABLE OF CONTENTS




ACKNOWLEDGMENTS ............................. iii

LIST OF FIGURES .............................. vii

ABSTRACT ............ ............... ......... viii

CHAPTERS

1 INTRODUCTION .. ... ..................... 1


2 SURVEY OF RELATED WORK ........... ...... ...... 7


3 KBMS-BASED EVOLUTIONARY PROTOTYPING METHODOLOGY 13


4 KNOWLEDGE-BASED PROTOTYPING LANGUAGE .......... 21

4.1 Object Model ...... .............. .... 21
4.2 Modeling Prototype ..... .......... ............. 26
4.3 Implementation Details of Code Generation .............. 45

5 VERIFICATION OF THE FUNCTIONALITY AND PERFORMANCE OF
EVOLVING PROTOTYPES ............. ................ 48

5.1 Requirements ..... ................ ..... 48
5.2 Mechanisms ..................... .......... 49
5.3 Techniques .......... ...................... 51

6 ECAA-RULE-BASED MONITORING OF SYSTEM BEHAVIOR ..... 54

6.1 Capturing the Execution Data in the Knowledge Model ....... 58
6.2 Specification of Monitor Rules ........... ...... ..... 61
6.3 Implementation of Monitor Rules .......... ....... .. 64

7 INFERRING OF SYSTEM BEHAVIOR USING DEDUCTIVE RULES .. 70








7.1 Specification of Deductive Rules .................. .. .71
7.2 Behavior Abstraction .................. ....... .. 79
7.3 Behavior Analysis .......... ... 80
7.4 Implementation of Deductive Rules ...... 82

8 KNOWLEDGE-BASED PROTOTYPING ENVIRONMENT ......... 94

8.1 Prototyping Environment ...... ..... 94
8.2 Performance and Functional Tracing of Prototypes 98

9 CONCLUSION AND FUTURE WORK ..... 104

9.1 Conclusion .............. .. 104
9.2 Future Work ... ......... ....................... 108

APPENDICES

A DEDUCTIVE RULE BNF .................. ....... ..109


B RESULTS OF EXECUTION TRACING ...... ..... 112


REFERENCES .................................... 115

BIOGRAPHICAL SKETCH ............................ 118













LIST OF FIGURES


3.1 An Overview of the Evolutionary Prototyping Process ........ 17

4.1 Control Associations .......................... .. 37

4.2 The Program Generator Infrastructure .... 44

4.3 Intermediate Structure Representation of a Code-block ... 46

6.1 System Information about the Invocation History for a Method 61

6.2 The Translation Mechanism to Implement ECAA Rules ... 67

7.1 Example of an Inferencing Rule and Its Corresponding Rete Network 82

7.2 Active Rules Triggering the Flow of tokens through the Rete Network
from a Leaf Node .................. ......... .. 83

7.3 Active Rules Triggering the Execution of the Consequent of an Infer-
encing Rule ............... ...... 84

7.4 The Deductive Rule Structure ................... ..87

7.5 Activate / Deactivate Methods ... ..88

7.6 Temporal Clause in the Rule Antecedent ..... 91

7.7 The factTableEntry Structure ... ..91

7.8 The factTable Structure ........................ .. 92

7.9 The ruleTableEntry Structure ................... ..92

7.10 The ruleTable Structure ................... ..... 93

8.1 The Evolutionary Prototyping System Architecture ... 97








8.2 The Environment Interface .................. .... 99

8.3 A Method Model .................. .......... .. 101

8.4 The Functionality Tracing Process .... 102

8.5 The Performance Evaluation Process 103




























































Abstract of Dissertation
Presented to the Graduate School of the University of Florida
in Partial Fulfillment of the Requirements for the
Degree of Doctor of Philosophy



KBMS-BASED EVOLUTIONARY PROTOTYPING OF OBJECT-ORIENTED
SOFTWARE SYSTEMS

By

Raja Chatterjee


Chairman: Dr. Stanley Y. W. Su
Major Department: Computer and Information Science and Engineering

The development of a complex object-oriented software system is a costly en-

deavor. Prototypes would not be "throw-aways" and much time and effort could be

saved if a complex software system were developed by a series of refined and veri-

fied prototypes as the prototyper gains more and more knowledge about the func-

tionality and performance requirements of the system being developed. To support

such an evolutionary prototyping process, a powerful knowledge base management

system (KBMS) has been developed in this work to provide: 1) a powerful object

model for modeling the structural and behavioral properties and constraints of soft-

ware components and the data entities they manipulate, in a uniform manner, 2)

a persistent knowledge base (KB) to maintain the models of these prototypes and

the data related to design decisions, requirements, schedules, milestones, etc., 3) a

knowledge base programming language for querying and manipulating the persistent








knowledge base, as well as for writing code, and 4) a prototyping environment to

support the functionality tracing and performance evaluation of the target system.

The existing debuggers and profilers provide support for object-oriented software sys-

tem evaluation using a tracing mechanism, which allows a system analyst to follow

the program execution step by step, and to stop at any particular point of execu-

tion so that visible variables can be observed and evaluated. However, the execution

profile data are not stored in a persistent store nor managed by an intelligent data

management system to support the analysis of the profiling data and to derive more

useful information about the system behavior. In this work, we use Event-Condition-

Action-Alternative-Action (ECAA) rules to specify points of system monitoring and

antecedent-consequent rules for behavior abstraction and analysis. The implemented

KBMS is used to manage execution profile data and to process both types of rules

to support system monitoring and analysis.
















CHAPTER 1
INTRODUCTION



The development of a complex software system is a costly and time-consuming

process. In the traditional prototyping approach a prototype of the target system

is developed for the purpose of gaining a better understanding of the users' require-

ments. It is used to help the developers to remove misunderstandings between them-

selves and the users and to verify with the users their expected system functionalities.

The prototype is then thrown away and the target system is developed from scratch,

based on the experience and knowledge gained during the prototyping process. Us-

ing this approach, much of the effort of the prototype development phase is wasted.

Sometimes the cost of prototype development represents an unacceptably large por-

tion of the total system development cost. Similarly, the traditional "waterfall" soft-

ware life cycle serves little but to exacerbate the problems of software development

and maintenance by delaying the discovery of incorrect or inappropriate specifications

and requirements until the testing phase that follows the implementation phase. The

cost and time can be greatly reduced if systems can be developed through a series

of evolving, executable, and valuable prototypes, as the prototyper gains more and

more knowledge about the functional and performance requirements of those systems

being developed. In the evolutionary prototyping process the conceptual model of








the prototype system is gradually modified and extended, as more and more knowl-

edge about the system requirements is identified, until the final system is developed.

In this process, the prototyper first develops an initial prototype based on his/her

knowledge of the system requirements and then evaluates the initial prototype to see

whether its functionality and performance meet the desired requirements. If the re-

quirements are not met, then he/she would modify the design and re-test the refined

prototype. The cycle would repeat until the target system is developed and evaluated

[Bal82]. In other words, each prototype system can be thought of as an executable

model of the target system. The model can gradually evolve into the target system

as more and more details are specified in it. All the testing, debugging, modifica-

tion, and maintenance can be performed directly against the executable models. The

executions of the evolving prototypes allow errors and design misconceptions to be

identified in various stages of the prototyping effort, thus allowing a complex system

to be developed much faster and at a lower cost.

To support such an evolutionary prototyping process, it is useful to have a pow-

erful knowledge base management system (KBMS) to provide: 1) a powerful object

model for modeling the structural and behavioral properties and the constraints of

the software components that constitute the evolving prototypes and the data they

manipulate in a uniform manner, 2) a persistent knowledge base to maintain the

models of these prototypes and the data related to design decisions, requirements,

schedules, milestones, etc., 3) a querying facility to allow ad hoc inquiries about the

prototype system and the data gathered about it. When modifications are made








to a prototype, consistency checking and integrity enforcement need to be carried

out. So, in addition to the above features, a KBMS needs to have a knowledge rule

specification and processing capability. This is because the structure and behavior

of a complex system are often subjected to design, operational, and system rules and

constraints. If these rules are explicitly specified in the model of a prototyping sys-

tem, then they can be used by a KBMS to automatically maintain system constraints

and/or activate operations when certain events occur. Although the semantics rep-

resented by rules can be implemented in methods, high-level declarative rules make

it much easier for a prototyper to clearly capture different semantic properties, and

thus simplify the task of implementation, debugging, and maintenance. For example,

rules can be used to dynamically modify the control flow between methods without

affecting the application code, thus improving the modularity of the system [Day90].

They can be used both during the development phase as well as in the evaluation

phase of the target system. The details will be discussed later in this dissertation.

In addition to a KBMS, we need a knowledge base programming language for

querying and manipulating the persistent knowledge base as well as for writing code.

We also need to provide the prototyper with a prototyping environment with monitoring

and profiling tools to carry out the following functions: 1) tracing the execution

of a prototype at any stage of development, 2) generating the information about

time taken to execute different parts of a prototype, 3) monitoring a prototype's

behavior at run-time, and 4) inferring from the monitored data to analyze a prototype's

behavior and understand the problems and bottlenecks in the prototype in various








stages of development. We, therefore, take a knowledge-based modeling approach to

evolutionary prototyping by treating each evolving prototype system as a high-level

executable model of the target system. The executable model defines the structural

and behavioral properties of the target system at any level of abstraction, as desired

by the prototyper. It evolves gradually through a series of schema modifications

and refinements to provide more and more details about the requirements and the

implementation of the target system. At each stage of evolution, the model of the

system (i.e., the prototype) can be executed to test its functionalities and perfor-

mance. This approach to evolutionary prototyping is achieved by the use of (i) a

reflexive and extensible object model to model the structural and behavioral proper-

ties and constraints of data entities and software components (including the control

structure of methods) in a unified object-oriented framework; (ii) a knowledge base

management system to manage persistent objects, object classes and their various

types of associations, which model application data, software systems, and their inter-

relationships, (iii) a powerful knowledge base programming language as the common

prototyping language to define, query, and manipulate the knowledge base as well

as to code methods, and (iv) a KBMS-based prototyping environment to carry out

functionality tracing and performance evaluation of the evolving prototypes.

In this dissertation, we present a KBMS-based evolutionary prototyping method-

ology for object-oriented system development and evaluation. A KBMS-based proto-

typing language and a KBMS-based prototyping environment are described. In this

work, we have extended the knowledge base programming language K.3 (which is the








third version of the K language reported in [Shy96, Arr97]) to support the following

features: 1) modeling of method implementations [Su97a], 2) monitoring of inter-

action among system components using Event-Condition-Action-Alternative-Action

(ECAA) rules, and 3) inferring about system behaviors using antecedent-consequent

rules [Su98]. The language is used as a specification language for modeling prototypes

as well as an implementation language for coding methods. K.3 code is translated

into C++ code by a K.3 compiler, which is then compiled into object code by the

C++ compiler. We have also developed a prototyping environment consisting of a

Functionality Tracing Monitor and a Performance Monitor to support the evaluation

of evolving prototypes [Su97b]. Our goal is to reduce the gap between specification

and implementation, and to provide the prototyper with tools and a high-level expres-

sive language to easily model and evaluate a target application system. Our intended

contribution to the field of Software Engineering is to show how a semantics-

rich object model, a knowledge base programming language, a KBMS, and the tools

and facilities of a prototyping environment can be used to support the evolutionary

prototyping process, thus reducing the cost and time of developing object-oriented

software systems.

The remainder of this dissertation is organized as follows: Chapter 2 contains a

survey of the related research work pertinent to high-level specification languages,

behavioral modeling, and knowledge-based software environments. Chapter 3 presents

the overall concept of KBMS-based evolutionary prototyping and discusses its domain






of applicability and some of the overhead associated with this methodology. Chap-

ter 4 presents the knowledge-based prototyping language used for modeling evolving

prototypes and writing executable code. Chapter 5 introduces the goals of veri-

fying the functionality and performance of prototypes and the role of a KBMS in

achieving these objectives. Chapter 6 discusses the details of a rule-based monitoring

mechanism for monitoring the interactions among software components. Chapter 7

details the inferencing process to support behavior abstraction and behavior analysis

of software systems. Chapter 8 discusses a prototyping environment, and its roles

in functionality tracing and performance analysis. Chapter 9 summarizes the main

contributions of this research work and discusses some possible directions for future

work.
















CHAPTER 2
SURVEY OF RELATED WORK



Only a few widely applicable and wide-spectrum programming languages and

computing environments, such as GIST [Par83] and V [Smi85], are available for sup-

porting prototyping efforts. Most of the existing prototypes are written either in (i)

some traditional implementation languages, which do not have proper high-level spec-

ification facilities to capture the structural and behavioral properties of the target sys-

tem and are applicable only to limited application domains due to their lack of multi-

paradigm constructs, or (ii) some specification languages [Luq88, Kun89, Rei87],

which are suitable for conceptual designs rather than for executable specifications.

In the former case (i), those properties, which are not explicitly captured in the

domain specification, are buried in the application code, thus reducing the system

readability and maintainability. Besides, prototypes become dependent on the im-

plementation languages and, therefore, cannot be reused across different execution

environments. This problem introduces functional redundancy, communication over-

head, development and maintenance burden, and causes difficulty in sharing data

and program code among software systems. In the latter case (ii), instead of evolv-

ing towards the target system, prototypes are regarded as throw-aways and have a








short lifetime in the software life cycle. Since prototyping is separated from imple-

mentation, it is difficult to debug and maintain an implemented system based on

its requirement specification and design. For the above reasons, it was suggested

in [Bal82] that a common prototyping system (CPS) is needed for developing com-

plex software systems in an evolutionary manner. The system should consist of (i)

a common wide-spectrum prototyping language (CPL) for both specifying and im-

plementing a target system, and (ii) a common prototyping environment (CPE) for

developing, executing, and testing the evolving prototypes. We shall review some

related work in this chapter.

The G-Nets [Den90], Petri-Nets [Rib88], and the Semantic-nets [Ran88] were

proposed for prototyping software systems, but all of them lacked the support of a

KBMS, which, as we shall show, is very useful in the prototyping process. Brodie

and Ridjanovic [Bro83] proposed the ACM/PCM (Active and Passive Component

Modeling) for the structural and behavioral modeling of a database application using

an integrated object behavior schema. But there is not enough information captured

in the behavior schema to make it executable and evolvable into a target system.

Kappel and Schrefl [Kap91] proposed the use of object/behavior diagrams as a uni-

form graphic representation of the object structure and behavior based on a semantic

data model and Petri-Nets. Although this is closely related to our work, their proto-

typing system is more like a graphic tool and software systems modeled by this type

of diagrams are not modeled uniformly and managed by an underlying KBMS. Our

prototyping language K.3 is similar to model-based specification languages such as








VDM HBp :'] and Z [Abr80] in that they are also used to formulate models of software

systems. But VDM and Z are strictly specification languages without any persistent

knowledge base support. Our language K.3 is similar to Eiffel [Mey92], an object-

oriented programming language, in many respects. However, Eiffel does not provide

persistent support to application objects and does not have a KBMS support for mod-

eling constructs equivalent to our control associations. Carolyne Pe Rosiene and Reda

A. Ammar [Ros93] proposed a data modeling approach for carrying out performance

evaluation; however, it lacks a refinement operator such as Decomposition, which is

provided in our prototyping language for gradually decomposing a prototype system

into lower levels of details. Moreover, their language lacks active rules and a KBMS

support for carrying out various activities of software development. The commercial

product Rational Rose combines Booch, Rumbaugh, and Jacobson's object-oriented

design and analysis methodologies. Rational Rose supports the Unified Modeling

Language (UML) specification [Uml98] of a system through use case diagrams, class

diagrams, behavior diagrams, and implementation diagrams. It generates class and

method skeletons of the target system from user-specified diagrams. The methods

are then coded by developers to implement the desired semantics. Rational Rose is

a system for supporting system design and analysis instead of an executable proto-

typing tool. It also lacks rule specification and processing facilities, which are used

for modeling and enforcing constraints among different system components and data

entities.








The KEE (Knowledge Engineering Environment) system [Fik85, Fil88, Twi89], which

integrates the concepts of production rules, frames, and object-oriented programming,

has been used as a knowledge-base system development environment. However, it

lacks the support of active rule specification in its frame specification, which is

very useful for specifying constraints at various levels of abstraction, as we shall see

later on in this dissertation. Our prototyping system supports the specification of

both active and deductive rules. Performance monitoring of the target system is not

supported by the KEE system but is supported by our knowledge-based prototyping

environment. The procedures attached to the frames in KEE are written in Lisp, which

is an interpreted language, whereas our K.3 implementation code is translated into

C++ code, which is compiled into object code. The compiled code is much faster at

run-time. Moreover, our prototyping system supports a wider variety of constraints

than those supported by the KEE system. ART (Automated Reasoning Tool) [Art88]

is an expert system written in Lisp, which supports facilities such as objects, production

rules, truth maintenance, and object-oriented programming. ART and its deriva-

tives ART-IM (Automated Reasoning Tool for Information Management, written in

C) [Art89], and ART-Ada (an expert system which was built based on the architec-

ture of ART and ART-IM and was written in Ada) [Lee90] have been used to provide

the environments for developing knowledge-based applications. However, they lack

the active rule specification and processing capabilities. Our prototyping system

supports both the state-based production rules and the event-based active rules and

thus has an advantage over these systems. Moreover, ART and its derivative systems








do not provide performance monitoring and analysis facilities for evaluating appli-

cation systems that are being developed, nor do they support the specification and

processing of constraints which is a part of the declarative specification mechanism

in our system. ART supports both schemata (which have all the object properties, such as

inheritance, attributes, etc.) and Lisp lists (which are sets of typeless fields),

whereas our system is strongly typed and all the objects are either entity class ob-

jects or domain class objects. This strongly typed property helps to make the target

specification less error prone because type mismatch errors can be detected by the

compiler at compile time.

In our work, we extend an object model and a knowledge base programming

language to model data entities, software components and complex methods and to

write simulated or real code for simple methods that the prototyper can write. The

modeling of complex methods and the generation of executable code from method

models are not supported by existing prototyping systems. We also introduce rule-

based specifications of constraints among system components, to enhance the ex-

pressive power of our model and language. Unlike any existing prototyping system,

our system supports both Event-Condition-Action-Alternative-Action (ECAA) rules

and state-based inferencing rules. The ECAA rules are used for implementing sys-

tem constraints at different levels of abstraction. For example, they can be used

to implement inter-component, inter-method, and intra-method constraints. They

are also used for monitoring the interactions among different components of a pro-

totype system at run-time. The state rules are used for deducing useful information






from monitored data for the behavior abstraction and analysis of a prototype system.

Our prototyping language is supported by a KBMS-based prototyping environment

which consists of a Functionality Tracing Monitor and a Performance Monitor for

tracing program execution and recording the time taken to execute different parts of

a prototype.
















CHAPTER 3
KBMS-BASED EVOLUTIONARY PROTOTYPING METHODOLOGY



The KBMS-based evolutionary prototyping process is shown in Figure 3.1.

Based on the initial knowledge of the prototyper about the system requirements,

the prototyper uses the common prototyping language K.3 to model the components

and complex methods of the initial prototype. The language is also used to write

code for simple methods. Its language constructs will be discussed in Chapter 4.

The specifications are then executed in the prototyping environment. The Perfor-

mance Monitor and the Functionality Tracing Monitor which will be discussed in

Chapter 8, are used for performance evaluation and for tracing the execution of the

system. If the results of evaluation are not satisfactory, then the prototyper can go

back to change the model and repeat the process of execution and analysis. The

conceptual model of the prototype system is gradually modified and extended (i.e.,

schema evolution). The conceptual model derived at any stage is executable since the

methods associated with the object classes that model the system components are

either implemented by real or simulated codes in K.3 or explicitly modeled as a con-

trol structure of code blocks which is translated into executable code by a Program

Generator. The Program Generator will be discussed in Chapter 4. Since the struc-

tural and functional relationships among system components are explicitly specified








in the conceptual model of the system being developed and are stored in a knowledge

base, they can be queried and accessed by the prototyper as knowledge base accesses.

Through an iterative process of modeling the system, executing the model for testing

its functionality and performance, inquiring about the structural and functional relation-

ships of the system components, and modifying the model as more knowledge about

the system has been gained, a series of prototypes will be generated and tested until

the final target system (the last evaluated prototype) is derived and released for use.

Throughout this software development process, a KBMS as well as a knowledge

base programming language, which serves as the high-level interface to the KBMS,

are used to provide the following facilities: (i) a persistent storage and schema evo-

lution mechanism for recording the changes to the prototype system, the execution

results of the prototype system at every stage of prototyping, and other meta-data

such as designers' information, requirements, schedules, milestone reports, design de-

cisions, comments, and expert knowledge rules, (ii) querying and object management

facilities for accessing the KBMS, (iii) a rule processing mechanism for supporting

consistency checking, integrity enforcement, deduction, and automatic triggering of

Event-Condition-Action-Alternative-Action (ECAA) rules.

The knowledge base programming language called K.3 uses an object-oriented

semantic association model called OSAM* [Su83, Su89a, Su89b] as its underlying

object model. It has an extensible kernel called XKOM [Yas91], and is used during

the prototyping process. In this language, object classes are used as the knowledge

definition facility to provide knowledge encapsulation by allowing an object class to








be defined in terms of its structural and functional associations with other classes,

operations, and rules in an integrated fashion. Rules in K.3 are Event-Condition-

Action-Alternative-Action (ECAA) rules, which can be used to model all sorts of

system and data constraints, and deductive rules, which can be used for inferring

system behavior at run-time, and for supporting its behavior analysis. These two

types of rules will be discussed in detail in Chapters 6 and 7, respectively. There are

two types of classes: domain classes, which define domains of the possible values from

which descriptive attributes of objects draw their values, and entity classes, which

define and contain objects found in an application's world, such as physical entities,

abstract things, events, processes, and relationships. The encapsulation and inheri-

tance features are also built into OSAM* classes. The OSAM* class system has been

extended to achieve model extensibility by uniformly modeling application systems

in any application domain (including the model itself) as classes. Parametrized rules

are used to define new class types, association types and constraint types. Structural

associations, methods, and rules are all first class objects. Thus, the structural and

the behavioral properties of entities are uniformly captured and represented in the

knowledge model as objects.

One of the main features of K.3 is the support for persistence which is an

instance property rather than a class property. Persistence is independent of the

physical address of the object instances and is orthogonal to queries and object ma-

nipulations. Moreover, a declarative and set-oriented high-level query language has

been seamlessly incorporated into the language constructs of K.3 for retrieving and








manipulating persistent and transient objects. K.3 also provides a multi-paradigm

computational facility. As a computationally complete programming language, K.3

provides all the basic control structures such as sequential, iteration, and branching

statements. Parallel, distributed, non-deterministic and real-time computational fa-

cilities can be built into the extensible object model as extensions of K.3. Besides

all the procedural constructs, we have also provided a mechanism for modeling a

complex method implementation by a structure of code blocks which are defined as

K.3 classes and are linked with one another by a number of control associations:

namely, Decomposition, Sequential, Testing, Case, Parallel, Synchronization, ForE-

ach, and Loop. Each code block contains a piece of code that a prototyper can

comfortably and correctly write. The structure of code blocks is used to generate

the executable code for the complex method. The underlying extensible OSAM*

model enables the prototyper to model data entities, program modules, and method

models uniformly in terms of object classes and their various types of associations.

By using a single unified knowledge model and schema notation, we eliminate the

mismatch between the traditional data-oriented models [Hul87] and process-oriented

models [Nas73, Pet81] to support both structural and behavioral prototyping within

an object-oriented framework.
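To make the idea of a method model concrete, the following C++ sketch (illustrative only; the names CodeBlock, Control, and generate are assumptions and not the actual K.3/XKOM representation) shows how code blocks linked by control associations such as Sequential, Testing, and Loop could be represented and walked by a simple program generator to emit a code skeleton:

    #include <iostream>
    #include <memory>
    #include <string>
    #include <vector>

    // Three of the control associations named above; the enum, CodeBlock,
    // and generate() are illustrative assumptions, not the XKOM classes.
    enum class Control { Sequential, Testing, Loop };

    struct CodeBlock {
        std::string code;                          // fragment written by the prototyper
        Control control = Control::Sequential;     // how the children are combined
        std::vector<std::shared_ptr<CodeBlock>> children;
    };

    // A toy "program generator": walk the block structure and print a skeleton.
    void generate(const CodeBlock& b, int depth = 0) {
        std::string indent(depth * 2, ' ');
        if (!b.code.empty()) std::cout << indent << b.code << "\n";
        if (b.control == Control::Testing) std::cout << indent << "if (/* test */) {\n";
        if (b.control == Control::Loop)    std::cout << indent << "while (/* cond */) {\n";
        for (const auto& c : b.children) generate(*c, depth + 1);
        if (b.control != Control::Sequential) std::cout << indent << "}\n";
    }

    int main() {
        CodeBlock body;                            // Sequential root block
        body.children.push_back(std::make_shared<CodeBlock>(
            CodeBlock{"init();", Control::Sequential, {}}));
        body.children.push_back(std::make_shared<CodeBlock>(
            CodeBlock{"", Control::Loop,
                      {std::make_shared<CodeBlock>(CodeBlock{"step();", Control::Sequential, {}})}}));
        generate(body);
    }

In the system described here, the analogous structure is maintained in the knowledge base and translated into executable code by the Program Generator discussed in Chapter 4.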

A KBMS is used for supporting evolutionary prototyping process. It consists of

a query processor for processing queries, a rule processor for processing rules defined

in the different classes, a data dictionary handler for managing the meta-data stored

in the KB, and an object manager, which is built on top of EXODUS, for providing








support for object persistence and basic object maintenance and manipulation.
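A rough C++ sketch of this architecture (interface names are illustrative assumptions; the actual component APIs are not given in this chapter) might group the four components as follows:

    #include <string>

    class QueryProcessor {
    public:
        virtual void process(const std::string& query) = 0;
        virtual ~QueryProcessor() = default;
    };

    class RuleProcessor {
    public:
        virtual void trigger(const std::string& event) = 0;
        virtual ~RuleProcessor() = default;
    };

    class DataDictionaryHandler {
    public:
        virtual std::string lookup(const std::string& name) = 0;
        virtual ~DataDictionaryHandler() = default;
    };

    class ObjectManager {   // built on top of a storage manager such as EXODUS
    public:
        virtual void makePersistent(long oid) = 0;
        virtual ~ObjectManager() = default;
    };

    // The KBMS aggregates the four components described in the text.
    struct KBMS {
        QueryProcessor*        queryProcessor;
        RuleProcessor*         ruleProcessor;
        DataDictionaryHandler* dictionaryHandler;
        ObjectManager*         objectManager;
    };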



Figure 3.1. An Overview of the Evolutionary Prototyping Process


The KBMS-based evolutionary prototyping methodology can be used for devel-

oping any object-oriented software system in such application domains as decision

support, manufacturing automation and business data processing. There are several

advantages in using this methodology. First, the prototyping language is semanti-

cally rich. It provides constructs for both the specification and implementation of

an evolving prototype system. Second, the KBMS provides persistence, query, and

rule processing supports for prototyping. Third, the prototyping environment sup-

ports the tracing and performance evaluation of prototypes. Our methodology also








addresses the maintenance issue which is one of the major problems with the evolu-

tionary prototyping process. Continual change in the prototype tends to corrupt the

software structure and therefore makes the system maintenance difficult and costly.

We present a semantically rich object-oriented modeling language with constraints

and ECAA rules as a part of object specification, which reduces the unstructuredness

in the code brought about by continual change, increases cohesion of a component,

reduces coupling between components, makes the system more adaptable, and makes

the code easier to read and understand, thereby keeping the maintenance cost

low. We reduce the chances of adding unstructuredness to the code through the changes

made during the maintenance phase by the following

mechanisms. First, our programming language supports all the control statements

(if-then-else, while-do, do-while etc.) of any structured programming language, and

avoids the use of the go-to statement, an unconditional jump statement that

tends to disrupt the structure of a program. Secondly, we present modeling con-

structs to represent knowledge about system constraints and behaviors, in terms of

high-level declarative constructs like ECAA rules and constraints which are much

easier to specify and change, rather than procedural code. The procedural code to

implement the semantics of these rules and constraints is automatically generated,

thus reducing the chances of making mistakes or corrupting design integrity. Thirdly,

we support modeling constructs in the language for specifying structural, behavioral,

and knowledge abstractions, to represent all the structural and behavioral properties,








and constraints of an object, as a part of the object's specification rather than embed-

ding them throughout the system code where the object's services are being used.

Thus, when the rules and constraints of the system change, we only need to change

the object's specification, rather than changing all those places in the code where

the object's services are being used. Thus, this mechanism also supports controlled

change during the evolution of a prototype. We will discuss the language features

and constructs in detail in Chapter 4. However, there is a learning curve involved

in our methodology. The prototyper needs to be knowledgeable about the various

language/environment capabilities provided by the system to make the best use of

it.

To summarize the concept of KBMS-based evolutionary prototyping, we stress

that the next generation prototyping system should treat a prototype as a high-

level executable model of the target system for gathering structural, behavioral, and

performance information about the system. The model can gradually evolve into the

final system as more and more details are specified in the model. To achieve this,

the following components are needed: i) a KBMS, which is based on an expressive

and extensible knowledge model that can model software systems to be developed

for a wide range of application domains, ii) a wide-spectrum and computationally

complete prototyping language capable of defining and implementing any part of

a prototype system to any level of detail. It should be reflexive of the knowledge

model so that modification or extension to the knowledge model will automatically

modify and extend the prototyping language to cope with changing environments






and requirements. It should provide constructs for specifying sequential, parallel,

non-deterministic, real-time, distributed, and rule-based processing to capture the

various behavioral aspects of different application domains. Lastly, on top of the

KBMS and the prototyping language, a common prototyping environment is needed

to support all the activities in an evolutionary software development life cycle.
















CHAPTER 4
KNOWLEDGE-BASED PROTOTYPING LANGUAGE


4.1 Object Model

The underlying data model of our prototyping language K.3 is based on the

object-oriented semantic association model called OSAM*. In this model, all things

of interest in an application are modeled by means of object classes. An object class is

defined in terms of its structural properties, operational characteristics, and knowl-

edge rules. The relationships amongst classes are modeled by means of semantic

associations. The operational properties are defined by means of methods. Con-

straints and expert knowledge related to a class are specified by means of knowledge

rules. The model of the model, i.e., the meta-model, is modeled by using the con-

structs provided by the language. Since our underlying data model is object-oriented,

and the implementation language K.3 is also object-oriented, they both possess the

advantages of any object-oriented model/language; namely, information hiding, en-

capsulation, inheritance, polymorphism, etc. Also different objects interact with one

another via message passing instead of operating directly on each other's data. They

are loosely coupled with one another. This reduces coupling between different mod-

ules. Moreover, the support for the inheritance feature in our language makes components

readily adaptable. The adaptation relies on creating a new component which inherits








the attributes and operations of the original component. Only those attributes and

operations which need to be changed are modified. Components, which rely on the

base component, are not affected by the changes made. These features reduce the

maintenance cost of the system. We now present different modeling constructs in

OSAM*.

Schema: A schema, in this work, is a specification of the components of a

software system being developed and the data entities these components use and

manipulate. It consists of domain classes, entity classes, and deductive rules. These

constructs and their properties are discussed below.

Objects: Objects are representations of things that occur in an application's

world, including physical entities, processes, abstract things, relationships and values.

Objects are interconnected by means of semantic associations, which specify the

semantics of their relationships. At the extensional level an object-oriented database

is represented as a network of interconnected objects.

Classes: Classes are abstractions used to define and group objects that have

common properties. A class is used to define the common properties of a group of

objects in terms of their structural and operational semantics (behavior). These prop-

erties are described by means of semantic associations, methods and ECAA rules, as

shall be described below. The data representation of an object in a class is called the

"instance" of the object in that class. The "type" of a class describes the properties

that are common to all the instances of the class. Two general types of classes are

defined in the basic (kernel) object model: entity class and domain class. An entity








class defines and groups objects of interest in an application. Entity objects have their

independent identities and are uniquely identified by their object identifiers (oids).

Entity classes may have associations with other Entity classes and Domain classes,

which represent their structural properties. Their behavioral properties are defined in

terms of methods. They may also have Event-Condition-Action-Alternative-Action

(ECAA) rules, which specify knowledge rules and constraints. A domain class defines

objects that do not exist independently in the database but are used to specify the

descriptive properties (attribute values) and/or associate properties (object associa-

tions) of entity objects. Domain objects are self-naming. For example, the integer

10 and name "John Smith" can be domain objects.

Associations: Semantic associations represent relationships among object classes

and their instances. An association is defined by a set of links each of which is di-

rected from the class where it is defined called the "defining class" to each of the

"constituent classes." The "type" of an association describes the semantics of the

relationship it represents. Two types of associations are part of the kernel object

model: generalization and aggregation. A generalization association is used to model

the relationship that a constituent class is a subclass of the defining class (the su-

perclass), and the subclass inherits all the properties (associations, methods, rules)

of the superclass. An aggregation association models the relationship that an object

of the constituent class serves as the value of a data attribute (or "data member") of the








defining class. Notice that these two association types are common in most object-

oriented data models. Other association types are defined using a model extensibility

mechanism [Arr97].
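Since K.3 definitions are ultimately translated into C++, a hedged C++ sketch of how the two kernel association types map onto conventional object-oriented code may help (the SoftwareProduct subclass is a hypothetical class introduced only for illustration): generalization corresponds to subclassing with inheritance of properties, while aggregation corresponds to holding an object of the constituent class as a data member of the defining class.

    #include <string>

    // Constituent class reached through an aggregation link.
    struct ProductDesign {
        std::string drawing_id;
    };

    // Defining class: each aggregation link becomes a data member.
    class Product {
    public:
        int           product_code = 0;   // aggregation with a domain class (Integer)
        ProductDesign design;             // aggregation with an entity class
    };

    // Generalization: the subclass inherits the superclass's associations
    // and methods.  SoftwareProduct is hypothetical and only illustrates
    // the inheritance direction.
    class SoftwareProduct : public Product {
    public:
        std::string version;
    };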

Methods: Methods are used to implement some procedural semantics of objects.

A method consists of a "signature" (or specification), which includes the method

name, the names and types of its parameters and an optional return type, and a

"body" which is the implementation of the method. A method, when executed,

changes the "state" of an object and/or other objects. A method is executed by

sending a message to the desired object. The message contains the method signature

and the parameter values.

Rules: Two types of rules are supported in OSAM*: ECAA rules and deductive

rules. ECAA rules are event-based procedural rules whereas the deductive rules are

state-based inferencing rules. ECAA rules are used to represent operational semantics

of objects at the specification level instead of the implementation level. An ECAA

rule is an abstraction which represents a set of actions to be performed when certain

events occur (i.e. the rule is "triggered"). An event can be a system-defined operation

such as create, update, delete, insert or a user-defined operation specified as a method.

The "activation policy" or "coupling mode" of an ECAA rule indicates when the rule

is to be triggered with respect to an event, i.e. before the event, after the event,

etc. The "event" specification of the rule indicates which events cause the rule to be

triggered. The "condition" specification in a rule is a guarded expression which is

evaluated upon the triggering of the rule and used to determine whether to perform








the rule's "action" (if the condition is TRUE) or the rule's "alternative action" (if

the condition is FALSE) or to skip the action and the alternative action (if any guard

is FALSE). The action and alternative action parts of the rule specify the predefined

operations associated with objects. The ECAA rule specification is used to specify

the constraints and behaviors of an object as a part of the object specification, so

they need not be enforced by the clients which use that object. In other words,

the knowledge about the behaviors and constraints of the object is represented in a

structured manner in terms of ECAA rules as a part of the object specification, rather

than being dispersed throughout the system in the places where the object's services

are being used. Thus, the specification for checking of constraints is done only once at

one place. If any constraint changes at a later point in time, it needs to be modified

at only one place, and all the client classes of that object will automatically see the

effect of the change, without having to change all the client code that made use

of the object. This reduces the possibility of adding unstructured code in multiple

places, thus simplifying the system maintenance task. This also reduces the coupling

between different objects, increases cohesion of a particular object, and thus results

in a well-structured specification of the system because all the knowledge about its

structures, behaviors and constraints are specified within its own specification, rather

than being scattered throughout the system. All the above properties make the code

easier to maintain. ECAA rules are high-level specifications of system behaviors,

which are translated into program code.
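As a minimal sketch of what such translated code might look like (in C++ for illustration; the Account class, its attribute, and the generated routine names are assumptions, not output of the actual K.3 compiler), a rule triggered "after" an update event can be generated as a check routine invoked from the generated update method, with the action and alternative action selected by the rule's condition:

    #include <iostream>

    // Hypothetical generated class; the names and the rule are illustrative.
    class Account {
    public:
        void update_balance(double amount) {
            balance += amount;
            rule_check_balance();   // "after" coupling mode: rule fires right after the event
        }

    private:
        double balance = 0.0;

        // Generated from an ECAA rule of the form:
        //   triggered after update(balance)
        //   condition  balance >= 0
        //   action     report success
        //   otherwise  report the violation and undo the update
        void rule_check_balance() {
            if (balance >= 0.0) {
                std::cout << "RULE check_balance: ok\n";     // action
            } else {
                std::cout << "*ERROR* negative balance\n";   // alternative action
                balance = 0.0;                               // illustrative recovery step
            }
        }
    };

    int main() {
        Account a;
        a.update_balance(-50.0);   // triggers the rule's alternative action
    }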








In addition to ECAA rules, it is useful to have another rule specification mech-

anism to allow the prototyper to state the implication of some observed behaviors

of a system (e.g., a deadlock is implied by a cyclic wait-for condition). Since the

implied fact(s) is not generated by the execution of the prototype and thus cannot

be generated by ECAA rules, a state-based deductive rule specification mechanism

is also used in this work. The deductive rules have two major components, namely

antecedent and consequent. The antecedent is made up of a list of clauses which,

if evaluated to true, will imply that the statements in the consequent are also true.

Both types of rules will be discussed in detail in later chapters.
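To illustrate the flavor of such a rule (a minimal C++ sketch using a simple cycle check over monitored wait-for facts; this is not the Rete-based deductive rule processing described in Chapter 7, and the data structures and function names are assumptions), the antecedent "there is a cyclic wait-for chain" implies the consequent "the processes involved are deadlocked":

    #include <iostream>
    #include <map>
    #include <set>
    #include <string>

    // Monitored facts: waits_for[p] = q means process p waits for process q.
    using WaitFor = std::map<std::string, std::string>;

    // Antecedent: a cyclic wait-for chain is reachable from 'start'.
    // Consequent: a deadlock involving those processes is deduced.
    bool impliesDeadlock(const WaitFor& waits_for, const std::string& start) {
        std::set<std::string> seen;
        std::string current = start;
        while (waits_for.count(current)) {
            if (!seen.insert(current).second)   // a process was revisited: cycle found
                return true;
            current = waits_for.at(current);
        }
        return false;                           // chain ended without a cycle
    }

    int main() {
        WaitFor facts = {{"P1", "P2"}, {"P2", "P3"}, {"P3", "P1"}};
        if (impliesDeadlock(facts, "P1"))
            std::cout << "deduced fact: deadlock(P1, P2, P3)\n";
    }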

4.2 Modeling Prototype

A software system or its prototype is made up of data entities and software

components, which manage and manipulate the data entities. Using the prototyping

language K.3, data entities and software components are uniformly modeled by object

classes with attributes, methods and knowledge rules. Method implementations or

simulation code for methods are also written in K.3. Thus, the prototyping language

K.3 is both a specification language as well as an implementation language. The

following example shows the use of K.3 to model a data entity type called Product.



define Product : Entity in ManufSchema is
associations:
  public:
    Aggregation ->
    {
      product_code : Integer where UNIQUE(product_code);
      description  : Text;
      components   : Set;
      design       : ProductDesign;
    };

methods:
  method display();
  method add_comp(cno : String, p : Part)
    where {method_model := "ProductAddCompSchema";};
  method product_cost();

rules:
  rule should_have_board is
    triggered after create(), update(components)
    // create() and update() are system-defined methods
    condition
      exist b in this [components] Component p:Part b:Board
      // the above condition checks whether the newly
      // created or updated Product (identified by "this")
      // is associated with a board via some objects
      // of Component and Part
    otherwise
      "RULE: Product::should_have_board\n".display();
      "*ERROR* Component should have a board !\n".display();
      del();  // if the condition is not satisfied then delete
              // this Product instance
  end;
end Product;


In the above example, the structural properties of Product are specified in the

section labeled associations. It consists of product_code, description of the product,

information about the set of components of the product, and some information about

the design of the product. The UNIQUE constraint on the product_code attribute

specifies that the product code must be unique for each instance of the product

type. There are many other types of constraints supported by our system. They

are Range, Set Exclusion (SetExcl), Set Equality (SetEq), Set Subset (SetSub), Key,

Fixed, maximum number of instances of an entity (MaxObjects), Total Participation








(Total), Cardinality among Interaction members (ICard), Cardinality among defin-

ing class and a constituent class (ACard), Total Specialization (TS), Composite Key,

Derive, Inverse, Enumeration (Enum), and Partial Participation with count. These

constraints can be used to explicitly and declaratively specify the constraints associ-

ated with data and software components. Without them, the constraints will have to

be implemented and thus buried in procedural code. When the constraints are to be

modified, we can make changes in a declarative manner instead of making changes to

the procedural code. Since the procedural code that implement the semantics of these

constraints can be automatically generated from the specifications, it avoids the mis-

takes that are frequently made in making changes to written code. Also, high-level

specifications are easier to read and understand than procedural code, thus reducing

the system maintenance overhead. The methods provide an interface for performing

operations on different structural parts of the class, e.g., the method add_comp() is

used to add a component of the product. In the above example, a rule is used to

specify a data integrity constraint. It is triggered after creating or updating a product

to check whether one of the component parts of the product has a board. If not, an

error condition is raised, and the Product instance is deleted from the KB.
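As a hedged sketch of what automatically generated checking code for the UNIQUE(product_code) constraint might look like (in C++ for illustration; the ProductExtent class and its insert method are assumptions, not the code actually emitted by the K.3 compiler):

    #include <set>
    #include <stdexcept>

    // Illustrative check generated for: product_code : Integer where UNIQUE(product_code);
    class ProductExtent {
    public:
        void insert(int product_code) {
            // Reject a duplicate product_code before the new instance is stored.
            if (!codes_.insert(product_code).second)
                throw std::runtime_error("UNIQUE(product_code) violated");
        }

    private:
        std::set<int> codes_;   // product codes of all existing Product instances
    };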

Similar to data entities, software system components can also be modeled using

K.3. The following definitions of MainMaint and ProductMaint illustrate this:



define MainMaint : Entity in ManufProgSchema is
associations:
  public:
    Aggregation ->
    {
      pmaint      : PartMaint;
      prmaint     : ProductMaint;
      ctmaint     : CircuitMaint;
      menu_choice : Integer;
    };

methods:
  public:
    method main();
    method display_menu();
    method get_choice();
    method branch();
end MainMaint;


define ProductMaint : Entity in ManufProgSchema is
associations:
  public:
    Aggregation ->
    {
      product : Product;
    };
  private:
    Aggregation ->
    {
      menu_choice : Integer;
      part        : Part;
    };

methods:
  public:
    method display_menu();
    method get_choice();
    method branch();
    method add_product();
    method modify_product(product_code:Integer, part_no:Integer);
    method general_register_algo();
    method minimum_register_algo();
    method del_product();
    method get_product();
    method add_comp() where {method_model := "AddCompSchema";};
    method main();

rules:
  // Note the registering is required after an update is performed
  rule choose_register_algo is
    triggered after modify_product(product_code:Integer,
                                   part_no:Integer)
    condition
      exist p in p:Product [components] Component pt:Part
      where p != this and p.product_code < product_code
        and pt.part_no < part_no
    action
      general_register_algo();
    otherwise
      minimum_register_algo();
  end;
end ProductMaint;


The structural, behavior, and knowledge abstractions shown in the above ex-

amples are explained below:

Structural Abstraction: The MainMaint is the software component which

maintains all the data entities of the system. The association section of the Main-

Maint class contains the aggregation of pmaint, prmaint, ctmaint, and menu_choice.

The menu_choice is a data attribute of the MainMaint class, whereas the other three

are the software components which are used to model software components for main-

taining the part, product, and circuit data entities, respectively. Due to lack of space,

we have only shown the ProductMaint class, which is the component for maintain-

ing the Product data entity. The ProductMaint class has the public data attribute

product, and the private data attributes menu_choice and part.

Behavior Abstraction: The behavior abstraction is specified in terms of meth-

ods in a class. Each method definition consists of two parts: (i) a signature, which

is given in the methods section of a class definition and specifies the name of the








method, the types of the parameters, and the type of the return value, and (ii) the

actual method body, which is given in the implementation section of a class defini-

tion and is a sequence of K.3 statements that contain local variable declarations and

do general computations. The signature part provides the behavior abstraction. It

contains the information as to whether the method is public or private, the name of

the method, and the names and the types of the different parameters. The class Pro-

ductMaint has a method "modify_product" whose specification is as follows: "public:

method modify_product(product_code:Integer, part_no:Integer)." This method spec-

ification provides a high-level abstraction of the process of modifying a product by

hiding the details of the actual modification.

Knowledge Abstraction: ECAA rules serve as a high-level mechanism for

specifying declarative knowledge that governs the manipulations of objects made by

the KBMS. Each rule is given a name for its identification, which must be unique

within its defining class. Each rule is specified by a set of trigger conditions and a

rule body. Each trigger condition consists of a timing specification and a sequence

of knowledge-base event specification. The timing specification (or coupling mode)

can be "before," "after," or "on commit." The event specification can be a KBMS

operation, an update, or any user-defined method. The rule body consists of (i) a

"condition" clause which is a predicate expression which evaluates to True or False

or a guarded expression that evaluates to True, False or Skip (i.e., neither the action-

clause nor the otherwise-clause part of the rule is executed), and (ii) "action" and

"otherwise" clauses, both of which can be a sequence of K.3 computation statements.








Similar to method invocation, rule checking is performed at the instance level, and

the pseudo variable "this" can be used in a rule body to refer to the current instance

of the defining class on which some operation (event) is performed.

The rule body of each rule is evaluated as follows: (i) if the condition-clause

returns true, then the action-clause (if provided) is executed, (ii) if the condition-

clause returns skip (see a guarded expression below), then nothing is done, and (iii) if the

condition-clause returns false, then the otherwise-clause (if provided) is executed. For

example the choose.register.algo rule in the ProductMaint class is used to specify the

property that if the product code number and the part code number of a product

is minimum amongst all the products, then the minimum.registeralgo() should be

invoked immediately after modifying a product, or else the general.register-algo()

should be invoked. The condition part of this rule consists of an association pattern

specification (i.e., a query for identifying all Products and their Components and

Parts). Based on its evaluation, the control flow of the software system is determined.

We give another example which has a guarded expression in its condition part.



Rule r1 is
  triggered after update(), create()
  condition
    ( ssn_no > 0 | license_category = A1 )
  action
    process_category_A1();
  otherwise
    process_category_general();
end;


In the above example, the conditional expression in rule r1 is a guarded ex-

pression. First, the expression to the left side of '|' (the guard) is evaluated. Note

that a sequence of guards could have been specified and evaluated. If the guard is evaluated to

True, then the logical expression to the right side of the '|' is evaluated. If the right

side is True, the action part of the rule is executed, or else the otherwise part of the

rule is executed. In case the guard is evaluated to False, then the rest of the rule is

skipped, i.e., neither the right side expression, the action, nor the otherwise-clause is

executed.
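The three-valued evaluation described above can be pictured with a small C++ sketch. It is only an illustration of the evaluation order (the guard first, then the right-hand expression); the type and function names used here are hypothetical and are not part of K.3 or of the code our system generates.

    #include <functional>

    // Possible outcomes of a guarded condition: a failed guard yields Skip.
    enum class CondResult { True, False, Skip };

    // Evaluate "guard | expr" with the short-circuit semantics described above.
    CondResult evalGuarded(const std::function<bool()>& guard,
                           const std::function<bool()>& expr) {
        if (!guard()) return CondResult::Skip;   // guard failed: skip the rule body
        return expr() ? CondResult::True : CondResult::False;
    }

    // Dispatch the rule body: action on True, otherwise-clause on False, nothing on Skip.
    void fireRule(CondResult c,
                  const std::function<void()>& action,
                  const std::function<void()>& otherwise) {
        if (c == CondResult::True)       action();
        else if (c == CondResult::False) otherwise();
        // CondResult::Skip: neither the action nor the otherwise clause is executed
    }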

For each knowledge base event that occurs to instance "this" of class "X," all the

applicable rules will be triggered (i.e., the evaluation of the rule body) according to the

trigger conditions of each rule at either (i) before the triggering event, (ii) immediately

after the triggering event, or (iii) after the execution of all the operations of the current

transaction, but before the transaction commits. Thus, our rule processing system

supports two types of coupling modes or activation policies: namely, the immediate

mode for "before" and "after" and the deferred mode for "on commit."
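As a rough illustration of these two activation policies, the following C++ sketch runs "before" and "after" rules immediately and queues "on commit" rules until the transaction commits. The class and method names are assumptions made for illustration and do not reproduce the actual rule processor of the KBMS.

    #include <functional>
    #include <vector>

    enum class Coupling { Before, After, OnCommit };

    // A minimal scheduler: immediate rules run at the triggering event,
    // deferred rules are collected and run just before the transaction commits.
    class RuleScheduler {
    public:
        void trigger(Coupling mode, const std::function<void()>& ruleBody) {
            if (mode == Coupling::OnCommit)
                deferred_.push_back(ruleBody);   // deferred mode
            else
                ruleBody();                      // immediate mode ("before"/"after")
        }
        void commit() {
            for (const auto& r : deferred_) r(); // fire deferred rules at commit time
            deferred_.clear();
        }
    private:
        std::vector<std::function<void()>> deferred_;
    };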

In our prototyping system, ECAA rules give high-level specifications of control

and logic associated with program modules and data entities. They are translated

into executable code and are activated at the appropriate time in relationship with

their triggering events.

Method Implementation: So far in this section, we have shown that data

entities as well as software components can be modeled uniformly as object classes.

We have already shown that the behavioral properties of data entities and software

components of any software system can be captured by methods in class definitions.

We can implement a method body either by writing real or simulated code for the








method or by modeling the method using a method model, which will be described

later.

Real or Simulation Code for Methods: If the prototyper has precise

knowledge about the functionality of a method in a class and the method is simple to

implement, then he/she can directly write the implementation code for the method. If

the prototyper is not interested in or is not able to implement the method, but rather

wants to simulate it for the sake of testing some other methods that depend on it,

then he/she can write some simulated code for the method. The simulated code takes

some input and provides some estimated output to allow the other methods, which

are of interest to the prototyper, to perform their functionalities. Since K.3 combines

general-purpose programming language constructs with query language constructs

and offers persistent support to all data entities, real or simulated code can thus be

written in K.3.
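For instance, a simulated method body may simply accept its inputs and return a canned estimate so that the methods that depend on it can still be exercised. The following C++ fragment is a generic illustration of this idea (the function and parameter names are hypothetical); in our system such simulation code would be written in K.3.

    // A simulated implementation: instead of computing a real delivery estimate,
    // it returns a fixed, plausible value so that callers can be tested end to end.
    int simulatedDeliveryDays(int product_code, int quantity) {
        (void)product_code;      // inputs are accepted but not actually used
        (void)quantity;
        return 7;                // estimated output standing in for the real logic
    }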

The development and execution of software systems using the evolutionary pro-

totyping approach are supported by a KBMS. Any program execution would generally

involve the processing of a persistent knowledge base. For knowledge base retrieval

and manipulation, a knowledge base programming language should include some

knowledge manipulation constructs in addition to general-purpose programming lan-

guage constructs. In K.3 language, we use pattern-based querying constructs for

retrieving and manipulating the knowledge base. We use the context expression of

an Object-Oriented Query Language (OQL) [Ala89, Su93] as the primitive construct

for specifying the structural relationship among objects that are to be retrieved or








manipulated. For example, the expression "CircuitMaint ![ctmaint] m:MainMaint *

[prmaint] pm:ProductMaint" would identify all maintenance software modules (Main-

Maint) which are associated with some product maintenance module (ProductMaint)

but not with any circuit maintenance module (CircuitMaint). "m" and "pm" are

variables which represent the MainMaint and the ProductMaint modules that satisfy

the above association pattern specification. "!" and "*" are non-association and

association operators. "[ctmaint]" and "[prmaint]" are the names of two attributes

of MainMaint whose values are the objects of CircuitMaint and ProductMaint, re-

spectively. The following example creates a circuit maintenance module which is a

sub-component of the main maintenance module, if the main maintenance module

does not yet contain a circuit maintenance module.



context m:MainMaint ![ctmaint] CircuitMaint
do
    cm := CircuitMaint.create();
    m.ctmaint := cm;
end_context;


Statements for the retrieval and manipulation of a knowledge base may in-

volve existential and universal quantifiers in forms of "exist <pattern> suchthat

<condition>" and "forall <pattern> suchthat <condition>", respectively. Quantifiers

make it much easier for the users to declaratively pose logic questions upon the knowl-

edge base. For example, the following statements display the product descriptions

of the products maintained by the product maintenance module, which is a sub-

component of the main maintenance module, if the product description is non-null.








context m:MainMaint
do if exist p:Product [product] pm:ProductMaint
[ptmaint] MainMaint
suchthat p.description != null
then
p.description.display("%s\n");
end_if;
end_context;


As a computationally complete programming language, K.3 provides some basic

data structures (set, list, and array) and control structures (sequential, testing, repe-

tition, and context looping). We illustrate the object-oriented computation facilities

of K.3 by implementing the method "get_product()" in class ProductMaint.



method ProductMaint::get_product() is
    local pcode : Integer;
begin
    product := null;
    "\nEnter product code (0=end): ".display();
    pcode.read();
    if pcode = 0 then
        return;
    end_if;
    context p:Product where p.product_code = pcode
    do
        product := p;
        // the product attribute in the ProductMaint
        // entity is assigned the proper value
        return;
    end_context;
    "\nProduct does not exist !\n".display();
end;
end;


The program body of "get_product" is a single block statement "local < vars

> begin < statements > end," in which we define the local variable "pcode" to record

the product code. Initially, we read the product code from the user and, if the user









enters a non-zero product code number, then we use the context looping statement

to retrieve the product from the knowledge base which has the desired code num-


ber. A detailed description of implementing a method body using general-purpose

programming constructs can be found in [Shy96].



[Figure omitted: diagrams of the control associations between code-blocks, including Sequential, Testing, Case, ForEach, Loop, Parallel, Synchronization, and Decomposition.]

Figure 4.1. Control Associations


Method Models: Often a method may perform a complex function that is

difficult for the prototyper to implement directly. In our work, we extend our ob-

ject model to support the modeling of the method body in the same uniform manner as the

modeling of data entities and software components. In the rest of this section, we

explain the modeling of method bodies to produce the executable specifications for

the methods that are too complicated to be implemented directly by the prototyper.








Our meta model is enhanced by the incorporation of two new class types, namely

the CodeBlockSchema and the CodeBlock. Moreover, a number of control associa-

tions, namely, Sequential, Testing, Case, ForEach, Loop, Parallel, Synchronization,

and Decomposition (see Figure 4.1), are used to specify the flow of control of a struc-

ture of code-blocks that constitutes a method implementation. These code-blocks

are modeled by object classes and their control associations. Thus, the prototyper

can model the method body of a complicated method as a CodeBlockSchema, in

which each code-block class has an attribute called source_code for which the pro-

totyper can provide the implementation code as its value if the code can be easily

written. If a code-block is still too complex to code, then the prototyper can further

decompose the code-block and define it as a sub-code-block-schema which contains

a structure of lower-level code-blocks linked by control associations. In this way,

the prototyper can gradually evolve a method implementation by the above mod-

eling process as he/she gains more insight into the functionality and the coding of

the complex method. Since methods are modeled by code-blocks and their control

associations, which correspond to the control structures of any structured program-

ming language, this modeling technique can also be used for a function-oriented design

of software systems. In this case, code-blocks are the unit function building blocks

which are associated with one another by control associations. Moreover, a prototyper

can specify ECAA rules within a code-block to express inter-code-block attribute

constraints. For example, consider two code-blocks A and B. The prototyper may

want to specify that a certain action needs to be taken after executing code-block








B, if the state of a certain variable satisfied a certain condition before code-block A

was executed the last time. For specifying this kind of inter-attribute constraint

between different code-blocks, we use "paramrules" in the code-blocks. These rules

are termed "paramrule" because their execution depends on a number of parameters.

These parameters are: 1) the condition that needs to be evaluated, 2) the point in the

system where the condition needs to be evaluated, i.e., before or after the execu-

tion of which code-block and code-block schema, and 3) the point of execution at which

the condition should be evaluated, which can be captured by an

invocation identifier of the concerned code-block. If the invocation identifier is not

specified, then the last execution of the code-block is taken into consideration. The

triggering of these rules is associated with the execution of the code-block in which

they are defined. The condition part of these rules has a boolean operator called

"testing" which has the parameters described above. The action and the alterna-

tive action of the paramrules are similar to any ECAA rule used in classes which

model data entities and software components. A conjunction of "testing" operators

can be specified in the condition part of these rules to capture the inter-code-block

integrity constraints. Appropriate code is generated and attached to the respective

code-blocks and triggered at execution time to realize the inter-code-block attribute

constraints. These rules can be used for testing, verification, and maintenance pur-

poses. Since these rules are specified separately from the attribute source_code of the

code-block, they can capture the knowledge about the constraints more explicitly,

thus improving the understandability and the maintainability of the target system.








For example, the add_comp() method of the ProductMaint class can be modeled

by the CodeBlockSchema called AddCompSchema.



define AddCompSchema : CodeBlockSchema
associations:
    Aggregation:
    {
        c_no : String;
        p_no : String;
        Temp : Integer;
    }

where: start_point := "Initialization"; // starting code-block name
end; // of AddCompSchema

define Initialization : CodeBlock in AddCompSchema
public:
    associations:
        Sequential ->
        {
            seq : getCompInfo; // code-block named "getCompInfo"
        }                      // is sequentially linked to code-block
                               // named "Initialization"
where: source_code
    begin_text
        c_no := "";
        p_no := "";
        < actual code for fetching a product >
    end_text;
end;

define getCompInfo : CodeBlock in AddCompSchema
public:
    associations:
        ForEach -> {
            Body : getPart;
        } where (CONTEXTEXPR(p:Part where p.part_no = p_no));
        Sequential -> {
            seq : addComp;
        };
where: source_code
    begin_text
        < actual code to interactively enter the
          component information and the
          environment temperature in variable Temp.
          Also check to see if the part into which the
          component is to be inserted is null or
          not. If it is null then the method
          terminates >
    end_text;
end;

define getPart : CodeBlock in AddCompSchema
public:
where: source_code
    begin_text
        < actual code for assigning part >
    end_text;
end;

define addComp : CodeBlock in AddCompSchema
where: source_code
    begin_text
        < code to add a component >
    end_text;
rules:
    paramrule VerifyInsertionProcess is
        triggered after execute()
        condition
            testing(out, AddCompSchema, getCompInfo, "Temp = SUITABLE")
            and testing(out, ProdAddCompSchema, insertComp,
                        "insertion_process = SUCCESS")
        action
            "insertion of component is successful\n".display();
        otherwise
            < code to control the environment temperature
              to enable addition of the component if Temp is
              UNSUITABLE else try to reinsert the component >
    end;
end; // of addComp



The add_comp() method in the Product class, which is a data entity, can also be

modeled in our knowledge model by the ProdAddCompSchema.








define ProdAddCompSchema : CodeBlockSchema
public:
associations:
    Aggregation:
    {
        c : Component;
        insertion_process : Integer;
    }
where: start_point := "ProdInitialization";
end;

define ProdInitialization : CodeBlock in ProdAddCompSchema
public:
    associations:
        Sequential ->
        {
            seq : insertComp;
        }
where: source_code
    begin_text
        < Product initialization code >
    end_text;
end;

define insertComp : CodeBlock in ProdAddCompSchema
where: source_code :=
    begin_text
        < code to insert the component
          in the Product object and then
          set variable named insertion_process to SUCCESS
          if insertion is successful >
    end_text;
end;


In the above example, we see that the add_comp() method of the ProductMaint

class is modeled by the CodeBlockSchema called AddCompSchema. This method

is supposed to perform the following functions in the following order: 1) fetch the

product, 2) get information about the part into which the component is supposed

to be inserted, and the new component information, 3) check to see if the part into








which the component is to be inserted is null or not: if it is null, then the method

terminates; else, the actual component insertion function is performed by invoking

the add_comp() function of the Product entity.

The starting code-block (named Initialization) of the AddCompSchema is spec-

ified as the value of the attribute (named start_point) of the AddCompSchema. In this code-

block, the prototyper can write the initialization code and the code for fetching the

product, and then specify the next code-block (named getCompInfo) that is to be

executed by the Sequential control association. In the attribute (named source_code)

of the getCompInfo code-block, the actual implementation code for obtaining the

new component information and the environment temperature is specified. The

environment temperature information is stored in the variable called Temp. The

getCompInfo code-block is associated with the getPart code-block by the ForEach

control association. The getPart code-block is executed for each object selected by

the association pattern which is specified in the CONTEXTEXPR constraint of the

ForEach control association. The getCompInfo code-block is linked to the addComp

code-block by the Sequential control association. In the attribute called source_code

of the addComp code-block, the actual call to the add_comp() method of the Prod-

uct entity is done. A paramrule named VerifyInsertionProcess is used in the add-

Comp code-block to specify an inter-code-block attribute constraint. This rule is

triggered immediately after executing the addComp code-block. It verifies the fol-

lowing conditions: 1) after (the "out" parameter in the first testing clause) executing

the getCompInfo code-block of the AddCompSchema, the Temp variable has a value








SUITABLE, and 2) after (the "out" parameter in the second testing clause) execut-

ing the insertComp code-block of the ProdAddCompSchema, the insertion_process

variable has a value SUCCESS. If the condition clause is evaluated to True, then the

action part of the rule is executed. Else, the code in the otherwise clause is exe-

cuted. This rule is used to specify the inter-code-block attribute constraint of the

getCompInfo code-block and the addComp code-block of the AddCompSchema, and

also the inter-code-block attribute constraint between the insertComp code-block of

the ProdAddCompSchema and the addComp code-block of the AddCompSchema.

[Figure omitted: block diagram of the Program Generator, in which the Method Model Loader, the Code Block Analyzer, and the Code Generator transform a method model stored in the KBMS into method implementation code and then executable code.]

Figure 4.2. The Program Generator Infrastructure


This methodology of modeling the implementation of a complex method by a

structure of code-blocks linked by various control associations enables the prototyper

to decompose a complex method into more manageable sizes of code, which can be

more easily verified for their correctness and performance. This can help him/her

to isolate the source of problems more easily. Moreover, by taking advantage of

the code-block class rules, he/she can specify intra-method or inter-code-block con-

straints which facilitate the detailed testing of the method implementations. Since








our prototyping language K.3 has KBMS support, all the test results and the per-

formance evaluation data can be stored and processed by the KBMS to facilitate the

evaluation process. We have developed a Program Generator (see Figure 4.2) which

transforms a method model into implementation code in K.3. A K.3 compiler then

translates the generated K.3 code into executable code in C++. Thus, all method

model specifications are executable.

So far in this section we have demonstrated how we have used our model to

model data entities, software components, and method models. We also model the

execution context information, which is the schema in which the execution data

are stored, using the same object-oriented semantic model. The execution context

contains the user-defined parameters as well as the system generated run-time data.

This context information is modeled by entity classes and associations. We shall

give the detailed specifications of these classes later.

4.3 Implementation Details of Code Generation

The structure of the Program Generator is shown in Figure 4.2. It loads the

K.3 specifications from the KB in bulk and stores the specifications in the main

memory buffer. This bulk loading significantly reduces the number of I/O operations and thereby

speeds up the overall code generation process. The Code Block Analyzer module

parses the specifications in the main memory and builds the intermediate code, which

represents the code-block network structure in the main memory. The data structure

for storing the intermediate code of a code block contains all the information about the

aggregation variables, source code, and main memory pointers to other code-blocks








to which it is linked by control associations. The information about the control

associations is also stored in the code-block's main memory data structure. The

intermediate code also contains information about the code-block class's ECAA rules.

The information about the code-block class's rules is stored in a rule list. Each

element of the list contains the corresponding rule string. The main memory structure

of a code-block is shown in Figure 4.3.

codeBlockStruct
{
    char*            schemaName;
    char*            className;
    char*            methodName;
    char*            codeBlockName;
    char*            sourceCode;
    ruleList*        rules;                 // code block class rules
    assocArrList*    assocs;
    codeBlockStruct* associatedCodeBlocks;
    variableList*    vList;                 // aggregate variables
    int              isSimulated;
    int              estimatedTime;
    codeBlockStruct* parent;
};

Figure 4.3. Intermediate Structure Representation of a Code-block


Finally, the Code Generator Module generates the method implementation from

the intermediate structure. During the code generation process, calls are made to the

Performance Monitor and the Functionality Tracing Monitor so that at run-time the

monitors can monitor the behavior of the executable prototype, and collect valuable




execution data to be used for further analysis. The code generator also parses the

code-block class's ECAA rule strings and generates code in appropriate places so

that conditions are checked before or after the execution of the code-blocks. The fact

that certain conditions were evaluated to True is recorded in a condition list. Code

is generated before and after the appropriate code-blocks in which ECAA rules are

specified. It checks the condition list to see if the condition clause specified in that

rule is evaluated to True or not. If the condition is true, then the action part (if any)

of the rule is executed. Otherwise, the alternative action (if any) is taken.
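The bookkeeping described above can be sketched as follows. In this illustrative C++ fragment, conditions found to be True are recorded in a condition list keyed by a string, and the code generated around a code-block consults that list to choose between the action and the alternative action. The data structure and function names are assumptions made for illustration, not the generator's actual output.

    #include <functional>
    #include <set>
    #include <string>
    #include <vector>

    // Conditions found to be True are recorded under a key such as
    // "<schema>.<codeBlock>.<label>".
    static std::set<std::string> conditionList;

    void recordCondition(const std::string& key) { conditionList.insert(key); }

    // Code emitted after a code-block carrying a paramrule: check the recorded
    // conditions and run either the action or the alternative action.
    void checkParamRule(const std::vector<std::string>& requiredKeys,
                        const std::function<void()>& action,
                        const std::function<void()>& otherwise) {
        for (const auto& k : requiredKeys)
            if (conditionList.count(k) == 0) { otherwise(); return; }
        action();
    }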
















CHAPTER 5
VERIFICATION OF THE FUNCTIONALITY AND PERFORMANCE OF
EVOLVING PROTOTYPES



5.1 Requirements

A simple approach for verifying the functionality and the performance of an

executable prototype system is by tracing its execution. A prototyper needs the

support of a computing environment for following the execution step by step, and

stopping the execution at any particular point in the code so that he/she can check

the variables which are in the visible scope of that point. It would also be desirable to

generate the performance profile information, which includes the information about

how much time different parts of the system took to execute. This profile information

may include the time taken to execute different methods or parts of these methods.

It would help the prototyper to go back and find out the bottleneck of the prototype

after its execution.

In an object-oriented software system, in which the data entities, software com-

ponents, and method implementations are all modeled uniformly in terms of objects,

it would be useful to analyze and understand the interactions between objects, and

the state of the objects before and after their interactions. The various parameters

of interest, which are either system-tracked or user-determined, should be monitored








at run-time and stored in a structured manner in the knowledge base for further

processing and analysis. These monitored data represent low-level behavioral infor-

mation and are not always readily useful for analyzing the system behavior. It would

be very helpful to the prototyper to have an automated process which can further

analyze the monitored data to derive useful high-level information about the system

behavior as well as to compare the expected behavior of the system against the actual

observed behavior. Such automated support would make the job of evaluating

the evolving prototypes much easier for the prototyper.

5.2 Mechanisms

We have developed three mechanisms to support the requirements mentioned

in the previous section. First, we provide a monitoring mechanism for tracing the

execution of a system at the code-block level and the method level. The performance

profiling is also done at both levels. The performance data collected are stored in

the knowledge base just like the other application objects discussed in the previous

chapters. We use the object model discussed in Chapter 4 to model the control infor-

mation, which keep track of the states of different objects and interaction parameters

between objects of interest to the prototyper. These control object instances are

stored in the knowledge base for further processing and analysis. We have developed

a technique to generate the control object instances at run-time whenever an object

interaction of interest takes place or an object's state of interest is modified. The

modeling and the generation of the control instances are specified by the prototyper.








He/she determines which parts of the system are to be monitored, and what specific

items need to be monitored.

Secondly, we provide a behavior abstraction mechanism to derive more useful

information about the behavior of the system at run-time based on the monitored data.

By an automated behavior abstraction process using deductive/inferencing rules (see

BNF in Appendix A), high-level information can be generated from the low-level

monitored data generated by the system. This high-level information is much

more meaningful to the prototyper than the low-level monitored data. Although

they can be deduced by the prototyper by examining the system generated monitor

data directly and applying his/her inferencing logic, it will be much better if some

system support is available to automate this process by incrementally updating the

memory of the system as events occur in the system, and detecting typical state

scenarios as soon as they occur.

Thirdly, we provide a behavior analysis mechanism to allow the prototyper to

specify constraints amongst different control entities so that state conditions of inter-

est can be detected and some pre-specified actions can be taken when such conditions

are detected. This is very useful for checking if the system is behaving as desired or

not. We allow the prototyper to specify what behavior is expected, and what is not

expected of the system. Monitoring and behavior abstraction mechanisms generate

information about the actual behaviors of the system, which is then matched with the

specifications of desired or undesired behaviors, to carry out the behavior analysis.

A simple example on the use of the above mechanisms is as follows. Let us consider a








system S with components A, B, and C. In the object-oriented modeling paradigm, S,

A, B, and C are all modeled as first class objects, where A, B and C are sub-objects

or components of S. We want a prototyper to be able to specify that, at any point of

execution of the above system, if the different components interact with one another

in a specific sequence and the interaction parameters satisfy some specific condition,

it represents an incorrect execution of the system. The specification is: if object B

invoked method Ml of object A before object A invoked method M2 of object C,

and the state of object A at the point of an interaction with object C is the same

as the state of object B at the point of an interaction with object A, then system

S is not operating as desired. In order to identify such an execution scenario and

take necessary action automatically, the prototyping system needs to (1) monitor the

interactions between objects of A, B and C and keep track of their states, and (2)

based on the monitored data, infer whether they satisfy the temporal relationship

and data conditions which constitute an incorrect execution. Step 2 involves

abstraction and analysis of the monitored data gathered in step 1.
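For illustration, the following C++ sketch shows one way such a scenario could be detected from monitored interaction records by checking the temporal ordering of the two invocations and comparing the recorded object states. The record layout and names are hypothetical; in our system the detection is actually expressed with the monitoring and deductive rules described in the following chapters.

    #include <string>
    #include <vector>

    // One monitored interaction: caller object, callee object, method, time,
    // and the caller's state at the point of interaction.
    struct Interaction {
        std::string caller, callee, method;
        long time;          // invocation time stamp
        int callerState;    // state of the caller when the interaction took place
    };

    // True if B invoked A::M1 before A invoked C::M2 and the recorded states match,
    // i.e., the incorrect execution scenario described above.
    bool incorrectExecution(const std::vector<Interaction>& log) {
        const Interaction *bToA = nullptr, *aToC = nullptr;
        for (const auto& i : log) {
            if (i.caller == "B" && i.callee == "A" && i.method == "M1") bToA = &i;
            if (i.caller == "A" && i.callee == "C" && i.method == "M2") aToC = &i;
        }
        return bToA && aToC && bToA->time < aToC->time &&
               bToA->callerState == aToC->callerState;
    }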

5.3 Techniques

We use different techniques to implement the above three mechanisms. A Func-

tionality Tracing Monitor (FTM) has been implemented to support the tracing of

the system execution at the code-block level as well as the method level. During

code generation, calls to the Functionality Tracing Monitor are embedded in selected

places so that it is always aware of which code-blocks or methods are under execu-

tion. The break-point information is stored as object instances in the knowledge








base and, whenever the control reaches a break-point, the execution can be halted by

using the information generated by the FTM. To carry out the performance profiling,

a Performance Monitor (PM) has been implemented, and calls to this monitor are

embedded in the methods and the code-blocks at the time of code generation. The

PM keeps track of the time taken to execute a certain code block or method in a

particular execution and also the time taken in different invocations of a particular

execution. This profile information is of great help to the prototyper for figuring out

the bottleneck of the system, in case the performance of the system is not as desired.
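A minimal sketch of such a performance monitor is shown below; it merely accumulates the elapsed time and the invocation count per method or code-block name. The interface is an assumption made for illustration and does not reproduce the actual PM implementation.

    #include <chrono>
    #include <map>
    #include <string>

    // Accumulates, per method or code-block, total execution time and invocation count.
    class PerformanceMonitor {
    public:
        void begin(const std::string& unit) {
            start_[unit] = std::chrono::steady_clock::now();
        }
        void end(const std::string& unit) {
            auto elapsed = std::chrono::steady_clock::now() - start_[unit];
            totalNs_[unit] +=
                std::chrono::duration_cast<std::chrono::nanoseconds>(elapsed).count();
            calls_[unit] += 1;
        }
    private:
        std::map<std::string, std::chrono::steady_clock::time_point> start_;
        std::map<std::string, long long> totalNs_;   // total time per unit
        std::map<std::string, long long> calls_;     // number of invocations per unit
    };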

We use our object model described in the previous chapter to model the control

information which consists of information about the low-level system behavior. The

behavior information mainly consists of inter-object interaction information, and ob-

ject state information before and/or after interactions with other objects. The control

object instances are made up of user contextual information and system contextual

information. Details of these interaction contextual parameters will be discussed in

the next chapter.

The generation of monitored data is based on some ECAA Rules. These rules

are called monitor rules. They are triggered by invocations of the methods that the

prototyper wants to monitor. These rules are specified by the prototyper so that the

monitored data are gathered according to the specification of the prototyper. More

details on the generation of the monitored data and the monitor rule specification

and implementation will be given in the next chapter.






The support for processing inferencing rules has been implemented so that the

prototyper can specify behavior abstraction logics and the behavior analysis logics

in a declarative manner. The implemented system automates the execution of these

rules as the monitored data are gathered at run-time. The chained triggering of these

deductive rules implements the mechanisms of behavior abstraction and behavior

analysis. In Chapter 7, we will discuss in detail the specification and implementation

of these rules.
















CHAPTER 6
ECAA-RULE-BASED MONITORING OF SYSTEM BEHAVIOR



In order to understand the behavior of a target system under development, we

must have some means of monitoring the system behavior at run-time. The behav-

ioral information includes both its functionalities and performance. We identified

two kinds of context that are of interest at system execution time. They are the user

context and the system context. The user context contains the information provided

as parameters by the prototyper whereas the system context is made up of control

information, which includes the caller context, the callee context, the execution time,

and the invocation information automatically generated by the system at run-time.

In order to collect the user context and associate them with the system context

information, active ECAA rules are used. These rules are specified by the proto-

typer when a prototype system is modeled and triggered before or immediately after

the execution of some specified methods to generate the user context information and

associate them with the corresponding system context information. This context in-

formation is stored as instances of object classes which model the user's contextual

views. The instances of these "execution context" classes are generated by active

rules. They represent the low-level system behaviors which deal with inter-object

interactions. These active rules are specified in entity classes that model program








modules and code-blocks and are triggered to gather both performance and function-

ality data at run-time. They can be used to perform selective monitoring, i.e. the

prototyper can use them to monitor whichever part of the system he/she wants to

monitor. In a later section in this chapter, we shall give some examples of their use.

Since the execution of any target system progresses with time, we consider time

to be an important dimension in any kind of analysis of the target system's behavior.

In order to take the temporal dimension into consideration, we have incorporated

the Start-time and End-time concepts within each execution context record.

Thus, each instance of an execution context class will have these two time attributes

associated with it. Start-time records the time of the creation of that execution

record, whereas End-time records the end of a valid time interval.

Besides the incorporation of the time component into the structure of the ex-

ecution context information, we have incorporated the following operators in our

prototyping language:


1) (last instance) : returns the last version of the instance, or null if such an

instance does not exist.


2) (first instance) : returns the first version of the instance, or null if such an

instance does not exist.


3) (nth instance) : returns the nth version of the instance, or null if such an

instance does not exist.


4) (last className) : returns the last instance of className or null.








5) (first className) : returns the first instance of className or null.


6) (nth className) : returns the nth instance of className or null.


7) temporal_function(class_instance) : temporal_function can be the STARTTIME,
ENDTIME, or INTERVAL function. It returns the appropriate time value
(depending on the actual function called) of the class instance, or null in case
the class_instance is null.


8) temporal_function(class_instance1) Before temporal_function(class_instance2) :
this expression returns true if the time value returned by
temporal_function(class_instance1) is earlier (or smaller) than that of
temporal_function(class_instance2). Otherwise, it returns false if either of the
temporal functions evaluates to null, or the Before condition is not true.


9) temporal_function(class_instance1) After temporal_function(class_instance2) :
this expression returns true if the time value returned by
temporal_function(class_instance1) is later (or greater) than that of
temporal_function(class_instance2). Otherwise it returns false if either of the
temporal functions evaluates to null, or the After condition is not true.


10) temporal_function(class_instance1) When temporal_function(class_instance2) :
this expression returns true if the intervals, in which class_instance1 and
class_instance2 are valid, overlap each other. Otherwise it returns false if either
of the temporal functions evaluates to null, or the intervals do not overlap.


11) temporal_function(class_instance1) - temporal_function(class_instance2) : this
expression returns the difference in time between the two class instances. If one
of the operands is the INTERVAL function, then the other has to be the INTERVAL
function also.


12) NOW : returns the current time.


13) class_instance.(boolean expression) : this expression returns the class_instance
if the boolean expression evaluates to true on that class instance. Otherwise, a
null is returned.


14) class_instance.field_name : returns the field value.


15) class_name.(assignment statements) : creates a class instance with the appro-
priate values.


16) (exists variable:className).(qualifiers) : this expression returns true if an in-

stance of the named class exists and satisfies the qualifiers. Otherwise, it returns

false. In case the above expression returns true, the variable refers to the class

instance which satisfies the qualifiers.


17) class_instance.init_className(field values, systemContext) : initializes the speci-
fied instance with the specified values, and also populates the associated system
context passed as a parameter with the appropriate values, containing the caller
context and the callee context, which will be elaborated later on in this chapter.

In the above list of temporal operators and functions, the last instance refers

to the one which has the largest STARTTIME value. The temporal_function above








can be either STARTTIME, ENDTIME, or INTERVAL, while NOW returns the

current time value. The last, first, and nth operators have higher precedence than the

'dot' operator. For example, the expression (last className).(boolean expression) is

processed as follows. First of all, the (last className) returns the last class instance.

Then, the boolean expression which can be a conjunction of boolean expressions is

evaluated on that class instance. The result of this evaluation is either the class

instance or null. In the former case, the expression is evaluated to true with respect

to the instance. In the latter case, it evaluates to false.
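To make the semantics of these operators concrete, the following C++ sketch shows how last selection and the Before comparison over STARTTIME values could be evaluated against stored execution context records. The record type and function names are illustrative assumptions and only approximate the semantics defined above.

    #include <vector>

    struct ContextRecord {
        long startTime;   // STARTTIME of the record
        long endTime;     // ENDTIME of the record
    };

    // (last className): the instance with the largest STARTTIME, or null.
    const ContextRecord* last(const std::vector<ContextRecord>& instances) {
        const ContextRecord* best = nullptr;
        for (const auto& r : instances)
            if (!best || r.startTime > best->startTime) best = &r;
        return best;
    }

    // STARTTIME(a) Before STARTTIME(b): false if either operand is null.
    bool startTimeBefore(const ContextRecord* a, const ContextRecord* b) {
        return a && b && a->startTime < b->startTime;
    }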

In the next chapter, we will see some examples of the state-based inferencing

rules where some of the above functions are used. The incorporation of the above

functions enhances the expressive power of our prototyping language.

6.1 Capturing the Execution Data in the Knowledge Model

At run-time, whenever a method is executed, there are two kinds of information

that are of interest. One is the prototyper supplied parameters, e.g. the parameters

of inter-class or inter-method interactions, whose types and names are determined by

the prototyper at the system specification time. The other is the system evaluated

parameters which are generated by the system at run-time. The system context at

any point of system execution includes the information about where the control

came from, the time when the interaction took place, and the total time during

which the interaction lasted (which is the time taken by the method to execute

in the current invocation), the number of invocations of this method prior to the

current invocation, etc. All this information is very useful for understanding the








system behavior and performance, and for detecting the bottlenecks and problems

of the system. This contextual information is stored as instances of object classes

modeled by the same model used for modeling data entities, system components,

and method implementations. We generate both the user context and the system

context based on active ECAA rules specified by the prototyper. These rules are

triggered either before or after method invocations. The execution contexts gathered

are stored in the KB and can be queried by the prototyper using the OQL interface.

These execution data contain useful information about the actual functionality of the

system at run-time and the performance results of the target system. The structures

of the SystemContext and the UserContext entity classes are given below:



define SystemContext : Entity
associations:
    public:
        Aggregation ->
        {
            callee_class_name : String;
            callee_method_name : String;
            caller_class_name : String;
            caller_method_name : String;
            start_time : Time;
            end_time : Time;
            num_of_invocation : Integer;
            interval : Time;
                // interval after which callee method
                // was invoked
            execution_time : Time;
            invocation_context : String;
                // before or after the invocation
                // of callee method the context
                // was generated
        };
end;
Example of a Class representing the user context information:








define UserContext : Entity
associations:
public :
Generalization <- { SystemContext };
Aggregation ->
{
//Specifications of the parameters
//of the prototyper's interest
//which may include the parameters of a
//method invocation or the state of the
//attributes of the object at the point
//of generation of the user context;
//These are modeled by the application prototyper
//according to his/her requirements which are
// governed by the application parameters.
}
end;


The information about the number of invocations, the last time of invocation

by another method, and other system contextual fields are stored in the system for

each method of each class, as shown in Figure 6.1. The fields of the SystemCon-

text entity instances are populated with system generated time information and the

invocation history for each method which capture the details of the interclass inter-

action. We note that the UserContext class has a Generalization association with

the SystemContext class, so that, for each UserContext record generated, there is

a SystemContext record which contains the control information at the point of the

context generation. The generation of these context records is triggered by active

ECAA rules and many of the items, which appear inside the context, are generated

by the Performance Monitor (PM): e.g., the start_time, end_time, and execution_time.

These values that are monitored by the PM can be used in the action part of the









ECAA rules to generate the appropriate context records at run-time. Thus, the pro-

totyping language and the prototyping environment work together in revealing the

actual system behavior at run-time.

For each method in each class the system maintains the InvocationDetailsList
whose structure is shown below:






InvocationDetailsList element:

    className;
    methodName;
    callerClassName;        <--- unique id of the list element
    callerMethodName;
    selfOid;
    callerOid;
    numberOfInvTillNow;
    lastTimeOfInvocation;

Figure 6.1. System Information about the Invocation History for a Method
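The bookkeeping behind this list can be pictured with the following C++ sketch, which looks up (or creates) the entry for a caller/callee pair and updates the invocation count and the last invocation time. The field names follow Figure 6.1, while the container and the function are assumptions made for illustration.

    #include <ctime>
    #include <string>
    #include <vector>

    struct InvocationDetails {
        std::string className, methodName;              // callee
        std::string callerClassName, callerMethodName;  // caller
        long selfOid = 0, callerOid = 0;                 // object identifiers
        long numberOfInvTillNow = 0;
        std::time_t lastTimeOfInvocation = 0;
    };

    // Update the invocation history each time the caller invokes className::methodName.
    void recordInvocation(std::vector<InvocationDetails>& list,
                          const InvocationDetails& key) {
        for (auto& e : list) {
            if (e.className == key.className && e.methodName == key.methodName &&
                e.callerClassName == key.callerClassName &&
                e.callerMethodName == key.callerMethodName &&
                e.selfOid == key.selfOid && e.callerOid == key.callerOid) {
                e.numberOfInvTillNow += 1;
                e.lastTimeOfInvocation = std::time(nullptr);
                return;
            }
        }
        InvocationDetails fresh = key;                   // first invocation of this pair
        fresh.numberOfInvTillNow = 1;
        fresh.lastTimeOfInvocation = std::time(nullptr);
        list.push_back(fresh);
    }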



6.2 Specification of Monitor Rules


We show some examples of monitoring rules below:




define Machine : Entity
associations:
    public:
        Aggregation ->
        {
            state : Integer;
            name : String;
            id : Integer; where { UNIQUE(id); }
        };

methods:
    public:
        method operate(input_signal : Integer, input_descriptor : String);

rules:

    rule monitor_rule_1
        triggered before operate(input_signal, input_descriptor)
        action
            local m : MachineControl;
            begin
                m.init_MachineControl(id, input_signal, state, BEFORE_OPERATION,
                                      systemContext);
                // init_MachineControl() function
                // initializes a MachineControl instance
            end;
    end;

    rule monitor_rule_2
        triggered after operate(input_signal, input_descriptor)
        action
            local m : MachineControl;
            begin
                m.init_MachineControl(id, input_signal, state, AFTER_OPERATION,
                                      systemContext);
            end;
    end;

end; // Machine

define MachineControl : Entity
associations:
    public:
        Generalization <- { SystemContext };
        Aggregation ->
        {
            machine_id : Integer;
            input_signal : Integer;
            machine_state : Integer;
            time : Integer;
        };

methods:
    public:
        method init_MachineControl(machine_id : Integer,
                                   input_signal : Integer,
                                   machine_state : Integer,
                                   time : Integer,
                                   systemContext : SystemContext);

        // the body of the above init_MachineControl() method is
        // automatically generated from the class definition
        // information of the MachineControl entity class stored
        // in the meta schema.
end;



The method named operate in the above example is invoked by the external ob-

jects to operate on the machine. It may change the state of the machine. In the above

example, we see how the active monitoring rules, monitor_rule_1 and monitor_rule_2,

can be used to monitor the state of the machine and the parameters of interaction with

its caller. The MachineControl entity stores the control information of the Machine.

The control information consists of the user contextual parameters and the system

contextual parameters. The user contextual parameters in the MachineControl entity

specify the machine state, input signal, machine identifier, and the time of generation

of the control information, i.e. either before or after the operation. The structure

of the system context has already been discussed before. The system contextual

parameters contain the information which are useful for system analysis as we shall

explain in another example later on. The control information generated by the ECAA

monitoring rules facilitates the target system evaluation.

The monitoring rules perform the following functionalities:


1) Monitor an object's state and its interaction with the external world.








2) Keep track of the last method of a particular class which was executed. This

information will guide the prototyper to determine where the system termi-

nated. In case of faulty termination, this will be a good point from which the

prototyper can trace back to find the error.


3) Gather information about the data at the point of interaction between two

objects. This is very useful for trapping illegal interaction between two objects.


4) Gather statistics about the invocation of the methods of different classes. This

will help in analyzing the overall performance of the system at a gross level.


5) Get a global picture of the control flow of the system in execution. This will

help the prototyper to better understand the target system he/she is trying to

develop.


6.3 Implementation of Monitor Rules

As discussed in the previous section, the main function of the monitor rules is

to generate the control entity instances, at the point when interaction takes place be-

tween different object entities. This is enabled by using the Event-Condition-Action-

Alternative-Action (ECAA) Rules in the following way. The ECAA rule processor

injects calls to the rule handler before and after the execution of every method on

which a rule has been defined. The rule is translated into a method which we call

a "rule method." The rule handler is responsible for calling the rule method which

executes the condition, action, and alternative action (CAA) parts of the rule. Two

different execution modes are supported by the rule processor, namely the immediate








mode and the deferred mode. In the immediate mode, the rule method is executed

either before or after the execution of a method, whereas, in the deferred mode, rules

are fired at the transaction commit time.

During the translation of the CAA part of a rule into a rule method, the pa-

rameters of the method invocation which activates the rule are packed into a generic

parameter list and passed to the rule handler. The rule handler passes this generic

parameter list to the rule method, in which the parameters are unpacked and are

bound to variables whose names and types are the same as the method parameters.

This is how the visibility of the method parameters is implemented within the rule

method.
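The packing and unpacking of the generic parameter list can be sketched roughly as follows. The representation shown here (a list of name/value pairs) is an assumption made for illustration and is not the KBMS's actual data structure.

    #include <string>
    #include <variant>
    #include <vector>

    // A generic parameter: a name plus a value of one of a few supported types.
    using ParamValue = std::variant<int, double, std::string>;
    struct Param { std::string name; ParamValue value; };
    using ParamList = std::vector<Param>;

    // Packing (done in the injected call before/after the method body).
    ParamList packParams(int product_code, const std::string& descriptor) {
        return { {"product_code", product_code}, {"descriptor", descriptor} };
    }

    // Unpacking (done inside the rule method): bind values to local variables
    // whose names and types mirror the method parameters.
    void ruleMethodBody(const ParamList& params) {
        int product_code = 0;
        std::string descriptor;
        for (const auto& p : params) {
            if (p.name == "product_code") product_code = std::get<int>(p.value);
            if (p.name == "descriptor")   descriptor   = std::get<std::string>(p.value);
        }
        // ... the condition/action/alternative-action logic would use these bindings ...
        (void)product_code; (void)descriptor;
    }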

For each method, before the actual body of the method (i.e. the implementa-

tion code) is generated, code for preparing the parameters of the system context for

that method is generated. For example, the attributes for the caller class and the

caller method, which form a part of the system context, are assigned the values of

the class name and the method name of the caller. The contextual parameters are

passed during method invocations which might take place within the body of this

method. Also, for every method, besides the regular user-defined parameters, there

is a contextual parameter which contains the callee's class name, the callee's method

name and the other invocation parameters as described earlier in this chapter. The

K.3 compiler generates these contextual parameters both during the generation of

the method signature as well as during the generation of the method invocations.

These contextual parameters are hence passed to the rule method by the mechanism








of parameter packing and unpacking as discussed in the previous paragraph. Thus,

the contextual parameters also become visible within the rule method. As a result, the

control entity instances can be generated from the ECAA monitoring rules. They

contain the user contextual parameters and the system contextual parameters. We

will now explain the translation of ECAA rules into executable C++ code with an

example. The following is the specification of a typical class definition in K.3:



define A : Entity
associations:
    public:
        // assoc definitions

methods:
    public:
        method m1(param1 : Type1, param2 : Type2, ...)

            // body of method m1 in K.3

        end;

        // other methods of class A

rules:
    rule r1 is
        triggered before m1(param_objects of method m1),
                  after m1(param_objects of method m1)
        condition < condition expression >
        action
            < action statements >
        otherwise
            < alternative action statements >
    end;

    // other rules of class A

end;





























[Figure omitted: diagram of the translation of the K.3 specification of class A (containing method m1 and rule r1) by the K.3 compiler into C++ code. _KCLASS_A::m1() calls the Performance Monitor (PM) and the Functionality Tracing Monitor (FTM), then _KCLASS_A::_KBEGIN_m1() and _KCLASS_A::_KEND_m1(), each of which prepares a generic parameter list containing the parameter objects and the caller context object and calls _KRuleHandler::trigger_rule(). The rule handler applies the generic parameter list and the class A object pointer to the rule method _KCLASS_A::r1(), which unpacks the parameters, binds them to variables whose names and types are those specified in the method signature, and implements the CAA part of the ECAA rule. Links 1-8 referenced in the text correspond to these calls.]

Figure 6.2. The Translation Mechanism to Implement ECAA Rules








In the above example, we show the definition of a typical class A, with method

m1() and ECAA rule r1, which is to be triggered before and after the invocation of method

m1() to check whether certain conditions are satisfied before and after the invocation of

m1(). The class A in the K.3 language is compiled into a C++ class called _KCLASS_A,

and the ECAA rule and method specifications are compiled into C++ methods of that

class as shown in Figure 6.2. The Performance Monitor (PM) and the Functionality

Tracing Monitor (FTM) are invoked in the beginning of method m1() (see link 1 in Fig-

ure 6.2). They keep track of the execution profile information. The _KBEGIN_m1()

method is then invoked (see link 2). The parameter objects of method m1() and

the caller context object of method m1() are passed to the _KBEGIN_m1() method.

Within the body of _KBEGIN_m1(), a generic parameter list is created containing

the parameters of the method m1() and the caller context object of method m1().

A pointer to this generic parameter list, a pointer to the class A object and a pointer

to the rule method for rule r1 are passed to the trigger_rule method of the Rule

Handler class (see link 3). The Rule Handler is a component of the KBMS. The

application classes that constitute the prototype system make use of the service of

the Rule Handler. The trigger_rule method of the Rule Handler applies the generic

parameter list and the class A object pointer to the rule method (see link 4). Thus,

the rule method for rule r1 gets executed before the execution of method m1(). Since

the generic parameter list and the pointer to the object of class A are passed to the

rule method, they are visible within its body. The generic parameter list is unpacked

and the binding to the local variables takes place. These variables have the same

names, types, and values as the parameters of method m1(). Thus, the visibility of

the method parameters and the contextual parameters within the rule method body

enables these parameters to be captured as a part of the monitored data. The invo-

cation of the rule method of r1 after the execution of m1() takes place in a similar

fashion (see links 5, 6, 7). Finally, a call is made again to the two Monitors to notify

them of the end of m1() execution (see link 8).
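The overall control flow of Figure 6.2 can be approximated by the following C++ sketch, in which a _KBEGIN-style wrapper asks the rule handler to apply the rule method to the target object and the packed parameters. All class, member, and method names here are simplified stand-ins for the generated code, not its literal form.

    #include <functional>

    struct GenericParams {};            // stands in for the generic parameter list

    struct RuleHandler {
        // Applies the rule method to the target object and the packed parameters.
        void triggerRule(const std::function<void(void*, const GenericParams&)>& ruleMethod,
                         void* target, const GenericParams& params) {
            ruleMethod(target, params);
        }
    };

    struct KClassA {
        RuleHandler* handler = nullptr;  // supplied by the runtime (assumption)

        static void ruleR1(void* self, const GenericParams& params);  // generated rule method

        void beginM1(const GenericParams& params) {       // ~ _KBEGIN_m1: before-rules
            handler->triggerRule(&KClassA::ruleR1, this, params);
        }
        void endM1(const GenericParams& params) {          // ~ _KEND_m1: after-rules
            handler->triggerRule(&KClassA::ruleR1, this, params);
        }
        void m1(const GenericParams& params) {
            // notify PM/FTM, run before-rules, the actual body, after-rules, notify monitors
            beginM1(params);
            /* ... actual body of m1 ... */
            endM1(params);
        }
    };

    void KClassA::ruleR1(void* self, const GenericParams& params) {
        (void)self; (void)params;  // unpack parameters, evaluate condition, run action/otherwise
    }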

Within the rule method, code is generated to compute the start time of invo-

cation, the end time of invocation, and some other parameters such as the interval,

and the number of invocations as discussed in Section 6.1. This information is

derived from the system-stored information shown in Figure 6.1. These contextual

parameters can be passed into the control entity instances, which constitute the data

generated based on the monitoring rules, so that further inferencing and analysis can

be done based on these contextual information. The processes of behavior abstraction

and behavior analysis are the topics of discussion in the next chapter.

To support the generation of the control entity instances for each control entity

class named X, a method definition of initX(parameters) is automatically generated

using the attributes of X so that the prototyper only needs to invoke initX() to

generate the monitored data of his/her interest. The method initX() actually does

the instantiation of instances of class X.
















CHAPTER 7
INFERRING OF SYSTEM BEHAVIOR USING DEDUCTIVE RULES



So far in this dissertation, we have discussed the use of ECAA monitoring rules

for gathering run-time contextual information. These rules are attached to some

class and are event-based and procedural as they are triggered when some methods

of some object classes are invoked. Since these rules are triggered by events which

are method invocations, contextual information can be gathered at the points of

method invocations. The data gathered are low-level data representing a system's

execution behaviors. It is useful to provide a state-based rule specification facility

to allow the prototyper to state that, if the states of some object instances exist,

then the states of some other object instances should also exist. This type of rules

are high-level, declarative, and deductive rules each of which has an antecedent and

a consequent. The antecedent consists of data conditions that involve a single or

multiple object instances. Whenever the antecedent is evaluated to true, it implies

that the consequent part of the rule is also true. We implemented and enforced the

semantics of inferencing rules by translating them into entity classes having active

ECAA rules and methods for testing the antecedent part, and for executing the

consequent part of these rules. The active rules and methods are then transformed








into executable C++ code. The details of the implementation strategy are discussed

in Section 7.4 of this chapter.


7.1 Specification of Deductive Rules


In the previous chapter, we have shown how the active monitoring rules gener-

ated the run-time execution data. In some situations, the data may be too low-level

and too detailed. They may not be of much use to the prototyper. However, a col-

lection of them satisfying some constraints determined by the application logic may

represent a behavior scenario of the target system that is of interest to the prototyper.

In the following example, temporal operators discussed earlier in this dissertation are

used in the deductive rules.






define InputDevice : Entity in EngineSchema
associations:
    public:
        Aggregation -> {
            state : Integer;
            fuel_level : Integer;
            power_level : Integer;
            name : String;
        };
end;


define OutputDevice : Entity in EngineSchema
associations:
    public:
        Aggregation -> {
            state : Integer;
            name : String;
            output : String;
            temperature : Real;
            pressure : Real;
        };
end;

define Processor : Entity in EngineSchema
associations:
    public:
        Aggregation -> {
            state : Integer;
            power : Boolean;
            version_no : Integer;
            temperature : Real;
        };
end;


define InputDeviceControl : Entity in EngineSchema
associations:
    public:
        Generalization <- { SystemContext };
        Aggregation -> {
            engine_id : Integer;
            state : Integer;
            fuel_level : Integer;
            input_signal : Integer;
            signal_rate : Real;
        };
methods:
    public:
        method init_InputDeviceControl(engine_id : Integer,
                                       state : Integer,
                                       fuel_level : Integer,
                                       input_signal : Integer,
                                       signal_rate : Real,
                                       systemContext : SystemContext);
end;

define OutputDeviceControl : Entity in EngineSchema
associations:
    public:
        Generalization <- { SystemContext };
        Aggregation -> {
            engine_id : Integer;
            state : Integer;
            temperature : Real;
            external_temperature : Real;
        };
methods:
    public:
        method init_OutputDeviceControl(engine_id : Integer,
                                        state : Integer,
                                        temperature : Real,
                                        external_temperature : Real,
                                        systemContext : SystemContext);
end;

define ProcessorControl : Entity in EngineSchema
associations:
    public:
        Generalization <- { SystemContext };
        Aggregation -> {
            engine_id : Integer;
            state : Integer;
            power : Boolean;
            temperature : Integer;
        };
methods:
    public:
        method init_ProcessorControl(engine_id : Integer,
                                     state : Integer,
                                     power : Boolean,
                                     temperature : Integer,
                                     systemContext : SystemContext);
end;

define Engine : Entity in EngineSchema
associations:
    public:
        Aggregation -> {
            id : Integer; where { UNIQUE(id); }
            input_component : InputDevice;
            output_component : OutputDevice;
            core_component : Processor;
            name : String;
            state : Integer;
        };








methods:
public:
//These methods can affect the state of different
//components of a Engine in various ways. The prototyper
//is responsible for implementing the functionality of
//these methods. For evaluation purpose, in order to see
//whether the methods are performing their functions properly
//the prototyper can define the monitoring rules to extract
//the execution data at run time, and deductive rules to
//identify the defect and the faults of the system.

public:
method process_input(input_signal : Integer,
signal_rate : Real,
processor_connection:String);
// this method could affect the state of InputDevice
// and Processor
method extract_output(external_temperature : Real,
processor_connection :String);
// this method could affect the state of OutputDevice
// and Processor. These methods can be invoked by other
// objects to operate on the Engine entity.

rules:
rule monitor_rule1
triggered after process_input (input_signal,signal_rate,
processor_connection)
condition input_component.state= ON
action
local
a1 : InputDeviceControl,
a2 : ProcessorControl;
begin
a2.init_ProcessorControl(id, core_component.state,
core_component.power, processor_connection, systemContext);
a1.init_InputDeviceControl(id, input_component.state,
input_component.fuel_level,
input_signal, signal_rate, systemContext);
end;
end;

rule monitor_rule2
triggered after extract_output(external_temperature,
processor_connection)








condition core_component.state = ON
action
local
a4 : ProcessorControl,
a3 : OutputDeviceControl;
begin
a4.init_ProcessorControl(id, core_component.state,
core_component.power, processor_connection, systemContext);
if(output_component.state = ON) then
a3.init_OutputDeviceControl(id,output_component.state,
output_component.temperature,
external_temperature, systemContext);
end_if;
end;
end;

end; // of Engine Entity

define ActualEngineStatus : Entity in EngineSchema
associations:
public:
Generalization <- { SystemContext };
Aggregation -> {
id : Integer;
power : Boolean;
temperature : Integer;
caller_class : String;
};

end;


define ExpectedEngineStatus : Entity in EngineSchema
associations:
public:
Generalization <- { SystemContext };
Aggregation -> {
engine_id : Integer;
temperature : Integer;


};
end;


define AffectedClient : Entity in EngineSchema
associations:








public:
Generalization <- { SystemContext };
Aggregation -> {
client_class : String;
server_engine : Integer;
};
end;




define EngineSchema : Schema

rules:
// rule1 carries out behavior abstraction
deductiverule rule1
antecedents
( (STARTTIME (last pc_inst:ProcessorControl).
(state = ON, temperature < oc_inst.temperature))
Before
(STARTTIME (last ic_inst:InputDeviceControl).
(engine_id = pc_inst.engine_id, state = ON))
) and
(exist oc_inst:OutputDeviceControl.
(external_temperature > THRESHOLD, engine_id = pc_inst.engine_id)
)
consequents
ActualEngineStatus.(id := pc_inst.engine_id,
power := pc_inst.power,
temperature := oc_inst.temperature,
caller_class := ic_inst.caller_class)
end;

// rule2 carries out behavior analysis
deductiverule rule2
antecedents
(exist a_inst:ActualEngineStatus.temperature
> e_inst.temperature) and
(exist e_inst:ExpectedEngineStatus.engine_id
= a_inst.id)
consequents
AffectedClient.(client_class :=
a_inst.caller_class,
server_engine := a_inst.id)









end; // EngineSchema



In the above example, we have shown the specification of an EngineSchema. The
Engine entity is made up of core_component, input_component, and output_component.
The structure of each component is defined in terms of a different entity type; e.g.,
the input component is of type InputDevice. Each component of the engine has a
corresponding entity which maintains the control information of that component. For
example, InputDeviceControl maintains the control information of the InputDevice.
The control information contains two parts: the user context information and the
system context information. The user context information in InputDeviceControl
consists of the engine_id of the Engine to which the input device belongs, the state
of the input device, the fuel_level, the input_signal, and the signal_rate of the input
device. The values of these fields are determined dynamically at run-time, when they
are generated by the active monitoring rules. The description of the system context
information has been given before. It is generated and maintained by the system and
can be used by the prototyper for analyzing the target system. We note that
monitor_rule1 and monitor_rule2 are responsible for generating the execution context
information, which is stored as instances of the InputDeviceControl, ProcessorControl,
and OutputDeviceControl entities, referred to by the variables ic_inst, pc_inst, and
oc_inst, respectively, in rule1. The deductive rules are state-based, declarative, and
high-level in nature. They help the prototyper detect the presence of typical
faults and defects in the target system being developed. In the above example, we see








that, if (1) the Processor of some engine reached the ON state before the InputDevice
of that engine could reach the ON state, (2) the current Processor temperature
of that engine is less than its OutputDevice temperature, and (3) the external
temperature of the OutputDevice of that engine is greater than some THRESHOLD
value, then the antecedent part of rule1 will be evaluated to true. The consequent
part of rule1 will then be executed to generate an instance of ActualEngineStatus,
with appropriate field values as specified in the rule. The triggering of one
deductive rule may lead to the triggering of other deductive rules, if their antecedent
clauses are evaluated to true. In the above example, if the temperature attribute
of the ActualEngineStatus instance of some engine happens to be greater than the
temperature of the ExpectedEngineStatus instance for that engine, then the antecedent
of rule2 will be evaluated to true, and this will result in the generation of an instance
of the AffectedClient entity. The value of the ExpectedEngineStatus instance is
determined a priori by the prototyper based on his/her knowledge about the desired
system behavior. The deductive rule rule2 performs the job of behavior analysis,
since it compares the actual and the expected system behavior, pointing out the caller
that has been affected. Rule1, on the other hand, performs behavior abstraction, since
it infers high-level information about the status of the engine from the low-level
interaction information of its components with its caller, and from the state
information of the different components of the engine. The generation of instances in
the consequent part of a rule may again lead to the triggering of other rules in those
classes to carry out more inferencing. Thus, we see that deductive rules can be
effectively used to detect defects








in the target system, locate sources of problems, and generate feedback control useful
for understanding the behavior of the target system. The process of modification

and evaluation continues until the prototyper is satisfied with the functionality and

performance results of the target system. The deductive rules can perform a wide

range of functions. Some of these functions are listed below:


1) Inferencing: deduction based on complex execution behavior (functionality and
performance).


2) Simulating a control system: extracting execution behavior and generating the
feedback accordingly.


3) Verifying functional correctness: e.g., verifying properties about the input-to-output
transformation, the state at a point of interaction, and the termination of a
method.


4) Detecting performance bottlenecks: e.g., detecting which method took the maximum
time to execute, and performing deduction based on the performance
information. For example, if two methods have the same functionality, then
the one with the better performance characteristics will be chosen for execution.
This decision can be derived by a deductive rule at run-time, which improves
the system performance (a sketch of this idea is given after the list).
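As an informal illustration of item 4 (a self-contained C++ sketch; the timing and the selection logic are hard-coded here, whereas in the prototyping system the choice would be drawn by a deductive rule over the collected performance data), run-time selection between two functionally equivalent methods might look as follows:

#include <chrono>
#include <functional>
#include <iostream>

// Accumulated performance data for one method.
struct MethodStats {
    double total_ms = 0.0;
    long calls = 0;
    double avg() const { return calls ? total_ms / calls : 0.0; }
};

// Run a method and record its execution time.
void timedCall(const std::function<void()>& fn, MethodStats& stats) {
    auto start = std::chrono::steady_clock::now();
    fn();
    auto end = std::chrono::steady_clock::now();
    stats.total_ms += std::chrono::duration<double, std::milli>(end - start).count();
    ++stats.calls;
}

int main() {
    // Two functionally equivalent implementations (they just burn time here).
    auto implA = [] { long s = 0; for (long i = 0; i < 100000; ++i) s += i; (void)s; };
    auto implB = [] { long s = 0; for (long i = 0; i < 1000000; ++i) s += i; (void)s; };

    MethodStats statsA, statsB;
    for (int i = 0; i < 5; ++i) { timedCall(implA, statsA); timedCall(implB, statsB); }

    // "Deduce" which implementation to use from the recorded behavior.
    bool preferA = statsA.avg() <= statsB.avg();
    std::cout << "choosing implementation " << (preferA ? "A" : "B") << "\n";
    if (preferA) timedCall(implA, statsA); else timedCall(implB, statsB);
    return 0;
}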


7.2 Behavior Abstraction

The process of deducing high-level behavioral information from low-level behavioral
information is known as behavior abstraction. This concept can be illustrated by the
following scenario. Suppose component X is waiting for a resource of component Y,
component Y is waiting for a resource of component Z, and component Z is waiting
for a resource of component X; this gives rise to a cyclic wait-for condition known
as deadlock. The fact that one component is waiting for another component represents
a piece of low-level behavioral information, while the deadlock scenario represents
high-level behavioral information about the system. Similarly, in the traditional
process of debugging, it is the developer's responsibility to trace the execution of a
program, keep in his/her mind all the low-level interaction information between
different components, and then deduce by himself/herself the high-level behavior
scenario. Essentially, by using monitoring and inferencing rules, we automate this
process of behavior abstraction, which helps the prototyper understand the behavior
of the target system. Monitoring rules collect the low-level interaction information
about the system behavior, which consists of changes to the states of object instances
and interactions between different object instances, while the process of abstraction
itself is carried out by inferencing rules. In the previous section, rule1 in the
EngineSchema illustrates the usage of state-based deductive rules for behavior
abstraction.
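The deadlock scenario above can serve as a concrete example. The following is a minimal C++ sketch (not part of the prototyping system; the component names are hypothetical, and it assumes each component waits for at most one other component) that derives the high-level deadlock fact from the low-level pairwise wait-for facts:

#include <iostream>
#include <map>
#include <set>
#include <string>

// Low-level facts: "component X waits for a resource held by component Y".
// High-level abstraction: a cycle in the wait-for relation means deadlock.
bool hasWaitForCycle(const std::map<std::string, std::string>& waitsFor,
                     const std::string& start) {
    std::set<std::string> visited;
    std::string current = start;
    while (waitsFor.count(current)) {
        if (!visited.insert(current).second)
            return true;                     // revisited a component: cycle found
        current = waitsFor.at(current);
    }
    return false;
}

int main() {
    std::map<std::string, std::string> waitsFor{
        {"X", "Y"}, {"Y", "Z"}, {"Z", "X"}};  // the scenario described in the text
    std::cout << (hasWaitForCycle(waitsFor, "X") ? "deadlock detected"
                                                 : "no deadlock")
              << "\n";
    return 0;
}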

7.3 Behavior Analysis

So far, we have shown how data entities, software components, method implementations,
and the execution context information can be modeled uniformly in our
object-oriented model. The desired system properties and the expected system behavior
can also be modeled in the same model by using various modeling constructs.
We can model the expected output of different methods, and their boundary conditions,
i.e., the state before and after the execution of certain methods, in terms of
entity classes. The expected control flow, either system-wide or within a method,
and the data flow between different methods and classes can also be modeled in terms
of various entity classes and associations supported by our underlying object model.
This expected behavioral data can be compared with the actual execution data to
perform behavior analysis of the target system.

System behaviors are captured at run-time by active ECAA monitoring rules and
are used by deductive rules to perform abstractions, thus producing high-level data
which convey more meaningful information about the actual system behavior at run-time.
These high-level execution data can then be compared with the expected
behaviors, which are also specified in the same model, to analyze the deviation of the
system from the desired behaviors. The behavior analysis can be facilitated by high-level
deductive rules, since they can be used to derive which expected behavior was
observed during the actual system execution and at which points the system deviated
from the expectations of the prototyper. Thus, we see that deductive rules can be
used not only to derive more meaningful high-level information about the system
behavior but also to carry out behavior analysis of the target system, provided that
the prototyper also models the expected behaviors of the system using the same
object model. In Section 7.1, rule2 in the EngineSchema illustrates the usage of
state-based deductive rules for behavior analysis.
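As an informal illustration of this comparison (a minimal C++ sketch with hypothetical record types, expressing procedurally what rule2 states declaratively), behavior analysis amounts to joining the abstracted actual behavior with the prototyper-supplied expected behavior and reporting the deviations:

#include <iostream>
#include <string>
#include <vector>

// Hypothetical records mirroring the ActualEngineStatus and
// ExpectedEngineStatus entities of the EngineSchema example.
struct ActualStatus   { int engine_id; double temperature; std::string caller; };
struct ExpectedStatus { int engine_id; double temperature; };

int main() {
    std::vector<ActualStatus>   actual{{1, 130.0, "FuelClient"}, {2, 90.0, "PumpClient"}};
    std::vector<ExpectedStatus> expected{{1, 110.0}, {2, 95.0}};

    // Join the actual and the expected behavior on the engine id and report
    // deviations, much as rule2 derives AffectedClient instances.
    for (const auto& a : actual)
        for (const auto& e : expected)
            if (a.engine_id == e.engine_id && a.temperature > e.temperature)
                std::cout << "client " << a.caller
                          << " affected by engine " << a.engine_id << "\n";
    return 0;
}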









7.4 Implementation of Deductive Rules


The inferencing/deductive rules consist of two parts, namely, the antecedent

part and the consequent part. If the antecedent of a rule is evaluated to true, then

it implies that the consequent part is also true.

rule r1
antecedent (exists a_inst:A(a1 > 7, a2 < b_inst.b3) and
            exists b_inst:B(b1 < 9, b2 > a_inst.a1) and
            exists C(c3 > a_inst.a2, c2 < b_inst.b1))

consequent
    (D.(d1 := a_inst.a1, d2 := b_inst.b3, d3 := 5));

end;






[Rete network for rule r1:]
A, B, and C are alpha (leaf) nodes; A and B are sibling nodes
AB is a beta node; AB and C are sibling nodes
ABC is a PNode; D is a consequent node
selection conditions: A: a1 > 7, B: b1 < 9
join condition of AB: (A.a2 < B.b3) and (B.b2 > A.a1)
join condition of ABC: (C.c3 > AB.A.a2) and (C.c2 < AB.B.b1)
Note: there could be multiple consequent nodes connected to the PNode

Figure 7.1. Example of an Inferencing Rule and Its Corresponding Rete Network


The antecedent part of deductive rules is evaluated by building a Rete network

[For82] for the antecedent's condition expression. The leaf nodes represent different

fact classes appearing in the rule condition. In a Rete network, the terminal leaf

nodes are called alpha nodes whereas the intermediate nodes are called beta nodes.









rule A::addInstance_r1
triggered when an A instance is created
condition (r1::ActiveFlag = 1)
    check to see if this instance satisfies the selection
    condition of alpha node A
action
    insert a reference to this instance in the alpha node A
end;

rule A::addInstance_r1_deactive
triggered when an A instance is created
condition (r1::ActiveFlag = 0)
    check to see if this instance satisfies the selection
    condition of alpha node A
action
    insert the reference of this A instance in the
    List of A References in r1_DeactivatePool
end;

rule alphaA::addInstance_r1 // this rule is in alpha node A
triggered when a new reference instance is inserted in alpha node A
action
    check to see if this instance satisfies the join condition
    with the other instances of the alpha node B;
    for each match between the alpha node A instance and
    the alpha node B instance, insert a compound reference to
    the beta node AB, which contains a reference to the A instance
    and the B instance
end;

Figure 7.2. Active Rules Triggering the Flow of Tokens through the Rete Network
from a Leaf Node

The root node of the network is called a PNode. An example of a Rete network
structure is shown in Figure 7.1. Within each alpha node, we store references to
those object instances that satisfy the selection condition for that node. In each beta
node, we store only the references to those compound instances which satisfy the join
condition of the child alpha/beta nodes. Figure 7.1 shows an example rule and the
structure of the Rete network corresponding to that rule's condition. Active rules are
generated for each class which appears in the antecedent part of a deductive rule.
They are triggered whenever any instance of that class is created, updated, or deleted.
In Figure 7.2 the active rules addInstance_r1 and addInstance_r1_deactive are generated
for class A. Whenever an instance of class A is created, if the deductive rule r1 is
active, then only addInstance_r1 is triggered, while addInstance_r1_deactive is skipped
since its guard evaluates to false. But if the rule r1 is deactivated, only the active
rule addInstance_r1_deactive is triggered.























rule PNodeABC::addInstance_r1 // this rule is in PNode ABC

triggered when an instance is inserted into this node

action
    execute the consequent function for the rule r1,
    which creates a new instance of class D with
    appropriate bindings as specified in the specification
    of the rule, and insert a reference to this newly created
    instance of class D into the consequent node
end;


// Note: when the object instance of D is generated as a result
// of firing the rule r1, other deductive rules can also get fired
// as a chain reaction.
// Also, in case object instances are deleted, we have similar
// ECAA rules which remove the instance references from the
// terminal alpha nodes up to the PNode, and retract the
// derived consequent from the consequent node.
// An update is treated as a delete followed by an insert.


Figure 7.3. Active Rules Triggering the Execution of the Consequent of an Inferencing
Rule








The active rule addInstance_r1 is skipped
because its guard condition evaluates to false in this case. If the deductive rule r1 is
active, whenever an instance of class A is created, addInstance_r1 is triggered. If the
newly created instance satisfies the selection condition ("a1 > 7") corresponding to
the class A specified in the antecedent of r1, then a reference to this newly created
instance is inserted into the alpha node A of the Rete network by the action of
rule addInstance_r1. In case the deductive rule r1 is deactivated and the above-mentioned
selection condition is true, then addInstance_r1_deactive is fired, which
inserts a reference to the newly created A instance in the List of A References in
the r1_DeactivatePool structure shown in Figure 7.4. Active rule addInstance_r1 is
also generated in the alpha node A; it is triggered whenever an instance is inserted
into the alpha node A. This rule is fired if the newly inserted instance satisfies the
join condition with its sibling alpha node B, which, according to the deductive rule
specification, is "A.a2 < B.b3 and B.b2 > A.a1." For each match of the join condition
between this newly inserted instance in the alpha node A and an instance of the alpha
node B, a compound reference instance is inserted into the beta node AB. It makes
reference to the corresponding A and B instances. Active rules are also generated for
the beta nodes. They serve the purpose of triggering the flow of tokens through the
Rete network. Active rule addInstance_r1, which is generated for the PNode ABC
as shown in Figure 7.3, is triggered when an instance is inserted into the PNode.
It invokes the consequent function in its action part, which creates a new instance
of class D with the appropriate bindings as specified in the consequent part of the
rule. A reference to the newly created D instance is stored in the consequent node
D, shown in Figure 7.1. We store only the object references within each alpha and
beta node, instead of storing the entire instances. Inferencing rules can be activated
or deactivated at will by the prototyper. Only those deductive rules that are active
will be subject to condition matching at any point in time. The mechanism to
achieve dynamic activation and deactivation is as follows. Since a deductive rule is
translated into a set of active ECAA rules, we include the testing of an active flag
for each rule as a guard in its guarded condition expression during the translation
process. An active rule can thus be deactivated dynamically by setting its active flag
to zero, i.e., the guards of the associated active rules will be evaluated to false. So,
if the flags of all the active rules corresponding to a deductive rule are turned off,
the deductive rule is deactivated. For activating an inferencing rule, the active flags
of all the corresponding active rules are set to one, i.e., the guards are evaluated to
true. The guarded expression of a rule was discussed in detail in Chapter 4. For
each deductive rule we maintain a DeactivatePool. While a rule is
deactivated, whenever an instance of a class which appears in the antecedent part
of the rule is inserted or updated, a reference to that instance is stored in a list in
the DeactivatePool, if it satisfies the selection conditions specified in the antecedent
of the rule (see rule A::addInstance_r1_deactive in Figure 7.2). Note that the guard
condition of the rule A::addInstance_r1 is evaluated to false when r1 is deactivated.
Thus, this rule is skipped, and no change to the state of the Rete network of r1 occurs
while r1 is deactivated. See Figure 7.4 for the details of the DeactivatePool structure.








Any deductive rule can be activated or deactivated by invoking the activate() or the

deactivate() method. The activate method in Figure 7.5 sets the active flag of the rule

to 1 and inserts all the references to the instances in the lists of the DeactivatePool

into the corresponding alpha nodes of the Rete network. This will cause the system

to take into account those facts which occurred while the rule was inactive. Initially,

whenever a rule is defined, it is activated by default.
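To make the preceding mechanism more concrete, the following is a simplified C++ sketch (it is not the generated code; the generated active rules are collapsed here into ordinary method calls, and the names are hypothetical) of the alpha node for class A, its deactivation pool, and the activation step that flushes the pool back into the network:

#include <iostream>
#include <vector>

// Simplified fact classes for rule r1's antecedent.
struct A { int a1, a2; };
struct B { int b1, b2, b3; };

// A beta-node entry is a compound reference to the matched child instances.
struct BetaAB { const A* a; const B* b; };

struct ReteR1 {
    bool active = true;
    std::vector<const A*> alphaA, deactivatePoolA;  // node contents / r1_DeactivatePool
    std::vector<const B*> alphaB;
    std::vector<BetaAB>   betaAB;

    // Corresponds to A::addInstance_r1 and A::addInstance_r1_deactive.
    void onInsertA(const A& a) {
        if (a.a1 <= 7) return;                      // selection condition "a1 > 7"
        if (!active) { deactivatePoolA.push_back(&a); return; }
        alphaA.push_back(&a);
        for (const B* b : alphaB)                   // join "A.a2 < B.b3 and B.b2 > A.a1"
            if (a.a2 < b->b3 && b->b2 > a.a1)
                betaAB.push_back({&a, b});
    }

    // Corresponds to r1::activate(): flush the pooled references into the node.
    void activate() {
        active = true;
        for (const A* a : deactivatePoolA) onInsertA(*a);
        deactivatePoolA.clear();
    }
};

int main() {
    ReteR1 net;
    B b{5, 20, 12};
    net.alphaB.push_back(&b);
    A a{10, 8};
    net.onInsertA(a);        // passes selection, joins with b (8 < 12 and 20 > 10)
    std::cout << "compound references in beta node AB: " << net.betaAB.size() << "\n";
    return 0;
}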

[The r1_DeactivatePool holds, for a deactivated rule, lists of references to the A, B,
and C instances that satisfied the selection conditions while the rule was inactive.]

Figure 7.4. The Deductive Rule Structure


The inferencing rule specification discussed in Section 7.1 is first parsed by a
parser to build the rule tree. The rule processor then processes the tree to generate
the ruleTable and the factTable, which represent the intermediate code.






method r1::activate()
1. set the ActiveFlag to 1.

2. for each element in the List of A References, do:
   insert the reference into the alpha node A
   of the Rete network for r1, by calling its
   insert function.
3. for the other lists, which hold references to
   B and C instances respectively, repeat step 2 by inserting each
   reference into the corresponding alpha node of
   the Rete network of rule r1.
end;


method r1::deactivate()
1. reset the ActiveFlag to 0.
end;

Figure 7.5. Activate / Deactivate Methods

The factTable data structure is shown in Figure 7.7 and Figure 7.8, and the ruleTable
data structure is shown in Figure 7.9 and Figure 7.10. The factTable contains all
the information about those classes which are present in the antecedent part of an
inferencing rule. The ruleTable contains all the information about the inferencing rules.
It is made up of a list of ruleTableEntry elements, in which each element contains
information about all the fact classes in the antecedent part of a rule, information
about the consequent part of the rule, and the predicates appearing in the antecedent
part of the rule. The ruleTable and the factTable represent the normalized structures
generated from the rule tree. They are further processed to build the Rete network
for each deductive rule. Each leaf node of a network corresponds to a fact class,
while the root is called a PNode. The mechanism by which instances are populated
within a Rete network is as follows. Whenever an instance is inserted








in a class which appears as an antecedent in any of the deductive rules, active rules

are fired to check which of the selection conditions of leaf nodes are satisfied by that

inserted instance. In case the instance just inserted satisfies the selection condition of

a leaf node, a reference to this instance is inserted in that leaf node. This function is

carried out by an ECAA rule which is triggered by the insertion of a new instance into

the class that appears in the antecedent of a deductive rule. Each leaf node contains a

list of references to those instances that satisfied the selection condition for that node.

Whenever an instance is inserted into the leaf class, the system would try to match

it with its sibling alpha/beta node to see if it satisfies the join condition (if any) with

its sibling node. If it satisfies the join condition, then a reference instance is inserted

into the corresponding parent beta node. This reference instance contains the oids

of the instances of the child nodes that satisfied the join condition. If the beta node

is not the root (PNode) node of the network, then the system would try to match

this new reference instance with its sibling alpha node to see if it satisfies the join

condition (if any) with its sibling node. If it does, then, again, a reference instance

is inserted into the parent beta node. This process continues until an instance is

inserted into the PNode. Whenever an instance gets inserted into the PNode, the

action corresponding to the consequent part of the rule is executed. An action is an

execution of a consequent function. The consequent part of the rule results in the

generation of more control entity instances. The reference instance inserted into the

PNode, which caused the execution of this function, is passed as a parameter to the

function. This helps in creating control entity instances with the appropriate bindings








within the body of the consequent function. For example, the assignment "D.d1 :=
a_inst.a1" specified in the consequent part of r1, where a_inst refers to a class A
instance, can be achieved as "D.d1 := ABC_i.AB.A.a1," where ABC_i is the
reference instance inserted into the PNode ABC. The generation of these instances

may trigger the firing of other inferencing rules. In case of deleting an instance, a

reference to the deleted instance (if any), is deleted from an alpha node before the

actual instance is deleted. Also, whenever a reference instance is deleted from a child

node in a Rete network, the reference instance in its parent node (if any), which

points to it, is also deleted. The case of update is handled as a deletion followed by

an insertion.
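To make the compound-reference bindings concrete (again a simplified C++ sketch with hypothetical structures, not the generated code), the reference instance that reached the PNode carries pointers to the matched child instances, and the consequent function reaches the original field values through it, mirroring "D.d1 := ABC_i.AB.A.a1":

#include <iostream>

// Simplified fact and derived classes for rule r1.
struct A { int a1, a2; };
struct B { int b1, b2, b3; };
struct C { int c2, c3; };
struct D { int d1, d2, d3; };

// Compound reference instances as stored in the beta node AB and the PNode ABC.
struct RefAB  { const A* a; const B* b; };
struct RefABC { RefAB ab; const C* c; };

// Sketch of the consequent function for r1: it receives the reference instance
// that reached the PNode and creates the derived D instance with the bindings
// "d1 := a_inst.a1, d2 := b_inst.b3, d3 := 5".
D consequentR1(const RefABC& abc) {
    return D{abc.ab.a->a1, abc.ab.b->b3, 5};
}

int main() {
    A a{10, 8};
    B b{5, 20, 12};
    C c{3, 9};
    RefABC abc{{&a, &b}, &c};    // pretend this reference was just inserted into the PNode
    D d = consequentR1(abc);
    std::cout << "derived D instance: " << d.d1 << " " << d.d2 << " " << d.d3 << "\n";
    return 0;
}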

If an antecedent clause has a temporal operator like "Before," "After," "Between,"
etc., the alpha node in the Rete network corresponding to this antecedent
clause has references to the corresponding instances of the classes which appear as
operands of the temporal operator. In Figure 7.6 we see that the alpha node has
references to an instance of class A as well as an instance of class B which satisfy
the temporal and selection constraints shown in Figure 7.6. If A and B occur in
more than one clause in the antecedent of the same rule, then, in order to distinguish
the corresponding alpha nodes in the Rete network of the rule, we use a unique
index. The index value is an integer such as 1, 2, etc. In the above case, the alpha node
will be AB_1_R, since it corresponds to the first clause in which A and B appear in
the antecedent of rule R. The propagation of tokens through the Rete network takes
place via the generated active rules, as explained in the example given before.










Say the first antecedent clause in a rule R is as follows:


((STARTTIME (last A).(id = 5)) Before (STARTTIME (last B).(id = 7)))


Then the leaf node structure in the Rete network for the above clause holds:

    A instance
    B instance

such that: (SystemContext$A_instance.time < SystemContext$B_instance.time),
and A_instance is the most recent instance of class A with id = 5,
and B_instance is the most recent instance of class B with id = 7.

Figure 7.6. Temporal Clause in the Rule Antecedent
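A temporal alpha node can be pictured roughly as follows (a minimal C++ sketch with hypothetical names and a simplified timestamp representation; the actual system obtains the times from the system context information): the node pairs the most recent qualifying A and B instances and checks the Before constraint between their timestamps.

#include <iostream>

// Hypothetical instance records carrying the timestamp recorded in the
// system context (used by temporal operators such as STARTTIME).
struct AInstance { int id; double start_time; };
struct BInstance { int id; double start_time; };

// A temporal alpha node for the clause
//   (STARTTIME (last A).(id = 5)) Before (STARTTIME (last B).(id = 7))
// holds references to the most recent qualifying A and B instances and
// checks the temporal constraint between their timestamps.
struct TemporalAlphaAB {
    const AInstance* a = nullptr;
    const BInstance* b = nullptr;
    bool satisfied() const {
        return a && b && a->id == 5 && b->id == 7 && a->start_time < b->start_time;
    }
};

int main() {
    AInstance a{5, 10.0};
    BInstance b{7, 12.5};
    TemporalAlphaAB node{&a, &b};
    std::cout << (node.satisfied() ? "Before holds" : "Before does not hold") << "\n";
    return 0;
}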


factTableEntry

    char* className;
    factInfo* fL;
    linkTable* lT;           // aggregate attributes
    int isTemporal;          // temporal / non-temporal fact
    factTableEntry* next;
    factTableEntry* prev;

// fL is the factInfo list, which holds the information about the
// different fact instances in the different deductive rules in which
// this className appears as an antecedent.

Figure 7.7. The factTableEntry Structure


factInfo

    char* varName;
    char* ruleName;
    int qualifier;           // EXIST, ALL, FIRST, LAST
    char* mName;             // select condition method
    char* joinClass;
    temporalrefNode* tNode;  // non-null in case of a temporal fact
